Out of sample testing – the Holy Grail?

Posted on Jul 6, 2014 in ArbMaker News!

Conventional wisdom suggests that weak out-of-sample results mean unreliable in-sample results. Yet that is not necessarily the case: in-sample data is not always biased toward detecting spurious predictability.

For example, splitting a sample into in-sample and out-of-sample portions means each test is run on less data, so information is lost and statistical power falls. The problem is aggravated in small samples. An out-of-sample test may therefore end up falsely rejecting valid in-sample results.
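To make the power point concrete, here is a minimal sketch (not ArbMaker code): it simulates a genuinely cointegrated pair with numpy and uses the Engle-Granger test from statsmodels (statsmodels.tsa.stattools.coint) to count how often the relationship is detected on a full sample versus on a half-length sample. The pair-generating process, sample sizes and trial counts are all illustrative assumptions.

```python
# Illustrative only: measure how often a cointegration test detects a true
# relationship on a long sample versus a half-length sample.
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(42)

def simulate_pair(n):
    """One cointegrated pair: x is a random walk, y tracks x plus stationary noise."""
    x = np.cumsum(rng.normal(size=n))
    y = 1.5 * x + rng.normal(scale=2.0, size=n)
    return x, y

def detection_rate(n_obs, n_trials=200, alpha=0.05):
    """Share of trials in which the Engle-Granger test rejects 'no cointegration'."""
    hits = 0
    for _ in range(n_trials):
        x, y = simulate_pair(n_obs)
        _, pvalue, _ = coint(y, x)
        hits += pvalue < alpha
    return hits / n_trials

print("full sample (500 obs):", detection_rate(500))
print("half sample (250 obs):", detection_rate(250))
```

With fewer observations the detection rate typically drops, which is exactly the loss of power that can make a valid in-sample relationship look like an out-of-sample failure.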

There is also the issue, in the context of our software, of reconciling cases where the split period used for training turns out not to be one over which the pair is cointegrated, even though the whole period is.
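The sketch below is not ArbMaker's implementation; it is a hypothetical helper (split_vs_whole is an assumed name) showing one way to surface that disagreement: run the same Engle-Granger test on the training split and on the whole period, and flag the pair when only one of the two looks cointegrated. The split fraction, significance level and simulated series are illustrative.

```python
# Illustrative helper, not ArbMaker internals: compare cointegration evidence
# on the training split against the whole sample and flag disagreements.
import numpy as np
from statsmodels.tsa.stattools import coint

def split_vs_whole(y, x, split_frac=0.7, alpha=0.05):
    """Test cointegration on the training window and on the whole period."""
    n_train = int(len(y) * split_frac)
    _, p_train, _ = coint(y[:n_train], x[:n_train])
    _, p_whole, _ = coint(y, x)
    return {
        "train_p": p_train,
        "whole_p": p_whole,
        "train_cointegrated": p_train < alpha,
        "whole_cointegrated": p_whole < alpha,
        "disagree": (p_train < alpha) != (p_whole < alpha),
    }

# Example with simulated data; in practice y and x would be the pair's price series.
rng = np.random.default_rng(7)
x = np.cumsum(rng.normal(size=400))
y = 1.2 * x + rng.normal(scale=1.5, size=400)
print(split_vs_whole(y, x))
```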

Overall the message might be this: traders could usefully treat both in-sample and out-of-sample test results as informing the pursuit of tradable pairs, rather than treating one as the underwriter of the other.

Check out the screen shots here to see how ArbMaker performs out-of-sample tests.
