I got a very good question over the weekend about the Ambiguity indicator and the Deep Learning Algorithm (henceforth the DLA), and I think it deserves to be answered in a wider space, as it could help a lot of you folks understand the differences between the two and how I use them.
Ambiguity and Probability of Negative Returns
I publish an updated chart with those two values daily after the close, but what do they mean? First of all, Ambiguity is not an algorithm, heuristic, or trading signal at all. Ambiguity is just a statistical quantity that describes the market, very similar to something like realized volatility or even the VIX (implied volatility 30 days ahead). Those quantities don't generate any signals or trading actions by themselves; they are just helpful for understanding what is going on with the market at any given time.
In this sense, the chart of Ambiguity that I publish here can be described more accurately as the 20-session close-to-close Ambiguity reading of the market, so it provides a near-term view (almost one calendar month) of what is going on with the market in terms of unknown uncertainty, also known as Knightian uncertainty. For reference, something like the VIX measures the known uncertainty of the market, so Ambiguity is like a cousin to the VIX: it measures the other big part of the picture. By itself, neither the VIX nor Ambiguity can provide actionable information, so please don't see my Ambiguity chart as some kind of system; I only post it to inform you about the status of that statistical variable day after day. Now, there is a certain use for that information. For instance, we have seen historically that the market tends to underperform when Ambiguity is very high AND the probability of negative returns is low (less than 0.5). This is a simple rule of thumb that has worked well in the past; however, during 2017 it has been working less and less, and in particular during the last quarter of 2017 the market seems to be immune to it, as it has refused to provide us with any meaningful downside at all.
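To make the rule of thumb concrete, here is a minimal sketch of how you could flag sessions where it fires. The Ambiguity readings are assumed to be precomputed (I don't give the formula here), and the "very high" cutoff is a hypothetical number chosen purely for illustration:

```python
# Hypothetical sketch of the rule of thumb: flag sessions where the
# 20-session Ambiguity reading is very high while the probability of
# negative returns is low (< 0.5). The Ambiguity values and the
# high_threshold cutoff are assumptions, not my actual parameters.

def caution_flags(ambiguity, prob_negative, high_threshold=0.9):
    """Return one boolean per session.

    ambiguity      -- 20-session close-to-close Ambiguity readings
    prob_negative  -- probability of negative returns per session
    high_threshold -- hypothetical cutoff for "very high" Ambiguity
    """
    return [a > high_threshold and p < 0.5
            for a, p in zip(ambiguity, prob_negative)]

# Example with made-up readings:
flags = caution_flags([0.95, 0.4, 0.92], [0.3, 0.3, 0.7])
print(flags)  # [True, False, False]
```

Only the first session trips the flag: high Ambiguity together with a low probability of negative returns. The other two fail one condition each, which is the point of requiring both at once.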
The infamous DLA
The deep learning algorithm is actually a deep neural network that I designed last year (early 2016) and trained with historical data going back to 2008. This neural net has been designed with the sole purpose of generating a single statistical signal for a very specific pattern of market action, namely a 1% move to the upside within a 7-calendar-day window. Surprisingly, the pattern has been mind-blowingly accurate this year, and in fact the neural net has achieved an incredible accuracy rate with live data (higher than the one with the original training and testing data sets). So yes, this is a system that has been working well consistently this year and that for some reason we have neglected in the room, as we don't tend to trade off its signals very frequently. We are getting better at paying attention to it, but I still don't use it blindly, and sometimes that costs me money, like on Thursday last week when the algo was flashing a gigantic signal all day long, all the way until the close, and I was too cheap to enter a trade (it would have made fantastic money on Friday alone). Oh well, at least I'm glad that some of you actually have the discipline to enter. Going forward, I plan to create a very small and simple webapp where you folks can check the real-time output of the algo (updated every few minutes), so you can check it out and pull my ears if it starts to flash a solid signal and I start to vacillate on it.
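For those curious how that target pattern could be turned into training labels, here is a minimal sketch: mark a session 1 if the close rises at least 1% at any point within the next 7 calendar days, else 0. The dates, prices, and helper function are my own illustration; the actual network and data pipeline are not shown here:

```python
# Illustrative labeling of the DLA's target pattern: a 1% move to the
# upside within a 7-calendar-day window. Everything here (function name,
# sample data) is a hypothetical sketch, not the real training code.
from datetime import date, timedelta

def label_upside_pattern(sessions, threshold=0.01, window_days=7):
    """sessions: list of (date, close) tuples in chronological order.
    Returns one 0/1 label per session."""
    labels = []
    for i, (d, close) in enumerate(sessions):
        horizon = d + timedelta(days=window_days)
        # Did any later close within the calendar window gain >= 1%?
        hit = any(c >= close * (1 + threshold)
                  for dd, c in sessions[i + 1:] if dd <= horizon)
        labels.append(1 if hit else 0)
    return labels

sessions = [
    (date(2017, 11, 1), 100.0),
    (date(2017, 11, 2), 100.5),
    (date(2017, 11, 6), 101.2),   # +1.2% vs 11/1, inside 7 calendar days
    (date(2017, 11, 13), 101.0),
]
print(label_upside_pattern(sessions))  # [1, 0, 0, 0]
```

Note the window is counted in calendar days, not trading sessions, which matters around weekends and holidays.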
Now, there is a very tight connection between Ambiguity and the DLA, because both Ambiguity and the probability of negative returns are two features (in the statistical sense) of the deep neural network. In fact, of the 11 features in total, those two tend to have an outsized weight on the output of the trading signal. That is why the DLA can fluctuate so wildly during the intraday session: Ambiguity changes quite a lot as the day progresses, and a good signal in the morning can disappear in the afternoon because of that. That is also why I tend to wait until near the close to get a more solid reading from the algorithm.
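The intraday swings can be illustrated with a toy model. This is emphatically not the DLA, just a single logistic unit over 11 features where the first two (standing in for Ambiguity and the probability of negative returns) carry made-up dominant weights, to show how a drift in Ambiguity alone can move the output a lot:

```python
# Toy illustration only: a logistic combination of 11 features where
# two carry outsized (made-up) weights, so an intraday change in the
# first feature swings the output noticeably. Not the actual network.
import math

def toy_signal(features, weights):
    """features[0] plays the role of Ambiguity, features[1] the
    probability of negative returns; the rest are filler features."""
    z = sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))  # logistic squashing to (0, 1)

weights = [4.0, -3.0] + [0.2] * 9     # two dominant features out of 11
morning = [0.9, 0.3] + [0.5] * 9      # high Ambiguity in the morning
afternoon = [0.5, 0.3] + [0.5] * 9    # Ambiguity drifts down intraday

print(toy_signal(morning, weights))    # strong signal
print(toy_signal(afternoon, weights))  # noticeably weaker signal
```

Only the Ambiguity slot changed between the two readings, yet the output drops meaningfully, which is the behavior described above and why a close-of-day reading is steadier.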