Drawing Inspiration From Unlikely Sources: What I Learned from Shazam

The other day, I read an article about how Shazam actually recognizes music so quickly and efficiently. How do you pinpoint a single 5-second song snippet within a database of 8 million+ songs? As it turns out, Shazam uses a process called ‘fingerprinting’ to generate condensed ‘fingerprints’ of information about every song. These fingerprints contain numeric representations of properties of the particular song, such as tempo, bandwidth, and the amplitude of its sound waves. Fingerprints, it turns out, are really similar in function to embeddings! If you think about it, a data embedding in my project would be a vector containing crucial information about a stock over some time period, including close price, open price, volatility, et cetera. Building off of this analogy, the query_embedding in Shazam would be the 5-second clip of music you play for it. Once Shazam receives this input, it parses the clip into several smaller clips (either 0.25, 0.5, or 1.0 seconds) and creates a fingerprint for each of the sub-clips. Once these fingerprints (like query_embeddings) are created, they can be matched against the whole database of fingerprints, returning the most similar one.
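To make the analogy concrete, here is a minimal sketch (in Python, with made-up feature names and array shapes, not Shazam's actual algorithm) of how a single query embedding could be matched against a database of stored embeddings using cosine similarity:

```python
# Minimal nearest-neighbour sketch: match one query embedding against a database
# of stored embeddings via cosine similarity. Shapes and features are hypothetical.
import numpy as np

def best_match(query_embedding: np.ndarray, data_embeddings: np.ndarray):
    """Return (index, similarity) of the stored embedding closest to the query."""
    # Normalize so that a plain dot product equals cosine similarity.
    q = query_embedding / np.linalg.norm(query_embedding)
    db = data_embeddings / np.linalg.norm(data_embeddings, axis=1, keepdims=True)
    similarities = db @ q
    idx = int(np.argmax(similarities))
    return idx, float(similarities[idx])

# Example: 8 stored "fingerprints", each with 4 features (e.g. open, close, volatility, volume).
data_embeddings = np.random.rand(8, 4)
query_embedding = np.random.rand(4)
print(best_match(query_embedding, data_embeddings))
```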
Here is the most important reason I am mentioning Shazam: the sequencing of fingerprints.
If Shazam matches two audio clips of 0.25 seconds, that is not enough to prove the two clips come from the same song. Plenty of different songs have short moments that sound the same, especially nowadays, so we need more proof to match two songs. Shazam’s clever approach is sequencing, wherein the app chooses the data fingerprint that most closely matches the query fingerprint, and then compares the following data fingerprints to the following query fingerprints. In essence, if one fingerprint match is found, you then compare the neighboring fingerprints, and if the neighbors all match as well, that confirms the two songs are the same. For my algorithm, this means I first have to find a single event match, then check whether the neighbors of this event match as well, and only if they do can I start considering the expected outcomes.
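Here is a rough sketch of that sequencing check, reusing the best_match idea from the earlier snippet; the window size and similarity threshold are arbitrary illustration values, not tuned ones:

```python
# Verify a match by checking that the neighbouring embeddings also agree.
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def sequence_match(query_seq, data_embeddings, start_idx, window=3, threshold=0.95):
    """After an initial hit at start_idx, require the next `window` neighbours to match too."""
    for offset in range(1, window + 1):
        if start_idx + offset >= len(data_embeddings) or offset >= len(query_seq):
            return False  # ran out of data to compare against
        if cosine_sim(query_seq[offset], data_embeddings[start_idx + offset]) < threshold:
            return False  # a neighbour disagrees, so reject the match
    return True
```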
The other beautiful thing about fingerprints or embeddings is that they are static, and can therefore be stored and accessed quickly, even offline. Consider it this way: a song, once released, does not change. This means the fingerprint for that song will always be the same, so it can be stored in some file of easily-accessible data rather than being recomputed before every search. The same applies to stocks (or pretty much any historical data), as past open/close prices for stocks don’t change spontaneously. If you have ever wondered how Shazam’s music recognition works so quickly, this property of fingerprints/embeddings is a big part of why.
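In practice, that could be as simple as computing the embeddings once and saving them to disk; a quick sketch, assuming the hypothetical data_embeddings array from above:

```python
# Historical embeddings never change, so compute them once and persist them.
import numpy as np

np.save("stock_embeddings.npy", data_embeddings)   # one-time, offline step
data_embeddings = np.load("stock_embeddings.npy")  # fast load when the trading algorithm starts
```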
If you want to learn more about the many intricacies of Shazam’s algorithms, here’s a more in-depth analysis of their music recognition.
This idea of sequencing embeddings feels like the final piece I needed for putting together my Word2Vec trading algorithm (though I’m sure there will be many more pieces to come), as I now understand how this code would work (a rough sketch in code follows the list):

  1. Train Word2Vec on data set
  2. Create embeddings for all stock data based on findings from Word2Vec training.
  3. Store all of these embeddings into a file.
  4. On the daily trading algorithm, every couple of minutes, create a query_embedding from real-time data. Then, attempt to match it against the stored data embeddings.
  5. If a match of great accuracy is found, proceed to sequencing (how large of a sequence match is enough, though?)
  6. Once a sequence passes our criteria for a ‘good enough match’, proceed to calculate expected outcome from this sequence (found by looking at what stock price changes followed this sequence of data in the past)
  7. Factor in both the % accuracy of the match between the query_embedding and the sequence of data_embeddings, and the % expected gain/loss.
  8. If a large loss is expected, then exit your position. If a large gain is expected, then invest (using a max_exposure function to ensure not too much capital is invested in a single place).
  9. Run code for hedging against large bets.
  10. Keep this process running throughout the trading day, while investing in low-risk stocks between investments based on Word2Vec generated predictions.
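To tie the steps together, here is a compressed Python sketch of that loop. Every helper here (market_is_open, get_realtime_window, embed, expected_outcome, exit_position, invest, max_exposure, hedge, park_in_low_risk) is a hypothetical stub for a piece I still have to write, and the thresholds are illustrative rather than tuned:

```python
# Compressed sketch of the trading loop above. All helpers are hypothetical stubs;
# best_match() and sequence_match() come from the earlier snippets.
import time

SIM_THRESHOLD = 0.97    # how close a single embedding match must be (steps 4-5)
SEQ_WINDOW = 3          # how many neighbouring embeddings must also match (step 5)
GAIN_THRESHOLD = 0.02   # expected move (2%) needed before acting (step 8)

def trading_loop(data_embeddings, historical_prices):
    while market_is_open():                                    # stub
        window = get_realtime_window()                         # stub: last few minutes of data
        query_seq = embed(window)                              # stub: same featurization used in training
        idx, sim = best_match(query_seq[0], data_embeddings)
        if sim >= SIM_THRESHOLD and sequence_match(query_seq, data_embeddings, idx, SEQ_WINDOW):
            expected = expected_outcome(historical_prices, idx, SEQ_WINDOW)  # stub: what followed this sequence historically (step 6)
            confidence = sim * abs(expected)                   # step 7: weigh match quality by expected move
            if expected <= -GAIN_THRESHOLD:
                exit_position()                                # stub: step 8, large expected loss
            elif expected >= GAIN_THRESHOLD:
                invest(size=max_exposure(confidence))          # stub: step 8, capped position size
                hedge()                                        # stub: step 9
        else:
            park_in_low_risk()                                 # stub: step 10
        time.sleep(120)                                        # re-check every couple of minutes
```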

*I feel like this process would be better conveyed through a diagram, so I’ll try to get one of those uploaded next time.
As I write this, I already see a potential problem: how should new daily stock data be treated? Once we train Word2Vec on our data set, we want to keep improving it by giving it new data, so should we just retrain the network every couple of weeks and update our embeddings based on the new findings? To me, it seems reckless to ignore all the new stock data being created every day, but how often should we update our set of data embeddings? This will be a source of thought for the next week, as I am beginning to actually write the algorithm I describe above (keep an eye out for findings and reports coming soon).
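One possible answer, assuming gensim's Word2Vec is the implementation: periodically fold the new, tokenized stock-event "sentences" into the existing model rather than retraining from scratch, then regenerate the stored embeddings. A sketch of gensim's incremental-training API (new_sentences is a hypothetical list of new tokenized sequences):

```python
# Periodically update an existing Word2Vec model with fresh data, then re-save it.
from gensim.models import Word2Vec

model = Word2Vec.load("stock_word2vec.model")      # previously trained model
model.build_vocab(new_sentences, update=True)      # register any newly-seen "words"
model.train(new_sentences,
            total_examples=len(new_sentences),
            epochs=model.epochs)
model.save("stock_word2vec.model")
# ...then rebuild and re-save the data embeddings (e.g. the .npy file) from the updated vectors.
```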
 

Chaos

A different type of chaos theory

I ran into another interesting article over the weekend because I was so drawn in by the title: “Why Stock Predicting AI Will Never Take Over the World” by Matt Wright. Right now, this is a highly polarized issue, with some claiming that the entire market will be automated in a decade or so, while others argue that “the market is an entirely human phenomenon” which cannot be recreated by self-learning bots. Mr. Wright is on the latter side of this debate, claiming that predicting the market is impossible due to Level 2 Chaos Theory (which is much less intense than the name suggests). The idea of Level 2 Chaos is that if you do magically come up with a 100% accurate prediction for the stock market, so many people will rush to profit off of it that the prediction will no longer be valid.
I responded to Wright’s story with the following (will update with a response if one arrives):

Mr. Wright, I agree with your claim that many people investing based on the same prediction will cause that prediction to become invalid, but don’t you think it is possible to counter L2CE in the stock market by keeping your predictions to yourself (and investing a controlled amount, rather than causing adverse effects by investing too much)? Or, alternatively, if you know many people are going to invest based on a prediction, couldn’t you hedge against the influx of investments by betting against the prediction?

Also, more generally, you claim that this L2CE will occur if people try to predict exact stock prices. Does this mean you think it is senseless to predict broader patterns in the market, rather than predicting what the price of a stock will be tomorrow?

In my opinion, Mr. Wright is correct, but this is not sufficient reason to say that stock-predicting AI will never take over the market (in fact, I would say stock-predicting AI has already ‘taken over’ the markets, in the sense that the most successful hedge funds use AI to guide their predictions and to shape their portfolios). An algorithm like mine, which works to predict patterns (greater shifts) in the market, would not suffer from Level 2 Chaos, because I am not single-handedly investing enough money to offset a larger market cycle, and I don’t intend to distribute my exact predictions every day. I wonder what others think about L2CE limiting the powers of trading algorithms? Are you on Mr. Wright’s side on this one?
In other financial news, stocks have not fared well so far this month, with this being (almost!) the worst start to a December for stocks since 1931. Some claim this apprehensive behavior in the markets is a direct result of uncertainties in foreign policy between the U.S. and China, while others attribute the decline to the limited number of trading sessions left in 2018 (investors wanting to save money for the holidays and not end the year on a bad note). In my opinion, however, this is mostly anticipation of the Federal Reserve’s final policy meeting of the year this Wednesday, where the Fed will decide whether or not to raise the federal funds rate. A couple of posts ago, I talked about how broader stock market behavior (like the graph of the S&P 500) moves almost inversely to the historical federal funds rate: as interest rates go up, stocks go down (with a lag of roughly four months), and vice versa. You can check this for yourself with the help of this graph:
[Chart: historical federal funds rate vs. stock prices]
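If you would rather check it numerically, here is one quick way, assuming you have monthly series for the federal funds rate and the S&P 500 (e.g. downloaded from FRED): shift the rate forward by a few months and look at its correlation with subsequent returns.

```python
# Quick, rough check of the claimed inverse relationship with a ~4-month lag.
import pandas as pd

def lagged_correlation(ffr: pd.Series, sp500: pd.Series, lag_months: int = 4) -> float:
    """Correlation between the funds rate and S&P 500 monthly returns lag_months later."""
    aligned = pd.DataFrame({
        "ffr": ffr.shift(lag_months),          # rate leads the market by lag_months
        "sp500_return": sp500.pct_change(),    # monthly percentage change in the index
    }).dropna()
    return aligned["ffr"].corr(aligned["sp500_return"])
```

A noticeably negative value would be consistent with the inverse relationship described above.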
With interest rates having been so low for so long, I expect the Fed to raise them, with most estimates calling for a hike of 0.25 percentage points. A rate hike this Wednesday would probably mean an immediate negative reaction and a slow start to 2019, followed by some overall growth and then a gradual decline in the markets. At the same time, I highly doubt that this slow end to the year foreshadows a greater, 1929-esque crash coming in mid-2019.
 
