Thursday, January 26, 2023

AI Investing?

If "AI" can write essays and create artwork, could it not also predict markets?  Or not?

This "AI" thing is being tossed around and I think it is, like everything else hyped on the Internet, overblown, overstated, and overworked.  First, we are told that it will take over the world.  Then it will take away jobs from artists and writers.  Next, who knows?

What most people are calling "Artificial Intelligence" is actually just neural network programming - programs that "learn" from input data and adjust themselves until they produce the correct output.  So you can "train" a network to recognize shapes - as a host of people in Buffalo, New York, are doing at a disused Tesla solar factory - by sitting behind a computer screen all day long, clicking on images of road signs, and "teaching" the neural network that, yes, that is a stop sign, and no, that is not a yield sign.
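To make this concrete, here is a toy sketch in Python of what that clicking-and-labeling loop amounts to under the hood.  The "features" and numbers are invented for illustration - real image training is far more involved - but the mechanism is the same: guess, compare to the human's label, nudge the weights, repeat.

    # A toy perceptron "learning" road signs from human-supplied labels.
    # Features and numbers are invented for illustration.

    labeled_examples = [
        ((0.9, 8), 1),   # (redness, sides): red octagon -> labeler clicks "stop sign"
        ((0.1, 3), 0),   # pale triangle -> labeler clicks "not a stop sign"
        ((0.8, 8), 1),
        ((0.2, 3), 0),
    ]

    weights, bias, rate = [0.0, 0.0], 0.0, 0.1

    for _ in range(100):                      # many passes over the labeled data
        for (x1, x2), label in labeled_examples:
            guess = 1 if weights[0]*x1 + weights[1]*x2 + bias > 0 else 0
            error = label - guess             # did the human agree?
            weights[0] += rate * error * x1   # nudge the weights toward the answer
            weights[1] += rate * error * x2
            bias += rate * error

    print(weights, bias)  # the adjusted weights are all the "knowledge" there is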

The problem with neural networks, as I noted before, is that, like teenagers, they "learn" things that are different from what you thought you were teaching them.  You set boundaries and a curfew, and the teenager doesn't learn to obey them - he learns how to sneak out the basement window without getting caught.

For example, in one celebrated story from the early days of neural networks - often told about the "Maverick" missile - a camera and a neural network were used to identify and distinguish between American and Russian tanks.  The developers "trained" the network with a series of photos of each.  The American tank photos were "beauty shots" made by the manufacturers - in broad daylight, all polished up for promotion.  The Russian photos were from spy cameras or telephoto lenses, showing tanks parked under trees or in the shade.  So when the system was tested, it tended to go after tanks in the shade and not in the sun - which wasn't of much use to anyone.  What the network "learned" was something different from what we thought we were teaching it.
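You can reproduce the tank problem in a few lines.  The numbers below are invented, but the principle holds: when brightness happens to track the label in the training photos, the network learns brightness - and it falls apart the moment that accidental correlation goes away.

    # A toy reconstruction of the tank story (invented data): in training,
    # brightness tracks the label perfectly; in the "field test" it doesn't.
    import random
    random.seed(1)

    def make_photo(is_friendly, in_training_set):
        # The feature we WANTED it to use - noisy, overlapping between classes:
        turret_shape = random.gauss(1.0 if is_friendly else 0.0, 1.5)
        if in_training_set:
            brightness = random.gauss(1.0 if is_friendly else -1.0, 0.2)  # sunny vs. shady
        else:
            brightness = random.gauss(0.0, 1.0)  # brightness no longer tracks the label
        return (turret_shape, brightness), 1 if is_friendly else 0

    train = [make_photo(i % 2 == 0, True) for i in range(400)]
    test = [make_photo(i % 2 == 0, False) for i in range(400)]

    w, b = [0.0, 0.0], 0.0                    # same perceptron learner as before
    for _ in range(50):
        for (x1, x2), y in train:
            err = y - (1 if w[0]*x1 + w[1]*x2 + b > 0 else 0)
            w[0] += 0.1*err*x1; w[1] += 0.1*err*x2; b += 0.1*err

    def accuracy(data):
        return sum((1 if w[0]*x1 + w[1]*x2 + b > 0 else 0) == y
                   for (x1, x2), y in data) / len(data)

    print("training accuracy:", accuracy(train))   # near-perfect - brightness gives it away
    print("field-test accuracy:", accuracy(test))  # little better than a coin flip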

So, hilarity ensues.  In another famous incident, Microsoft's "Tay" chatbot - a neural network programmed to chat with people online - was spouting racist and fascist propaganda and insulting people within hours of its release.  Microsoft had to shut it down.

In more recent days, people have complained about "AI Art" - created by asking a computer to generate an artwork in the style of a certain artist, with a certain subject matter.  The resulting "Art" is interesting, but it isn't hard to tell it is AI-generated.  The details are often off - hands with six fingers - and the backgrounds are busy and weird.  You can just sense there is a lack of - for want of a better word - Soul in the "painting."

Similarly, online AI text generators can churn out essays, but these are also easily identified as fake - they just don't come to the point or have any real message.  Some idiots have tried to pass off AI-generated essays in academic endeavors - only to be quickly found out.  It is like citing Wikipedia as a reference source in an academic paper (Wikipedia, being ever-changing, isn't a citable source itself - the references listed there might be of some use, but not the page).  The "Wiki" editing process - where an unknown number of authors and editors create and revise an entry - is akin to how neural networks operate.  And the results can be similar: successive edits often make a page read as odd or off, and glaring errors creep in.

Clearly "AI" is not ready for Prime Time.  AI-driven cars have tended to crash, shut down in the middle of a freeway, or run over small children.  AI-generated art just looks weird and "off".  AI-generated text has no real new content to offer, just a rehash of what humans have created.  Without training materials to work from, you cannot program a neural network.   And that raises the question - what would a neural network generate as content if the only training materials was from other neural networks?  Probably nothing.  Or it might devolve to weirdness in short order.

Some have wondered whether "AI" could be used in financial markets to buy and sell stocks, bonds, commodities, or currencies - or whatever.  The complicated ups and downs of markets could be used to "train" an AI to predict market performance.  If you could do this, you could buy and sell stocks - or derivatives - and make a pile of dough.  It would be the venerated "time machine" I have talked about for ten years.  Hey, if that worked, maybe it could predict winning lottery numbers, too!  I am being facetious, of course.

There are two schools of thought here.  One holds that if "AI" were used widely to make trading decisions, markets would become less volatile and more predictable, as computers would make emotionless trading decisions based on real data and not on hype and fantasy, as we see today.  The problem with this idea is the same problem the folks with the Maverick missile had - the training materials.  People trade today based on little more than stock price and online hype.  "XYZ stock went up yesterday!  So buy it now!  Trust me - I'm a guy you've never met on the Internet who has no financial interest in this!"
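It is easy to see why price history alone is thin gruel as a training material.  Here is a toy experiment - simulated data, not a trading model - based on the standard assumption that day-to-day price moves resemble a random walk.  A model fitted to yesterday's move calls tomorrow's direction about as well as a coin toss:

    # Fit "tomorrow's return from today's" on simulated random-walk returns,
    # then test out-of-sample.  Invented data, for illustration only.
    import random
    random.seed(42)

    returns = [random.gauss(0, 1) for _ in range(2000)]   # stand-in for daily moves
    train, test = returns[:1000], returns[1000:]

    xs, ys = train[:-1], train[1:]                        # least-squares fit
    mx, my = sum(xs)/len(xs), sum(ys)/len(ys)
    slope = sum((x-mx)*(y-my) for x, y in zip(xs, ys)) / sum((x-mx)**2 for x in xs)
    intercept = my - slope*mx

    hits = sum((slope*x + intercept > 0) == (y > 0)       # direction called right?
               for x, y in zip(test[:-1], test[1:]))
    print(f"directions called correctly: {hits/len(test[:-1]):.1%}")  # hovers near 50%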

But actual data is hard to come by, and often it is intentionally kept secret.  In order to understand the value of a company, you need to know more than pricing trends, P/E ratios, EPS, debt-to-equity, and all the "numbers" that analysts like to use.  I wrote some patents for a company that had a charismatic leader/inventor who came up with all the good ideas.  He died unexpectedly and the entire company sort of evaporated.  How could AI predict that?  How could anyone?  Beyond that, there are things like product quality and reliability, as well as labor strife, supply-chain problems, government regulations, and so on and so forth - any one of which is enough to sink a company into oblivion or raise it above all others.

Take Apple, for example.  People like to think it is just one success story built upon another.  But such is not the case.  The Apple II sold OK, but sales plummeted once the IBM PC came out.  The original Mac (or indeed the one today) was a curiosity with a slim market share - and it was basically the company's only product at the time.  Apple almost went bust - they fired Steve Jobs - and later tried licensing the Mac as an "open architecture" like the ubiquitous PC, but that didn't work, either.

What saved Apple was the iPod.  And what saved the iPod was that Apple managed to buy up the first few years' production of the tiny 1.8-inch hard drives that drove early iPods, giving the company not only a lead over competitors, but a virtual monopoly for a few years.  Apple was able to leverage this into the iPhone, which put it ahead of competitors once again - at least for a few years.  Whether it can continue to innovate at that rate remains to be seen.  As the smartphone market matures, there isn't a lot of headroom for novel "must-have" features.

The point is, could an AI have seen all of this coming based on share-price trends or annual reports?  Few humans could have predicted that unlikely chain of events.  Maybe an "AI" could help predict when a speculative bubble is forming - and, more importantly, when it will burst.  Maybe.  But could an "AI" predict the price of eggs going up due to bird flu?  Or the price of natural gas spiking because Russia invaded Ukraine?

What is interesting is that markets, as they are, already function as a sort of human neural network, much as I alluded to earlier with Wikipedia.  The market valuation of a commodity, stock, or bond represents the weighted average opinion of millions of people, based on any number of criteria.  The price of a stock may be pegged by one analyst who pores over the company's books and carefully studies its business and market.  Another "analyst" may be merely hyping the stock for nefarious reasons.  And small retail investors - the real wild cards - may be buying shares based on emotional needs.  How "AI" could predict all of that is something I don't understand.

On the other hand, maybe it could.  I saw a video that simulated evacuations from high-rise buildings.  When the simulation assumed everyone would leave the building in an orderly fashion, the building emptied out quickly.  Actual tests with real humans, however, produced evacuation times that were twice as long - if not longer.  So the researchers changed the simulation to include a small percentage of people going the wrong way or just running around in circles - there is always the one jackass going back for his briefcase when the building is on fire.  With that change, the model tracked the real-world tests almost exactly.
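A crude sketch of that kind of simulation - toy parameters, not the actual study - shows the effect.  March agents single-file down a corridor toward an exit, let a few of them wander the wrong way now and then, and the average evacuation time balloons:

    # Toy agent-based evacuation: the exit is at position 0, and "erratic"
    # agents sometimes move the wrong way, blocking everyone behind them.
    import random
    random.seed(7)

    def evacuate(n_agents=50, length=100, fraction_erratic=0.0):
        positions = set(random.sample(range(1, length), n_agents))
        erratic = {p for p in positions if random.random() < fraction_erratic}
        steps = 0
        while positions and steps < 100_000:    # safety valve for a toy model
            steps += 1
            for p in sorted(positions):         # agent nearest the exit moves first
                wrong_way = p in erratic and random.random() < 0.3
                target = p + 1 if wrong_way else p - 1
                if target == 0:
                    positions.discard(p); erratic.discard(p)     # out the door
                elif target not in positions:                    # cell ahead is free
                    positions.discard(p); positions.add(target)
                    if p in erratic:
                        erratic.discard(p); erratic.add(target)
        return steps

    def average_time(fraction, trials=20):
        return sum(evacuate(fraction_erratic=fraction) for _ in range(trials)) / trials

    print("orderly crowd:", average_time(0.0), "steps")
    print("10% erratic:  ", average_time(0.10), "steps")   # noticeably longer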

So maybe it is possible.  But it would require that the "AI" track not only stock "metrics" but the emotional, human element as well.

But again, the share price you see in the market is indeed the result of millions of neural networks - human neural networks - passing judgement.  And in most cases, they are right.  It is no different from the odds in horse racing.  You can look at the horse, the track, its racing record, and the jockey - or you can just look at what everyone else is betting on.  In most cases, the odds pretty accurately reflect what is going to happen in the race.  The only way to "win big" at the track is to defy the odds and win a long-shot bet.  You have to hope everyone else is wrong and you are right.  We saw this in 2008 with "The Big Short" - a few people realized early on that the subprime mortgage market was unsustainable, and used credit default swaps to bet against it.  Could AI predict that?
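And the parallel to markets is close, because in parimutuel betting the track doesn't even set the odds - they fall out of the betting pool itself, that is, out of everyone else's opinion.  A quick sketch, with invented figures:

    # Parimutuel odds: the payout on each horse is just its share of the
    # betting pool (after the track's cut).  Figures are invented.

    def parimutuel_odds(bets, takeout=0.15):
        pool = sum(bets.values()) * (1 - takeout)
        return {horse: round(pool / amount - 1, 2)   # odds-to-1 per $1 bet
                for horse, amount in bets.items()}

    print(parimutuel_odds({"Favorite": 6000, "Contender": 3000, "Longshot": 1000}))
    # {'Favorite': 0.42, 'Contender': 1.83, 'Longshot': 7.5}

The more money the crowd piles onto a horse, the shorter its odds - the tote board is a live readout of the crowd's collective prediction, just as a stock price is.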

I doubt it, at least not now or in the foreseeable future.  You see, AI makes mediocre art.  It writes mediocre essays.  It makes mediocre driving decisions.  It would likely make mediocre investment decisions as well.  It would probably just say what most investment advisors say - put your money in a number of rational things and hold on for the long haul.  I doubt it could systematically find the long-shot bets that pay off, every time.

Because even if it could, every other trader out there would find the same bet with their own "AI," and as a result the "odds" would decrease to the point where it was no longer a long shot with a big payoff.  And maybe this is why some claim that AI would stabilize markets - deflating bubbles before they inflate, perhaps.

But frankly, I think all this talk about "AI" taking over the world is a little premature.  For the most part, it seems like a lot of hype - like the Metaverse or the idea that food delivery is "The Next Big Thing!" and that companies selling online taxi services and renting out your spare bedroom are worth billions - even as they lose money.

It seems that the last decade has been one of tech hype.  We are told that "tech" will save us all and make us all so rich we won't have to work.  Robots will do everything and we will all get guaranteed annual income so we can live in tiny homes.  At the dawn of 2023, I think many of these dreams (or nightmares) are evaporating in the cold harsh light of reality.

Maybe we need to step back from this idea that tech will save the world.  Because a lot of this "tech" seems to be little more than hype - or in many instances, outright fraud.