Is AI really this stupid? This full-page advertisement recently appeared on the back cover of The Economist magazine. The advertiser claims a leading role in averting train delays because they can help schedule door maintenance.
Perhaps I should point out some of the things wrong with this ad:
- Their scheduler is called an “AI-powered predictive maintenance solution”.
- They imply that malfunctioning doors, suffering from forgotten maintenance, are a significant cause of whole-train delays, fitting in somewhere alongside the other 98% of delays caused by actual problems like track overcrowding and engine failure.
- They can’t even get a client to testify in support of this display of software prowess, citing only “a global transportation giant”.
In actuality, scheduling door maintenance is a very small part of running a train network. It can be optimized, but it's not very important. This trivializes the entire concept of AI. After seeing this type of promotion, the employees of companies like Fractal.ai probably believe that "AI" consists entirely of this type of minor optimization, or worse, given that the marketing geniuses at Fractal.ai presumably considered even less interesting and impactful projects before settling on this one.
At a recent AI meetup, I found myself debating a group of six people who argued that AI will be limited to minor optimizations for the foreseeable future. This is dangerous thinking. They have lost sight of the things that AI already does at superhuman scale, such as tracking the hourly movement of millions of humans by faces, license plates, phones, and other indicators. They can imagine only our current AI: basically a repackaging of simple statistics that sorts doors into "good" and "needs maintenance", and that only gets smarter by collecting more data.
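To make concrete just how minor this kind of "predictive maintenance" can be, here is a minimal sketch in Python of the sort of simple statistics being repackaged; the function name, the threshold, and every number are invented for illustration, not taken from any real product.

```python
# A hypothetical sketch of "AI-powered predictive maintenance":
# a fixed-threshold rule on a failure statistic, dressed up as intelligence.
# All names and numbers below are invented for illustration.

def needs_maintenance(cycles, failures_per_1k, threshold=0.5):
    """Flag a door when its expected failure count exceeds a fixed
    threshold -- a one-line statistic, not machine intelligence."""
    expected_failures = failures_per_1k * (cycles / 1000)
    return expected_failures > threshold

# Toy fleet of two doors with made-up usage and failure figures.
doors = [
    {"id": "D1", "cycles": 12000, "failures_per_1k": 0.02},
    {"id": "D2", "cycles": 30000, "failures_per_1k": 0.04},
]

# Sort the fleet into "good" and "needs maintenance".
flagged = [d["id"] for d in doors
           if needs_maintenance(d["cycles"], d["failures_per_1k"])]
print(flagged)  # only the heavily used, failure-prone door is flagged
```

Collecting more data only tunes the threshold; nothing in this loop ever becomes smarter in any interesting sense.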
These people dismiss the possibility that AI may advance in rapid and unpredictable ways. They aren’t considering the possibility that we may solve the problem of learning from a small number of important examples, or deducing cause and effect, or making a machine that designs better AI. They aren’t following the use of evolutionary algorithms to design completely new network and AI architectures. Stuck in the role of making small optimizations for giant corporations, they have lost their imagination, their curiosity, and their moral concern for a future of good and evil.