Last week’s podcast featured another discussion on artificial intelligence (AI). But maybe the correct term is “alien intelligence”, with an important distinction: alien doesn’t mean extraterrestrial.
A thinking thing that learns to think in an entirely different way from you and me can be both intelligent and entirely alien in how it approaches the same problems. That thought occurred to me after reading the excerpt below from a comment on the Hacker News website.
It shows that ways of thinking that feel natural to you and me – because we’re biological thinking things with a bias toward pattern recognition – may not be natural at all to an intelligence that is NOT biological and which “learns” to think in a different way. From the post by a Go player:
When I was learning to play Go as a teenager in China, I followed a fairly standard, classical learning path. First I learned the rules, then progressively I learn[ed] the more abstract theories and tactics. Many of these theories, as I see them now, draw analogies from the physical world, and are used as tools to hide the underlying complexity (chunking), and enable the players to think at a higher level.
For example, we’re taught [to] consider connected stones as one unit, and give this one unit attributes like dead, alive, strong, weak, projecting influence in the surrounding areas. In other words, much like a standalone army unit.
These abstractions all made a lot of sense, and [feel] natural, and certainly [help] game play — no player can consider the dozens (sometimes over 100) stones all as individuals and come up with a coherent game play. Chunking is such a natural and useful way of thinking.
But watching AlphaGo, I am not sure that’s how it thinks of the game. Maybe it simply doesn’t do chunking at all, or maybe it does chunking its own way, not influenced by the physical world as we humans invariably [are]. AlphaGo’s moves are sometimes strange, and couldn’t be explained by the way humans chunk the game.
It’s both exciting and eerie. It’s like another intelligent species opening up a new way of looking at the world (at least for this very specific domain). And much to our surprise, it’s a new way that’s more powerful than ours.
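Take the human side of that description literally for a moment: the “chunking” the commenter describes is, in programming terms, connected-component grouping. A minimal sketch of that human-style representation might look like the following. It is purely illustrative – a guess at how a person might formalise their own chunking – and says nothing about how AlphaGo actually represents the board.

```python
# Illustrative sketch only: group connected stones of one colour into a unit
# and attach a crude attribute (liberty count) to it. This mirrors the human
# "chunking" described above; it is NOT a claim about AlphaGo's internals.

from collections import deque

EMPTY, BLACK, WHITE = ".", "B", "W"

def chunk_board(board):
    """Return (colour, stones, liberty_count) for each connected group.

    `board` is a list of equal-length strings using '.', 'B', 'W'.
    """
    rows, cols = len(board), len(board[0])
    seen = set()
    groups = []

    def neighbours(r, c):
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                yield nr, nc

    for r in range(rows):
        for c in range(cols):
            colour = board[r][c]
            if colour == EMPTY or (r, c) in seen:
                continue
            # Flood fill to collect one connected unit of same-coloured stones.
            stones, liberties = set(), set()
            queue = deque([(r, c)])
            seen.add((r, c))
            while queue:
                cr, cc = queue.popleft()
                stones.add((cr, cc))
                for nr, nc in neighbours(cr, cc):
                    if board[nr][nc] == EMPTY:
                        liberties.add((nr, nc))
                    elif board[nr][nc] == colour and (nr, nc) not in seen:
                        seen.add((nr, nc))
                        queue.append((nr, nc))
            groups.append((colour, stones, len(liberties)))
    return groups

if __name__ == "__main__":
    toy = [
        ".BW..",
        ".BW..",
        "..W..",
        ".....",
    ]
    for colour, stones, libs in chunk_board(toy):
        # Few liberties means the unit is "weak"; zero means it is captured.
        print(colour, sorted(stones), "liberties:", libs)
```

Those named units, with their army-like attributes, are the vocabulary human players think in. The commenter’s point is that AlphaGo may not build anything like them at all.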
The commenter calls it “exciting and eerie”. Exciting, certainly. But eerie may not be the right word. An “alien intelligence” is unlikely to value the same things we do – even allowing for the wide variability in what human beings value. Would an alien intelligence value life?
There is an application for all this to financial markets, by the way. RBS announced it is laying off 550 staff, 200 of whom are in the “advice” portion of the business. It’s replacing them with a “robo adviser”. The “adviser” puts a prospective client through a series of questions and then churns out an investment plan. Et voilà!
You can still get advice from a real person. But you’ll need £250,000 to speak to a flesh-and-blood adviser. Below that threshold, it’s telephone prompts and surveys for you.
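For flavour, the conventional, human-designed version of that “advice” is easy to caricature in a few lines of code. The questions, scoring, and allocation buckets below are invented for illustration – they are not RBS’s product or anyone’s actual methodology.

```python
# A deliberately crude caricature of a questionnaire-driven robo adviser:
# score a handful of survey answers, then map the score onto a canned
# stock/bond split. All numbers and questions are made up for illustration.

def risk_score(age: int, years_to_goal: int, loss_tolerance: int) -> int:
    """Combine a few survey answers into a rough risk score (0-10).

    loss_tolerance: 0 = "sell everything at the first dip", 10 = "unbothered".
    """
    score = loss_tolerance
    score += 3 if years_to_goal > 20 else 1 if years_to_goal > 10 else 0
    score -= 2 if age > 60 else 0
    return max(0, min(10, score))

def investment_plan(score: int) -> dict:
    """Map the score onto a canned asset allocation (the 'advice')."""
    equity = 30 + 5 * score          # 30%..80% equities
    bonds = 90 - equity              # most of the remainder in bonds
    cash = 100 - equity - bonds      # plus a fixed 10% cash buffer
    return {"equities": equity, "bonds": bonds, "cash": cash}

if __name__ == "__main__":
    score = risk_score(age=45, years_to_goal=22, loss_tolerance=5)
    print("risk score:", score)
    print("plan:", investment_plan(score))   # et voilà: an "investment plan"
```

The point is not the particular numbers; it’s that every step encodes human assumptions – diversification, a drift toward bonds with age, retirement as the end goal. Which brings us back to what an alien intelligence would do with the same brief.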
Would an “alien robo adviser” come up with the efficient market hypothesis? Would it employ a balanced portfolio? Diversification? Asset allocation? Or would it find the entire idea of spending the last thirty years of your productive life living off the income generated by your investments completely preposterous?