- In fact, that would be Natural Intelligence! Intelligence is intelligence – a way of processing information to arrive at inferences, recommendations, predictions and so forth …
Maybe it is that contemporary AI is actually just NI!
Point #1: Machines are thinking like humans rather than acting like humans
- Primitives inspired by computational neuroscience, like Deep Learning, are becoming mainstream. We are no longer enamored with Expert Systems that learn the rules and replace humans; we would rather have our machines help us chug through the huge amounts of data.
We would rather interact with them via Google Glass – a two-way, highly interactive medium that acts as a sensor array and augments cognition with a digital overlay over the real world.
- In fact, until now, our computers were mere brutes, without the elegance and finesse of the human touch!
- Now computers are moving away from Newtonian determinism toward probabilistic generative models.
- Instead of relying on greedy algorithms, machines are now being introduced to Genetic Algorithms and Simulated Annealing. They now realize that the local minima a greedy search settles into, or answers computed via exhaustive brute force, are not the solution to every problem.
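As a sketch of why Simulated Annealing can escape the local minima that trap a greedy descent, here is a minimal annealing loop. The double-well function, starting point, temperature schedule, and parameters below are illustrative choices for this post, not any standard library API:

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=2.0, cooling=0.995, iters=5000, seed=42):
    """Minimize f starting from x0, occasionally accepting worse moves
    so the search can climb out of local minima (unlike greedy descent)."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_fx = x, fx
    t = t0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        fc = f(cand)
        # Always accept better moves; accept worse ones with
        # probability exp(-delta / t), which shrinks as t cools.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < best_fx:
                best_x, best_fx = x, fx
        t *= cooling  # cool down: fewer uphill moves as the search settles
    return best_x, best_fx

# A double-well function: a shallow local minimum near x = +1 (f ≈ +0.3)
# and a deeper global minimum near x = -1 (f ≈ -0.3).
double_well = lambda x: (x * x - 1) ** 2 + 0.3 * x

# Start in the wrong (shallow) well; greedy descent would stay there.
x, fx = simulated_annealing(double_well, x0=1.0)
```

A pure greedy search started at `x0=1.0` would roll into the shallow well and stop at f ≈ +0.3; the early high-temperature phase lets the annealer hop the barrier at x = 0 and settle into the deeper well on the negative side.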
- They now have knowledge graphs, and the capability to infer by traversing those graphs and applying the associated logic.
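Inference via graph traversal can be illustrated with a toy knowledge graph. The triples and the `infer_is_a` helper below are hypothetical examples for this post, not any real knowledge-graph system:

```python
from collections import defaultdict, deque

# A tiny knowledge graph as (subject, predicate, object) triples.
# The entities and facts are illustrative, not from any real KG.
triples = [
    ("penguin", "is_a", "bird"),
    ("bird", "is_a", "animal"),
    ("bird", "can", "fly"),
    ("penguin", "cannot", "fly"),
    ("animal", "is_a", "living_thing"),
]

graph = defaultdict(list)
for s, p, o in triples:
    graph[s].append((p, o))

def infer_is_a(entity):
    """Infer every ancestor of an entity by following 'is_a' edges
    transitively -- a minimal example of inference by graph traversal."""
    seen, queue = set(), deque([entity])
    while queue:
        node = queue.popleft()
        for p, o in graph[node]:
            if p == "is_a" and o not in seen:
                seen.add(o)
                queue.append(o)
    return seen

ancestors = infer_is_a("penguin")  # {"bird", "animal", "living_thing"}
```

No single triple says a penguin is a living thing; that fact only falls out of traversing the chain penguin → bird → animal → living_thing, which is the kind of inference the bullet above refers to.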
Of course, deterministic transactional systems have their important place – we don’t want a probabilistic bank balance!
Point #2: We don’t even want our machines to be like us
- The operative word is “Augmented Cognition” – our machines should help us where we are not strong and augment our capabilities. More later …
- Taking a cue from contemporary media, “Person of Interest” is a better model than “I, Robot” or “Almost Human” – a Mr. Spock rather than a Sonny: logical, but resorting to the improbable and the random once the impossible has been eliminated!
Point #3: Now we are able to separate Interface from Inference & Intelligence
- This, I think, is the important distinction. Interface, Inference & Intelligence are the Three Amigos. Let me explain.
- The New Yorker asks, “Why can’t my computer understand me?” Finding answers to questions like “Can an alligator run the hundred-meter hurdles?” is syntax.
- NLP (Natural Language Processing) and its first cousin NLU (Natural Language Understanding) are not intelligence; they are interface.
- In fact, the team that built IBM Watson realized that “they didn’t need a genius, … but build the world’s most impressive dilettante … battling the efficient human mind with spectacular flamboyant inefficiency”.
Taking this line of thought to its extreme, one can argue that Google (Search) itself is a case in point: an ostentatious and elaborate infrastructure for what it does … no intelligence whatsoever – Artificial or Natural! It should have been based on a knowledge graph rather than a referral graph. Of course, in a few years they will no doubt have made huge progress.
- BTW, Stephen Baker has captured the “Philosophy of an Intelligent Machine” very well.
- I have been, and still am, keeping track of Watson’s progress.
- Since then, IBM Watson itself has made rapid progress in the areas of Knowledge Traversal and Contextual Probabilistic Inference, i.e., ingesting large volumes of unstructured data/knowledge and reasoning about it.
- I am not trivializing the effort and significance of machines understanding the nuances of human interaction (speech, sarcasm, slang, irony, humor, satire et al); but we need to realize that this is not an indication of intelligence, or a measure of what machines can do.
Human Interface is not Human Intelligence; the same goes for machines. They need not look like us, walk like us, or even talk like us. They just need to augment us where we are not strong … with the right interface, of course.
- Gary Marcus, in the New Yorker article “Can Super Mario Save AI”, says: “Human brains are remarkably inefficient in some key ways: our memories are lousy; our grasp of logic is shallow, and our capacity to do arithmetic is dismal. Our collective cognitive shortcomings are so numerous … And yet, in some ways, we continue to far outstrip the very silicon-based computers that so thoroughly kick our carbon-based behinds in arithmetic, logic, and memory …”
Well said, Gary. Humans and machines should learn from and complement each other … not mimic each other … And there is nothing Artificial about it …
- In another New Yorker column, Gary talks about the hype around AI. Yep, AI is very cyclical – AI Winters are legendary, and so are the hypes …
I really wish we would take the “Artificial” out of AI – just incorporate what we are learning about ourselves into our computers, and leave it at that!