Editor’s Note: AI may sound like science fiction, but it is not. Just last month a computer appears to have passed the Turing Test for the first time by convincing judges that it was a 13-year-old boy. Ray Kurzweil, author of The Singularity Is Near (the AI visionary bible), leads engineering at Google, which has been making massive investments in AI, robotics, and quantum computing to crack the AI problem. He has also launched Singularity University, a think tank/accelerator, with NASA, Elon Musk, and many others (including MR’s own Kyle Nel of Lowe’s) to help bring AI and other new technologies to life. There are hundreds of other examples in universities, private businesses, public companies, and government labs globally. Financial resources surpassing the GDP of many countries are deployed annually to realize the promise that AI holds for our future.
Since AI is inherently based on advanced mathematical models and is driven by data, it dovetails with the world of MR in many ways, and we are already feeling the impact through early-stage advances in related technologies such as text analytics, predictive analytics, “Big Data”, data mining, and agent-based modelling. With that in mind, regular blog contributor and marketing scientist Kevin Gray has put together a fantastic primer for everyone interested in the topic, especially insights pros.
This is a topic that will only gain more attention as the future unfolds, so I hope you find it as useful as I do!
By Kevin Gray
For decades, computers and robots able to think have captivated our imaginations, sometimes terrifying us and sometimes charming us. Few will fail to recall R2-D2 and C-3PO from Star Wars, or the loquacious android Data in Star Trek: The Next Generation, who also helped popularize the term “neural networks.” We humans empathize with robots, sometimes to a disturbing degree.
On the other hand, computers and robots are increasingly viewed as threats to our livelihoods, and it’s not difficult to find blogs and newspaper articles offering tips on how to compete with machines in your job hunt. (Hint: be more charming.) Best-selling author Ray Kurzweil has written at length on Artificial Intelligence and even predicts a “singularity,” the point at which Artificial Intelligence exceeds human intelligence, by the year 2045, with radical implications for humanity.
Humans have contemplated the human mind and human behavior for centuries, and names such as Aristotle, Hobbes, Descartes and Hume will be familiar to all of us. The earliest calculating machine was probably built in the 1620s by Wilhelm Schickard, a German scientist, and the first programmable machine, a loom that used punch cards, in 1805 by Joseph Marie Jacquard. In historical terms, however, AI is a very new field that only began to receive serious attention after World War II. The term “Artificial Intelligence” was not coined until 1955, by John McCarthy, who was then at Dartmouth College. It encompasses several disciplines, for example psychology, neuroscience, natural language processing, machine learning and robotics, and has progressed in a somewhat bumpy fashion over the past half century. It has grown into a major industry, and the subfields within AI have become better integrated, as has AI with other disciplines.
It would be hard not to be interested in the subject of Artificial Intelligence, but it’s also hard to separate fact from science fiction. Not being a computer scientist, let alone a specialist in robotics, I found it difficult to get my human head around what is really happening in this field, so to educate myself I decided to look beyond popular media. Artificial Intelligence: A Modern Approach (Russell and Norvig), now in its third edition, appears to be the leading textbook on the subject, and I also found Probabilistic Graphical Models: Principles and Techniques (Koller and Friedman) and Data Clustering: Algorithms and Applications (Aggarwal and Reddy) instructive regarding specialized topics within AI. Wikipedia is also a good source, as is What Is Artificial Intelligence?, a wide-ranging interview with John McCarthy.
What follows is a snapshot of what I’ve learned from my informal research.
Something that struck me immediately was that the mathematics and mathematical notation were clearly terrestrial in origin. Probability also plays a leading role in AI. Likewise, terms such as Bayesian networks, SVM, state space, MCMC, utilities and game theory will not be alien to most marketing scientists, and those with experience in data mining and predictive analytics in particular will note many similarities with their own work. Computer scientists, of course, will feel even more at home in this field.
First, though, what is Artificial Intelligence? The term “agent” appears recurrently in this literature and refers to something that perceives and acts in an environment. Russell and Norvig define AI as “the study of agents that receive percepts from the environment and perform actions. Each such agent implements a function that maps percept sequences to actions, and [there are] different ways to represent these functions, such as reactive agents, real-time planners, and decision-theoretic systems.” AI is thus concerned with both reasoning and behavior, and different researchers have variously emphasized thinking humanly, thinking rationally, behaving humanly and behaving rationally (i.e., getting it “right,” given the goals).
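To make Russell and Norvig’s definition concrete, here is a toy sketch (my own illustration, not from any AI textbook) of an agent function: a thermostat-style reactive agent. An agent function in principle maps the whole percept sequence to an action; a purely reactive agent, as below, consults only the latest percept. All names and thresholds here are made up for illustration.

```python
def reactive_agent(percept_sequence):
    """Agent function: maps a percept sequence to an action.

    This reactive agent ignores history and acts only on the
    most recent percept (a temperature reading).
    """
    temperature = percept_sequence[-1]  # latest percept only
    if temperature < 18:
        return "heat"
    elif temperature > 24:
        return "cool"
    return "idle"

print(reactive_agent([20, 17]))  # -> heat
print(reactive_agent([22]))      # -> idle
```

A real-time planner or decision-theoretic agent would differ precisely in using more of the percept history, an explicit model of the environment, or expected utilities, rather than a fixed lookup from the current percept.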
From an AI perspective, there are three fundamental ways to see, or represent, the world. There are atomic representations, in which each state of the world is treated as a black box, i.e., something taken as a given that we are unable to explain. There are also factored representations, in which a state is a set of attribute/variable pairs. Finally, there are structured representations, where the world consists of objects and relationships among them. The last of these presents the greatest challenge to programmers.
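The three kinds of representation can be illustrated with a toy sketch of my own (all the names below are hypothetical, not drawn from the text):

```python
# Atomic: the state is an opaque label, a black box with no
# internal structure the program can inspect.
atomic_state = "S1"

# Factored: the state is a set of attribute/value pairs.
factored_state = {
    "location": "kitchen",
    "battery": 0.8,
    "holding_object": False,
}

# Structured: the state consists of objects and the relations
# among them, which is the hardest form to program against.
structured_state = {
    "objects": ["cup", "table", "robot"],
    "relations": [
        ("on", "cup", "table"),
        ("near", "robot", "table"),
    ],
}
```

The progression from atomic to structured representations trades simplicity for expressiveness: a structured state can say *why* two states differ, not just *that* they differ.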
A perfectly rational agent is able to find the best solution, given the information it has or has discovered. In reality, the calculations required to achieve perfect rationality are too time-consuming in most settings, so perfect rationality is not a practical goal. Bounded optimality, in which the agent behaves as well as possible within its computational constraints, is more realistic. The goal is the optimal program, not the optimal solution, and the agent must be able to adapt to the environment in which it finds itself and to “guess” efficiently and accurately. It must also be able to learn from experience and deal with ambiguity and uncertainty, hence the relevance of Bayesian probabilistic reasoning to AI. Near-instantaneous access to massive databases will facilitate these goals, but the programming will not be trivial, and the AI counterpart to general intelligence remains elusive.
The contention that machines could operate as if they were intelligent is called the weak AI hypothesis while the assertion that machines that do so are in fact thinking, not merely simulating thinking, is known as the strong AI hypothesis. The distinction may not have practical relevance to many working in the field, however. The well-known Turing Test was proposed by Alan Turing in 1950 and intended as an operational definition of intelligence. A computer “passes” if a human interrogator is unable to tell whether written responses to written questions came from a person or from a computer. (My own half-serious variant is the more stringent Cowell Test in which, to win, the program must fool the human judge into believing it is Simon Cowell.)
AI is off the drawing board and already used in medical diagnosis, education, navigation, operations, planning and scheduling, security, simultaneous interpretation and, of course, marketing and advertising. Since 1999, for instance, the Educational Testing Service in the US has used software to grade millions of essay questions on GMAT exams. A company based in Hong Kong has recently appointed an AI as an official and equal board member. Even primitive expert systems are examples of AI and, in one form or another, it is working in the background with or without our being aware of it.
Computers have made small but noteworthy discoveries in astronomy, mathematics, chemistry and other fields requiring performance at human expert level. They do well at combinatorial problems (e.g., chess), but are now also able to learn from experience. That they can at times best human experts will come as no surprise to those who’ve worked in predictive analytics, since an important reason for using algorithms is that they frequently outperform human experts at certain tasks. Their use is not simply a matter of cost.
None of this means computers use insight and understanding to perform these jobs but it does underscore that the same behavior can originate in different processes.
So where is AI headed? Artificial Intelligence has come a very long way though, obviously, “Data” remains a TV character. To quote directly from Russell and Norvig:
“Very powerful logical and statistical techniques have been developed that can cope with quite large problems, reaching or exceeding human capabilities in many tasks – as long as we are dealing with a predefined vocabulary of features and concepts. On the other hand, machine learning has made very little progress on the important problem of constructing new representations at levels of abstraction higher than the input vocabulary. In computer vision, for example, learning complex concepts such as Classroom and Cafeteria would be made unnecessarily difficult if the agent were forced to work from pixels as the input representation; instead, the agent needs to be able to form intermediate concepts first, such as Desk and Tray, without explicit human supervision.”
My self-study has been brief but here are some of my key takeaways:
• AI is no longer Sci-Fi. It’s very real but still very much a work in progress.
• It’s hugely complex and some of the best human brains are hard at work on it, though there remains disagreement among experts regarding important issues.
• AI is not an entirely new discipline without roots, and mathematics and probability lie at its heart.
• Machines still cannot truly think and lack genuine self-awareness. They do not have feelings, irrespective of our own feelings about them.
• Regarding future developments, there is no compelling need to emulate the precise functioning of the human brain which, at any rate, is still inadequately understood.
• AI need not be perfect to be very powerful. Moreover, some problems are unsolvable.
• At some point AI will begin to have an enormous impact on our lives and perhaps eventually human nature as we know it will cease to exist.
• We need to be on-guard against potential abuses of the technology.
The foregoing pertains to software; how advances in hardware such as quantum computing will influence developments in AI is another subject, even farther from my areas of expertise. Again, I am not an authority on AI, and this has been a short rundown of what I think I know.
That is, of course, if I really am…
1. Robot Abuse Is A Bummer For The Human Brain
2. If You Want To Avoid Being Replaced By A Robot, Here’s What You Need To Know
3. What Is Artificial Intelligence?
4. Venture Capital Firm Hires Artificial Intelligence To Its Board Of Directors