Artificial intelligence already has a wide-ranging impact on our lives; it powers our smart appliances, helps us shop and do research, and assists doctors with diagnoses. But how did we get here? To get the full picture, we’ve dug into the history, benefits, key concepts, and future projections for the use of AI. Here are the basics, from Alan Turing to machine learning.
We’ve all heard the term, but what exactly is artificial intelligence (AI)? At its broadest, AI is the concept of non-human entities possessing human-level intelligence and performing intelligent tasks, an idea that dates back to the ancient world. The version of AI we’re most familiar with today, however, is that of computers performing tasks that normally require human intelligence.
There are two versions of AI floating around: the “narrow” AI that we deal with more and more in our everyday lives—think search engines, spam filters, and robotic floor cleaners—and then there is the concept of a fully developed or “general” AI that will someday operate independently of human beings. Narrow AI is geared toward a host of tasks that AI can perform today to varying degrees of success, including facial and voice recognition, pattern recognition, and research capabilities. The second concept of general AI is still the stuff of science fiction, but as an idea it’s an important component of the history of artificial intelligence.
The idea of AI was born in the myths of metal automatons constructed by the Olympian gods of ancient Greece and winds its way through the history of simple computational machines from before the Industrial Revolution. The idea of machine intelligence related to computers, however, originates with the initiator of modern computing himself: Alan Turing.
This British computer scientist, mathematician, logician, and cryptanalyst worked on decoding German communications at Bletchley Park during WWII. In 1950 he published a paper entitled “Computing Machinery and Intelligence,” proposing what has since been named “The Turing Test”: a human judge converses with an unseen human and an unseen machine through a series of questions and tries to tell which is which; if the judge can’t reliably do so, the machine is said to have passed.
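To make the setup concrete, here is a toy sketch of the test in Python. The questions and the machine’s canned reply are invented for illustration; real entrants would be far more capable.

```python
import random

def machine_reply(question: str) -> str:
    # Stand-in for a chatbot; any conversational program could sit here.
    return "That's an interesting question. Could you say more?"

def human_reply(question: str) -> str:
    # A real person types the answer at the keyboard.
    return input(f"(human respondent) {question}\n> ")

# Randomly hide which of the two labels, A or B, is the machine.
responders = [machine_reply, human_reply]
random.shuffle(responders)
respondents = dict(zip("AB", responders))

for question in ["What did you have for breakfast?",
                 "Write me a short poem about rain."]:
    for label in "AB":
        print(f"{label}: {respondents[label](question)}")

guess = input("Which respondent is the machine, A or B? ").strip().upper()
print("Correct!" if respondents.get(guess) is machine_reply
      else "Fooled -- the machine passed this round.")
```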
The term “artificial intelligence” itself shows up on the historical radar in the 1950s, coined by pioneering American computer scientist John McCarthy. Since those early days, the evolution of computers has gone hand in hand with the development of AI, and machines and software have been created that can perform ever more complicated acts of “intelligence.”
With the expansion of the digital world, the development of AI applications has exploded in recent years and permeated nearly all aspects of the economy. It may soon be easier to list the things that don’t involve some kind of AI technology. Some key examples of AI in action today are:
While they aren’t the standard yet, self-driving cars are widely expected to populate our streets and may eventually become a primary mode of personal transportation. Using AI to read their surroundings and avoid obstacles and accidents, they are not so different from commercial airplanes, which already spend much of each flight on autopilot with self-adjusting instruments.
AI can tackle translation between a growing number of language pairs and can interpret speech nearly seamlessly. The latter, combined with AI applications that can master huge amounts of data, has enabled innovations like the digital assistants Siri and Alexa, as well as their rivals.
The ability of computers not only to process large volumes of data for specific goals, but also to pick up on nuances of language and learn from mistakes (see below), lets AI tackle data interpretation for better results in everything from business and customer service applications to cancer diagnosis, algorithmic financial trading, and weather forecasting.
Robotics and artificial intelligence power the development of machines that can replace and surpass workers in areas from complex manufacturing to microsurgery to military action. AI also makes the sensors associated with Internet of Things-enabled devices intelligent—your sensor-equipped toothbrush does more than gather data about how you brush; it also tells you how to improve.
Though our encounters with AI in science fiction/fantasy movies like The Terminator involve killer humanoid robots, AI technology today mostly works to make our lives easier. Its benefits reach from the smart appliances in our homes to how we shop, do research, and receive medical care.
It’s one thing to program AI to perform a predictable task, and quite another for robots or software to become more intelligent over time. Though the two are closely related, “machine learning” is not just another term for AI; it is a subset of it. AI is the umbrella term for any instance of a machine operating intelligently.
Key to the development of machine learning technology, however, has been teaching computers how to teach themselves. Using machine learning, software can go beyond simply performing intelligent tasks to actually learning from its mistakes and successes using algorithms, data, and experience. With machine learning, AI gets smarter as it operates.
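To see what “learning from mistakes” looks like in practice, here is a minimal sketch in Python: a one-parameter model that nudges its weight a little on every example it gets wrong. The data points and learning rate are invented for illustration.

```python
# Toy "learning from data": the model starts out knowing nothing and
# improves by shrinking its prediction error on each pass over the examples.
examples = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]  # (input, target) pairs
weight = 0.0            # initial guess: predicts 0 for everything
learning_rate = 0.01

for epoch in range(200):
    for x, target in examples:
        prediction = weight * x
        error = prediction - target          # how wrong the model was
        weight -= learning_rate * error * x  # nudge the weight to reduce that error

# The weight settles near 2, a rule learned from the data rather than programmed in.
print(round(weight, 2))
```

Real systems juggle millions of parameters instead of one, but the loop is the same: predict, measure the error, adjust.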
The use of artificial neural networks (ANNs), which let computers categorize information in a way loosely inspired by the human brain, has enabled a more advanced form of machine learning called “deep learning,” which aims to solve increasingly complex problems. Applications of machine learning are already in use in a wide variety of contexts, from financial advising to healthcare to the self-driving cars mentioned above, which scan their surroundings and improve their ability to recognize objects over time.
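As a rough illustration of the idea behind neural networks, here is a tiny two-layer network, written with NumPy, that learns the XOR function, a problem a single-layer model cannot solve. The network size, learning rate, and iteration count are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: XOR inputs and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Random initial weights for a 2 -> 4 -> 1 network.
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):
    # Forward pass: compute predictions from the current weights.
    h = sigmoid(X @ W1 + b1)
    pred = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the prediction error to every weight.
    err = pred - y
    grad_out = err * pred * (1 - pred)
    grad_hidden = (grad_out @ W2.T) * h * (1 - h)

    W2 -= lr * h.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ grad_hidden
    b1 -= lr * grad_hidden.sum(axis=0, keepdims=True)

# Typically approaches [[0], [1], [1], [0]] as training proceeds.
print(pred.round(2))
```

Deep learning stacks many more of these layers and trains them on far larger datasets, but the principle of adjusting connection weights to reduce error is the same.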
As Eliezer Yudkowsky of the Machine Intelligence Research Institute puts it, “Artificial Intelligence is not settled science; it belongs to the frontier, not to the Textbook.” The possibilities for AI are seemingly endless, and so is the investment. Gil Press of Forbes quotes a 2016 Forrester Research report predicting a 300% increase in investment in AI technologies from 2016 to 2017.
So what will artificial intelligence in the future look like? AI will most certainly increase its presence in our daily lives as it revolutionizes the workplace. Predictable tasks and the management of big data will more and more commonly be handed over to bots. However, Nanette Byrnes of MIT Technology Review writes that “despite fears that AI will lead to the widespread replacement of workers, human judgment and feedback remain integral to improving machine-learning systems.”
One thing is certain: labor, business, and consumer practices will increasingly involve interaction and integration with AI technology, breaking away from the current picture in which AI and workplace automation figure mainly as a threat. A 2016 Accenture report on the future of AI predicts that these technologies will, in fact, increase labor productivity by upwards of 30% in a host of developed economies.
Benefiting from the positive impact of AI will mean staying in tune with the opportunities it offers and integrating them into business models. A good place to start is applying AI to your Customer Relationship Management (CRM) system. The future of successful business will require professionals who are well educated about AI technology and open to creative solutions and harmonious cooperation between human beings and machines.
Stay on top of AI developments and solutions in business and everyday life to remain competitive going forward. Download our e-book on AI for CRM to get started putting this knowledge into practice for your business.