One of the biggest news subjects in recent years has been artificial intelligence. We have read about how Google DeepMind's system beat the world's best player at Go, widely considered the most complex game humans have created; watched IBM's Watson beat champion humans on the quiz show Jeopardy!; and taken part in a wide-ranging discussion of how A.I. applications will replace many of today's human jobs in the years ahead.

Way back in 1983, I identified A.I. as one of 20 exponential technologies that would increasingly drive economic growth for decades to come. Early rule-based A.I. applications were used by financial institutions to evaluate loan applications. But once the exponential growth of processing power reached an A.I. tipping point, and we all started using the Internet and social media, A.I. had enough power and data (the fuel of A.I.) to enable smartphones, chatbots, autonomous vehicles and far more.

As I advise the leadership of many leading companies, governments and institutions around the world, I have found that we all have different definitions of, and different levels of understanding about, A.I., machine learning and other related topics. If we don't share common definitions and a common understanding of what we are talking about, we will likely create an increasing number of problems going forward. With that in mind, I will try to add some clarity to this complex subject.

Artificial intelligence applies to computing systems designed to perform tasks usually reserved for human intelligence. These systems use logic, if-then rules, decision trees and machine learning to recognize patterns in vast amounts of data, provide insights, predict outcomes and make complex decisions. A.I. can be applied to pattern recognition, object classification, language translation, data transformation, logistical modeling and predictive modeling, to name a few. It's important to understand that all A.I. relies on vast amounts of quality data and advanced analytics technology. The quality of the data used will determine the reliability of the A.I. output.
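To make the if-then-rules style of A.I. concrete, here is a minimal sketch of the kind of rule-based loan screen early financial institutions used. The function name and thresholds are purely illustrative, not any real lender's policy:

```python
def rule_based_loan_decision(credit_score, debt_to_income):
    """Return a decision using fixed if-then rules; nothing is learned from data."""
    if credit_score >= 700 and debt_to_income < 0.35:
        return "approve"
    if credit_score >= 640 and debt_to_income < 0.25:
        return "review"  # borderline cases go to a human
    return "decline"

print(rule_based_loan_decision(720, 0.30))  # approve
print(rule_based_loan_decision(600, 0.40))  # decline
```

The limitation is visible at a glance: every rule is hand-written, so the system can never get better than the rules its programmers thought to include. That gap is what machine learning, described next, closes.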

Machine learning is a subset of A.I. that utilizes advanced statistical techniques to enable computing systems to improve at tasks with experience over time. Chatbots like Amazon's Alexa, Apple's Siri, or any of the others from companies like Google and Microsoft all get better every year thanks to our constant use and the machine learning that takes place in the background.
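"Improving with experience" can be illustrated in a few lines. This toy next-word suggester gets better the more text it sees, loosely analogous to how assistants improve with use; it is purely illustrative and not any vendor's actual method:

```python
from collections import Counter, defaultdict

class NextWordSuggester:
    def __init__(self):
        # For each word, count which words have followed it so far
        self.counts = defaultdict(Counter)

    def learn(self, sentence):
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.counts[prev][nxt] += 1  # experience accumulates here

    def suggest(self, word):
        options = self.counts[word.lower()]
        return options.most_common(1)[0][0] if options else None

s = NextWordSuggester()
s.learn("good morning team")
s.learn("good morning everyone")
s.learn("good night")
print(s.suggest("good"))  # morning (seen twice, versus night once)
```

Unlike the if-then rules above, no one wrote a rule saying "good" is followed by "morning"; the system inferred it from data, and every new sentence it learns from sharpens its suggestions.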

Deep learning is a subset of machine learning that uses advanced algorithms to enable an A.I. system to train itself to perform tasks, by exposing multi-layered neural networks to vast amounts of data and then using what has been learned to recognize new patterns in that data. Learning can be human-supervised, unsupervised and/or based on reinforcement, the approach Google's DeepMind used to teach its system to beat humans at the complex game of Go. Reinforcement learning will drive some of the biggest breakthroughs.
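The reinforcement idea, learning from trial, error and reward rather than labeled examples, fits in a short sketch. This is classic tabular Q-learning on a toy five-cell corridor (systems like DeepMind's combined the same principle with deep neural networks and a vastly harder game):

```python
import random

N_STATES, GOAL = 5, 4   # cells 0..4; reaching cell 4 ends an episode
q = [[0.0, 0.0] for _ in range(N_STATES)]  # q[state][action]: 0 = left, 1 = right

random.seed(0)
for _ in range(300):                      # episodes of trial and error
    state = 0
    while state != GOAL:
        if random.random() < 0.3:         # explore sometimes...
            action = random.randrange(2)
        else:                             # ...otherwise exploit what is known
            action = max((0, 1), key=lambda a: q[state][a])
        nxt = min(max(state + (1 if action else -1), 0), N_STATES - 1)
        reward = 1.0 if nxt == GOAL else 0.0
        # Core Q-learning update: reward now, plus discounted future value
        q[state][action] += 0.5 * (reward + 0.9 * max(q[nxt]) - q[state][action])
        state = nxt

policy = [max((0, 1), key=lambda a: q[s][a]) for s in range(GOAL)]
print(policy)  # [1, 1, 1, 1]: the agent has learned to move right in every cell
```

No one told the agent which way to move; it discovered the winning strategy purely from the reward signal, which is exactly the property that lets such systems surpass their human teachers.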

Autonomous computing uses advanced A.I. tools such as deep learning to enable systems to be self-governing, capable of acting on situational data without human command. A.I. autonomy includes perception, high-speed analytics, machine-to-machine communications and movement. For example, autonomous vehicles use all of these in real time to successfully pilot a vehicle without a human driver.
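At its core, an autonomous system runs a continuous sense-analyze-act loop. The sketch below is schematic: the "sensor readings" are canned values and the braking figure is illustrative, whereas a real vehicle would fuse camera, radar and lidar data in real time:

```python
def decide(distance_to_obstacle_m, speed_mps):
    """Map situational data to an action without human command."""
    # Rough stopping distance at ~7 m/s^2 braking (illustrative number only)
    stopping_distance = speed_mps ** 2 / (2 * 7.0)
    if distance_to_obstacle_m < stopping_distance:
        return "brake"
    if distance_to_obstacle_m < stopping_distance * 2:
        return "slow"
    return "cruise"

# Simulated sensor readings: (distance to obstacle in meters, speed in m/s)
for reading in [(80.0, 20.0), (40.0, 20.0), (20.0, 20.0)]:
    print(decide(*reading))  # cruise, then slow, then brake
```

The decision logic here is trivially simple on purpose; the point is the shape of the loop, where perception feeds analysis and analysis feeds action, dozens of times per second, with no human in the chain.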

Augmented thinking: Over the next five years and beyond, A.I. will become increasingly embedded, at the chip level, into objects, processes, products and services, and humans will augment their personal problem-solving and decision-making abilities with the insights A.I. provides, getting to better answers faster.

A.I. advances represent a Hard Trend that will happen and continue to unfold in the years ahead. The benefits of A.I. are too big to ignore and include:

  1. Increasing speed
  2. Increasing accuracy
  3. 24/7 functionality
  4. High economic benefit
  5. Ability to be applied to a large and growing number of tasks
  6. Ability to make invisible patterns and opportunities visible

Technology is neither good nor evil; what matters is how we as humans apply it. Since we can't stop the increasing power of A.I., I want us to direct its future, putting it to the best possible use for humans. Yes, A.I. — like all technology — will take the place of many current jobs. But A.I. will also create many new jobs, if we are willing to learn new things. There is an old saying: "You can't teach an old dog new tricks." With that said, it's a good thing we aren't dogs!

Start off the new year by anticipating disruption and change: read my latest book, The Anticipatory Organization. Click here to claim your copy!

Author: Daniel Burrus

Daniel Burrus is considered one of the world's leading technology forecasters and innovation experts, helping clients profit from the technological, social and business forces that are converging to create enormous, untapped opportunities. The New York Times has referred to him as one of the top three business gurus in the highest demand as a speaker. He is a strategic advisor to Fortune 500 executives, helping them apply his anticipatory methodologies to elevate planning, accelerate innovation and transform results. He is the author of seven books, including The New York Times bestseller Flash Foresight and his latest bestseller, The Anticipatory Organization. He is a serial entrepreneur who has founded six businesses, four of which were national leaders in their first year.