AI has moved faster than we expected

In the third part of this mini-series on ethics in AI, I want to talk about how we can educate ourselves to guard against the unintended consequences of our actions.

A few years ago, my team led an AI project with a vehicle engine manufacturer. The remit was to use data gathered from a range of parameters to measure engine wear and predict possible failures. The results were fairly good, but the deeper insight, one we had not intended to uncover, was that different drivers had different styles of operating the vehicle.

Some were easy on the engine and parts, while others accelerated too hard or braked too late, causing extra mechanical stress. Essentially, the company could now identify the drivers whose style caused engine wear that would require more frequent maintenance. So, some bright spark decided it would be a good idea to factor this into each driver’s annual performance review.
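To make the dynamic concrete, here is a minimal, hypothetical sketch of how this can happen. This is not the actual project code; the data, feature names, and model choice are all invented for illustration. It shows how a model trained purely to predict failures can, almost as a by-product, rank individual drivers by the stress their style puts on the engine.

```python
# Hypothetical sketch: a failure-prediction model that incidentally
# exposes per-driver behavior. All data and names are invented.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)
n_trips, n_drivers = 5000, 20

# Each (made-up) driver has a persistent "aggressiveness" that shapes
# their telemetry: harder braking, more throttle spikes, higher revs.
style = rng.uniform(0.5, 2.0, n_drivers)
driver_id = rng.integers(0, n_drivers, n_trips)
agg = style[driver_id]

df = pd.DataFrame({
    "driver_id": driver_id,
    "hard_brake_events": rng.poisson(2 * agg),
    "throttle_spikes": rng.poisson(3 * agg),
    "avg_rpm": rng.normal(2200 + 300 * agg, 200),
})

# Invented ground truth: aggressive inputs raise near-term failure risk.
risk = (0.05 * df["hard_brake_events"] + 0.03 * df["throttle_spikes"]
        + 0.0001 * (df["avg_rpm"] - 2000).clip(lower=0))
df["failure_soon"] = (risk + rng.normal(0, 0.1, n_trips)) > 0.35

# Train the model for its intended purpose: maintenance scheduling.
features = ["hard_brake_events", "throttle_spikes", "avg_rpm"]
model = GradientBoostingClassifier().fit(df[features], df["failure_soon"])

# The unintended capability: averaging risk scores per driver yields a
# de facto driver ranking, which is what nearly ended up in reviews.
df["predicted_risk"] = model.predict_proba(df[features])[:, 1]
print(df.groupby("driver_id")["predicted_risk"].mean()
        .sort_values(ascending=False).head())
```

Notice that nothing in the training objective mentions drivers at all; the ranking falls out of a simple aggregation after the fact, which is exactly how unintended capabilities tend to appear.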

What started out as a way to improve maintenance became a potential stick to wield over employees who had never been rated on such things before. Thankfully, the plan was shelved, but it is a great example of the unintended consequences of AI.

When designing that AI, we simply did not predict how powerful it could be, or the level of detail and the causal relationships it would be able to extrapolate from the data.

This is a common problem with AI. Theory around the subject has existed for a hundred years or more, and we have all enjoyed the searching philosophies of sci-fi books, but even a century of theory has not really prepared us for how fast the technology has developed in recent years. In fact, it has taken some large, very public fallout to wake us up.

The Facebook election scandal was an eye-opener for a lot of people in the AI industry. We perhaps predicted that something like the manipulation of democratic elections through AI was possible, but we imagined it would never happen, at least not in the foreseeable future.

In a way, that served as a wake-up call: AI needs to be built with the right intentions, and those intentions need to be well guarded. AI has the power to do huge amounts of good, but that comes with a caveat if it is left uncontrolled. For example, personalized marketing gives people access to the products they want at better prices, but it can also be used to manipulate people’s choices.

How do we guard against the negative influence of AI?

First and foremost, we need to stop talking only about the negatives. Negativity makes a great headline and a fine piece of clickbait, but it ignores the enormous good that AI is already doing. There are always two sides to a story, and we need to tell both. Some stunning breakthroughs are taking place, and people outside the IT industry would understand AI far better if they saw more of the many good news stories that exist around the subject. That said, we also need to be open about the negatives so that we can fix them, and so that we get better at predicting worst-case scenarios and putting up guard rails.

For me, education is the key. It needs to happen in two places: in the classroom and in companies.

We have all seen those pictures of a teacher holding up a card that says “I’m teaching my kids about web security, please share to show them how fast and how far this can go”. The best thing about that is that classroom time is being spent on things that will really affect young people. However, the sign should also say “…and how long it can last”, because young people need to understand that every interaction they have online right now may affect their lives years down the line.

Guardians of the Galaxy director James Gunn was fired over offensive jokes he had made in tweets seven years earlier, tweets he had deleted and therefore presumed would never come back to haunt him. Data does not forget. It can empower and enhance the lives of young people, but it also has the power to damage them. This is a message we need to get across.

And in businesses, we need to be talking about the ethics of AI and its potential unintended consequences. AI’s business value is absolutely undeniable, but it does have its own idiosyncrasies.

I believe that AI will become such an important part of every business that it will be quite normal for companies of all sizes to have an AI Evangelist or a Chief AI Officer on board. The role will encompass not only leveraging AI to drive business outcomes but also ensuring that it is ethical AI, with security built in to protect against bad actors.

At the end of the day, any technology by itself is not good or bad; it’s all about how we choose to use it.

Note: This article was originally published on LinkedIn.

Beena Ammanath

Executive leader with demonstrated success at solving complex business problems with simplicity and teamwork, leveraging technology. Leads change from vision through execution at scale, across startups, large multinational corporations, and M&A.

Successful at building, inspiring, and retaining high-performance global teams across diverse industry verticals, spanning e-commerce, IoT, marketing, telecom, retail, fintech, manufacturing, and services domains.
