We are looking at some of the ethical issues surrounding AI that any company should be aware of before setting out on the road to improving business through artificial intelligence. This is not a cautionary tale; as I said in the introductory post, I truly believe that all companies will need AI in the future. In fact, I am convinced that you need to get on board that train right now, before it leaves the station. The reason we talk about ethics is not to turn businesses away from AI; it is to help them understand the issues that may hold back or even corrupt their use of the technology and render it ineffective for the business. By knowing what can go wrong, you can make sure you do it right.

Where are companies making mistakes with AI?

One of the biggest problems with AI is bias. Ideally, an artificial intelligence should draw on as many data sources, representing as wide a range of viewpoints, as possible. And that data should be clean, readable, and something conclusions can actually be drawn from.

If you imagine a new AI implementation as a newborn child with a tabula rasa (if you subscribe to that theory), then it starts with a blank slate and begins to understand the world around it through everything it is exposed to. To simplify it further: if you speak Romanian at home, your child will grow up speaking Romanian. The child cannot possibly speak Japanese, English, and Polish unless it is somehow exposed to them. The input dictates the output in both machines and humans.

The problem with AI comes when we knowingly or unknowingly expose the engine to data that tells only one story or does not show the whole picture. MIT proved this by creating an AI with psychopathic tendencies simply by exposing it to only one type of information. Sounds like the stuff of nightmares? You betcha. But outside of the research lab, it can be a nightmare for businesses and organizations if they accidentally do the same thing.

  • Nikon created a camera that could detect if someone blinked at the crucial moment—but did not take into account that not all people around the world have the same shaped eyes.
  • AI trained to recognize shoes also had shape issues, failing to identify high heels as a type of footwear at all.
  • Software created to predict criminal activity showed bias against black people.
  • Winterlight Labs is building auditory tests for neurological diseases like Alzheimer’s disease, Parkinson’s, and multiple sclerosis. But its 2016 technology only worked for English speakers of a particular Canadian dialect.

Such tales of AI failures are becoming a dime a dozen. They are usually the result of well-intentioned innovations that went a little wrong, and they can be solved by going back to the data and providing the AI with more varied sets to examine, thus removing the narrow input that creates a biased feedback loop.
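In practice, “going back to the data” often starts with something as simple as checking how the categories in a training set are distributed. Here is a minimal, purely illustrative sketch in Python (the dataset, attribute name, and 10% threshold are hypothetical, not taken from any of the cases above) of the kind of audit that can surface the imbalance that lets a category like high heels slip through:

    from collections import Counter

    def audit_label_balance(examples, attribute, warn_below=0.10):
        """Report the share of each value of `attribute` and flag
        any value that falls below a minimum share of the data."""
        counts = Counter(ex[attribute] for ex in examples)
        total = sum(counts.values())
        report = {}
        for value, count in counts.most_common():
            share = count / total
            report[value] = share
            if share < warn_below:
                print(f"WARNING: '{value}' is only {share:.1%} of the data; "
                      "consider collecting more varied examples.")
        return report

    # Hypothetical footwear dataset where high heels are barely represented.
    shoes = ([{"category": "sneaker"}] * 700
             + [{"category": "boot"}] * 250
             + [{"category": "high heel"}] * 50)
    print(audit_label_balance(shoes, "category"))

A check like this does not remove bias by itself, but it makes the gap visible early, while it is still cheap to fix.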

These stories can be humorous or only mildly offensive when it is just a minor data snag. But they are deadly serious when racial profiling and personal liberty are at stake. And for the businesses developing AI solutions that take the wrong path, the damage to their public image and financial bottom line can be irreparable.

Is there a solution?

When we think about solving the issue of data diversity, the first reaction is often simply to suggest bigger data sets. Surely greater diversity is the answer? And, yes, diversity is the answer… but not data diversity alone. The real solution is to take a step further back and look at the diversity of the people designing and building the AI machine. The first step to eliminating biases in the artificial intelligence is to eliminate biases in the human intelligence that creates it.

Silicon Valley is quite open about its own diversity problems and the fact that much of the leadership in the industry consists of white men. It is an issue with implications in many fields, but it is particularly important in AI: a more diverse set of people commissioning software, and then validating the AI’s performance as it starts to learn, will consider a wider range of potential pitfalls and spot issues in the early data that others could not see. This can ultimately affect the results of the AI and fundamentally impact the business.

Machines keep making the same mistakes because they are designed and built by the same subset of people. We don’t know for sure, but perhaps the failure to recognize high heels as footwear came about because the people who fed the initial examples of shoes into the AI were not the kind of people who wear heels.

Women and minorities are an essential addition to the design process of AI, not as a box-ticking exercise but because it will lead to better solutions. In my twenty-plus years as a woman and an ethnic minority in Silicon Valley, I have seen some positive change, especially in minority admissions at the college level (even if they are not always translating to the boardroom), but there is a long way to go. In fact, in my role on the advisory board of a college, we see fewer and fewer young women signing up to study computer science.

This is a worrying trend, but AI is actually one area where there is a great deal of hope that a more diverse range of people can contribute. You don’t need a computer science degree to be involved in the process: people can transition and bring value from pretty much any realm of business, be it finance, HR, legal, or a more humanistic background. This means that while the Age of AI is upon us, there is no reason for an age of excuses; nothing stops us from making great AI with the diverse range of inputs needed to eliminate data biases.

So, if you are thinking about an AI solution, don’t think of data bias as an unavoidable problem. Instead, ask whether you have a team of people who will approach the AI with something other than a single point of view. And if you don’t, start hiring before you start firing up your AI machine.

 

Note: This article was originally published on LinkedIn.

 

Beena Ammanath

Executive leader with demonstrated success at solving complex business problems with simplicity and teamwork, leveraging technology. Leads change from vision through execution at scale, across startups, large multinational corporations, and M&A.

Successful at building, inspiring, and retaining high-performance global teams across diverse industry verticals, spanning e-commerce, IoT, marketing, telecom, retail, fintech, manufacturing, and services domains.


This article is shared on the Shenzhen blog with the author's permission. It was previously published on the author's social media and other platforms.