“The fact is that we are transforming as species into a technological species – that is a fact.”

Nick Bostrom

In an interview for the Axios on HBO documentary series, Elon Musk, when asked about his neuroscience company Neuralink's project of building an interface to the brain, responded that his company was working on an electrode-to-neuron interface at a micro level.

He explained it as a means of achieving symbiosis with artificial intelligence in the long term. He stressed that it was important that intelligence be democratized and "not held in a purely digital form by governments and large corporations".

The approach being adopted by Musk's company is not a novel one. In an article published in Nature, a somewhat similar approach was described as "using a brain implant to deliver electric pulses that alter neural activity". This is what is referred to as deep brain stimulation.

However, the ambition of contemporary AI extends beyond the limited use of deep brain stimulation. It encompasses a scenario in which everyone becomes 'hyper-smart' — as Musk highlighted in the interview, agreeing with the interviewer's proposition that in order to beat machines one has to merge with them.

Even the famous historian Yuval Noah Harari, speaking at the World Economic Forum, spoke of the hacking of human beings, saying that we are on the way to learning how to engineer bodies, brains and minds.

According to Neuralink's website, its mission is to develop ultra-high-bandwidth brain-machine interfaces to connect humans and computers. This has raised philosophical speculation as to who will exercise effective control over the merged space of humans and AI.

Though the narrative promises to make human beings more competent by giving their brains access to the same data-processing capacities as these machines, it is not known whether the human brain will retain its consciousness and (ir)rationality through the evolution that follows.

Is changing the brain, or the way the mind functions, conducive to being human — without even venturing into the question of whether it is good or bad? The question is as pragmatic as it is philosophical in its essence. It seems that everything finally boils down to one's value system.

A thought system is internalized, and if the external feed to that system, as well as the brain's processing of that feed, could be controlled in any manner whatsoever, then whether we would still call human beings free beings is an issue worth considering.

Would this make us more competent, or would it merely make our 'will' vulnerable to explicit external control — taking away any possibility of our realizing that our mind, and thus everything inherent to our 'being', has become subject to something that is not 'us'?

Human behavior can easily be shaped by external stimuli without people even realizing it, if the stimuli are sneaky and fed slowly — as suggested by Jaron Lanier, the founding father of virtual reality.

The changing contours of how the human brain will work are raising several issues that ought to be pondered not only by intellectual think tanks in the field of AI but also by the general public.

The very definition of 'smartness' or 'intelligence' seems to be shifting — away from how far one can make one's natural brain work, and toward how far AI can be used to complement or even take over the functions of the human brain.

When an external 'entity' — whether you call it artificial 'intelligence' or not — 'tampers with', 'guides' or 'hijacks' your thought system, what are the likely consequences?

Does it depend on the 'will' of the AI, given that AI is designed to learn on its own, leaving its original human programming out of consideration? Putting in place laws that hold AI, and/or its human controllers if any, responsible and liable for 'rogue' actions is a good idea.

However, whether AI will be subject to human-made law is a relevant question in this regard. If AI is learning from human behavior, will it be able to leave behind the bad things — racism, parochialism, gender bias and the like — that are manifested on such a large scale in our society? If AI could figure its way out of these and then make humans live the life they ought to, that would normatively be the better scenario.

Regardless, do we really aspire to be 'beings-without-faults'? It seems that AI-ing human intelligence is going to unfold several unconventional objective and subjective dimensions of human existence in the AI future.