Assuming that the future will be characterised not only by advanced general AI and superintelligence but also by a merger between the human ‘mind’ and AI, it is interesting to imagine how conflicts of interest might play out in that AI future.

Society comprises people; people (whether as individuals or as groups, institutionalised or not) have different interests; these interests conflict with one another, especially because resources are limited and the politics of allocating resources to serve those interests (‘prioritization’) mostly produces skewed results, since policy making is hijacked by a few. Drawing a broad conclusion, all conflicts in the world are the result of the dynamics described in the preceding sentences.

Human beings have interests because they need to survive not only biologically but also socially, by continuously participating in the process of advancing in life according to standards set by society. Society, in order to set those standards, assigns value to certain things, and human beings strive to attain and accumulate those things so as to progress socially. Economic status is one such standard on which the social system is based. In the process of what can be called development, human beings tend to exclude one another because resources are limited. In other words, everyone is trying to make it to the top, according to standards set by society, using the available limited resources, and in the process ends up working against the interests of other people. This causes conflicts of interest and gives rise to other problems.

Though cooperation can be observed, transiently, wherever the attainment of a common goal is concerned, it is pertinent to note that the common goal is relevant not because it serves, in essence, as a platform for the whole of society to come together and strive for a shared aim, nor because society offers a pure incentive to cooperate; it is relevant because it is a cumulation of individualistic goals. In other words, cooperation is just a more efficient way of achieving individual personal goals, even when these are referred to as common goals, and this is the result of how the societal structure has been shaped by the human factors influencing it.

Machines/robots/AI (‘AI’) do not and will not have interests, because they are outside the social system and thus do not need those interests satisfied for social survival. Even if human beings, being conscious, perceive AI as part of, or an extension of, the social scenario, AI remains inherently outside that domain. Even if machines were to become conscious beings, which is very unlikely, they would not be conscious the way human beings are. This implies that AI and humans cannot share the same conceptualisation of social dynamics, because their understanding of it is rooted in their consciousness, which cannot be the same. Therefore AI, even if conscious, will not engage in conflicts of interest and prioritization the way humans do. It could be something else, but not this.

In the case of humans and robotised humans (which is what one can conclude from the work being done by companies like Neuralink and DeepMind) in the AI future, no one would need to worry about the satisfaction of needs, in view of unlimited resources (extra-terrestrial mining and artificial generation of food) and exponential economic growth driven by increased productivity and output.

In closing, society might witness the absence of the major types of conflict that we have witnessed throughout the evolution of civilisation.

Author: Paramjeet Singh Berwal

Paramjeet holds an LL.M. from MIPLC, a collaboration between the Max Planck Institute for Innovation and Competition (Germany) and the George Washington University (USA). He is often invited as an expert on AI policy making. In addition to being a qualified lawyer with extensive experience, he is an invited lecturer at several universities.