
Is extended intelligence the answer to AI uncertainty?

There’s a rising trend towards extended intelligence, which seeks to change our relationship with AI and machines forever.


Saturday, May 23, 2020


As it stands, dishonest data use, biased outcomes and questions about job security feed a ‘human vs AI’ mentality. This adversarial relationship between humans and machines creates an uncertain future for AI.


But what if things were different? There’s a rising trend towards extended intelligence, which seeks to change our relationship with AI and machines forever.


Looking to the future, could extended intelligence be the answer to today’s AI uncertainty?




What is ‘extended intelligence’?

Extended intelligence is the blending of AI technology with human intelligence. It refers to the use of artificial intelligence to enhance human intelligence, and vice versa.


So, extended intelligence is about the integration between humans and machines.


Perhaps the more literal side of this is the idea of a physical integration between machines and humans. It involves exploring ways to physically connect computers and the human brain. So, you might think of technology like wearables or even brain implants.


But extended intelligence isn’t just about becoming a cyborg. It’s about creating a better partnership between humans and machines. It involves finding uses for AI that support and integrate with human abilities.





What does extended intelligence need?

  • Physical interface


Extended intelligence will involve some form of physical integration with machines.


This might include wearable tech that can integrate with our abilities and inform our behaviour. Or, more drastically, it could include implants and augmentations that connect our brains to our computers. This would call for a huge boost to general trust in artificial intelligence. After all, it’s one thing to accept AI in your gadgets. It’s another thing entirely to accept it in your brain.


  • Participatory design


At its heart, extended intelligence is about blurring the lines between machines and humans. A key component in achieving this is participatory design. This is an approach that attempts to include all stakeholders — from developers to end-users — in the design process. This helps to generate trust in AI technology and promotes an open and honest approach to data use. (There’s no sketchy data harvesting, for example.)





But isn’t that just science fiction?

The idea of integrating AI with human intelligence sounds closer to the realms of fantasy than reality. (Even with the advancements in technology we’re seeing today.) But that doesn’t mean we aren’t heading towards it.


For example, Elon Musk’s company Neuralink has started to develop and test implants and electrodes that fan out into the brain. (And neuroscientists haven’t immediately scoffed at the idea, either.) The hope is that Neuralink could one day help to create a symbiosis between humans and AI.


Focusing more on the relationship between AI and humans, meanwhile, there’s the Council on Extended Intelligence (CXI). This is an endeavour to change the relationship between humans and AI machines. The focus lies on participatory design and prioritising people over profit.


Indeed, far from the realms of science fiction, extended intelligence is an area that people are starting to explore in the real world.





AI uncertainty

There is an awkward interplay between man and machine these days. As artificial intelligence becomes more and more of a reality, people are finding the technology hard to trust.


For a start, there’s the common assertion that AI will take jobs. AI will change the job landscape, no matter how you look at it. But people need reassurance that this doesn’t mean they’ll lose everything.


Algorithmic bias and surveillance AI are other major concerns hurting AI acceptance. It’s difficult to trust AI when it represents a loss of privacy. Particularly when there’s a chance that the AI algorithm has learned to discriminate against you.


All these concerns add up to an ‘us vs them’ mentality towards AI technology. Artificial intelligence has become an adversary, a threat, as much as an ally. And this distrust of AI technology leads to an uncertain future for everyone.





Why chase extended intelligence?

Extended intelligence is about creating a symbiosis between humans and machines. (That is, a mutually beneficial relationship.) For that to happen, we must view AI as an ally, not an enemy. So, the drive for extended intelligence promotes trust and acceptance of AI. In this sense, it represents a way to embrace the future, rather than fight it.


Extended intelligence also stands to improve both AI technology and human ability. In fact, some claim that it represents the next stage of human evolution.


AI and human intelligence each have strengths and weaknesses. Humans are adept at processing sensory input and interacting with the world. We can understand the things we see, hear and feel. Artificial intelligence, meanwhile, excels at processing speed and recall.


Together, the two complement each other’s strengths. For instance, consider the speed and scope of AI processing paired with the contextual understanding of human intelligence.





Extended intelligence: AI future?

It’s still uncertain whether AI alone poses the threat that popular fearmongering suggests. The future of AI is unclear.


What is clear is that among the hype and speculation, there is a harmful ‘us vs them’ mentality building between humans and machines. And, even if we don’t merge our brains with machines, the focus of extended intelligence stands to help us blur that divide.





Article originally published at https://www.thinkautomation.com/bots-and-ai/is-extended-intelligence-the-answer-to-ai-uncertainty/ on February 11, 2020