The World Government Summit explores how the falsely instilled fear that AI is taking jobs hinders the full creative expansion and capability of our generation
Spider-Man’s Uncle Ben once said, “With great power comes great responsibility.”
With so much data to account for, it is up to us, the artificial intelligence (AI) innovators and entrepreneurs, to go the extra mile to protect privacy and personal data. There are countless algorithmic tools that deal with users’ privacy, design, usage, and consent. If we don’t begin working with governments to establish regulation and protection now, we will be in a world of hurt later.
This responsibility, brought to my attention by Nozha Boujemaa, Advisor to the CEO of Inria on data science, is one of the most significant takeaways from The World Government Summit. Her impressive background makes her a force to be reckoned with, and she exemplifies the ecosystem of brilliant minds in attendance at this three-day, all-encompassing event.
There have been scandals regarding apps sharing coordinates of users who did not agree to the terms and conditions, even to the point of uncovering secret CIA bases.
“You think that when you say ‘yes’, it will be taken into account, and when you say ‘no’, that it will actually happen,” says Boujemaa, adding, “But, through recent studies conducted in France, some applications, no matter whether the answer was ‘yes’ or ‘no’, the location of an individual was still taken into account. Therefore, the consent was not respected because of an economic reason, or it could be due to technical failure.”
As AI continues to pave its way into healthcare, technology, business and enterprise at such a tremendous rate, it is more important than ever to regulate AI, ensuring symmetry between the AI service provider and the consumer. A breach of doctor-patient confidentiality, or even price discrimination by web services, would ripple through our ecosystem and society. These are the types of scenarios we need to forecast, and against which we need to begin taking preventive measures now. Creating a global understanding of a topic as large and intimidating as AI is not an easy task, but breaking down individual instances for public consumption is a start.
Therefore, it is up to experts such as Boujemaa to apply tools that monitor data usage in relation to the consumer’s needs, and to apply the necessary laws to ensure safety and transparency in AI.
Now that AI is more than a prototype or a picture, implementing these gadgets and tools in education, government and business will be a challenge to regulate. To Boujemaa, accountability and transparency by design are imperative. “The idea is to integrate these questions of accountability, reliability, and also the responsibility of algorithmic systems,” she says.
Though AI is exerting a ubiquitous influence on everyday functions, it is still a term freighted with uncertainty and intimidation. It can be defined in countless ways, but Boujemaa describes it simply as solving problems. This may include facial recognition, or finding new solutions and value based on information and data that we have acquired through other means of technological growth.
For example, Publicis Groupe, the third largest communications company in the world, has made it its mission to create transformational impact through the alchemy of creativity and technology. It is revolutionising the role of the professional assistant with a powerful new AI platform that tracks, manages and coordinates tools to advance collaboration and workflow for its employees. The company calls this platform Marcel.
Maurice Levy, Chairman of the supervisory board of Publicis Groupe, explains the benefits of introducing AI into its business strategy. “The first goal is to accelerate collaborations between people. The second is to make sure that people have access to briefings. The third is to give people a chance to express their potential.”
The falsely instilled fear that AI is taking jobs hinders the full creative expansion and capability of our generation. “AI, as it is seen in the Marcel platform, isn’t a substitute for people; it’s to help people to do a better job,” counters Levy. Marcel is used to discover additional talents among the company's 80,000 global employees, talents it can now put to work in both international and domestic markets.
But drilling down further, it’s easy to see where AI is concentrating its big guns. The top AI startups deal with cybersecurity, healthcare, and enterprise.
Helen Liang, a managing partner at FoundersX Ventures in San Francisco, points to an area of healthcare enterprise that is relying on AI/ML to resolve an issue in medical imaging. “The statistic is that only 20 percent of those medical images are actually diagnosed by qualified radiologists,” she says.
So, does this mean radiologists will be out of a job? No. AI/ML is actually freeing their time for other, more interesting tasks. “The radiologist does the hard job and then machine learning can come in and do the easy ones like first-line scanning,” explains Liang. “Radiologists now have more time to devote to diagnosing the harder cases, the complex cases. Then, easy ones can be taken care of by ML. So the patient gets better care as well. It’s a win-win for everybody.”
As AI continues to offer real-time solutions to everyday tasks, we must stop seeing its capabilities as a threat and start seeing them as an advantage that enhances our own roles. However, we as the experts and creators must also establish rules and regulations to ensure our safety and eliminate bias when AI is applied.
(Disclaimer: The views and opinions expressed in this article are those of the author and do not necessarily reflect the views of YourStory.)