Experts discuss key concerns and the future of AI at RAISE 2020 summit
Artificial Intelligence (AI) is red hot, but even as we talk about its potential, the space is engulfed in apprehensions: loss of jobs, unethical use of the technology, and the biggest worry of all — what if AI takes over humanity?
Jaideep Mishra, Joint Secretary, MeitY, said trust issues and a lack of clarity on development and deployment processes are the main barriers to the adoption of AI in India.
Commenting on this, some of the experts at the summit said 'responsible' AI is possible if the ecosystem follows best practices such as ethical frameworks, redressal systems, open data, and collaboration.
Jaideep was speaking at a panel discussion on ‘Education and Awareness for Responsible AI’ at RAISE 2020, an AI summit organised by the Ministry of Electronics and Information Technology (MeitY) in partnership with NITI Aayog. The mega virtual summit on artificial intelligence was inaugurated on October 5 by Prime Minister Narendra Modi, who called for India to be a global hub for AI.
Jaideep was joined by other panellists like Rahul Sharma, President, Amazon Internet Services Private Limited (AISPL) Public Sector, India & South Asia; Urvashi Aneja, Founding Director, Tandem Research; Rahul Panicker, Chief Research and Innovation Officer, Wadhwani AI; Rohini Srivathsa, CTO, Microsoft India; and Kye Andersson, Strategist, Major Impact Initiatives at AI Sweden.
The experts discussed key concerns and potential models of awareness generation across the public sector, the private sector, academic institutions, and the general public.
Amazon’s Rahul Sharma said that for education and awareness, one of the most important factors is making the technology experiential, so that technologists at various levels can leverage it.
Rahul suggested that making open data available, along with guidance on how to use it, can drive overall progress in the use of machine learning.
He underlined that a culture of innovation needs to be instilled from an early stage, such as K-12 education. He also suggested that new-age entrepreneurs and the government must collaborate on AI use cases across sectors such as health, education, and agriculture, so that awareness and education about the potential of AI reach a wider audience.
“The government should give use cases and set outcomes for startups to build an AI solution,” asserted Rahul, who also touched upon Amazon’s initiatives at AWS, such as collaborations with academic institutions that give millennials hands-on experience with the technology. This, Rahul believes, can help the journey of AI and create awareness around it.
“Championing a culture of machine learning, identifying use cases, using data, and providing platforms to startups and the public sector to build applications should be the key priority to accelerate responsible Al adoption,” he added.
Microsoft CTO Rohini highlighted that to rise above fears of AI, companies and organisations need to have the right answers on ‘what computers should do and what computers should not do’, and strategies in place to keep applying those answers in any setup.
She also said technology is never going to slow down, and hence AI should not be a matter of discussion for data scientists alone, but for everyone — society, governments, corporate leaders, and others.
“To be able to apply machine learning and AI at scale, and to be able to see that you are doing it in a responsible manner means you also have tools and techniques that can be applied at scale. This means we have tools and techniques to think about security, privacy, fairness, data, and interpretability, which are now active areas of research as well,” said Rohini.
Adding to this, Rahul Panicker, Chief Research and Innovation Officer, Wadhwani Institute for Artificial Intelligence, said that just as humans have built social systems over the years, responsible AI needs a similar journey.
“AI needs governance, a redressal system, public rules, tools, forums, etc, to be able to have responsible AI at scale,” he said.
Panicker also stressed that whatever we do in the process of scaling AI will have no meaning without post-deployment monitoring. As he put it, nothing can be perfect and it is impossible to predict every aspect of the technology; hence, a strong post-deployment monitoring system is required to keep best practices in place when it comes to AI.
Bringing in Sweden’s experience, Kye Andersson underlined that one of the critical problems occurs when technology far outpaces policy and regulation, and that the two trajectories should be aligned. He said that, beyond cross-sectoral collaboration, companies and other key stakeholders should work together to create solutions for the coming generations.