AI has great potential in transforming the world: Alphabet CEO Sundar Pichai

At the recently concluded WEF summit at Davos, Alphabet and Google CEO Sundar Pichai spoke about why he is a technology optimist, his take on artificial intelligence, and how it will transform the world.

29th Jan 2020

In recent years, artificial intelligence (AI) has become the talk of the town. No forum seems complete without a discussion of how the technology is going to impact the world.


In a conversation with Professor Klaus Schwab, Founder and Executive Chairman of World Economic Forum, Sundar Pichai, CEO of Google and Alphabet shared some valuable insights on the age of AI, the future of the open web, and technology's impact on society at the recently concluded WEF summit at Davos, Switzerland.  


While many argue that technology is negatively impacting the world by taking away jobs and compromising the safety and security of individuals, Pichai calls himself a ‘technology optimist’ and believes that, despite its disadvantages, AI has great potential to transform the world – from climate to healthcare.


Sundar Pichai

Credit: World Economic Forum




Edited excerpt from the interview: 


Professor Klaus Schwab (PKS) - Welcome, Sundar Pichai. My first question is: you have called yourself a technology optimist, and we hear a lot of concerns about technology. What makes you an optimist?


Sundar Pichai (SP) - What makes me a technology optimist? I think it's more about how I got introduced to technology. Growing up, I had to wait a long time before either a telephone or a television came into our household. I distinctly remember how it changed our lives. TV gave me access to world news, football, and cricket. So I have always had this first-hand experience of how gaining access to technology changes people's lives.


Later on, I was inspired by the ‘One Laptop per Child’ project, which aimed to give $100 laptops to schoolchildren. They didn't quite get there, but I think it was a very inspiring goal and it drove a lot of progress in the industry. Later, we were able to make progress with Android. Each year, millions of people get access to computing for the first time. We do this with affordable, low-cost Chromebooks as well. And seeing the difference it has made in people's lives gives me great hope for the path ahead. More recently with AI, just in the last month, we have seen how it can help doctors detect breast cancer with greater accuracy.


We also launched a better rainfall prediction app. Over time, AI can play a role in tackling climate change. Having seen these examples firsthand, I'm clear-eyed about the risks of technology. But the biggest risk with AI may be failing to work on it and make more progress with it, because it can impact billions of people.


PKS - Can you explain what we can expect from quantum computing? 


SP - It’s an extraordinarily important milestone we achieved last year, something that’s known in the field as quantum supremacy. It is the point at which a quantum computer can do something that classical computers cannot. To me, nature at a fundamental level works in a quantum way. At a subatomic level, things can exist in many different states at the same time. Classical computers work in ones and zeros, so we know that's an imperfect way to simulate nature. Nature works differently. What's exciting about quantum computing, and why we are so excited about the possibilities, is that it will allow us to understand the world more deeply. We can simulate nature better. That means simulating molecular structures to discover better drugs, understanding the climate more deeply to predict weather patterns and tackle climate change, and so on. We can design better batteries, and improve nitrogen fixation – the process by which we make the world's fertilisers, which accounts for two percent of carbon emissions. These processes have not changed for a long time because they are very complicated.


Quantum computers give us hope that we can make that process more efficient. So it's very profound. In technology, we've all been dealing with the end of Moore's Law. It drove a revolution over the past 40 years, but it's levelling off. So when I look at the future and ask how we drive improvements, quantum will be one of the tools in our arsenal by which we can keep something like Moore's Law evolving. The potential is huge, and we'll have challenges. In five to 10 years, quantum computing will break encryption as we know it today. But we can work around it – we need quantum encryption. There are challenges, as always with any evolving technology. But I think the combination of AI and quantum will help us tackle some of the biggest problems we see.


PKS - And also, to a certain extent, genetics. I think quantum computing and biology could have great positive or negative impacts.


SP - The positive side, as you rightly say, is simulating molecules, protein folding, etc. It's very complex today; we cannot do it with classical computers, but with quantum computers we can. But we have to be clear-eyed about all these powerful technologies. This is why I think we need to be deliberate and regulate technologies like AI, and as a society, we need to engage with it.


PKS - And this leads me to the next question, actually, because in an editorial in the Financial Times, which I read just before the annual meeting, you stated, and I quote, “Google's role starts with recognising the need for a principled and regulated approach to applying artificial intelligence.” What does that mean?


SP - You know, I've said this before: AI is one of the most profound things we are working on as humanity. It's more profound than fire, electricity, or any of the other big things we have worked on. It has tremendous positive sides, but it also has real negative consequences. Think about technologies like facial recognition: it can be used for good – to find missing people – but it can also be used for mass surveillance. As democratic countries with a shared set of values, we need to build on those values and make sure that when we approach AI, we do it in a way that serves society. That means making sure AI doesn't have bias, that we build and test it for safety, and that there is human agency – that it is ultimately accountable to people.


About 18 months ago, we published a set of principles under which we would develop AI at Google. It's been very encouraging to see that the European Commission has identified AI and sustainability as its top priorities, and the US put out a set of principles last week. Be it the OECD or the G20, they're all talking about this, which I think is very encouraging. I think we need a common framework by which we approach AI.


PKS - How do you see Google five years from now?


SP - We know we will do well only if others do well along with us. That's how Google works today through Search: we help users reach the information they want, including businesses, and those businesses grow along with Search. In the US last year, we created $335 billion of economic opportunity, and that's true in every country around the world. We think that with Alphabet, there's a real chance to take a long-term view and work on technology that can improve people's lives. But we won't do it alone. In many of the other bets we are working on, we take outside investments where we can. These companies are independent, so you can imagine we'll do it in partnership with other companies. Alphabet gives us the flexibility to have different structures for different areas as we need them – to fix healthcare, we can deeply partner with other companies. Today, we partner with leading healthcare companies as we work on these efforts.


So we understand that for Alphabet to do well, we inherently need to do it in a way that works with other companies, creating an ecosystem around it. This is why, last year, just through our venture arm, we invested in over 100 companies. We are just investors in these companies; they are going to be independent companies, and we want them to thrive and succeed. That's the way we think about it. I think it gives us a real chance to take a long-term view, be it self-driving cars or AI.


PKS - So, the last question. You said you are an optimist. When you wake up at night and cannot sleep anymore, what worries you?


SP - That was pretty insightful. It's true – I do wake up at night. What worries me? I think technology has a chance to transform society for the good, but we need to learn to harness it to work for society's good. I do worry that we could turn our backs on technology, and I worry that when people do that, they get left behind. So, to me, the question is: how do you do it inclusively? I was in Belgium, and I went to MolenGeek, a startup incubator in Molenbeek. In that community, you see people who may not have gone to school, but when you give them access to digital skills, they're hungry for it. People want to learn technology and be a part of it. That's the desire you see around the world when you travel. When I go to emerging markets, it's a big source of opportunity. So I think it's our duty and responsibility to drive this growth inclusively. And that keeps me up at night.


(Edited by Suman Singh)



