Google shows us why we need to keep a check on how AI is used
Google’s decision not to use Artificial Intelligence (AI) for weapons or surveillance technology may well be the start of a series of moves that could curb its possible misuse.
The American tech giant had entered into a partnership with the Pentagon to use AI to review drone footage, a move that later proved controversial. Google’s own employees signed a petition asking CEO Sundar Pichai not to extend the contract.
This led to Google spelling out its policy on the matter. The company said it would not pursue technologies that cause or are likely to cause overall harm.
“Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints,” the firm stated.
Google was clear that it did not want to take up work that involved the following:
* Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
* Technologies that gather or use information for surveillance, violating internationally accepted norms.
* Technologies whose purpose contravenes widely accepted principles of international law and human rights.
India needs to learn some lessons from Google’s stand on the matter. NITI Aayog recently published a paper on the National Strategy for Artificial Intelligence, identifying five focus areas for growth: healthcare, agriculture, education, urban infrastructure, and transportation. The paper should have addressed some of the questions Google was forced to confront. After all, AI can be a beast if not controlled. It would be welcome if India added a line or two to the document ruling out the use of AI to develop weapons of war, for instance.
At a town hall event in San Francisco in January, Pichai had said that AI was one of the most important things humanity was working on. “It is more profound than electricity or fire,” he had said, rather dramatically.
Some other tech leaders, like Elon Musk, are not so sure. The Tesla chief once said that AI was more dangerous than North Korea, while the legendary physicist Stephen Hawking called AI potentially the worst event in the history of civilisation.
However, it has to be said that the benefits of AI far outweigh the concerns. Hindustan Lever is looking to use AI to predict a family’s grocery needs through an initiative called Project Maxima. With this project, the company hopes to gather details of the people who visit a particular grocery store and the purchases they make, and predict what they are likely to buy the next time around. This could greatly benefit FMCG companies. A state like Andhra Pradesh is using AI to keep track of school dropouts. There are many other benefits in the health sector too, such as in predicting cancer.
Google’s move to keep AI out of the ambit of warfare can be seen as an attempt to check its spread into undesirable territory. As long as we keep it within the sphere of positive interventions, the technology can only be a boon.