Google chief trusts tech creators to regulate AI

Google chief Sundar Pichai said that the fears surrounding the development of fully-fledged artificial intelligence (AI), and the risks it poses to humanity, are “very legitimate” – but that tech creators should be trusted to regulate AI technology.

Pichai made the comments in an interview with the Washington Post on Tuesday afternoon, where he began to discuss the effects and consequences of implementing AI.

The ultimate goal, he says, is to make sure that AI behaves in a way that simulates human intelligence, and that such machines can be used without causing any harm to humans.

“I think tech has to realize it just can’t build it, and then fix it,” Pichai said. “I think that doesn’t work.”

Pichai said he is worried about the damage super-intelligent AI could cause, but he believes the industry should be trusted to effectively regulate its use. “Regulating a technology in its early days is hard, but I do think companies should self-regulate,” he said.

“This is why we’ve tried hard to articulate a set of AI principles. We may not have gotten everything right, but we thought it was important to start a conversation.”

The future of AI technology

Despite mounting criticism over AI and its use, Google recently revealed a set of internal principles that aim to keep AI beneficial to humanity.

“We recognize that such powerful technology raises equally powerful questions about its use,” Pichai said in a memo posted with the principles. “As a leader in AI, we feel a deep responsibility to get this right.”

Google also recently pledged never to use its AI technology for weapons, illegal surveillance, or anything else that could cause harm to humanity.

The company said it will continue to collaborate with the military and government in areas such as cybersecurity, training, recruitment, search-and-rescue, and healthcare.

AI systems have already been deployed by the U.S. government to recognize people in public spaces, and have been used in self-driving cars.