How is Google taking responsibility for AI?

At this year’s Google Next ’19 event, the global technology company held a private session to discuss the progress it is making with AI. Three experts sat at the front of the room and spoke about the changes Google is making in AI and how, as one of the world’s biggest tech firms, it has a responsibility to ensure that its work with AI is ethical and leads the way.

Initially, the experts discussed the seven principles they believe underpin creating responsible AI. “Things like not creating or propagating unfair bias, being accountable to people, being built and tested for safety, and being made available for uses in accordance with these principles,” said Tracey Frey, one of the Google representatives. However, although these principles are good guidelines, they are not tools to work from, she was quick to add.

The role that trust plays

The firm believes that trust is a major part of ensuring responsible AI. “In order for anyone to succeed with an AI deployment, trust has to be very strong between us and our customers, who need to know that we’re giving them responsible and trustworthy technology,” says Frey.

Ultimately, the progression of AI responsibility at Google comes down to trust: between Google and its AI teams, between those teams and customers, and between customers and their own customers. “If at any point that line of trust is broken, we run the risk of the benefits not being realised,” Frey added.

Ethics and artificial intelligence

Andrew Moore, one of the representatives who sat before us, spoke of the ethics involved in developing AI and, moreover, the responsibility developers have not only to consider the rights and wrongs of using AI but also to think about how this links to the human side of progress.

He told the story: “We wanted to use computer vision to make sure that folks on construction sites were safe. [This involved using AI in their helmets.] As you can imagine, it’s a nuanced question whether releasing the products is always going to be the right thing to do. It seems, at first sight, it’s obviously good because it helps with safety. But then there will be a point in the future where we worry about whether this is turning into a world where we are monitoring workers in a kind of Victorian way. And the answer to that is actually to work through the complexities of these things.”

Is data science still relevant?

The representatives also spoke of how data science on its own is now ‘a thing of the past’ and how far the advancement of prediction has come, which leads to the thinking that data science is not always the answer. The example given was healthcare, and a story was told of how machines are often used to try to predict problems. In one case, however, the machine used everything it had been taught to make a prediction regarding cancer that contradicted the human doctor. As it turned out, the machine was reading marks on the x-ray as cancerous when they were just surface marks on the actual sheet.

In response to this point, Moore said: “And then you ask [the robot] to predict something and it surprises you in its prediction. And what’s very bad there is if you’ve got a safety-critical system or a societally important thing which may have unintended consequences, you want to be sure that if you think you’ve made a mistake, you are able to diagnose it. That’s what motivated the release of a suite of explainability tools. Now, we have made available through our API the ability that when you make a prediction, instead of just getting a number, you also get back a little piece of metadata. This includes information about which factors contributed to this prediction. You can look at what happened and why. We really wanted to carefully talk through and explain [results]. For us, it saved our butts against making serious mistakes.”
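In practice, this kind of explainability usually takes the form of feature attributions: alongside each prediction, the system reports how much each input factor pushed the result up or down. Below is a minimal sketch of that idea using the open-source shap library with a scikit-learn model; the dataset and model are placeholders for illustration only, and this is not the Google API Moore refers to.

```python
# Sketch: return a prediction together with "metadata" about which factors
# contributed to it. Uses the open-source shap library and a public
# scikit-learn dataset purely as an illustration (assumption: any tree-based
# model would do); this is NOT Google's explainability API.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train a simple model on a public dataset.
data = load_diabetes()
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(data.data, data.target)

# Explain a single prediction: compute how much each input feature moved
# the prediction relative to the model's baseline expectation.
explainer = shap.TreeExplainer(model)
sample = data.data[:1]
prediction = model.predict(sample)[0]
attributions = explainer.shap_values(sample)[0]

# Instead of just a number, report the prediction plus its top contributing factors.
ranked = sorted(zip(data.feature_names, attributions),
                key=lambda pair: abs(pair[1]), reverse=True)
print(f"prediction: {prediction:.1f}")
for name, value in ranked[:5]:
    print(f"  {name}: {value:+.3f}")
```

In a safety-critical setting, attributions like these are what let a human spot that a model is keying off an artefact, such as surface marks on an x-ray sheet, rather than a genuinely relevant signal.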

Taking a human-centered approach

The final point the company made about thinking responsibly about AI was that those working with it need to take a human-centered approach. They don’t want to take work away from people; they want machines and humans to work alongside one another. The representatives emphasise that the question for a developer is not what they can do, but what they should do. In other words, AI may have the ability to surpass human capability, but it is a developer’s responsibility to ensure that the relationship becomes a collaboration rather than a domination.

“The Google AI organisation is also a way to help increase understanding about a machine learning model… when it performs best and under what conditions. Oftentimes this information is not clear, particularly for things like pre-trained models, which we have available for consumers and customers.”
