Bosch sets company guidelines for the use of artificial intelligence

by Hans Diederichs

Bosch has established ethical “red lines” for the use of artificial intelligence (AI): the company has now issued guidelines governing the use of AI in its intelligent products. Bosch’s AI code of ethics rests on one maxim: humans should be the ultimate arbiter of any AI-based decision. “Artificial intelligence should serve people. Our AI code of ethics provides our associates with clear guidance regarding the development of intelligent products,” Bosch CEO Volkmar Denner said at the opening of Bosch ConnectedWorld (BCW), the company’s annual IoT conference in Berlin. “Our goal is that people should trust our AI-based products.”

AI is a technology of vital importance for Bosch. By 2025, the aim is for all Bosch products to either contain AI or have been developed or manufactured with its help. The company wants its AI-based products to be safe, robust, and explainable. “If AI is a black box, then people won’t trust it. In a connected world, however, trust will be essential,” said Michael Bolle, the Bosch CDO and CTO. Bosch is aiming to produce AI-based products that are trustworthy. The code of ethics is based on Bosch’s “Invented for life” ethos, which combines a quest for innovation with a sense of social responsibility. Over the next two years, Bosch plans to train 20,000 of its associates in the use of AI. Bosch’s AI code of ethics governing the responsible use of this technology will be part of this training program.

AI offers major potential

Artificial intelligence is a global engine of progress and growth. The management consultants PwC, for example, project that between now and 2030, AI will boost GDP by 26 percent in China, 14 percent in North America, and around 10 percent in Europe. The technology can help overcome challenges such as the need for climate action and optimize outcomes in areas including transportation, medicine, and agriculture. By analyzing huge volumes of data, algorithms can draw conclusions and make decisions. Well before binding EU standards are introduced, Bosch has therefore decided to engage actively with the ethical questions raised by the use of this technology. The moral foundation for this process is provided by the values enshrined in the Universal Declaration of Human Rights.

Humans should retain control

Bosch’s AI code of ethics stipulates that artificial intelligence should not make decisions about humans without some form of human oversight; instead, it should serve people as a tool. The code describes three possible approaches, all of which have one thing in common: in AI-based products developed by Bosch, humans retain control over any decisions the technology makes. In the first approach (human-in-command), artificial intelligence is purely an aid – for example, in decision-support applications where AI helps people classify items such as objects or organisms. In the second approach (human-in-the-loop), an intelligent system makes decisions autonomously, but humans can override them at any time. One example is partially automated driving, where the driver can directly intervene in the decisions of, say, a parking assistance system. The third approach (human-on-the-loop) covers intelligent technology such as emergency braking systems. Here, engineers define certain parameters during development, and there is no scope for human intervention in the decision-making process itself: the parameters provide the basis on which the AI decides whether or not to activate the system. Engineers retrospectively test whether the system has remained within the defined parameters and adjust them if necessary.
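The difference between the three approaches comes down to where the human sits relative to the AI decision. The following Python sketch is purely illustrative and not Bosch code: the function names, the parking-assist and emergency-braking scenarios, and the braking threshold are hypothetical assumptions used only to show who makes the final call in each approach.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class OversightMode(Enum):
    """The three oversight approaches described in the AI code of ethics."""
    HUMAN_IN_COMMAND = auto()   # AI only assists; a human makes the decision
    HUMAN_IN_THE_LOOP = auto()  # AI decides, but a human can override at any time
    HUMAN_ON_THE_LOOP = auto()  # AI acts within preset parameters; humans review afterwards


@dataclass
class Decision:
    action: str
    decided_by: str  # "human" or "ai"


def classify_object(ai_suggestion: str, human_choice: str) -> Decision:
    """Human-in-command: the AI suggestion is advisory; the human's choice is final."""
    _ = ai_suggestion  # shown to the user, but never acted on automatically
    return Decision(action=human_choice, decided_by="human")


def parking_assist(ai_action: str, human_override: Optional[str]) -> Decision:
    """Human-in-the-loop: the AI acts autonomously unless the driver intervenes."""
    if human_override is not None:
        return Decision(action=human_override, decided_by="human")
    return Decision(action=ai_action, decided_by="ai")


def emergency_braking(time_to_collision_s: float, brake_threshold_s: float = 0.8) -> Decision:
    """Human-on-the-loop: the AI decides alone, inside parameters engineers set beforehand.

    The threshold here is a hypothetical engineering parameter; whether the system
    stayed within it would be checked retrospectively and adjusted if necessary.
    """
    action = "brake" if time_to_collision_s < brake_threshold_s else "no_action"
    return Decision(action=action, decided_by="ai")


if __name__ == "__main__":
    print(classify_object(ai_suggestion="oak leaf", human_choice="maple leaf"))
    print(parking_assist(ai_action="steer_left", human_override="stop"))
    print(emergency_braking(time_to_collision_s=0.5))
```

In this sketch, only the third function ever acts without a human in the decision path, which is why its behavior is constrained by parameters fixed in advance and reviewed after the fact.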

Building trust together

Bosch also hopes its AI code of ethics will contribute to the public debate on artificial intelligence. “AI will change every aspect of our lives,” Denner said. “For this reason, such a debate is vital.” Establishing trust in intelligent systems will take more than technical know-how – it also requires close dialogue among policymakers, the scientific community, and the general public. This is why Bosch has signed up to the High-Level Expert Group on Artificial Intelligence, a body appointed by the European Commission to examine issues such as the ethical dimension of AI. In a global network currently comprising seven locations, and in collaboration with the University of Amsterdam and Carnegie Mellon University (Pittsburgh, USA), Bosch is working to develop AI applications that are safer and more trustworthy. Similarly, as a founding member of the Cyber Valley research alliance in Baden-Württemberg, Bosch is investing 100 million euros in the construction of an AI campus, where 700 of its own experts will soon work side by side with external researchers and start-up associates. Finally, the Digital Trust Forum, a committee established by Bosch, aims to foster close dialogue among experts from leading international associations and organizations. Its 11 members are meeting at Bosch ConnectedWorld 2020. “Our shared objective is to make the internet of things safe and trustworthy,” Bolle said.

Source and photo: Robert Bosch GmbH