Visitors experience riding on an autonomous bus at a park in Beijing on October 31, 2018 (XINHUA)
Across Asia, countries are putting in place plans to lead the way in the artificial intelligence (AI)-driven era. From China's Made in China 2025 strategy and Singapore's Smart Nation plan to Japan's focus on automating more tasks in a bid to offset an aging workforce, it is clear Asia is taking serious policy steps.
Blue-chip companies are leading the way with AI. Alibaba has used the Singles' Day Shopping Festival as a testing ground for the power of virtual reality to drive shopping decisions. Tencent's slogan, "AI in all," represents the company's strategic direction.
The big players of e-commerce are also battling it out on the AI research and development front in the hunt for the next big thing to revolutionize the industry. In 2018, Tencent invested in UBTech, which develops humanoid, or human-like, robots. Alibaba led investments in Chinese AI startup SenseTime, which specializes in facial recognition technology.
Human-like robots and facial recognition capabilities immediately, and rightly, spark ethical concerns. The humanoid robot has been a staple of sci-fi films that predict a cataclysmic future, while facial recognition stirs fears of "Big Brother" surveillance. But there are more immediate, and arguably less obvious, areas where ethics needs to be part of the decision-making around AI's application to business.
Companies across Asia are considering how to employ AI to help make decisions regarding insurance claims, mortgage applications, tax determinations, credit ratings, medical diagnoses and job applications. For many, the immediate concern is: Can it be efficiently implemented? There is no doubt that AI can help sift through large amounts of data, but human analysis is still required for subtle analytical judgments. If AI is used to draw financial or medical conclusions, companies need to ensure that no bias is present.
Organizations have begun to address the concerns and aberrations that AI has been known to cause. But companies need to move beyond high-level AI ethics codes and provide prescriptive, specific and technical guidelines for developing AI systems that are secure, transparent, explainable and, perhaps most importantly, accountable. Unintended consequences and compliance failures that can harm individuals, businesses and society must be avoided.
It's imperative that everyone in a company thinks about the ethical implications of applying AI. Decisions about AI should be anchored to a company's core values, ethical guardrails and accountability frameworks. No technical decision should be made in isolation. This means AI-based decisions and actions need to be explainable, monitored and audited against key value-driven metrics, including algorithmic accountability, bias and cybersecurity. To do this, ethics should be a key performance indicator at every step of the implementation process.
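One of the value-driven metrics mentioned above, bias, can be made concrete with a simple audit check. The sketch below is purely illustrative and not from the article: it computes the gap in approval rates between applicant groups (a basic "demographic parity" measure), with hypothetical data and function names.

```python
# Minimal, illustrative sketch of one bias-audit metric: the gap in
# favorable-outcome rates between groups. All names and data are hypothetical.

def demographic_parity_gap(decisions, groups):
    """Return the difference between the highest and lowest approval
    rates across groups (0.0 means perfect parity on this metric)."""
    rates = {}
    for g in set(groups):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    values = sorted(rates.values())
    return values[-1] - values[0]

# Example: loan decisions (1 = approved) for applicants in groups A and B.
decisions = [1, 1, 0, 1, 0, 1, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(decisions, groups))  # prints 0.25
```

A real audit would track such a metric continuously, alongside explainability and security checks, rather than as a one-off calculation.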
Human resources teams may also need to step up to the plate, re-skilling staff to ensure that they are capable of working with AI-driven tools. This means implementing regular educational programs that evaluate the ethical use of AI and keep the topic front and center in a company's thinking.
Ethics should drive AI decisions. If it does, companies will be armed with data that can help them make better business decisions, provide better customer service and deliver more efficient outcomes. If ethics is left out of the picture, companies risk making bad decisions that damage their services and offerings, their credibility and, ultimately, their business.
This is an edited excerpt of an article by Gianfranco Casati, Group Chief Executive Officer for Growth Markets at consulting firm Accenture
Copyedited by Craig Crowther
Comments to firstname.lastname@example.org