How Companies Can Take a Global Approach to AI Ethics
Ideas about right and wrong can differ from one cultural context to the next. Corporate AI governance must reflect this.
August 05, 2024
Many efforts to build an AI ethics program miss an important fact: ethics differ from one cultural context to the next. Ideas about right and wrong in one culture may not translate to a fundamentally different context, and even when there is alignment, the underlying ethical reasoning — cultural norms, religious traditions, and so on — may differ in ways that need to be taken into account. Because AI and related data regulations are rarely uniform across geographies, compliance can be difficult. To address this problem, companies need to develop a contextual global AI ethics model that prioritizes collaboration with local teams and stakeholders and devolves decision-making authority to those local teams. This is particularly necessary for companies whose operations span several geographies.
Getting the AI ethics policy right is a high-stakes affair for an organization. Well-publicized instances of gender bias in hiring algorithms or job search results can damage a company's reputation, put it at odds with regulators, and even attract hefty government fines. Sensing such threats, organizations are increasingly creating dedicated structures and processes to embed AI ethics proactively. Some companies have moved further along this road, creating institutional frameworks for AI ethics.
Swanand Deodhar is an associate professor at the Indian Institute of Management Ahmedabad. His engaged research on topics such as digital platforms and digital transformation is rooted in deep collaboration with practice. His work has appeared in leading journals, including MIS Quarterly, Information Systems Research, and Journal of International Business. You can follow him on LinkedIn.
Favour Borokini is a PhD student with the Horizon Centre for Doctoral Training, hosted at the Faculty of Computer Science at the University of Nottingham. Her research focuses on ethical frameworks that address harm in immersive environments. She holds a law degree from the University of Benin, Nigeria, and is a member of the Nigerian bar. She has leveraged her legal background to investigate issues such as the impact of technology on human rights, particularly women's rights, the impact of AI on African women, and the experiences of African women working in AI across various sectors.
Ben Waber is a visiting scientist at the MIT Media Lab and a senior visiting researcher at Ritsumeikan University. His research and commercial work are focused on the relationship between management, AI, and organizational outcomes. He is also the author of the book People Analytics. Follow him on Mastodon: @[email protected].