Using Blockchain to Build Customer Trust in AI
HBR Staff
If organizations want to reap real business benefits from their investments in AI, customers need to trust it. Systemic social mistrust of AI can be dissolved only when questions about how the technology works — from customers, regulators, and other appropriate parties — can be answered. Blockchain-based accountability offers an attainable, operational path to both answerability and enforceability. FICO developed a private blockchain that automated documentation and standards in model development. This approach has not only accelerated its time to market with AI and analytic innovation but also helped keep new models in production: blockchain has reduced support issues and model recalls by over 90%. Making the system work was less a technology challenge than a people challenge. FICO learned that it was important to start with standards and then develop the technology; that making the system user-friendly was non-negotiable; that it was essential to iterate on quick wins; that it had to build repositories to hold large AI assets in alternate storage; and that it needed capable IT teams to handle the system's maintenance demands.
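To make the idea concrete, the mechanism behind a private blockchain for model governance can be sketched as a hash-chained audit log: each model-development event is stored with the hash of the previous record, so any after-the-fact edit breaks the chain and is immediately detectable. This is a minimal illustrative sketch, not FICO's actual system; the record fields (`model_id`, `stage`, and so on) are assumptions for the example.

```python
import hashlib
import json
import time

def record_hash(record: dict) -> str:
    """Deterministic SHA-256 hash of a record's canonical JSON form."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

class ModelAuditChain:
    """Hypothetical tamper-evident ledger of model-development events."""

    def __init__(self):
        self.chain = []

    def append(self, event: dict) -> dict:
        # Each record links to the previous record's hash.
        record = {
            "index": len(self.chain),
            "timestamp": time.time(),
            "event": event,
            "prev_hash": self.chain[-1]["hash"] if self.chain else "0" * 64,
        }
        record["hash"] = record_hash(record)
        self.chain.append(record)
        return record

    def verify(self) -> bool:
        """Recompute every hash and link; any edited record breaks the chain."""
        for i, rec in enumerate(self.chain):
            body = {k: v for k, v in rec.items() if k != "hash"}
            if rec["hash"] != record_hash(body):
                return False
            expected_prev = self.chain[i - 1]["hash"] if i else "0" * 64
            if rec["prev_hash"] != expected_prev:
                return False
        return True
```

In such a scheme, a regulator or internal auditor can verify the entire development history of a model without trusting any single record-keeper, which is the accountability property the article describes.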
In a remarkably short period of time, organizations across industries have deployed artificial intelligence (AI) to produce decisions that affect people’s daily lives. Since AI can be characterized as “a mirror that reflects our biases and moral flaws back to us,” sometimes this practice results in unfortunate and even tragic mistakes. And bias is just one of a multitude of reasons why AI is considered a “black box” with a trust problem. Last year Pew Research found that 52% of Americans are more concerned than excited about AI in daily life, compared with just 10% who say they are more excited than concerned.