Balancing Bold, Fast, and Responsible AI Deployment
By Emily Frolick, Bryan McGowan, and Tim Phelps
As generative AI (GenAI) rapidly reshapes the way organizations work, an unexpected paradox is emerging.
A majority of leaders of billion-dollar organizations that KPMG recently surveyed say they intend to integrate more GenAI into new initiatives and business functions and to train more of their workforce to use AI. Of these respondents, 71% say they’re using GenAI data in their decision making, 52% say the technology is shaping their competitive positioning, and 47% say it’s helping them uncover revenue opportunities.
AI offers these organizations powerful advantages in both operational efficiency and strategic innovation: it can process massive volumes of data at extraordinary speed while strengthening human capabilities, insight, and productivity.
Yet even some organizations eager to embrace AI approach the technology with caution, envisioning its risks more clearly than its rewards. Will AI cause workforce redundancies? Will it introduce cybersecurity risks or erode data privacy?
That’s why the biggest challenge in adopting AI isn’t developing the technology itself but developing an environment of trust.
To unlock AI’s potential, organizations, their customers and employees, and regulators need to trust AI to yield only useful, relevant, safe, and secure results. Building that trust requires designing AI purposefully for reliability and high ethical standards. Adopting AI boldly, quickly, and responsibly means upholding these standards and regulatory mandates from the very beginning.
Guidelines and Guardrails
Every organization with an AI strategy needs to put trust at the heart of its policies.
Beyond establishing trust in its tools and data sources, an organization needs an independent AI governance body to develop ethical rules, guidelines, and procedures. An AI steering committee can manage AI across all teams, clarifying for all employees, partners, and customers when and how the organization uses (and doesn’t use) AI.
As AI becomes more pervasive in business models, even an AI-cautious organization needs to take its first steps in governance to build trust and goodwill and to mitigate the technology’s risks. An organization not yet prepared to establish a full governing body may still appoint a chief AI officer: a C-suite leader who understands the technology and sees its range of business opportunities and risks. And a company that isn’t ready to standardize AI practices and procedures across all lines of business can identify incubator teams to take that first deep dive.
Organizations, including KPMG, are now connecting directly to their underlying infrastructures, capturing and extracting metadata so they can automate and scale portions of their AI governance, security, and risk management programs and more efficiently detect and monitor configured guardrails and controls.
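As a rough illustration of the idea, automated guardrail monitoring can be sketched as a set of checks run against captured system metadata. This is a minimal, hypothetical sketch: the field names, thresholds, and guardrails below are illustrative assumptions, not KPMG's actual schema or controls.

```python
# Hypothetical sketch: automated guardrail monitoring over captured metadata.
# All field names, guardrails, and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ModelMetadata:
    model_id: str
    pii_filter_enabled: bool
    last_audit_days_ago: int
    owner: str

# Configured guardrails: each returns a violation message, or None if the check passes.
GUARDRAILS = [
    lambda m: None if m.pii_filter_enabled else f"{m.model_id}: PII filter disabled",
    lambda m: None if m.last_audit_days_ago <= 90 else f"{m.model_id}: audit overdue",
    lambda m: None if m.owner else f"{m.model_id}: no accountable owner",
]

def scan(inventory):
    """Run every guardrail against every system and collect violations."""
    return [v for m in inventory for g in GUARDRAILS if (v := g(m))]

inventory = [
    ModelMetadata("chat-summarizer", True, 30, "risk-team"),
    ModelMetadata("lead-scorer", False, 120, ""),
]
print(scan(inventory))
```

Because the checks run over extracted metadata rather than manual reviews, the same scan can scale across an entire inventory of AI systems on a schedule.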
Another strategy is to take a risk-tiered approach that applies different governance standards to AI systems based on risk and impact to customers, partners, and employees.
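A risk-tiered policy like this can be expressed as a simple mapping from a system's impact characteristics to a tier, with each tier carrying stricter required controls. The tier names, impact factors, and controls below are hypothetical assumptions for illustration only.

```python
# Hypothetical sketch of a risk-tiered governance policy: each AI system is
# assigned a tier based on its impact, and higher tiers require more controls.
# Tier names, impact factors, and controls are illustrative assumptions.
TIER_CONTROLS = {
    "low":    ["usage logging"],
    "medium": ["usage logging", "periodic review"],
    "high":   ["usage logging", "periodic review", "human oversight", "pre-release audit"],
}

def assign_tier(customer_facing: bool, handles_personal_data: bool) -> str:
    """Classify a system into a governance tier from two example impact factors."""
    if customer_facing and handles_personal_data:
        return "high"
    if customer_facing or handles_personal_data:
        return "medium"
    return "low"

def required_controls(customer_facing: bool, handles_personal_data: bool) -> list:
    """Look up the controls a system must satisfy before deployment."""
    return TIER_CONTROLS[assign_tier(customer_facing, handles_personal_data)]

# A customer-facing system that handles personal data lands in the highest tier.
print(required_controls(True, True))
```

The design choice is that low-impact internal tools face lighter oversight, so governance effort concentrates where customer, partner, and employee impact is greatest.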
Building a Culture of Trust
At KPMG, we began our own AI journey by putting trust at the center of our plans.
We started with a trusted AI commitment that outlined our strategy, built on ethical pillars to ensure our use of AI would always be trustworthy and human-centric. From that value statement, we developed AI policies and guidelines for each phase of the AI lifecycle, setting out usage expectations for our personnel and partners and defining which data uses were permissible and which were off-limits. We also established an AI council to actively shape those guidelines and communicate our AI policies to our 39,000 employees.
With those guidelines and teams established, we launched AI learning and development for the entire organization, using individual persona-based training to give every employee the guidance they need to understand and adopt our approach to AI safely and responsibly.
KPMG’s Office of AI and Digital Innovation launched KPMG aIQ, a firmwide AI transformation program that is focused on driving adoption of AI across all areas of the business to create value for clients and an enhanced experience for employees. The program was designed to put AI technology directly in the hands of all partners and professionals and provide accessible resources, such as a user-friendly AI-centric portal—the aIQ hub—that lets employees explore use cases, emerging products, training courses, and individual AI guidance.
Going AI-First
How does an AI-forward organization balance bold innovation with responsible use?
Establishing a governance team and learning infrastructure paves the way to becoming an AI-first organization that strives to unify policies of AI adoption across teams and disciplines. That requires continual testing—and continual vigilance.
The C-suite recognizes AI’s potential to drive innovation, generate revenue, and optimize operations: 54% of executives expect GenAI to support new business models, and 46% anticipate it will help them develop new products and revenue streams, according to KPMG’s survey on the Executive Outlook on GenAI. A full 95% of executives said they consider training and education essential to ensuring their organizations use GenAI ethically; 91% also consider regular audits and human oversight critical.
Keeping AI leaders in lockstep with the C-suite ensures an organization addresses its executives’ top concerns. One sensitive aspect of AI, especially for organizations in closely regulated sectors, is ensuring compliance. It is vital to establish guardrails that govern AI usage, ensuring leaders in IT and governance, risk, and compliance (GRC) can apply AI responsibly and ethically.
In 2023, our organization established an AI Center of Excellence (AI CoE) responsible for evaluating emerging products and platforms and determining which AI tools and technology to bring to the rest of our organization and beyond. The AI CoE is at the core of our experimentation, research, development, and adoption of GenAI-enabled technology across the firm. It informs our tools and technology approach and provides a foundation for executing our AI strategy across the firm.
With our own AI-first infrastructure and programs in place, KPMG now builds training programs for our partners and customers as well, an effort to unify our network’s standards on AI governance, guidance, and best practices for building trust in AI, including using AI itself to help monitor and govern AI.
KPMG also collaborates on product development with our alliance partners to help them refine existing product offerings—and design new ones.
We see these strategies as key differentiators and the top solution areas for trusted AI.
“KPMG and ServiceNow have a strong partnership and collaboration, focusing on innovation, AI, and digital transformation,” said Michael Park, SVP and Global Head of AI Go to Market, ServiceNow. “Their approach to developing and deploying AI demonstrates their commitment to transforming their business and supporting their clients on their AI journeys. Establishing a robust governance structure and a clear roadmap at the outset is foundational in building trust and realizing the value of AI while scaling the technology with speed.”
For these organizations and for our own, the decision to go AI-first is a bold move—and a responsible one. It requires a profound transformation in how we view technology’s role in governance.
The future belongs to those who establish trust in AI—not merely as a powerful tool but as a valuable component in the intricate balance to support innovation, business transformation, competitive advantage, and compliance.
To learn more about KPMG’s Trusted AI approach and insights, please click here.
Emily Frolick is Delivery Model Transformation and Product Management Leader for Risk Services, Bryan McGowan is Global Trusted AI Leader, and Tim Phelps is Risk Services Leader at KPMG LLP.