This article was originally published in OR/MS Today magazine.
By John Larson
At Booz Allen, we empower people – our colleagues, our clients, our communities – to change the world. It’s our purpose, and it’s what we do every day through the expression of our values.
With global headquarters in McLean, Va., Booz Allen employs approximately 29,200 people globally as of Sept. 30, 2021, and had revenue of $7.9 billion for the 12 months ending March 31, 2021. For more than 100 years, military, government and business leaders have turned to Booz Allen to solve their most complex problems. We are a key partner on some of the most innovative programs for governments worldwide and are trusted by their most sensitive agencies. We work shoulder-to-shoulder with clients, using a mission-first approach to choose the right strategy and technology to help them realize their vision.
As we work to propel our clients’ missions forward, the importance of harnessing emerging technologies like artificial intelligence (AI) cannot be overstated. AI is not a single technology breakthrough. It is a complex integration of people, processes and technology that is rapidly transforming how public and private sector organizations do business worldwide.
Accordingly, Booz Allen has invested in AI capabilities, markets and talent, focusing on speed, collaboration and scale. We are acting with urgency to invest in, activate and scale AI, while leveraging commercial partnerships and investments to accelerate the mission. Finally, mastery of operational deployment – the ability to scale AI applications across the enterprise – has been critical to taking innovation from the lab to the field.
Getting to true enterprise AI adoption requires a holistic approach to AI operations, or AIOps – the processes, strategies and frameworks for operationalizing AI to address real-world challenges and realize high-impact, enduring outcomes. AIOps combines responsible AI development, data, algorithms and teams into an integrated, automated and documented modular solution for the development and sustainment of AI. But there is more to AI’s story than processes and technology. That’s why we’re deeply committed to the innovative and responsible use of AI to empower our clients to tackle the most complex and pressing challenges facing America and the world today.
Deploying the AI that Federal Agencies Need
Booz Allen is the largest provider of AI services for the federal government. Our team of more than 500 AI and machine learning (ML) practitioners provides professional and technical services to design, architect, engineer and integrate AI solutions to accomplish critical missions and maintain U.S. technological leadership. We support some of our nation’s most high-profile and innovative programs – including the Defense Threat Reduction Agency (DTRA) Operations and Integration Directorate and the Joint Artificial Intelligence Center (JAIC) – to transform and advance enterprise AI initiatives in a deliberate, outcome-focused manner that drives mission impact.
Our work on more than 150 AI projects across civilian, intelligence and defense organizations ranges from early research to large-scale enterprise operations. In addition, Booz Allen has conducted award-winning research and development, with publications in top academic journals and forums. Our AI business also encompasses a Tech Scouting network and unique partnerships with big-tech AI/ML vendors and nontraditional startups.
Centering AI Solutions Around People
This extensive experience applying AI across an array of mission challenges gives us a distinct understanding of how very real ethical challenges come into focus as AI use proliferates. While these applications range in magnitude and impact, it’s important to remember that real people’s lives are often at stake, from medical diagnoses to safeguarding veterans’ identities and benefits from fraudulent actors. How can users of AI know the outcomes are fair? Can they see into the AI system to understand how it was built and how it operates? Are there clear lines of accountability when things go wrong?
Because of questions like these, we take great care to identify and mitigate ethical risks before they materialize. Lessons learned across our AI portfolio inform comprehensive AI frameworks and approaches, guiding and governing the delivery of AI solutions and helping teams manage risk along the way. All of this feeds into “responsible AI.”
The oft-cited issues of trustworthiness, bias and fairness are key concepts, but not the whole story. Other components include adoption, ethics and the workforce.
AI must be based on a set of shared values and principles and, more importantly, on how those values and principles relate to the context of the system for which a fair, interpretable, reliable and robust solution is being designed. AI models are meant to be used in the real world – not just in a lab. By aligning AI principles to a set of clear values, organizations can create a positive impact and reduce unintended harm.
Booz Allen defines “responsible AI” as ensuring that AI solutions, when deployed, meet performance requirements. Solutions must also adhere to organizational standards and values and be designed to achieve mission outcomes, while responsibly accounting for human impact and equity.
The Who, What and Why of Responsible AI
It’s critical to think about responsible AI from the design phase all the way through to implementation and monitoring. But first, it’s necessary to clarify a few key points.
In some applications, AI will have limited or no adverse impact on humans directly – think of cyber threat hunting, sniffing out potentially malicious data packets. However, where AI impacts humans, it is critical to understand the question of “who?” Responsible AI needs a focus – an individual, a group or class of people, or an entire society. What are the political, economic, social, technological, legal and environmental impacts of an AI solution across different stakeholders over time? This analysis should inform how organizations design and build models and select datasets to achieve desired results.
Because responsible AI is about values driving application, any organization considering how to manage, design, evaluate and use AI must also develop guiding principles anchored to mission-driven core values, and do so early in the process. This is the “why” behind responsible AI – a source of authority or moral compass to guide decisions throughout the development and application process.
These “north stars” can stem from an organization’s shared values and principles, or a society’s norms and cultural practices. For example, if inclusivity is a core value, an organization may develop a guiding principle that calls for all AI teams to be meaningfully diverse and inclusive, with members who bring different backgrounds, skills and thinking styles.
By developing a set of guiding principles, an organization can define what a “fair” AI system is, who it’s designed for, and why its implications are considered fair in the first place. It is then key to embed these values into the governance structure, with controls and measures to validate execution.
When developing these responsible AI principles, it’s important to carefully consider the role of responsible AI throughout the entire AI lifecycle, from designing the solution to training the users and integrating AI into production. Then evaluate each step through the lens of these guiding principles to ensure ethical implementation in the real world.
Shaping a Responsible AI Future
We have identified five main considerations when integrating responsible AI:
- Impact: Understand how AI systems will impact an organization’s stakeholders in specific and tangible ways. This assessment should include routinely considering the political, economic, social, technological, legal and environmental impacts across different stakeholders over time.
- Diversity, Equity & Inclusion: Build meaningfully diverse and inclusive development teams. Include members with different backgrounds, skills and thinking styles. A team’s collective experience and insights will reduce unconscious bias, identify potential unintended consequences, and better reflect stakeholders’ wide-ranging values and concerns.
- Auditability: Develop mechanisms for data provenance and auditability to verify AI systems are operating as intended. If something goes wrong, data tracing and auditability mechanisms help uncover data or concept drift, or expose upstream and downstream data issues (see the drift-check sketch following this list). Clear accountability mechanisms and data tests can help reduce ethical concerns, such as data bias and its amplification during ML training and inference, so it is critical to account transparently for results. Teams should understand that they are accountable for the actions, outputs and impact of their models.
- Mitigation: Stay informed about AI technical developments. Because this field changes rapidly, the tools used to design and implement ethical systems have limited shelf lives. A model’s sophistication will often outpace ethical tooling, increasing the probability that something will go wrong and reducing the ability to fix it if it does. Maintaining awareness of AI technical developments and implementing monitoring measures can help mitigate risk and protect an organization from unintended consequences.
- Applicability: Design systems with specific applications and use cases in mind. Assessing the “fairness” of a model requires context and specificity. AI systems should be fair, but fair to whom? And in what way? Fairness is a laudable goal, but it only becomes useful when applied to a specific situation: an outcome that seems fair to one person in one situation can appear entirely unfair to another person in another (see the fairness-metric sketch following this list).
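To make the auditability point concrete, the sketch below shows one minimal way a team might check a single model feature for distribution drift between training and production data. This is a sketch under stated assumptions, not a description of any particular program’s method: it uses the two-sample Kolmogorov-Smirnov test from SciPy, and the feature values, sample sizes and 0.05 threshold are purely illustrative.

```python
# Minimal drift check for one model feature (illustrative only).
import numpy as np
from scipy.stats import ks_2samp

def check_feature_drift(train_values, live_values, alpha=0.05):
    """Flag drift when live data diverges from the training distribution.

    Uses a two-sample Kolmogorov-Smirnov test; a p-value below `alpha`
    suggests the feature's distribution has shifted since training.
    """
    result = ks_2samp(train_values, live_values)
    return {
        "statistic": result.statistic,
        "p_value": result.pvalue,
        "drift_detected": result.pvalue < alpha,
    }

# Simulated example: the live data has shifted upstream of the model.
rng = np.random.default_rng(seed=0)
train = rng.normal(loc=0.0, scale=1.0, size=5_000)
live = rng.normal(loc=0.4, scale=1.0, size=5_000)  # hypothetical shift
print(check_feature_drift(train, live))  # drift_detected: True
```

A flagged feature would then be traced back through the provenance records described above to find the upstream cause.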
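Similarly, the applicability point – that fairness only becomes measurable once a definition and an affected group are chosen – can be illustrated with one common candidate metric, the demographic parity difference. The classifier outputs and group labels below are synthetic, and demographic parity is just one of several competing fairness definitions, not a universal answer.

```python
# Demographic parity difference for a binary classifier (illustrative).
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Largest gap in positive-prediction rates across groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])  # synthetic decisions
group = np.array(["a"] * 5 + ["b"] * 5)            # synthetic group labels
print(demographic_parity_difference(y_pred, group))  # 0.6 - 0.4 = 0.2
```

Choosing this metric over alternatives such as equalized odds is itself a values decision, which is exactly why the guiding principles discussed earlier must come first.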
Guided by these responsible AI considerations and approaches, Booz Allen is committed to the innovative and responsible use of AI to empower organizations to tackle their most complex challenges, wherever they may be on their AI journey.
John Larson is a senior vice president at Booz Allen Hamilton.