The US Army is leveraging Carnegie Mellon University’s AI stack model as the foundation for its AI development platform, according to Isaac Faber, Chief Data Scientist at the US Army AI Integration Center. Speaking at the recent AI World Government event, Faber emphasized the importance of platform flexibility in the Army’s digital modernization efforts.
The AI Stack and Army’s Approach
The AI stack, which incorporates ethics across all layers, consists of:
- Planning stage (top layer)
- Decision support
- Modeling
- Machine learning
- Massive data management
- Device layer/platform (bottom layer)
“I am advocating that we think of the stack as a core infrastructure and a way for applications to be deployed and not to be siloed in our approach,” Faber explained. “We need to create a development environment for a globally distributed workforce.”
Common Operating Environment Software (COES)
First announced in 2017, the Army’s COES platform emphasizes:
- Scalability
- Agility
- Modularity
- Portability
- Open architecture
The Army is collaborating with CMU and private companies like Visimo on prototype development, preferring coordination with industry over off-the-shelf solutions. Faber noted that pre-built solutions often don’t address the unique challenges of DOD networks.
AI Workforce Development
The Army’s AI training program targets multiple teams:
- Leadership (professionals with graduate degrees)
- Technical staff (certification-based training)
- AI users
Technical teams focus on areas including:
- General purpose software development
- Operational data science
- Deployment analytics
- Machine learning operations
Expert Panel Insights on AI Implementation
During the event’s panel discussion on Foundations of Emerging AI, panelists offered perspectives on promising use cases, implementation risks, and best practices:
Promising Use Cases
- Jean-Charles Lede (US Air Force): Decision advantages at the edge and mission planning
- Krista Kinnard (Department of Labor): Natural language processing for handling data on people, programs, and organizations
Key Implementation Risks
- Scale of Impact: Changes in algorithms can affect millions of stakeholders
- Model Drift: Continuous monitoring needed as underlying data changes
- Simulation Gaps: Challenges in mapping algorithms to real-world scenarios when historical data is limited
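The model-drift risk above can be illustrated with a minimal sketch (the function, data, and 2-sigma threshold here are hypothetical illustrations, not anything from the Army's platform): compare the distribution of a feature arriving at a deployed model against its training-time baseline, and flag large shifts for human review.

```python
import statistics

def drift_score(baseline, current):
    """Standardized shift of the current sample's mean relative to the
    baseline distribution; a crude proxy for feature drift."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    if sigma == 0:
        return 0.0
    return abs(statistics.mean(current) - mu) / sigma

# Hypothetical feature values seen at training time vs. in production
baseline = [10.0, 11.0, 9.5, 10.5, 10.2]
stable   = [10.1, 10.4, 9.8]   # distribution unchanged
shifted  = [14.0, 15.2, 14.8]  # underlying data has moved

THRESHOLD = 2.0  # assumed review threshold, in baseline standard deviations
print(drift_score(baseline, stable) > THRESHOLD)   # → False
print(drift_score(baseline, shifted) > THRESHOLD)  # → True
```

Real monitoring pipelines typically use distribution-level tests rather than a single mean shift, but the principle is the same: a deployed model is only as valid as the match between today's data and the data it was trained on.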
Best Practices
- Maintain human oversight (“humans in the loop and humans on the loop”)
- Implement robust testing strategies
- Focus on AI explainability
- Ensure independent verification and validation
- Monitor deployed models continuously
Leadership Challenges
Faber identified executive education as a critical challenge: “The hardest to reach are the executives. They need to learn what the value is to be provided by the AI ecosystem. The biggest challenge is how to communicate that value.”