Background: Executive Order 14110 and the AI Governance Mandate
This federal agency employs over 2,000 staff across 12 divisions with a mission that spans regulatory oversight, public services, and national infrastructure management. Following the issuance of Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, the agency received a directive to deploy AI capabilities with appropriate governance controls within its existing operational framework.
The agency had already begun piloting AI tools for document processing, citizen correspondence management, regulatory analysis, and internal knowledge management. However, these pilots operated outside formal governance structures and lacked the controls required by the Executive Order, OMB Memorandum M-24-10, and the agency's own security requirements.
The agency's CIO and CISO were tasked with standing up an AI governance program that could be operational within 120 days while meeting all applicable federal requirements. The program needed to fit within the agency's existing FedRAMP High authorization boundary to avoid triggering a new authorization process - which would add 12-18 months to the timeline.
The Challenge: Federal Compliance at Federal Scale
Federal AI governance presents challenges that are fundamentally different from those of private-sector deployments:
- FedRAMP boundary constraints: All AI governance tooling had to operate within the agency's existing FedRAMP High authorization boundary. Introducing any cloud component that required a new ATO (Authority to Operate) was unacceptable given the timeline. This effectively required a fully on-premise deployment with no external dependencies.
- NIST AI RMF compliance: The National Institute of Standards and Technology AI Risk Management Framework defines four core functions - Govern, Map, Measure, and Manage. Although the framework itself is voluntary, the agency's directive required implementing all four functions for its AI systems and demonstrating compliance with auditable evidence.
- OMB M-24-10 reporting: OMB Memorandum M-24-10 requires agencies to report on AI use cases, risk assessments, and governance controls on a recurring basis. The agency needed automated reporting capabilities to meet these requirements without creating a significant manual reporting burden.
- FISMA and NIST 800-53 controls: All AI governance systems had to comply with the Federal Information Security Modernization Act (FISMA) and applicable NIST 800-53 security controls, including the access control (AC), audit and accountability (AU), and system and communications protection (SC) control families.
- Cross-division deployment: The 12 divisions had different missions, data sensitivity levels, and AI use cases. Governance needed to be centralized for reporting and oversight while allowing division-specific policies for data handling and AI model access.
The agency evaluated several AI governance platforms and found that most required cloud components, SaaS dependencies, or architectural patterns that would fall outside their FedRAMP boundary. They needed a solution that could be deployed as a self-contained, on-premise system with no external data dependencies.
Air-Gapped Deployment Considerations
Two of the agency's 12 divisions operate in environments with restricted internet connectivity. While not fully air-gapped, these divisions required AI governance tooling that could function with limited or intermittent connectivity to external networks. The governance platform needed to support on-premise AI models for these divisions while maintaining consistent policy enforcement and audit logging regardless of connectivity status.
The Solution: On-Premise Deployment with NIST AI RMF Mapping
Areebi was selected because its single golden image architecture could be deployed entirely within the agency's existing FedRAMP boundary with no external dependencies. The deployment was structured in four phases:
- Phase 1 - Infrastructure and authorization (Weeks 1-3). The Areebi golden image was deployed on the agency's on-premise Kubernetes cluster within the existing FedRAMP High boundary. The agency's security team conducted a security assessment against NIST 800-53 controls and determined that Areebi's deployment within the existing boundary did not require a new ATO - it fell under the existing system's authorization with a configuration change documented in the system security plan (SSP).
- Phase 2 - NIST AI RMF policy configuration (Weeks 3-5). Areebi's policy engine was configured to map directly to the four NIST AI RMF core functions. Govern policies established accountability structures and organizational AI use policies. Map policies classified AI use cases by risk level. Measure policies defined metrics and monitoring thresholds. Manage policies established response procedures for AI incidents and policy violations.
- Phase 3 - Division onboarding (Weeks 5-10). Each of the 12 divisions was onboarded to the platform with division-specific workspaces, AI model access policies, and DLP configurations. Divisions handling CUI (Controlled Unclassified Information) received enhanced DLP rules and restricted model access. Two restricted-connectivity divisions were configured with local AI model access via Ollama integration for on-premise inference.
- Phase 4 - Reporting automation and go-live (Weeks 10-12). OMB M-24-10 reporting templates were configured in Areebi's compliance reporting engine, enabling automated generation of required agency AI use case inventories, risk assessments, and governance control documentation. The platform went fully operational across all 12 divisions with 1,500+ active users.
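The reporting automation configured in Phase 4 can be sketched as a small inventory-to-report pipeline. A minimal sketch, assuming a hypothetical use-case record shape and report layout; the field names below are illustrative, not Areebi's actual schema:

```python
"""Illustrative sketch of automated M-24-10-style inventory reporting.

An actual deployment would pull these records from the governance
platform's data store; the fields here are assumptions for the sketch.
"""
from dataclasses import dataclass


@dataclass
class AIUseCase:
    division: str
    name: str
    risk_level: str   # e.g. "low", "moderate", "high"
    handles_cui: bool  # Controlled Unclassified Information


def generate_inventory_report(use_cases: list[AIUseCase]) -> dict:
    """Aggregate governed AI use cases into an inventory summary."""
    by_risk: dict[str, int] = {}
    for uc in use_cases:
        by_risk[uc.risk_level] = by_risk.get(uc.risk_level, 0) + 1
    return {
        "total_use_cases": len(use_cases),
        "divisions": sorted({uc.division for uc in use_cases}),
        "by_risk_level": by_risk,
        "cui_use_cases": sum(uc.handles_cui for uc in use_cases),
    }
```

Generating the report from live platform data rather than manual data calls is what collapses a multi-week collection exercise into a single automated step.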
The deployment was executed by the agency's internal IT team with support from Areebi's implementation engineers operating under appropriate clearance and access agreements.
NIST AI RMF Core Function Mapping
Each of the four NIST AI RMF core functions was mapped to specific Areebi platform capabilities:
- GOVERN: Role-based access controls, organizational AI use policies, accountability assignments, and approval workflows for new AI use cases.
- MAP: AI use case inventory, risk classification system, data sensitivity categorization, and AI model provenance tracking.
- MEASURE: Automated metrics collection, policy violation tracking, usage analytics, performance monitoring, and risk score calculations.
- MANAGE: Incident response workflows, policy violation remediation, AI model access revocation procedures, and continuous monitoring capabilities.
This mapping provided the agency with a complete NIST AI RMF implementation within a single platform, eliminating the need for multiple tools and manual processes to address each function.
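The function-to-capability mapping above lends itself to a simple coverage check during onboarding. A minimal sketch, assuming hypothetical capability identifiers (the names below are not Areebi's actual control catalog):

```python
# Illustrative mapping of NIST AI RMF core functions to platform
# capabilities, mirroring the list above. Capability names are
# assumptions for this sketch.
RMF_CAPABILITY_MAP = {
    "GOVERN": ["rbac", "ai_use_policies", "accountability", "approval_workflows"],
    "MAP": ["use_case_inventory", "risk_classification", "data_sensitivity", "model_provenance"],
    "MEASURE": ["metrics_collection", "violation_tracking", "usage_analytics", "risk_scoring"],
    "MANAGE": ["incident_response", "remediation", "access_revocation", "continuous_monitoring"],
}


def coverage_gaps(enabled: set[str]) -> dict[str, list[str]]:
    """Return, per core function, the mapped capabilities not yet enabled."""
    return {
        function: [c for c in capabilities if c not in enabled]
        for function, capabilities in RMF_CAPABILITY_MAP.items()
    }
```

A division with only `rbac` and `use_case_inventory` enabled, for example, would show gaps under all four functions, giving auditors a concrete checklist per function.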
Results: Full NIST AI RMF Compliance Across 12 Divisions
The deployment achieved all objectives within the 120-day mandate and delivered capabilities that positioned the agency as a leader in federal AI governance:
100% NIST AI RMF compliance. All four core functions - Govern, Map, Measure, and Manage - are fully operational with auditable evidence. The agency can demonstrate compliance to oversight bodies including OMB, GAO, and the agency's Inspector General through automated compliance reports generated directly from the platform.
On-premise deployment within FedRAMP boundary. The entire Areebi deployment operates within the agency's existing FedRAMP High authorization boundary. No data leaves the agency's infrastructure for AI governance purposes, and no new ATO was required. The deployment was documented as a configuration change in the existing system security plan.
1,500+ governed AI users across 12 divisions. All divisions are now operating AI workloads through the governed platform with division-specific policies, DLP configurations, and AI model access controls. Usage data shows an average of 4,200 governed AI interactions per day across the agency.
Automated OMB reporting. M-24-10 compliance reports are generated automatically from platform data, reducing what was previously a multi-week manual data collection process to a single-click report generation. The agency estimates this saves approximately 240 staff hours per reporting cycle.
The agency has been cited by its parent department as a model for AI governance implementation and has shared its deployment approach with three other federal agencies considering similar implementations.
“The Executive Order gave us 120 days. Most of our peer agencies are still building plans. We deployed governed AI across 12 divisions with full NIST AI RMF compliance in 12 weeks - within our existing FedRAMP boundary, with no new ATO required. Areebi's on-premise architecture was the only solution that made this timeline possible.”
- Chief Information Officer, Federal Agency
Stay ahead of AI governance
Weekly insights on enterprise AI security, compliance updates, and governance best practices.
Frequently Asked Questions
Does Areebi require a new FedRAMP ATO for federal deployments?
No. Areebi deploys as a self-contained golden image within your existing infrastructure. Because it operates entirely within your current FedRAMP authorization boundary with no external dependencies or data flows, it typically falls under your existing ATO as a configuration change documented in your system security plan. Consult your authorizing official and ISSO for your specific boundary.
How does Areebi map to the NIST AI Risk Management Framework?
Areebi provides platform capabilities that directly address all four NIST AI RMF core functions: Govern (policies, roles, accountability), Map (use case inventory, risk classification), Measure (metrics, monitoring, risk scores), and Manage (incident response, remediation, continuous monitoring). Compliance reports demonstrate coverage across all functions.
Can Areebi operate in air-gapped or restricted-connectivity environments?
Yes. Areebi's on-premise deployment model supports air-gapped and restricted-connectivity environments. When combined with on-premise AI models via Ollama or other local inference engines, the entire AI governance stack operates with no external network dependencies. Audit logs and compliance reports are generated and stored locally.
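Local inference of this kind can be sketched against Ollama's documented REST API, which listens on `localhost:11434` by default. The model name below is an assumption, and the governance-platform side is omitted; this only shows that the inference path never leaves the host:

```python
import json
import urllib.request

# Ollama's default local endpoint; no external network is involved.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_request(model: str, prompt: str) -> dict:
    """Build a non-streaming generation request for the local Ollama API."""
    return {"model": model, "prompt": prompt, "stream": False}


def generate_locally(model: str, prompt: str) -> str:
    """POST the request to the on-host Ollama server and return its response text."""
    body = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Because the request and response both stay on the local host, audit logging and policy enforcement can wrap this call without any dependency on external connectivity.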
Does Areebi support automated OMB M-24-10 reporting?
Yes. Areebi includes reporting templates aligned with OMB M-24-10 requirements including AI use case inventories, risk assessments, governance control documentation, and usage metrics. Reports are generated automatically from platform data, reducing manual collection effort by over 90%.
See Areebi in action
Learn how Areebi delivers AI governance for government organizations with a personalized demo.