What Lakera Built - And Where It Stopped
Lakera emerged as one of the earliest dedicated LLM security tools, best known for Lakera Guard - a prompt injection firewall that intercepts and classifies adversarial inputs before they reach a language model. The product was real, the threat it addressed was real, and for organisations whose primary concern was prompt injection, Lakera provided genuine value.
But Lakera's scope was deliberately narrow. It was an LLM security tool, not an AI governance platform. It answered one question well - "Is this prompt an attack?" - while leaving every other governance question unanswered:
- Who is authorised to use which AI models for which purposes? (No policy engine)
- Should this interaction be blocked, masked, or escalated for approval? (Limited action choices)
- Why was this specific decision made, and can we prove it to a regulator? (Partial provenance)
- What did the AI system see at the time of a failure? (No incident replay)
- Which unsanctioned AI tools are employees using? (Limited shadow AI detection)
- What is our total AI model exposure and risk posture? (No model registry)
- Are AI outputs - not just inputs - being enforced against policy? (Input-only scanning)
These are not edge cases. They are the core capabilities that SOC 2, HIPAA, and EU AI Act auditors expect to see in any organisation's AI governance programme.
The Check Point Acquisition: What It Means for You
In September 2025, Check Point Software acquired Lakera for an undisclosed amount. Lakera had approximately 52 employees and an estimated $5.7M ARR at the time of acquisition - making it a technology acqui-hire rather than a standalone product bet.
This follows a pattern across the AI security market. In 2024–2025, seven major AI security startups were acquired by infrastructure vendors:
- Robust Intelligence → Cisco ($400M)
- Protect AI → Palo Alto Networks ($650–700M)
- CalypsoAI → F5 Networks ($180M)
- Prompt Security → SentinelOne ($250–300M)
- Pangea → CrowdStrike ($260M)
- Lakera → Check Point (undisclosed)
- Promptfoo → OpenAI (undisclosed)
The implication for customers is consistent across all seven acquisitions: the standalone product disappears into the acquirer's platform. Lakera is now "Check Point AI Security" - available as a module within the Check Point security stack, not as an independent product.
For existing Lakera customers, this means:
- Migration pressure. Check Point will sunset the standalone Lakera Guard product and migrate customers to the integrated offering - which requires Check Point infrastructure.
- Roadmap capture. Lakera's engineering team now serves Check Point's priorities. AI governance features compete with firewall, VPN, and threat prevention for roadmap attention.
- Pricing bundling. What was a focused, affordable tool becomes part of an enterprise platform with enterprise pricing and minimum commitments.
If you are evaluating Lakera today, you are actually evaluating Check Point AI Security - with all the ecosystem requirements, pricing complexity, and platform dependencies that entails. For an alternative that remains independent and purpose-built, see how Areebi compares.
AI Control Plane vs LLM Firewall: A Category Difference
The comparison between Areebi and Lakera is not a feature-for-feature contest - it is a category difference. Lakera was an LLM firewall. Areebi is an AI control plane. The distinction matters because it determines what problems you can solve.
What an LLM firewall does
An LLM firewall sits between users and models, inspecting inputs for known attack patterns - prompt injection, jailbreak attempts, and adversarial inputs. It is a defensive perimeter control, analogous to a web application firewall (WAF) for AI. Lakera did this well.
What an AI control plane does
An AI control plane governs the entire AI interaction lifecycle: who can access which models, what data can flow in and out, what actions are taken on policy violations, why decisions were made, and whether the organisation can prove compliance. It encompasses the firewall function but extends to policy, governance, audit, and compliance.
The practical difference:
| Governance question | LLM firewall (Lakera) | AI control plane (Areebi) |
|---|---|---|
| Is this prompt an attack? | Yes | Yes |
| Does this prompt contain sensitive data? | Yes | Yes |
| Should this user access this model? | No | Yes - policy engine |
| What action should be taken? (block / mask / approve) | Limited | Granular, per-rule |
| Does the AI output violate policy? | No - input only | Yes - output enforcement |
| Is this AI making decisions or advising? | No | Yes - decision authority controls |
| Can we replay what the AI saw during an incident? | No | Yes - incident replay |
| Can we produce audit evidence for a regulator? | Logs only | Yes - compliance-mapped evidence |
| What unsanctioned AI tools are in use? | Limited | Yes - shadow AI discovery |
| What is our total model exposure? | No | Yes - model registry + risk scoring |
Organisations that only need a prompt firewall may find Lakera's successor (Check Point AI Security) sufficient. Organisations that need to govern AI - to control, prove, and defend their AI usage - need a control plane. That is what Areebi provides.
The 8 Critical Capabilities Lakera Never Built
The CTO's comparison data tells a clear story: of 14 governance capabilities evaluated, Lakera covers 4. Areebi covers all 14. Here are the capabilities that matter most - and that Lakera (now Check Point AI Security) cannot provide.
1. AI policy engine
Lakera had no concept of identity-aware, context-aware policies. It could not enforce rules like "Marketing can use Claude for copy, but not for customer data analysis" or "Contractors lose AI access outside business hours." Areebi's policy engine enforces these rules natively, with a visual builder that compliance teams can operate without engineering support.
2. Decision authority controls
As AI moves from advisory to autonomous, organisations must classify which AI interactions are "assist" (human decides) versus "decide" (AI acts). Lakera had no mechanism for this. Areebi enforces decision boundaries, ensuring AI does not silently escalate from recommendation to action - a growing source of regulatory liability.
3. Decision provenance
When an auditor asks "Why was this interaction blocked?", Lakera could show a detection log. Areebi provides the complete provenance chain: which policy was evaluated, what inputs were assessed, which rule triggered, what action was taken, and who approved the policy. This is the difference between a log entry and defensible evidence.
4. Incident replay
This capability is unique to Areebi. When an AI incident occurs, Areebi can reconstruct exactly what the model saw at the time of failure - the full prompt context, the policy state, the model version, the user's permissions. Standard logs capture events; incident replay captures the complete decision context. This is critical for forensic investigation and regulatory defence.
5. Model registry & risk scoring
Organisations cannot govern what they cannot see. Areebi's model registry catalogues every AI model in use - sanctioned and discovered - and assigns risk scores based on data sensitivity, deployment context, and compliance exposure. Lakera operated at the prompt level with no visibility into the broader model landscape.
6. Output enforcement
Lakera scanned inputs. Areebi scans inputs and outputs. This matters because sensitive data can appear in model responses - hallucinated PII, training data leakage, or outputs that violate content policies. Input-only monitoring misses an entire class of data exposure risks.
7. Audit-ready evidence
Lakera provided logs. Areebi provides evidence - pre-mapped to HIPAA, SOC 2, ISO 27001, NIST AI RMF, and EU AI Act control requirements. The difference between logs and evidence is the difference between "we have data" and "we can prove compliance." Regulators and auditors require the latter.
8. Governed AI workspace
Lakera was infrastructure - invisible to end users. Areebi includes a multi-model AI workspace with RAG, conversation history, and collaboration features. This is not a nice-to-have; it is the mechanism that drives adoption of governed AI channels. Without a workspace that employees actually prefer over consumer alternatives, governance controls are routinely bypassed.
Migrating from Lakera (or Check Point AI Security) to Areebi
Whether you are an existing Lakera customer facing migration to Check Point's platform, or evaluating alternatives before committing to the Check Point ecosystem, the transition to Areebi is straightforward.
What carries over
If you have Lakera Guard policies configured, the prompt-security rules translate directly to Areebi's input enforcement layer. Areebi supports the same detection categories - prompt injection, jailbreak, PII, PHI, PCI, secrets - plus custom patterns that Lakera did not support.
What you gain
Everything Lakera could not provide: a complete governance platform with policy engine, decision controls, incident replay, compliance automation, shadow AI detection, model registry, output enforcement, and a governed AI workspace. These capabilities activate alongside your existing prompt security rules - no gap in coverage.
Timeline
| Phase | Duration | Activities |
|---|---|---|
| Assessment | 1 week | Map existing Lakera rules to Areebi policies, identify governance gaps |
| Parallel deployment | 1–2 weeks | Run Areebi in monitoring mode alongside existing Lakera/Check Point setup |
| Cutover | 1 week | Activate enforcement, enable additional governance capabilities |
| Decommission | Ongoing | Remove Lakera/Check Point AI module after validation period |
Total migration time: 3–4 weeks with no governance gaps during transition. Request a demo to see the migration path specific to your Lakera configuration, or take the free AI governance assessment to understand your current coverage gaps.
Pricing: Standalone Product vs Platform Bundle
Lakera's original pricing was competitive - focused, affordable, per-API-call pricing for prompt security. Post-acquisition, the pricing model changes fundamentally.
Check Point AI Security (formerly Lakera)
| Component | Estimated annual cost |
|---|---|
| Check Point platform (prerequisite) | $40,000–$100,000 |
| AI Security module | $15,000–$35,000 |
| Implementation services | $15,000–$30,000 |
| Total (200 users) | $70,000–$165,000 |
Areebi (complete AI control plane)
| Component | Annual cost |
|---|---|
| Areebi platform (200 seats) | $48,000–$84,000 |
| Implementation | $5,000 (one-time) |
| Total Year 1 | $53,000–$89,000 |
Areebi costs 25–46% less than Check Point AI Security while delivering 14 governance capabilities vs 4. And there is no prerequisite infrastructure - Areebi deploys standalone in your VPC, on-prem, or air-gapped environment. See transparent pricing on our website.
Frequently Asked Questions
Is Lakera Guard still available as a standalone product?
Lakera was acquired by Check Point Software in September 2025. The standalone Lakera Guard product is being integrated into Check Point's security platform. New customers are directed to Check Point AI Security, which requires the broader Check Point ecosystem. Existing Lakera customers face migration timelines set by Check Point.
How does Areebi handle prompt injection detection compared to Lakera?
Areebi's input enforcement layer includes prompt injection and jailbreak detection comparable to Lakera Guard, plus additional capabilities: output-side enforcement (detecting sensitive data in model responses), custom detection patterns for organisation-specific threats, and granular action choices (block, mask, approve, or escalate) rather than simple detect-and-alert. Prompt security is one of 14 governance capabilities Areebi provides.
We chose Lakera specifically because it was lightweight and API-based. Is Areebi heavier?
Areebi offers flexible deployment models. If you need lightweight API-level enforcement, Areebi's input/output scanning API works similarly to Lakera Guard. The difference is that you also get the complete governance platform - policy engine, audit trail, compliance mapping, shadow AI detection - available when you need it. You can start with API-level enforcement and activate additional capabilities as your governance programme matures.
What happens if Check Point improves the Lakera module significantly?
Check Point may improve prompt security features, but the structural limitation remains: AI governance is a small module within a network security platform. Areebi's entire engineering team focuses exclusively on AI governance, shipping improvements weekly. The 10 capabilities Lakera never built - policy engine, decision controls, incident replay, compliance automation - require purpose-built architecture, not incremental additions to a firewall product.
Related Resources
Ready to switch from Lakera?
Migration support included
Get a personalized demo and see how Areebi compares for your specific requirements.