Confident Security has emerged from stealth with $4.2 million in funding to address privacy concerns surrounding artificial intelligence deployments in enterprise environments. The startup’s debut comes as organisations across regulated industries grapple with the tension between leveraging AI capabilities and maintaining data confidentiality.
The funding round attracted investment from Decibel, South Park Commons, Ex Ante, and Swyx, backing the company’s mission to enable secure AI interactions without compromising sensitive information. The company was founded by two-time entrepreneur Jonathan Mortensen, and its team comprises specialists drawn from organisations including Google, Apple, and Johns Hopkins University.
CONFSEC Platform Architecture
The company’s flagship offering, CONFSEC, represents an enterprise-grade implementation of Apple’s Private Cloud Compute architecture. This system allows organisations to deploy AI inference engines whilst maintaining strict privacy controls over user inputs and metadata.
Unlike traditional AI deployments where data may be accessed for training purposes or stored in unencrypted formats, CONFSEC creates a protective wrapper around AI systems. The platform supports deployment across various infrastructure options, including public cloud services and bare metal installations, whilst delivering privacy assurances that surpass existing regulatory requirements.
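The “protective wrapper” idea can be illustrated with a minimal sketch: the prompt is encrypted on the client before it leaves the device, so the operator of the AI service never holds plaintext it could log, store, or train on. This is a conceptual illustration only, not Confident Security’s implementation; a one-time pad stands in for a real AEAD cipher (such as AES-GCM) purely to keep the example standard-library-only, and the function names are hypothetical.

```python
# Conceptual sketch of client-side prompt encryption (hypothetical names).
# In a production system a proper AEAD cipher and key-exchange protocol
# would replace the toy one-time pad used here for illustration.
import secrets


def encrypt_prompt(prompt: bytes) -> tuple[bytes, bytes]:
    """Return (ciphertext, key); only the ciphertext is sent to the server."""
    key = secrets.token_bytes(len(prompt))  # toy one-time pad, same length
    ciphertext = bytes(p ^ k for p, k in zip(prompt, key))
    return ciphertext, key


def decrypt_locally(ciphertext: bytes, key: bytes) -> bytes:
    """Client-side decryption; the key never leaves the user's device."""
    return bytes(c ^ k for c, k in zip(ciphertext, key))
```

The design point is that confidentiality is enforced by where the key lives, not by the service provider’s data-handling policy: the server only ever receives ciphertext.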
The technology specifically targets large language model providers, hyperscale cloud operators, government agencies, and enterprises seeking to implement AI without exposing confidential information.
Regulated Sector Applications
Healthcare, financial services, government, and legal organisations represent primary markets for Confident Security’s solution. These sectors traditionally face significant barriers when adopting AI technologies due to stringent data protection requirements and concerns about intellectual property exposure.
“Privacy is now the critical barrier to AI adoption in enterprise,” said Jess Leao, Partner at Decibel.
The platform aims to resolve what Mortensen describes as a fundamental tension in regulated industries. Organisations recognise AI as essential for competitive advantage but struggle with the privacy implications of feeding sensitive data into AI systems.
“Businesses and consumers are feeding AI everything from medical information to a company’s roadmap and trade secrets,” Mortensen explained.
Market Positioning and Trust Infrastructure
Confident Security’s approach differs from existing privacy solutions by embedding trust mechanisms directly into the infrastructure layer rather than relying solely on policy-based protections. This architectural approach provides cryptographic guarantees about data handling rather than procedural assurances.
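One way a cryptographic guarantee can replace a procedural assurance is remote attestation: the client verifies that the serving node is running an audited software build before any sensitive data is released to it. The sketch below shows that gating pattern, which mirrors the approach Apple’s Private Cloud Compute documents publicly; all identifiers here are illustrative assumptions, not Confident Security’s actual API.

```python
# Hypothetical sketch: a client withholds the prompt unless the serving
# node presents a code measurement found on a published allowlist.
import hashlib
import hmac

# Allowlist of SHA-256 measurements of audited server builds. In a real
# deployment these would come from a verifiable transparency log.
TRUSTED_MEASUREMENTS = {
    hashlib.sha256(b"audited-inference-server-v1.2").hexdigest(),
}


def attestation_ok(reported_measurement: str) -> bool:
    """Accept only nodes whose measurement matches an audited build."""
    return any(
        hmac.compare_digest(reported_measurement, trusted)
        for trusted in TRUSTED_MEASUREMENTS
    )


def send_prompt(prompt: str, node_measurement: str) -> str:
    # The cryptographic check gates the data release itself: a node
    # running unaudited code never receives the plaintext at all,
    # regardless of what its operator's policy promises.
    if not attestation_ok(node_measurement):
        raise PermissionError("node failed attestation; prompt withheld")
    return f"sent {len(prompt)} bytes to attested node"
```

The contrast with policy-based protection is that a misconfigured or malicious server cannot simply ignore the rule: failing the check means the data is never transmitted in the first place.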
The company believes that mastering privacy controls will determine which organisations maintain competitive advantages as AI adoption accelerates across industries. By enabling secure AI deployment, CONFSEC potentially removes a significant obstacle preventing enterprise AI implementation.
“The companies that master privacy will maintain their competitive edge during AI’s next evolution,” Mortensen noted.
The solution addresses concerns about data value retention, allowing organisations to benefit from AI capabilities without relinquishing control over proprietary information or customer data.
Technical Implementation
CONFSEC has undergone extensive testing to validate its security claims and operational reliability. The platform’s design philosophy centres on preventing unauthorised access to user prompts and associated metadata during AI processing cycles.
The system’s flexibility allows integration with existing AI infrastructure whilst maintaining the privacy guarantees that regulated sectors require. This compatibility factor could prove crucial for organisations with established AI deployments seeking to enhance their privacy posture without complete system overhauls.
Industry observers suggest that solutions like CONFSEC may become essential as enterprises increasingly recognise that privacy considerations cannot be addressed through policy measures alone but require technological safeguards built into AI systems themselves.
