Center of Agentic Research and Education

Making AI safe enough for your hospital, your courtroom, your country

We build the science of keeping private data private — even when AI agents need to reach the cloud. Open research. Open tools. Zero secrets.

7,504
Live API calls tested
0.022%
Maximum overhead
0
Tokens leaked
$4.37
Total experiment cost

Four forces in tension

Every AI agent deployment balances these four. CARE measures the tradeoffs so you don't have to guess.

Security

Can the system resist attacks? Injection, extraction, re-tokenization — we test what breaks and build what holds.

vs. usability

Privacy

Does private data stay private? We classify every query before it moves. PHI, PII, privileged — each gets a different gate.

vs. capability

Cost

What does safety actually cost? We measured governance overhead at 0.2 ms and zero extra tokens per call. The overhead myth is dead.

vs. quality

Speed

Can you be safe and fast? On-device models answer in milliseconds. Frontier models answer better. We help you pick per query.

vs. accuracy
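The cards above describe a classify-then-gate model: every query gets a sensitivity label, and each label maps to a routing decision. CARE's actual gating logic isn't shown on this page; as a rough illustration, such a policy table might look like the sketch below. All label strings, gate names, and the fail-closed default are hypothetical, not CARE's implementation.

```python
from enum import Enum

class Gate(Enum):
    """Where a classified query is allowed to run (illustrative names)."""
    ON_DEVICE = "on-device model only"
    REDACT_THEN_CLOUD = "strip identifiers, then frontier model"
    CLOUD_OK = "frontier model allowed"

# Hypothetical policy table: sensitivity label -> gate.
POLICY = {
    "phi": Gate.ON_DEVICE,          # protected health information
    "pii": Gate.REDACT_THEN_CLOUD,  # personally identifiable information
    "privileged": Gate.ON_DEVICE,   # attorney-client privileged material
    "general": Gate.CLOUD_OK,       # no sensitive content detected
}

def route(label: str) -> Gate:
    """Fail closed: anything unrecognized stays on-device."""
    return POLICY.get(label, Gate.ON_DEVICE)
```

The design choice worth noting is the fail-closed default: a classifier that returns an unexpected label should degrade to the most restrictive gate, not the most capable model.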

Who we build for

AI agents are powerful. But in the hardest environments, "powerful" is not enough. They must also be provably safe.

Healthcare / HIPAA
"Is this A1C result concerning given his current meds?"
A nurse between patient rounds

The query references a specific patient and contains a lab value, so it must stay on-device. But a follow-up question about drug interactions is general knowledge that can safely go to a frontier model. The system must know the difference.
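The page doesn't say how that difference is detected. As a deliberately toy sketch of the decision, the fragment below uses a couple of hypothetical regex signals in place of a real PHI classifier, which would be a trained model rather than pattern matching:

```python
import re

# Toy heuristic for illustration only. Both patterns are invented
# examples of "this query is about a specific patient" signals.
PHI_SIGNALS = [
    re.compile(r"\bA1C\b", re.I),                               # a named lab result
    re.compile(r"\b(his|her|their)\s+current\s+meds\b", re.I),  # patient-specific context
]

def contains_phi(query: str) -> bool:
    """True if any patient-specific signal appears in the query."""
    return any(p.search(query) for p in PHI_SIGNALS)

def destination(query: str) -> str:
    """Send patient-specific queries on-device; general ones to the cloud."""
    return "on-device" if contains_phi(query) else "frontier"

destination("Is this A1C result concerning given his current meds?")  # -> "on-device"
destination("What drugs interact with metformin?")                    # -> "frontier"
```

The two example calls mirror the scenario: the nurse's original question stays local, while the general drug-interaction follow-up is eligible for a frontier model.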

Legal / Privilege
"Summarize the deposition and flag contradictions with the filing."
A lawyer preparing for trial

Every document is attorney-client privileged. The agent can use a frontier model for legal reasoning — but the case facts, names, and strategy must never leave the firm's server.

Defense / Classified
"Cross-reference this signal pattern with known threat signatures."
An intelligence analyst on a secure network

Classified data cannot touch a cloud model. But the analyst still needs frontier-level reasoning. The routing decision is binary, the stakes are national, and the latency budget is zero.

Open call

CARE Research Residency

Join CARE as a remote research fellow. Work on privacy-routing, agent governance, or agentic economics. Publish under the CARE banner. Build tools the field needs.

Duration
3 to 6 months, remote
Output
Published paper or open-source tool
Areas
Security, privacy, cost, speed
Status
Accepting applications
Apply for residency →

Support CARE

CARE is a 501(c)(3) nonprofit. Every dollar funds open research, open tools, and open education. Your donation is tax-deductible.

Make a donation

Stay in the loop

New papers, open tools, and analysis. No spam. Unsubscribe any time.