AI agents are evolving from stateless chatbots to persistent digital assistants. They remember your preferences, learn from past interactions, and maintain context across sessions. This memory makes them useful—and creates new security challenges.
## The Memory Problem
Modern AI agents maintain several types of state:
| State Type | Examples | Security Risk |
|---|---|---|
| Conversation History | Past messages, queries, responses | Sensitive discussions exposed |
| User Preferences | Settings, learned behaviors | Personal profiling |
| Long-term Memory | Facts learned about user | Comprehensive data collection |
| Credentials | API keys, tokens, passwords | Account compromise |
| Action History | Past commands, transactions | Activity surveillance |
All of this state typically lives in plaintext on infrastructure you don't control.
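To make that concrete, here is an illustrative sketch of the per-user state a persistent agent accumulates. The field names are hypothetical, not a real schema:

```typescript
// Everything below typically sits in plaintext, queryable by anyone
// with database access. Field names are illustrative only.
interface AgentUserState {
  conversationHistory: { role: string; content: string; at: Date }[];
  preferences: Record<string, string>;   // learned settings and behaviors
  longTermMemory: string[];              // facts learned about the user
  credentials: Record<string, string>;   // API keys, OAuth tokens, passwords
  actionHistory: { command: string; at: Date }[];
}
```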
## How Agent Memory Works Today

```
┌─────────────────────────────────────────────────────┐
│               AI Agent Infrastructure               │
├─────────────────────────────────────────────────────┤
│                                                     │
│   Vector Database          Redis/Cache              │
│   ┌───────────────┐        ┌───────────────┐        │
│   │ User memories │        │ Session state │        │
│   │  (plaintext)  │        │  (plaintext)  │        │
│   └───────────────┘        └───────────────┘        │
│                                                     │
│   PostgreSQL               Object Storage           │
│   ┌───────────────┐        ┌───────────────┐        │
│   │ Conversation  │        │ User files    │        │
│   │ history       │        │ and uploads   │        │
│   │  (plaintext)  │        │  (plaintext)  │        │
│   └───────────────┘        └───────────────┘        │
│                                                     │
│   All accessible to: DBAs, ops, attackers           │
└─────────────────────────────────────────────────────┘
```
## Who Can Access Your Agent's Memory?
- Database administrators — Full read access to all user data
- Infrastructure operators — Can access logs, backups, snapshots
- Cloud provider employees — Potential access to underlying storage
- Attackers — Anyone who compromises any of the above
## Encryption at Rest Isn't Enough
You might think: "My database has encryption at rest enabled." But encryption at rest only protects against physical disk theft. The encryption keys are managed by the infrastructure provider, and the database decrypts data transparently for any authenticated query.
```sql
-- This works fine even with "encryption at rest"
SELECT * FROM user_memories WHERE user_id = 'target_user';
-- Returns: plaintext memories
```
## The Confidential Memory Architecture
CIFER implements a fundamentally different approach: client-side encryption with hardware-enforced key management.
```
┌─────────────────────────────────────────────────────────────┐
│                     CIFER Architecture                      │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│   Vector Database          Redis/Cache                      │
│   ┌───────────────┐        ┌───────────────┐                │
│   │ User memories │        │ Session state │                │
│   │  (ENCRYPTED)  │        │  (ENCRYPTED)  │                │
│   │ ████████████  │        │ ████████████  │                │
│   └───────────────┘        └───────────────┘                │
│                                                             │
│   ┌─────────────────────────────────────────────────────┐   │
│   │          TEE Enclave (Hardware Isolated)            │   │
│   │   ┌─────────────────────────────────────────────┐   │   │
│   │   │  Decryption only happens here               │   │   │
│   │   │  Keys never leave the enclave               │   │   │
│   │   │  AI processing happens on decrypted data    │   │   │
│   │   │  Re-encrypted before storage                │   │   │
│   │   └─────────────────────────────────────────────┘   │   │
│   └─────────────────────────────────────────────────────┘   │
│                                                             │
│   DBAs see: encrypted blobs. Ops see: encrypted blobs.      │
└─────────────────────────────────────────────────────────────┘
```
### Key Properties
- Data encrypted before leaving user's session
- Decryption keys exist only inside TEE
- AI processes decrypted data inside enclave
- Results encrypted before storage or transmission
- No plaintext ever touches persistent storage
## Implementation Patterns

### Pattern 1: Encrypted Conversation Memory
```typescript
import { CIFER } from '@cifer/sdk';

// Assumed shapes: `Message` is your chat message type and `db` is a
// Prisma-style client with a `conversations` table.
interface Message {
  role: 'user' | 'assistant' | 'system';
  content: string;
}

class SecureAgentMemory {
  private cifer: CIFER;

  constructor(private agentId: string, private db: any) {
    this.cifer = new CIFER({ appId: agentId });
  }

  async storeConversation(userId: string, messages: Message[]) {
    // Encrypt the conversation for this specific user
    const encrypted = await this.cifer.encrypt({
      data: JSON.stringify(messages),
      policy: {
        allowedUsers: [userId],
        allowedAgents: [this.agentId],
        // Conversation expires after 30 days
        expiresAt: Date.now() + 30 * 24 * 60 * 60 * 1000
      }
    });

    // Store the encrypted blob - DB admins see nothing useful
    await this.db.conversations.upsert({
      where: { userId },
      data: {
        userId,
        encryptedMessages: encrypted.ciphertext,
        // Metadata can stay unencrypted to support queries
        messageCount: messages.length,
        lastUpdated: new Date()
      }
    });
  }

  async retrieveConversation(userId: string): Promise<Message[]> {
    const record = await this.db.conversations.findUnique({
      where: { userId }
    });
    if (!record) return [];

    // Decryption happens inside the TEE
    const decrypted = await this.cifer.decrypt({
      ciphertext: record.encryptedMessages
    });
    return JSON.parse(decrypted.plaintext);
  }
}
```
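Wiring this up might look like the following; `prisma` here is a stand-in for whatever database client your application already uses:

```typescript
// Hypothetical call site for the class above
const memory = new SecureAgentMemory('assistant-v1', prisma);

await memory.storeConversation('user-123', [
  { role: 'user', content: 'Remind me about my doctor appointment.' },
  { role: 'assistant', content: 'Done. I will remind you the day before.' }
]);

// In a later session, plaintext is reconstructed only inside the TEE
const history = await memory.retrieveConversation('user-123');
```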
### Pattern 2: Encrypted Agent Credentials
AI agents often need credentials to act on behalf of users—API keys, OAuth tokens, wallet keys. These are the most sensitive data an agent holds.
```typescript
// Same assumptions as Pattern 1: a CIFER client, the agent's ID,
// and a Prisma-style `db` with a `credentials` table.
class SecureCredentialStore {
  constructor(
    private cifer: CIFER,
    private agentId: string,
    private db: any
  ) {}

  async storeCredential(
    userId: string,
    service: string,
    credential: string
  ) {
    const encrypted = await this.cifer.encrypt({
      data: credential,
      policy: {
        allowedUsers: [userId],
        allowedAgents: [this.agentId],
        // Credentials require explicit user presence for decryption
        requireUserAuth: true,
        // Can only be used for specific actions
        allowedActions: ['api_call', 'transaction']
      }
    });

    await this.db.credentials.create({
      data: {
        userId,
        service,
        encryptedCredential: encrypted.ciphertext
        // Never store the actual credential
      }
    });
  }

  async useCredential(
    userId: string,
    service: string,
    action: string
  ): Promise<string> {
    const record = await this.db.credentials.findFirst({
      where: { userId, service }
    });
    if (!record) {
      throw new Error(`No ${service} credential stored for this user`);
    }

    // Decryption requires user authentication,
    // and the action must be in the allowed list
    const decrypted = await this.cifer.decrypt({
      ciphertext: record.encryptedCredential,
      context: { action }
    });
    return decrypted.plaintext;
  }
}
```
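At the call site, the agent only ever holds a decrypted credential for the moment of a permitted action. A hypothetical example, assuming a `credStore` instance of the class above:

```typescript
// Store an API token once, under the user's decryption policy
await credStore.storeCredential('user-123', 'github', 'ghp_example_token');

// Succeeds only with user auth and an action on the allowed list
const token = await credStore.useCredential('user-123', 'github', 'api_call');

// An action outside allowedActions is refused at decryption time
await credStore.useCredential('user-123', 'github', 'delete_repo'); // rejected
```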
### Pattern 3: Encrypted Vector Memories
For RAG (Retrieval-Augmented Generation) systems, you need to search over memories while keeping them encrypted.
```typescript
import { randomUUID } from 'crypto';

// Assumptions: a CIFER client and a Pinecone-style vector DB client
// exposing `upsert` and `query`.
class SecureVectorMemory {
  constructor(
    private cifer: CIFER,
    private vectorDb: any
  ) {}

  async addMemory(userId: string, text: string) {
    // Generate the embedding inside the secure enclave so the
    // plaintext never leaves the TEE
    const embedding = await this.cifer.generateEmbedding({
      text,
      userId
    });

    // Encrypt the actual text content
    const encryptedText = await this.cifer.encrypt({
      data: text,
      policy: { allowedUsers: [userId] }
    });

    // Store the encrypted text alongside a searchable embedding
    await this.vectorDb.upsert({
      id: randomUUID(),
      embedding: embedding.vector, // Searchable
      metadata: {
        userId,
        encryptedContent: encryptedText.ciphertext // Encrypted
      }
    });
  }

  async searchMemories(userId: string, query: string): Promise<string[]> {
    const queryEmbedding = await this.cifer.generateEmbedding({
      text: query,
      userId
    });

    // Search returns encrypted results
    const results = await this.vectorDb.query({
      embedding: queryEmbedding.vector,
      filter: { userId },
      topK: 5
    });

    // Decrypt only what's needed, inside the TEE
    return Promise.all(
      results.map((r: any) =>
        this.cifer
          .decrypt({ ciphertext: r.metadata.encryptedContent })
          .then((d: { plaintext: string }) => d.plaintext)
      )
    );
  }
}
```
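End to end, the vector database only ever holds embeddings and ciphertext. A hypothetical session, given a `memories` instance of the class above:

```typescript
await memories.addMemory('user-123', 'Prefers vegetarian restaurants');
await memories.addMemory('user-123', 'Allergic to peanuts');

// Similarity search runs over embeddings; matched text is decrypted
// inside the TEE only after retrieval
const relevant = await memories.searchMemories('user-123', 'plan a dinner');
```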
## Security Properties

### What's Protected
| Threat | Protection |
|---|---|
| Database breach | Attacker gets encrypted blobs |
| Insider threat (DBA) | DBA cannot decrypt user data |
| Cloud provider access | Provider sees only ciphertext |
| Backup theft | Backups contain only encrypted data |
| Log exposure | No plaintext in logs |
| Memory dump attack | TEE protects runtime memory |
### What's Not Protected
| Threat | Mitigation |
|---|---|
| Compromised user device | Client-side security measures |
| User shares their own data | User education, audit logs |
| TEE vulnerabilities | Multiple enclave providers, attestation |
| Side-channel attacks | TEE hardening, noise injection |
## Migration Path
Already have an AI agent with plaintext memory? Here's how to migrate:
### Phase 1: New Data Encrypted
- Start encrypting all new conversations and memories
- Existing data remains in legacy format
- Dual-read from both encrypted and legacy stores, as sketched below
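A minimal sketch of that dual-read step, assuming the `SecureAgentMemory` class from Pattern 1 and a hypothetical `legacyDb` holding the old plaintext rows:

```typescript
// Dual-read: prefer the encrypted store, fall back to legacy plaintext.
// `memory` is a SecureAgentMemory instance; `legacyDb` is hypothetical.
async function readConversation(userId: string): Promise<Message[]> {
  const migrated = await memory.retrieveConversation(userId);
  if (migrated.length > 0) return migrated;

  // Fallback: this user's history hasn't been migrated yet
  const legacy = await legacyDb.conversations.findUnique({ where: { userId } });
  if (!legacy) return [];

  // Opportunistic migration: re-store encrypted on first read
  const messages: Message[] = JSON.parse(legacy.messages);
  await memory.storeConversation(userId, messages);
  return messages;
}
```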
### Phase 2: Background Migration
- Gradually encrypt historical data
- Use off-peak processing
- Verify encryption before deleting plaintext
### Phase 3: Legacy Removal
- Remove all plaintext data
- Disable legacy read paths
- Audit for any remaining unencrypted data
## Performance Considerations
Encryption adds overhead, but CIFER optimizes for AI workloads:
| Operation | Overhead | Mitigation |
|---|---|---|
| Encrypt | ~5ms | Batch operations |
| Decrypt | ~5ms | Cache decrypted data in TEE |
| Vector search | ~10% | Optimized secure embeddings |
| Storage | +30% | Compression before encryption |
For most AI agents, the latency is imperceptible—users won't notice 5ms when they're waiting for an LLM to generate a response.
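One way to claw back the storage overhead is to compress before encrypting; ciphertext is effectively incompressible, so the order matters. A minimal sketch using Node's built-in zlib, assuming the `cifer.encrypt`/`decrypt` calls shown earlier:

```typescript
import { gzipSync, gunzipSync } from 'zlib';
import { CIFER } from '@cifer/sdk';

// Compress first, then encrypt: recovers most of the +30% storage cost
async function encryptCompressed(cifer: CIFER, userId: string, data: string) {
  const compressed = gzipSync(Buffer.from(data, 'utf8')).toString('base64');
  return cifer.encrypt({
    data: compressed,
    policy: { allowedUsers: [userId] }
  });
}

async function decryptCompressed(cifer: CIFER, ciphertext: string) {
  const { plaintext } = await cifer.decrypt({ ciphertext });
  return gunzipSync(Buffer.from(plaintext, 'base64')).toString('utf8');
}
```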
## Conclusion
AI agent memory is a ticking time bomb. Every conversation, every preference, every credential—it's all sitting in databases that dozens of people can access. It's not a matter of if this data will be breached, but when.
Confidential computing provides a path forward: hardware-enforced encryption that protects data even from the people who operate the infrastructure. Your AI agents can have memory without creating a surveillance database.
Ready to secure your AI agent's memory? Contact us to learn how CIFER can help you implement confidential computing.
See also: *Why AI Agents Need Confidential Computing* and *Defending Against Prompt Injection Data Exfiltration*.