Memory
You might still wonder what differentiates Clatri from ChatGPT, and whether ChatGPT is also an agent. Well, ChatGPT is, in fact, an agent. So is Claude.ai. So is Gemini App. But GPT is not an agent, Claude is not an agent, and Gemini is not an agent.
What does this mean? The companies that create large language models — OpenAI, Anthropic, Google — offer them for consumption via API. A developer can call GPT, Claude, or Gemini from their code and get text responses. That's a model, not an agent. But those same companies also build products for end consumers — ChatGPT, Claude.ai, Gemini App — and they've been adding agentic capabilities to those products: they can generate Python code and execute it, create Excel files, search the internet, generate images, and connect to Google services. As time goes on, these chatbots acquire more tools and behave more like agents.
However, none of them can do what Clatri does: store your information in a structured way. And that difference changes everything.
Semantic memory and structured memory
No one can say ChatGPT doesn't have memory. It knows things about you and uses them to influence how it responds. But the type of memory it uses is semantic memory. What does this mean?
When you send a message, the model converts your text into a numerical representation called an embedding — a vector of hundreds or thousands of dimensions that captures the meaning of what you said, not the exact words. "I have a terrible headache" and "my head is killing me" produce similar embeddings, because the meaning is similar even though the words are different.
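A toy sketch makes the idea concrete. The vectors below are made up and only 4-dimensional (real embeddings come from a model and have hundreds or thousands of dimensions), but cosine similarity is how "closeness of meaning" is typically measured:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Invented 4-dimensional "embeddings" for illustration only.
headache_1 = [0.9, 0.1, 0.3, 0.0]  # "I have a terrible headache"
headache_2 = [0.8, 0.2, 0.4, 0.1]  # "my head is killing me"
recipe     = [0.0, 0.9, 0.1, 0.8]  # an unrelated cooking message

print(cosine_similarity(headache_1, headache_2))  # high: similar meaning
print(cosine_similarity(headache_1, recipe))      # low: unrelated meaning
```

The two headache sentences score close to 1 despite sharing no words; the cooking vector scores near 0.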
With embeddings, a system can search your conversation history and retrieve relevant fragments for the current question. This is known as RAG (Retrieval-Augmented Generation): before responding, the model retrieves semantically related information and injects it into its context. It's powerful for remembering past conversations, expressed preferences, or documents you uploaded.
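A minimal sketch of the retrieve-then-inject loop. To stay self-contained it uses word overlap as a crude stand-in for embedding similarity, and the conversation fragments are invented; a real RAG system would compare embedding vectors against a vector store:

```python
import re

def overlap_score(query, fragment):
    """Crude stand-in for embedding similarity: word overlap.
    A real RAG system would compare embedding vectors instead."""
    words = lambda s: set(re.sub(r"[^\w\s]", "", s.lower()).split())
    q, f = words(query), words(fragment)
    return len(q & f) / len(q | f)

# Hypothetical conversation fragments, invented for illustration.
history = [
    "dinner at a restaurant cost 45 euros in January",
    "remind me to renew my passport in March",
    "I prefer short, direct answers",
]

def retrieve(query, fragments, k=2):
    """Return the k fragments most similar to the query."""
    return sorted(fragments, key=lambda f: overlap_score(query, f),
                  reverse=True)[:k]

query = "restaurant spending in January"
context = retrieve(query, history)

# The retrieved fragments are injected into the model's context.
prompt = "Context:\n" + "\n".join(context) + f"\n\nQuestion: {query}"
print(prompt)
```

The restaurant fragment ranks first, so it is what gets injected before the model answers.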
But it has a fundamental limit: it's not precise. Semantic memory retrieves what resembles your question, not what is your question. If you ask ChatGPT "how much did I spend on restaurants in January?", it might remember that in some conversation you mentioned a restaurant expense — but it can't sum your transactions, because it doesn't have them stored in a structured way. There's no expense table with date, amount, category, and account. There's only remembered text, approximate, incomplete.
Clatri uses structured memory. Every piece of data you register — an expense, income, medication, task, appointment — is stored in a PostgreSQL database with defined tables, columns, and relationships. When you ask "how much did I spend on restaurants in January?", the agent doesn't search past conversations: it executes a SQL query on your real transactions, filtered by category, date, and entity. The result is exact, not approximate.
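The difference can be sketched in a few lines. The schema and rows below are invented, and SQLite (from Python's standard library) stands in for Clatri's PostgreSQL, but the point carries over: the answer comes from a query over stored transactions, not from remembered text:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE transactions (
        id INTEGER PRIMARY KEY,
        date TEXT, amount REAL, category TEXT, account TEXT
    )
""")
conn.executemany(
    "INSERT INTO transactions (date, amount, category, account)"
    " VALUES (?, ?, ?, ?)",
    [
        ("2025-01-05", 42.50, "restaurants", "checking"),
        ("2025-01-18", 27.00, "restaurants", "credit card"),
        ("2025-01-20", 60.00, "groceries",   "checking"),
        ("2025-02-02", 35.00, "restaurants", "checking"),
    ],
)

# "How much did I spend on restaurants in January?"
total = conn.execute(
    """
    SELECT SUM(amount) FROM transactions
    WHERE category = 'restaurants'
      AND date >= '2025-01-01' AND date < '2025-02-01'
    """
).fetchone()[0]
print(total)  # 69.5 -- exact, not approximate
```

Only the two January restaurant rows are summed; the February one and the groceries row are excluded by the filters.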
Clatri currently operates with over 60 structured tables across 8 PostgreSQL schemas. Each table has relationships, constraints, and row-level security. When the agent acts, it operates on real data with accounting precision.
Structured memory solves the precision problem, but it doesn't cover everything. Some things don't fit in a table: the context of a past conversation, a preference you mentioned in passing, an instruction you gave weeks ago. To address this, Clatri's roadmap includes incorporating semantic memory with RAG — a system that stores embeddings of your conversations and documents, and retrieves them when relevant. The combination of both types of memory — structured for precise data, semantic for context and preferences — is what will allow Clatri to remember not just your numbers, but also your intentions.
A single chat
In ChatGPT, Claude.ai, or Gemini App, you create separate chat sessions: one to ask for help with an email, another to talk about cooking, another for code. Each session is an independent thread with its own context, and if you want to revisit something you said in another conversation, you have to find it yourself.
Clatri works differently. Since its purpose is to be your agent for everything — finances, health, tasks, projects — it adopts a single continuous chat per entity. There are no sessions to open and close. You talk to it and the agent always has the last 6 messages from your history in context, giving it immediate continuity without you having to repeat recent context. It doesn't need separate sessions because its tools operate on your database: the real context isn't in the conversation — it's in your data.
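The sliding-window behavior can be sketched with a fixed-length queue (a minimal illustration; Clatri's actual implementation isn't described here):

```python
from collections import deque

class Chat:
    """Single continuous chat: the full log is kept, but only the
    most recent messages travel in the model's context window."""
    def __init__(self, window=6):
        self.history = []                    # full log, persisted
        self.window = deque(maxlen=window)   # what the model sees

    def add(self, role, text):
        msg = (role, text)
        self.history.append(msg)
        self.window.append(msg)  # oldest message drops out at maxlen

chat = Chat()
for i in range(10):
    chat.add("user", f"message {i}")

print(len(chat.history))             # 10: nothing is lost
print([t for _, t in chat.window])   # only the last 6 reach the model
```

Older messages fall out of the window but not out of the log; precise facts live in the database anyway, so the window only needs to carry recent conversational continuity.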
Processed and stored documents
When you upload a file to Clatri, it doesn't remain as an inert attachment. Each document goes through processing that makes it queryable in the future:
- Images: the system generates a text description of their content, which is stored and associated with the message.
- Documents (PDFs, Excel, CSV): they go through OCR processing that extracts their content. The processed text is stored with a unique identifier, and the agent can query it at any time using its tools — without you having to upload it again.
This means that if you uploaded a bank statement PDF three months ago, the agent can retrieve its content and operate on it — as long as you give it clear instructions about which document you're referring to.
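A minimal sketch of that idea, with invented names (Clatri's real pipeline and schema are not public): each processed document gets a unique identifier, and its extracted text stays queryable afterward:

```python
import uuid

class DocumentStore:
    """Sketch of a store for processed documents. In Clatri, the
    extracted text would come from OCR or image description."""
    def __init__(self):
        self._docs = {}

    def ingest(self, filename, extracted_text):
        doc_id = str(uuid.uuid4())  # unique identifier per document
        self._docs[doc_id] = {"filename": filename,
                              "text": extracted_text}
        return doc_id

    def get(self, doc_id):
        return self._docs[doc_id]

    def search(self, keyword):
        """Crude keyword lookup, so the agent can find
        'that bank statement from three months ago'."""
        kw = keyword.lower()
        return [(doc_id, doc["filename"])
                for doc_id, doc in self._docs.items()
                if kw in doc["filename"].lower()
                or kw in doc["text"].lower()]

store = DocumentStore()
doc_id = store.ingest("statement_jan.pdf",
                      "Opening balance 1,200.00 ...")
print(store.search("statement"))   # found by name, months later
print(store.get(doc_id)["text"])   # full extracted text, no re-upload
```

The clearer your reference to a document (its name, its date), the easier it is for the agent to pick the right identifier to query.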