May 13, 2025
Posted by Jacky Liang
The rush to integrate large language models (LLMs) into production apps has exposed a common failure mode: without proper authorization in place, they can easily expose sensitive data to the wrong users. Combine that with complex infrastructure (vector databases, sync pipelines, separate stores for embeddings and metadata), and you’re shipping a fragile system that puts user data at risk.
At Timescale and Oso, we think there’s a better way.
In this webinar, we show how you can build a secure, scalable AI chatbot using Postgres—and only Postgres—by leveraging Timescale’s pgai library and Oso’s authorization platform as a service.
Here are the webinar highlights, summarized for you in chapters for easy reference.
(To deploy our sample app, a secure, authorized chatbot built using Oso and pgai, see this open-source code.)
[08:30–11:50]
Why do simple chatbots break in production? Demo chatbots are easy: embed your docs, slap on an OpenAI API key, and you’re done.
But in a real business environment, Bob (the employee) should never see Alice’s harsh performance review feedback. Only Alice, her manager, and HR should. Sales shouldn’t see engineering tickets.
Without authorization boundaries, your chatbot becomes a data leak waiting to happen.
Many demos fall short because they treat authorization and data consistency as afterthoughts.
The fix? Build with authorization and data consistency as first principles.
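To make the failure mode concrete, here is a minimal sketch of authorization-aware retrieval. The helper names (`fetch_chunks`, `allowed_ids`) are illustrative stand-ins, not pgai or Oso APIs:

```python
# Illustrative only: stand-ins for a vector search and an authz check.

def fetch_chunks(query: str) -> list[dict]:
    # Stand-in for a similarity search; returns candidate chunks with owners.
    return [
        {"id": 1, "owner": "alice", "text": "Alice's performance review"},
        {"id": 2, "owner": "bob", "text": "Bob's onboarding notes"},
    ]

def allowed_ids(user: str, chunks: list[dict]) -> set[int]:
    # Stand-in for an authorization check. Here a user may only see chunks
    # they own; a real policy would also grant managers and HR access.
    return {c["id"] for c in chunks if c["owner"] == user}

def retrieve(user: str, query: str) -> list[dict]:
    # Filter every retrieved chunk through the authz check before it can
    # reach the LLM's context window.
    chunks = fetch_chunks(query)
    ok = allowed_ids(user, chunks)
    return [c for c in chunks if c["id"] in ok]
```

With this filter in place, `retrieve("bob", "performance review")` returns only Bob's own chunk; without it, Alice's review would land in Bob's prompt.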
[13:34–17:47]
We introduced an end-to-end reference stack that solves both the data-synchronization and the authorization-complexity problem: Postgres as the single data store, Timescale’s pgai Vectorizer to keep embeddings in sync, and Oso for authorization.
The result: you get a secure, performant, and authorized chat system with zero duplicated data.
[14:33–20:45]
Instead of bolting a vector database on top of your existing Postgres database, pgai Vectorizer keeps your embeddings automatically synchronized with your source data in Postgres.
SELECT ai.create_vectorizer(
    'blog'::regclass,
    loading => ai.loading_column(column_name => 'content'),
    embedding => ai.embedding_openai(model => 'text-embedding-3-small', dimensions => 768),
    destination => ai.destination_table('blog_embeddings')
);
Run your vectorizer worker:
pgai vectorizer worker -d postgresql://...
No extra queues, pipelines, or lambdas needed. Just Python and Postgres.
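The vectorizer’s destination table can then be queried with pgvector’s `<=>` cosine-distance operator (e.g. `SELECT chunk FROM blog_embeddings ORDER BY embedding <=> $1 LIMIT 5`, assuming `chunk` and `embedding` column names). A small pure-Python sketch of the distance that operator computes:

```python
import math

def cosine_distance(a: list[float], b: list[float]) -> float:
    # pgvector's <=> operator computes cosine distance: 1 - cos(a, b).
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

# Toy 2-d "embeddings" standing in for the 768-d vectors above.
docs = {"postgres": [1.0, 0.0], "cooking": [0.0, 1.0]}
query = [0.9, 0.1]

# Ranking by ascending distance is what ORDER BY embedding <=> $1 does.
best = min(docs, key=lambda k: cosine_distance(query, docs[k]))
```

Here `best` is `"postgres"`: the document whose embedding points in nearly the same direction as the query.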
[21:43–28:14]
Many apps rely on Role-Based Access Control (RBAC). But real-world permissions often depend on relationships: who is on which team, who manages whom, which folder a document lives in.
Oso lets you model this in code:
resource Folder {
    roles = ["viewer"];
    permissions = ["view"];
    relations = { team: Team };

    "viewer" if "member" on "team";
    "viewer" if global "hr";
    "viewer" if is_public(resource);
    "view" if "viewer";
}
Oso also incorporates your Postgres data using native SQL, so you don’t need to sync users, roles, or groups into a second system.
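To show what those Polar rules mean, here is a hand-rolled Python version of the same logic. This is purely illustrative: in practice the check goes through Oso’s SDK, and the team/role data below is made up:

```python
# Toy data standing in for rows Oso would read from Postgres.
TEAMS = {"eng": {"bob"}, "people-ops": {"carol"}}
GLOBAL_ROLES = {"carol": {"hr"}}
FOLDERS = {
    "specs": {"team": "eng", "public": False},
    "handbook": {"team": "people-ops", "public": True},
}

def can_view(user: str, folder: str) -> bool:
    f = FOLDERS[folder]
    if f["public"]:                            # "viewer" if is_public(resource)
        return True
    if user in TEAMS[f["team"]]:               # "viewer" if "member" on "team"
        return True
    if "hr" in GLOBAL_ROLES.get(user, set()):  # "viewer" if global "hr"
        return True
    return False                               # "view" if "viewer"
```

So Bob (on the eng team) can view `specs`, Carol can view it via her global `hr` role, Alice cannot, and everyone can view the public `handbook`.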
[30:44–37:32]
Here’s how the architecture works: the app embeds the user’s question, retrieves matching chunks from the pgai-managed embeddings table in Postgres, and filters them through an Oso authorization check before any content reaches the LLM.
The result: the same chatbot provides personalized, secure answers based on who’s asking—without leaking data or requiring redundant systems.
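A complementary pattern is to filter before retrieval rather than after: fetch the IDs the user is authorized to see, then push them into the similarity query so unauthorized rows never leave Postgres. The sketch below is hypothetical; `authorized_folder_ids` stands in for an Oso list-filtering call, and the `folder_id` column is assumed:

```python
def authorized_folder_ids(user: str) -> list[int]:
    # Stand-in for asking Oso which folders this user may view.
    grants = {"alice": [1, 3], "bob": [2]}
    return grants.get(user, [])

def retrieval_query(user: str) -> tuple[str, list[int]]:
    # Build a similarity query scoped to authorized folders only,
    # so the WHERE clause enforces authorization inside Postgres.
    ids = authorized_folder_ids(user)
    sql = (
        "SELECT chunk FROM blog_embeddings "
        "WHERE folder_id = ANY(%s) "
        "ORDER BY embedding <=> %s LIMIT 5"
    )
    return sql, ids
```

Pre-filtering like this keeps the top-5 result set meaningful: the database ranks only chunks the user is allowed to see, instead of ranking everything and discarding most of it afterward.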
[29:01–48:00]
We’ve open-sourced the reference app and walkthrough.
If you’re building AI agents, chat interfaces, or internal copilots—don’t wait to layer in security and data correctness.
Your users will thank you. Your auditors will too.