## The pattern
The whole point of doing the auth work in step 6 was to get to this line:
```ts
const supabase = supabaseFor(user);
```

After which every database call is automatically scoped to what the user can see. No `where user_id = ?` in app code, no risk of forgetting one and leaking data.
How that works:
- `supabase-js` sends an `Authorization: Bearer <jwt>` header with every request.
- Postgres receives the JWT and parses it into `auth.uid()` and friends.
- Our RLS policies (step 3) reference `auth.uid()` to decide row visibility.
- The database returns the right slice; the application is none the wiser.
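As a toy model of that chain (plain TypeScript, no real database — just an illustration of the property, not how Postgres implements policies), the store itself applies the policy, so the caller never repeats a `where user_id = ?`:

```typescript
// Toy model of RLS: the "database" filters rows with a policy
// predicate before returning them, so even a query with no filter
// in app code only sees the caller's slice.
type Row = { id: number; user_id: string; body: string };

const table: Row[] = [
  { id: 1, user_id: "alice", body: "hers" },
  { id: 2, user_id: "bob", body: "his" },
];

// Plays the role of `using (auth.uid() = user_id)` in a real policy.
const policy = (uid: string) => (row: Row) => row.user_id === uid;

// A deliberately "leaky" select-everything query: the store still
// only returns rows the policy allows for this caller.
function selectAll(uid: string): Row[] {
  return table.filter(policy(uid));
}
```

Even the query that asks for everything gets only the caller's rows, which is exactly the guarantee the real RLS policies give every Supabase query below.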
## 1. Create the client factory

Make `supabase/functions/mcp/supabase.ts`:
```ts
import { createClient, SupabaseClient } from "@supabase/supabase-js";
import type { AuthedUser } from "./auth.ts";

const SUPABASE_URL = Deno.env.get("SUPABASE_URL")!;
const SUPABASE_ANON_KEY = Deno.env.get("SUPABASE_ANON_KEY")!;

/**
 * Build a Supabase client scoped to the given user. Every query through
 * this client runs as that user, so RLS policies enforce access control
 * without the application having to repeat `where user_id = ?` everywhere.
 *
 * IMPORTANT: never share a client across requests — it carries the user's
 * token. One client per (request, user).
 */
export function supabaseFor(user: AuthedUser): SupabaseClient {
  return createClient(SUPABASE_URL, SUPABASE_ANON_KEY, {
    global: {
      headers: {
        Authorization: `Bearer ${user.raw}`,
      },
    },
    auth: {
      persistSession: false,
      autoRefreshToken: false,
    },
  });
}
```

A few specifics:
- We pass the anon key as the `apikey`. The anon key is fine because it has no special database role; `authenticated` (driven by the JWT) is what does the work.
- `persistSession: false` and `autoRefreshToken: false` keep the client stateless. Each request rebuilds its own client.
- The user's JWT rides in the `Authorization` header on every request the client makes to Supabase. Postgres reads `auth.uid()` from that JWT and enforces RLS.
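A minimal sketch of the per-user header pair the scoped client ends up sending (the `headersFor` helper and the placeholder key are hypothetical, for illustration only; supabase-js assembles these headers itself):

```typescript
// Sketch: each scoped client sends the project's anon key as `apikey`
// plus the caller's JWT in `Authorization`. Two users therefore need
// two clients; sharing one would replay the first user's token.
type AuthedUser = { sub: string; raw: string };

const ANON_KEY = "public-anon-key"; // placeholder, not a real key

function headersFor(user: AuthedUser): Record<string, string> {
  return {
    apikey: ANON_KEY,
    Authorization: `Bearer ${user.raw}`,
  };
}
```

The `apikey` is the same for everyone; only the `Authorization` header differs per caller, and that is the part RLS keys off.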
## 2. Smoke test the RLS chain end-to-end

Add a quick test route to `supabase/functions/mcp/index.ts`:
```ts
import { supabaseFor } from "./supabase.ts";

// TEMPORARY — for verifying RLS works. Remove before deploying.
app.get("/_debug/workspaces", requireAuth, async (c) => {
  const user = c.get("user");
  const supabase = supabaseFor(user);

  const { data, error } = await supabase
    .from("workspaces")
    .select("id, name, created_at")
    .order("created_at", { ascending: false });

  if (error) return c.json({ error: error.message }, 500);
  return c.json({ workspaces: data, callerSub: user.sub });
});
```

Then:
```bash
TOKEN=$(curl -s -X POST \
  "https://<ref>.supabase.co/auth/v1/token?grant_type=password" \
  -H "apikey: <your-anon-key>" \
  -H "Content-Type: application/json" \
  -d '{"email":"you@example.com","password":"your-password"}' \
  | jq -r .access_token)

curl -s -H "Authorization: Bearer $TOKEN" \
  http://127.0.0.1:54321/functions/v1/mcp/_debug/workspaces | jq
```

You should see:
```json
{
  "workspaces": [
    {
      "id": "...",
      "name": "you@example.com",
      "created_at": "..."
    }
  ],
  "callerSub": "..."
}
```

If you signed up two different accounts and got a different token for each, hitting the same endpoint with each token should return only that account's workspaces. That's RLS doing its job.
## 3. Confirm a "leaky query" doesn't leak

To convince yourself nothing is secretly running with elevated privileges, try selecting all of `workspace_members` (which would include other users' rows if RLS weren't applied):
```ts
app.get("/_debug/everyone", requireAuth, async (c) => {
  const supabase = supabaseFor(c.get("user"));
  const { data, error } = await supabase
    .from("workspace_members")
    .select("workspace_id, user_id, role");
  return c.json({ members: data, error: error?.message });
});
```

```bash
curl -s -H "Authorization: Bearer $TOKEN" \
  http://127.0.0.1:54321/functions/v1/mcp/_debug/everyone | jq
```

You should only see memberships of workspaces the caller belongs to, not the universe. The RLS policy `members_select` we wrote in step 3 (`using ( public.is_workspace_member(workspace_id) )`) is what enforces this.
## 4. The factory in context

When we wire up MCP tools in step 8, every tool handler will look like:
```ts
async (input, ctx) => {
  const user = ctx.user; // AuthedUser, attached by requireAuth
  const supabase = supabaseFor(user);

  const { data, error } = await supabase
    .from("snippets")
    .select(...)
    .eq(...);

  if (error) throw new Error(error.message);
  return { result: data };
}
```

The body of the handler is just "do the query, return the result." No `where user_id = $1`. No "is this user a member of that workspace" check. The RLS policies handle it, and a mistake in a tool implementation can't escalate beyond what the user already has access to.
## 5. Strip the debug routes

Before moving on:

```ts
// Remove these — they were only for verification
app.get("/_debug/workspaces", ...)
app.get("/_debug/everyone", ...)
```

(Or hide them behind `if (Deno.env.get("ENV") === "dev")` if you'd like them around for debugging.)
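If you keep the routes behind an env gate, one way to sketch it (the `ENV` variable name, its `"dev"` value, and the `debugRoutesEnabled` helper are assumptions for illustration, not part of the setup so far):

```typescript
// Hypothetical gate: register debug routes only when an ENV flag says
// we are in dev. Takes an env reader so it works outside Deno too.
function debugRoutesEnabled(
  getEnv: (key: string) => string | undefined,
): boolean {
  return getEnv("ENV") === "dev";
}

// In the edge function you would pass Deno's env reader:
//   if (debugRoutesEnabled((k) => Deno.env.get(k))) {
//     app.get("/_debug/workspaces", requireAuth, ...);
//     app.get("/_debug/everyone", requireAuth, ...);
//   }
```

Unset or any other value (e.g. `"production"`) leaves the routes unregistered, so nothing debug-shaped ships by default.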
## 6. Quick mental model recap

```text
Claude
  │ Bearer <jwt>
  ▼
Hono middleware: requireAuth
  │ verifies signature, audience, exp; attaches user
  ▼
Tool handler
  │ supabase = supabaseFor(user)
  ▼
PostgREST (via supabase-js)
  │ forwards Authorization header to Postgres
  ▼
Postgres
  │ auth.uid() == user.sub
  │ RLS policies fire on every row read/write
  ▼
Filtered rows ── back up the stack ──> Claude
```

The per-tool auth work is now zero lines of code. We can spend the next four steps just writing the actual MCP tool surface.

Step 8 wires in the MCP SDK properly and implements the first three tools: `list_snippets`, `get_snippet`, `save_snippet`.