MCP Server
Blobify exposes a Model Context Protocol (MCP) server so AI assistants — Claude, ChatGPT, Codex, Cursor — can read and write content in your workspace through their built-in tool surface. No SDK to install, no glue code: connect once with an API key and your AI assistant gets a typed authoring tool set.
What MCP is
MCP is a protocol that lets AI hosts call APIs as tool functions. Each tool has a name, a description, and a JSON schema. The AI decides when to call which tool based on the user's request and the descriptions. Blobify ships an MCP server inside the API that maps directly onto the content lifecycle: discover, read, edit, publish.
Endpoint
```
POST https://api.blobify.io/v1/mcp/{orgId}
Authorization: Bearer {api-key}
```
The transport is Streamable HTTP: single endpoint, plain application/json responses. Stateless: every request opens a fresh session.
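Because the transport is plain JSON over a single endpoint, a tool call is just a JSON-RPC 2.0 POST. A minimal sketch — `buildToolCall` and `callTool` are illustrative helper names, not part of any shipped SDK:

```typescript
// Sketch of one stateless round-trip against the MCP endpoint.
// Assumes Node 18+ (global fetch); buildToolCall/callTool are
// hypothetical helpers, not a Blobify SDK.
interface ToolCall {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: { name: string; arguments: Record<string, unknown> };
}

// Build a JSON-RPC 2.0 "tools/call" request body.
function buildToolCall(name: string, args: Record<string, unknown>, id = 1): ToolCall {
  return { jsonrpc: "2.0", id, method: "tools/call", params: { name, arguments: args } };
}

// POST it with the bearer key; the response is plain application/json,
// so there is no SSE stream to parse.
async function callTool(orgId: string, apiKey: string, name: string, args: Record<string, unknown>) {
  const res = await fetch(`https://api.blobify.io/v1/mcp/${orgId}`, {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Bearer ${apiKey}` },
    body: JSON.stringify(buildToolCall(name, args)),
  });
  return res.json();
}
```

Since every request is its own session, there is no handshake state to carry between calls.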
Connect from your AI host
The setup is the same shape everywhere — paste a URL and an
Authorization header.
Claude Desktop
~/Library/Application Support/Claude/claude_desktop_config.json (macOS)
or the equivalent on your platform:
```json
{
  "mcpServers": {
    "blobify": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote",
        "https://api.blobify.io/v1/mcp/YOUR_ORG_ID",
        "--header",
        "Authorization: Bearer YOUR_API_KEY"
      ]
    }
  }
}
```
mcp-remote is a community-maintained stdio↔HTTP bridge. Save the file, fully quit Claude Desktop (⌘Q), and reopen — blobify should appear in the server list at the bottom of the chat input.
claude.ai (web/iOS) and ChatGPT — OAuth flow
claude.ai (web + mobile) and ChatGPT both require OAuth, not raw bearer keys. We support that:
- In claude.ai → Settings → Connectors → "Add custom connector".
- Enter the URL: `https://api.blobify.io/v1/mcp/YOUR_ORG_ID`.
- Click Connect. You'll be redirected to the Blobify dashboard: log in if you aren't already, pick the workspace you want to share, and click Allow.
- The connector now appears in your chat. The token issued behind the scenes mirrors your role in the chosen workspace.
ChatGPT's flow is identical via Settings → Connectors → "Add custom".
You can revoke access anytime from your account page in the dashboard — each OAuth-issued token shows up alongside your manually created API keys.
Codex CLI
In the MCP servers settings, choose Streamable HTTP as the transport and supply the URL and bearer header.
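If your Codex CLI version is configured through `~/.codex/config.toml` rather than a settings UI, the same mcp-remote bridge shown for Claude Desktop works as a stdio fallback. A sketch — the `[mcp_servers.*]` table name and keys are assumptions that vary across Codex versions, so check your version's MCP documentation:

```toml
# Hypothetical config entry — verify key names against your Codex CLI version.
[mcp_servers.blobify]
command = "npx"
args = [
  "-y",
  "mcp-remote",
  "https://api.blobify.io/v1/mcp/YOUR_ORG_ID",
  "--header",
  "Authorization: Bearer YOUR_API_KEY",
]
```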
Cursor / Zed
Same JSON shape as Claude Desktop, in the IDE's MCP config. These hosts
don't ship a built-in HTTP fetch tool, so we recommend also enabling
@modelcontextprotocol/server-fetch alongside Blobify so the AI can
follow our bucket-direct read URLs when needed.
Authentication
Two ways to authenticate, picked automatically based on the AI host:
Bearer API key — for hosts that accept arbitrary headers (Claude Desktop, Codex CLI, Cursor, Zed). API keys are minted in Settings → Developer → API Keys.
OAuth 2.1 — for hosts whose connector UI requires OAuth (claude.ai web/iOS, ChatGPT web/iOS). The user clicks Allow on a Blobify-hosted consent screen; the access token is issued automatically.
Both modes share the same role + space-scope model:
- Roles: `viewer`, `editor`, `developer`, `admin`
- Space scope: `spaces: ["*"]` for full access, or specific space ids
Every MCP call is attributed to the issuing token in audit logs. Tokens (manually created keys and OAuth-issued ones alike) can be revoked from your account page.
Tool catalog
Thirty-one tools across discovery, reads, writes, lifecycle, schema management, codegen, asset upload, and webhooks. The AI
sees full descriptions automatically via tools/list — this section is
for humans skimming what's available.
Discovery
Call getReadContext once at the start of a session to learn the shape
of the workspace. Use getModelSchema / getBlockSchema when you need
full field details for a single model or block.
| Tool | Purpose |
|---|---|
| getReadContext | Bucket URL, locales, spaces, model + block names, URL templates |
| getModelSchema | Full schema for one model (fields, types, required flags, indexes) |
| getBlockSchema | Full schema for one block, used before constructing block instances |
Read
Use these to find an entry by name and read its data, especially in sandboxed AI environments where direct bucket fetching isn't available.
| Tool | Purpose |
|---|---|
| findContent | List/search entries by displayField/slug. Returns {id, title, slug, publishedLocales, state} |
| getContent | Full content document by id, draft or published state |
| findAssets | List/search assets with filters: query, category, extensions, folder |
Write
| Tool | Purpose |
|---|---|
| createDraft | Create a new draft. Returns the generated id |
| saveDraft | Replace a draft's fields (full replace — fields you omit are removed) |
| patchFields | Apply RFC 6902 JSON Patch ops. Use this for surgical edits. Pure test ops return action: "verified" (no write); patches that don't actually change anything return action: "unchanged" |
| appendBlock | Append one block to a blocks field. Generates the block id for you |
| setRichtext | Set a richtext field from markdown. Server converts to AST |
| bulkImport | Up to 100 items per call. Per-item status; failures don't abort the batch |
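Because patchFields speaks RFC 6902, a patch can guard itself: pair a test op asserting the current value with the replace that changes it, so a concurrent edit makes the whole patch fail instead of being clobbered. A sketch — the `/title/en` path and the `guardedReplace` helper are hypothetical, not part of the tool's schema:

```typescript
// Guarded edit: the "test" op is evaluated first; if it fails, nothing
// is written. guardedReplace is a hypothetical convenience helper.
type PatchOp =
  | { op: "test"; path: string; value: unknown }
  | { op: "replace"; path: string; value: unknown };

function guardedReplace(path: string, expected: unknown, next: unknown): PatchOp[] {
  return [
    { op: "test", path, value: expected }, // assert what we think is there
    { op: "replace", path, value: next },  // then change it
  ];
}

// Arguments for a patchFields call (entry id is a placeholder):
const args = {
  model: "page",
  id: "YOUR_ENTRY_ID",
  patches: guardedReplace("/title/en", "About", "About Us"),
};
```

A patch whose only ops are test ops comes back with action: "verified"; one that changes nothing comes back as "unchanged" — both mean no write happened.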
Asset upload
Two-step flow that mirrors the dashboard. Bytes never round-trip through Blobify's API — the AI's host PUTs them directly to a presigned bucket URL.
| Tool | Purpose |
|---|---|
| requestAssetUpload | Returns a 15-minute presigned PUT URL plus the final assetUrl. Pass filename, contentType, size |
| confirmAssetUpload | Finalize after the PUT. Records optional width / height for images and rebuilds the asset catalog |
Flow:
- AI calls `requestAssetUpload({ filename: "hero.jpg", contentType: "image/jpeg", size: 524288 })`.
- AI's host PUTs the file bytes to the returned `uploadUrl` with the same `Content-Type` header.
- AI calls `confirmAssetUpload({ assetId, width: 1920, height: 1080 })`.
- The asset is now visible to `findAssets` and can be referenced from content fields.
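The flow above can be sketched as a small host-agnostic orchestrator. The field names follow the flow; the helper names and injected `callTool` / `put` functions are assumptions standing in for the host's MCP transport and HTTP client:

```typescript
// Sketch of the two-step upload. buildUploadRequest is a pure helper for
// step 1's arguments; uploadAsset wires the steps together. callTool/put
// are injected stand-ins for the host's MCP transport and HTTP client.
type CallTool = (name: string, args: Record<string, unknown>) => Promise<any>;
type Put = (url: string, bytes: Uint8Array, contentType: string) => Promise<void>;

function buildUploadRequest(file: { name: string; type: string; size: number }) {
  return { filename: file.name, contentType: file.type, size: file.size };
}

async function uploadAsset(
  callTool: CallTool,
  put: Put,
  file: { name: string; type: string; bytes: Uint8Array },
) {
  // 1. Presigned PUT URL, valid for 15 minutes.
  const { uploadUrl, assetId, assetUrl } = await callTool(
    "requestAssetUpload",
    buildUploadRequest({ name: file.name, type: file.type, size: file.bytes.length }),
  );
  // 2. Bytes go straight to the bucket, never through Blobify's API.
  await put(uploadUrl, file.bytes, file.type);
  // 3. Finalize so the asset becomes visible to findAssets.
  await callTool("confirmAssetUpload", { assetId });
  return assetUrl;
}
```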
For files larger than ~5 GB, use the HTTP multipart-upload endpoints directly — multipart isn't yet exposed via MCP.
Lifecycle
| Tool | Purpose |
|---|---|
| validateContent | Validate a candidate doc holistically — schema/required-field checks AND blocks-field structural checks. Returns { valid, errors, fieldErrors } |
| canPublish | Dry-run the publish-time validation against a saved draft. Returns { canPublish, locales, errors, fieldErrors } without committing |
| publish | Publish to one or more locales |
| unpublish | Unpublish from one or more locales (omit locales to unpublish entirely) |
| deleteContent | Permanently delete an entry. Destructive |
Schema management
Higher-trust than content tools: changing a schema affects every existing content item in the model. All five require an API key with developer or admin role; the destructive ones (deleteModel, deleteBlock) require admin.
| Tool | Purpose |
|---|---|
| upsertModel | Create a new model or replace an existing one |
| deleteModel | Delete a model. Rejects if any content still uses it |
| upsertBlock | Create a new block schema or replace an existing one |
| deleteBlock | Delete a block schema. Rejects if any model lists it in allowedBlocks |
| importSchemas | Apply a bundle of models + blocks at once. The right tool for site migrations — design the full schema graph locally, then push it. Pass override: true to overwrite existing schemas |
Codegen
| Tool | Purpose |
|---|---|
| generateClient | Generate the latest TypeScript client + types for the workspace as a string. Save it locally as blobify.ts (or wherever you keep generated code). Re-run after any schema change to keep the typed client in sync |
Webhooks
The single most useful tool here is getWebhookSpec — call it before writing a webhook receiver so you don't have to guess header names or invent a signature scheme. The other five wrap the same lifecycle endpoints exposed at /v1/orgs/.../webhooks.
| Tool | Purpose |
|---|---|
| getWebhookSpec | Returns the wire-level contract: header names, signature format (t=<unix>,v1=<hmac-hex>), event types, payload shape, retry policy, and a Node verification snippet |
| listWebhooks | List configured webhooks (secrets are masked — <first8>...) |
| createWebhook | Register a new webhook. Returns { webhook, secret } — the secret is shown ONCE. developer/admin only |
| updateWebhook | Modify URL or event subscription. developer/admin only |
| testWebhook | Synchronous test delivery. Returns { success, statusCode? } from the customer's endpoint. developer/admin only |
| deleteWebhook | Permanently unregister. admin only |
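As an illustration of the `t=<unix>,v1=<hmac-hex>` format, here is a hedged Node verification sketch. It assumes the signed string is `"<t>.<rawBody>"` with HMAC-SHA256 — a common convention, but an assumption here; treat the snippet returned by getWebhookSpec as authoritative for the real header name and signed payload:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Verify a "t=<unix>,v1=<hmac-hex>" signature header. The signed string
// ("<t>.<rawBody>") and HMAC-SHA256 are assumptions — getWebhookSpec
// returns the authoritative contract.
function verifySignature(
  rawBody: string,
  header: string,
  secret: string,
  toleranceSec = 300,
): boolean {
  const parts: Record<string, string> = {};
  for (const kv of header.split(",")) {
    const [k, v] = kv.split("=");
    parts[k] = v;
  }
  const t = Number(parts.t);
  // Reject stale timestamps to blunt replay attacks.
  if (!Number.isFinite(t) || Math.abs(Date.now() / 1000 - t) > toleranceSec) return false;
  const expected = createHmac("sha256", secret).update(`${t}.${rawBody}`).digest("hex");
  const a = Buffer.from(expected);
  const b = Buffer.from(parts.v1 ?? "");
  return a.length === b.length && timingSafeEqual(a, b); // constant-time compare
}
```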
validateContent and canPublish are intentionally distinct:
- `validateContent` validates a candidate document the AI is constructing — useful before a write, to catch shape errors early. Runs schema/required-field checks and blocks-field structural checks.
- `canPublish` validates the saved draft against publish-time rules (per-locale required fields, uniqueness across the published scope, block-field validation). Useful before showing a "Publish" button or asking the user to confirm. Adds the uniqueness check on top of what `validateContent` does.
Response shape
Read tools return JSON directly. Write tools return a concise success header so the AI doesn't have to parse a full document to confirm success:
```json
{
  "ok": true,
  "action": "created",
  "model": "page",
  "id": "019c4486-…",
  "title": "About Us",
  "publishedLocales": [],
  "doc": { "id": "…", "model": "page", "fields": { … } }
}
```
`title` is resolved using the model's displayField and the workspace
default locale.
The action field is honest about what actually happened:
| Value | Meaning |
|---|---|
| created | A new draft was written |
| updated | An existing draft's fields changed |
| verified | A patchFields call with only test ops succeeded (read-only assertion, nothing written) |
| unchanged | A patchFields call applied but produced identical fields (no-op, nothing written) |
| published | One or more locales went live |
| unpublished | One or more locales were taken down |
| deleted | The entry was permanently removed |
Architecture: writes through the API, reads from the bucket
Blobify's content data lives in your S3/R2 bucket as plain JSON files.
For production traffic from generated SSG sites, reads come straight
from the bucket — the API is never on the critical path. The MCP
server exposes a few convenience read tools (findContent,
getContent, findAssets, getModelSchema, getBlockSchema) so AI
hosts in sandboxed environments can still author content even when
they can't reach the bucket directly. For high-volume scans, fetching
URLs from getReadContext's pathTemplates directly is still cheaper
than going through the API.
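For those high-volume scans, a reader only needs string substitution plus a plain HTTP GET. A sketch — the `{model}/{locale}/{slug}` placeholder syntax is an assumption about what pathTemplates contains; inspect your workspace's getReadContext response for the real shape:

```typescript
// Hypothetical template expansion for bucket-direct reads. Assumes
// {name}-style placeholders — check your pathTemplates for the real syntax.
function expandTemplate(template: string, vars: Record<string, string>): string {
  return template.replace(/\{(\w+)\}/g, (_, key) => {
    const v = vars[key];
    if (v === undefined) throw new Error(`missing template variable: ${key}`);
    return encodeURIComponent(v);
  });
}

// A reader can then fetch the JSON straight from the bucket, e.g.:
// expandTemplate(bucketUrl + "/{model}/{locale}/{slug}.json",
//   { model: "page", locale: "en", slug: "about" });
```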
Common workflows
Find an entry by title and edit it
- AI calls `findContent({ model: "page", query: "About" })`.
- Picks the matching id from the response.
- Calls `getContent({ model: "page", id })` to read current fields.
- Calls `patchFields({ model, id, patches: [{ op: "replace", path: "/title/en", value: "About Us" }] })`.
Add a hero block to a page
- AI calls `getModelSchema({ model: "page" })` to learn which blocks are allowed in the `content` field.
- Calls `getBlockSchema({ blockId: "hero-block" })` to learn its fields.
- Calls `appendBlock({ model: "page", id, block: { type: "hero-block", fields: { … } } })`.
Pick an image for a field
- AI calls `findAssets({ category: "image", extensions: ["jpg", "png"], folder: "marketing" })`.
- Picks an asset id from the response.
- References it in the field via `patchFields` or `saveDraft`.
Migrate a site to Blobify
The full migration loop runs end-to-end through MCP if your API key has developer role:
- AI inspects the source site (HTML scraping, sitemap walk, etc.) using its own fetch tool.
- AI designs the schema graph locally — what models, what blocks, what fields per model.
- `importSchemas({ models, blocks })` to push the schema in one bundle.
- `bulkImport({ model, items })` in batches of 100 for the content side. Inspect per-item `status` and retry failed ones.
- `generateClient()` so the user's frontend can immediately consume the new types.
- `publish` the entries the user explicitly wants live.
- `getWebhookSpec()` to learn the wire contract, then `createWebhook(...)` to wire revalidation into the user's frontend (Next.js `revalidateTag`, Vercel ISR, etc.). `testWebhook(id)` confirms the receiver verifies signatures correctly.
Bulk-import content from another CMS
- Run schema validation locally to catch shape errors.
- Call `bulkImport({ model: "article", items: [...] })` in batches of 100.
- Inspect the per-item `status` array; retry failed items with corrected data.
Confirm publish readiness before committing
- AI calls `canPublish({ model: "page", id })`.
- If `canPublish: true`, proceed to `publish`.
- If `canPublish: false`, show the user the `errors` array (or use `fieldErrors` to highlight specific fields/locales) and ask them to fix the missing data before retrying.
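This gate collapses into one helper, with the MCP transport injected as a `callTool` stand-in. Response field names follow the lifecycle table; `publishIfReady` itself is a hypothetical sketch:

```typescript
// Dry-run the publish-time validation, then publish only if it passes.
type CallTool = (name: string, args: Record<string, unknown>) => Promise<any>;

async function publishIfReady(
  callTool: CallTool,
  model: string,
  id: string,
  locales: string[],
) {
  const check = await callTool("canPublish", { model, id });
  if (!check.canPublish) {
    // Surface errors/fieldErrors instead of attempting a doomed publish.
    return { published: false, errors: check.errors, fieldErrors: check.fieldErrors };
  }
  await callTool("publish", { model, id, locales });
  return { published: true };
}
```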
Stateless behaviour
Each MCP request is independent — there are no long-lived sessions and no resumable streams. This keeps the server cheap to scale and trivial to distribute horizontally. If you need push notifications (e.g. on publish events), use the Webhooks API instead.
CORS and origins
The MCP endpoint is server-to-server: AI hosts proxy through their own infrastructure or run as native apps. Browser-based JavaScript can't reach it (and shouldn't — exposing an API key client-side defeats the auth model). We don't need to relax CORS for MCP.
Troubleshooting
"data did not match any variant of untagged enum JsonRpcMessage"
This error comes from a client transport that advertises SSE support but can't parse it. Confirm your client is set to "Streamable HTTP" mode (not SSE); our server replies with plain JSON when that mode is selected.
-32601 Method not found on resources/list
We don't expose MCP "resources" — only tools. The error is informational; the AI tried both surfaces and one of them legitimately doesn't exist.
401 Unauthorized
API key is missing, malformed, or has been revoked. Check
Settings → Developer → API Keys. Keys are case-sensitive.
403 Forbidden on a tool call
The API key's role doesn't permit that operation, or the key is restricted to specific spaces and the call targeted a different one.
AI says "I can't fetch URLs"
Your AI host doesn't have a built-in HTTP fetch tool, so it can't
follow getReadContext's URL templates to scan the bucket directly.
Use findContent / getContent / findAssets instead — they don't
need outbound HTTP from the AI host.
Limitations
- One MCP request = one tool call (stateless).
- No SSE / no resumable streams in the current transport.
- Markdown→AST conversion supports CommonMark + GFM strikethrough. Other GFM extensions (tables, task lists) aren't auto-converted yet.
- `bulkImport` cap is 100 items per call.
Changelog
The MCP surface follows the rest of Blobify's API versioning. Backwards-incompatible tool changes will land behind new tool names; existing tool inputs and outputs are stable.