Blob Methods
Blobs are opaque handles to large data. The Skills Protocol provides two methods for creating and reading blobs.
create_blob
Store large content as a blob and return its ID.
Stores arbitrary content (text or binary) as a blob in the runtime's storage. Returns a blob ID that can be used to reference the data in subsequent operations.
Request parameters

| Name | Type | Required | Description |
|---|---|---|---|
| content | string | required | Content to store. Can be text, JSON, or base64-encoded binary data. |
| kind | string | required | MIME type describing the content: `"text/plain"` (plain text), `"text/csv"` (CSV data), `"application/json"` (JSON data), `"application/pdf"` (PDF document). |
Request
```json
{
  "jsonrpc": "2.0",
  "id": "6",
  "method": "create_blob",
  "params": {
    "content": "name,email,company\nJohn Doe,john@example.com,Acme Corp\nJane Smith,jane@example.com,Tech Inc",
    "kind": "text/csv"
  }
}
```
Response
```json
{
  "jsonrpc": "2.0",
  "id": "6",
  "result": {
    "blob_id": "blob:transcript-abc123",
    "size_bytes": 91
  }
}
```
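As a sketch of how a client might assemble the request above (the `create_blob_request` helper and the transport are illustrative conveniences, not part of the protocol):

```python
import json

def create_blob_request(content: str, kind: str, request_id: str = "6") -> dict:
    """Build the JSON-RPC 2.0 create_blob request shown above.
    The transport (HTTP, stdio, ...) is runtime-specific and not shown."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "create_blob",
        "params": {"content": content, "kind": kind},
    }

csv_data = "name,email,company\nJohn Doe,john@example.com,Acme Corp"
request = create_blob_request(csv_data, "text/csv")
payload = json.dumps(request)  # serialized, ready to send to the runtime
```

The response's `blob_id` is then extracted from `result` and carried forward into later calls.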
Return value

| Name | Type | Description |
|---|---|---|
| blob_id | string | Opaque identifier for the blob (format: `blob:<id>`). Use this ID to reference the blob in other operations. |
| size_bytes | integer | Size of the stored content in bytes. |
Common MIME types
| Type | Usage |
|---|---|
| text/plain | Plain text, logs |
| text/csv | CSV data |
| application/json | JSON data |
| application/pdf | PDF documents |
| text/markdown | Markdown documents |
| application/xml | XML data |
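When a filename is available, Python's standard `mimetypes` module can supply a suitable `kind` value; this is a client-side convenience, not something the protocol requires:

```python
import mimetypes

# Guess a MIME "kind" for create_blob from a filename
for name in ["report.csv", "data.json", "notes.txt", "summary.pdf"]:
    kind, _encoding = mimetypes.guess_type(name)
    print(f"{name} -> {kind}")
```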
read_blob
Retrieve a preview of blob content. Full reads are discouraged for very large blobs.
Reads a portion of blob content. Supports sampling strategies to avoid reading entire large blobs.
Request parameters

| Name | Type | Required | Description |
|---|---|---|---|
| blob_id | string | required | Blob identifier (from `create_blob` or skill execution). |
| mode | string | optional | Sampling strategy: `"sample_head"` (default) returns the first `max_bytes` of content; `"sample_tail"` returns the last `max_bytes`; `"full"` returns the entire content (may be rejected for very large blobs). |
| max_bytes | integer | optional | Maximum bytes to return (default: 2000). |
Request
```json
{
  "jsonrpc": "2.0",
  "id": "7",
  "method": "read_blob",
  "params": {
    "blob_id": "blob:transcript-abc123",
    "mode": "sample_head",
    "max_bytes": 2000
  }
}
```
Response
```json
{
  "jsonrpc": "2.0",
  "id": "7",
  "result": {
    "content": "name,email,company\nJohn Doe,john@example.com,Acme Corp\nJane Smith,jane@example.com,Tech Inc",
    "truncated": false,
    "kind": "text/csv"
  }
}
```
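One way a client might handle this result, using the field names from the response above (the helper itself is illustrative):

```python
def summarize_read(result: dict) -> str:
    """Render a short description of a read_blob result."""
    suffix = " (truncated)" if result["truncated"] else ""
    return f"{result['kind']}: {len(result['content'])} bytes{suffix}"

result = {
    "content": "name,email,company\nJohn Doe,john@example.com,Acme Corp",
    "truncated": False,
    "kind": "text/csv",
}
summary = summarize_read(result)
```

Checking `truncated` before treating `content` as complete is the important part; a preview of a 10 MB CSV looks just like a whole small one otherwise.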
Return value

| Name | Type | Description |
|---|---|---|
| content | string | The blob content (or a preview, if truncated). |
| truncated | boolean | Whether the returned content was truncated (`true` when the blob's content exceeds `max_bytes`, so part of it was omitted). |
| kind | string | MIME type of the blob. |
Sampling modes
sample_head (default)
Returns the first max_bytes of content. Useful for previewing file headers, logs, or large datasets.
Example: First 2000 bytes of a CSV file
sample_tail
Returns the last max_bytes of content. Useful for reading recent log entries or error messages at the end of files.
Example: Last 2000 bytes of an error log
full
Returns entire content. May be rejected for very large blobs. Avoid for blobs > 10MB.
Example: Reading a complete small configuration file
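The guidance above can be folded into a small helper that picks a mode from a blob's reported size; the 10 MB cutoff mirrors the rule of thumb above, and the function itself is illustrative, not part of the protocol:

```python
def choose_mode(size_bytes: int, full_limit: int = 10 * 1024 * 1024) -> str:
    """Use "full" only for blobs under the limit; otherwise preview the head."""
    return "full" if size_bytes <= full_limit else "sample_head"
```

`size_bytes` is available from the `create_blob` response, so a client can decide how to read a blob without fetching it first.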
Blob lifecycle
Creation
Blobs are created by:
- Calling `create_blob` directly
- Skill code using `runtime.blobs.write_text()` during execution
- Skill or code execution, returned as `output_blobs` from `execute_skill` or `run_code`
Access
Once created, a blob ID can be:
- Passed to subsequent skill executions via `input_blobs`
- Mounted in code execution sandboxes
- Read via `read_blob`
- Referenced in skill arguments
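For example, a follow-up `execute_skill` call might reference the blob created earlier; the skill name and the exact shape of `input_blobs` here are assumptions for illustration, not fixed by this page:

```python
# Hypothetical follow-up request reusing the blob ID from create_blob
request = {
    "jsonrpc": "2.0",
    "id": "8",
    "method": "execute_skill",
    "params": {
        "skill": "summarize_contacts",              # hypothetical skill name
        "arguments": {"source": "blob:transcript-abc123"},
        "input_blobs": ["blob:transcript-abc123"],  # assumed list-of-IDs shape
    },
}
```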
Lifecycle
1. Creation: blob is stored and assigned an ID
2. Usage: blob can be read or mounted in executions
3. Cleanup: runtime may implement TTL or manual cleanup policies
The Skills Protocol MVP does not define a delete operation. Runtimes should implement appropriate retention policies for production use.
Best practices
When to use blobs
Use blobs for data larger than a few KB:
- CSV/TSV data
- JSON datasets
- Log files
- Large text documents
- Binary files
When not to use blobs
Keep small data inline in output fields:
- Success/failure status
- Small metadata
- Counts or summary statistics
- Configuration flags
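A sketch of that decision as code, assuming a `write_blob` callable standing in for `runtime.blobs.write_text` and an arbitrary 4 KB inline threshold:

```python
def pack_result(data: str, write_blob, inline_limit: int = 4096) -> dict:
    """Inline small payloads; spill anything larger to a blob."""
    if len(data.encode("utf-8")) <= inline_limit:
        return {"status": "completed", "data": data}
    return {"status": "completed", "output_blob": write_blob(data)}
```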
Example: Proper size management
```python
# DON'T: Put large data in output
result = {
    "status": "completed",
    "large_csv": "...10MB of CSV data...",  # ❌ Too large
}

# DO: Write large data to blobs
large_csv_blob = blobs.write_text(large_csv_data)
result = {
    "status": "completed",
    "rows_processed": 10000,
    "output_blob": large_csv_blob,
}
```
Sampling strategy
```python
# Preview the start of a log file
preview = read_blob("blob:logs-123", mode="sample_head", max_bytes=2000)

# Check for errors at the end
errors = read_blob("blob:logs-123", mode="sample_tail", max_bytes=1000)

# Only use full read for small files
config = read_blob("blob:config-456", mode="full")
```
In-sandbox blob access
Within `run_code` executions, use the `runtime.blobs` module:

```python
from runtime import blobs

def main(args):
    # Read a blob passed as an argument
    data = blobs.read_text(args["input_blob"])

    # Process data...
    processed_data = data  # placeholder for real processing

    # Write results to a new blob
    output_blob = blobs.write_text(processed_data)

    # Write JSON data
    json_blob = blobs.write_json({"status": "ok", "count": 42})

    return {
        "output_blob": output_blob,
        "json_blob": json_blob,
    }
```