# Skill Documentation (SKILL.md)
The SKILL.md file combines YAML frontmatter with markdown content. It provides LLM-readable instructions, examples, and usage patterns for a skill.
## Overview

Every skill MUST have a SKILL.md file. The file format is:

```
---
YAML frontmatter
---
Markdown content
```

The frontmatter provides structured metadata. The body provides narrative instructions for the LLM.
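Splitting the two parts is mechanical. Here is a minimal sketch of such a loader; the `---` delimiter handling follows the format above, while the function name and the choice to skip YAML parsing are illustrative assumptions:

```python
def split_skill_md(text: str) -> tuple[str, str]:
    """Split a SKILL.md document into (frontmatter, body).

    Assumes the file starts with a '---' line and the frontmatter
    ends at the next '---' line; anything after that is the body.
    """
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return "", text  # no frontmatter block
    try:
        end = lines[1:].index("---") + 1
    except ValueError:
        return "", text  # unterminated frontmatter
    frontmatter = "\n".join(lines[1:end])
    body = "\n".join(lines[end + 1:])
    return frontmatter, body
```

A real loader would typically hand the frontmatter string to a YAML parser afterwards.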
## Frontmatter

A YAML metadata block at the beginning of the file:

```yaml
---
name: Salesforce Lead Sync
short_description: Sync leads from a sheet blob into Salesforce.
tags: [crm, leads, sync]
---
```
### Standard fields

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `name` | string | required | Human-readable skill name (can differ from machine name in skill.toml) |
| `short_description` | string | required | One-line description for discovery listings |
| `tags` | array | optional | Array of strings for categorization |
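The required/optional rules in the table can be enforced with a small validator. This is a sketch: the field names and types follow the table above, but the function itself is illustrative, not part of any skill runtime:

```python
REQUIRED_FIELDS = {"name": str, "short_description": str}
OPTIONAL_FIELDS = {"tags": list}


def validate_frontmatter(meta: dict) -> list[str]:
    """Return a list of problems; an empty list means the metadata is valid."""
    problems = []
    for field, typ in REQUIRED_FIELDS.items():
        if field not in meta:
            problems.append(f"missing required field: {field}")
        elif not isinstance(meta[field], typ):
            problems.append(f"{field} must be a {typ.__name__}")
    for field, typ in OPTIONAL_FIELDS.items():
        if field in meta and not isinstance(meta[field], typ):
            problems.append(f"{field} must be a {typ.__name__}")
    return problems
```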
### Example

```yaml
---
name: Salesforce Lead Sync
short_description: Sync leads from a sheet blob into Salesforce.
tags: [crm, leads, sync, salesforce, integration]
---
```
## Content structure
The markdown body should include:
### 1. Overview section

Brief explanation of what the skill does:

```markdown
# Salesforce Lead Sync

This skill syncs lead data from a CSV blob into Salesforce CRM.
It handles data validation, duplicate detection, and error reporting.
```
### 2. Requirements section

What the skill needs to run:

```markdown
## Requirements

- Valid Salesforce API credentials (provided via `SALESFORCE_API_KEY`)
- Target org ID
- CSV data in standard lead format
```
### 3. Inputs section

Expected arguments and their types:

```markdown
## Inputs

- **sheet_blob** (blob) — Blob ID containing CSV data
- **env** (string) — Target environment: "dev", "staging", or "prod"
- **batch_size** (integer, optional) — Records per batch (default: 100)
```
### 4. Outputs section

What the skill returns:

```markdown
## Outputs

Returns a JSON object with:

- **synced_count** (integer) — Number of successful syncs
- **failed_count** (integer) — Number of failed records
- **errors_blob** (string) — Blob ID with error details
```
### 5. Examples section

One or two practical examples:

````markdown
## Examples

### Example 1: Sync to production

Sync a CSV blob to the production Salesforce org:

```python
result = execute_skill(
    name="salesforce.leads.sync",
    args={
        "sheet_blob": "blob:leads-q4-abc123",
        "env": "prod",
        "batch_size": 500,
    },
)
```

### Example 2: Multi-step workflow

Use with `run_code` to combine multiple skills:

```python
# Code to sync and log results...
```
````
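On the consuming side, the documented output fields map directly onto result handling. A sketch, where the field names come from the Outputs section above and `summarize` itself is purely illustrative:

```python
def summarize(output: dict) -> str:
    """Build a one-line summary from the skill's documented output fields."""
    msg = f"synced={output['synced_count']} failed={output['failed_count']}"
    if output["failed_count"]:
        msg += f" (details in {output['errors_blob']})"
    return msg


summary = summarize(
    {"synced_count": 950, "failed_count": 2, "errors_blob": "blob:errs-1"}
)
# summary == "synced=950 failed=2 (details in blob:errs-1)"
```

Documenting outputs precisely is what makes this kind of downstream handling possible without guesswork.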
---
## Best practices {{ id: 'best-practices' }}
<Steps>
<Step>
**Clear purpose** — Start with a concise explanation of what the skill does
</Step>
<Step>
**Complete documentation** — Document all inputs, outputs, and requirements
</Step>
<Step>
**Practical examples** — Include real-world usage examples with sample data
</Step>
<Step>
**Error cases** — Explain what can go wrong and how to handle it
</Step>
<Step>
**Prerequisites** — List any setup or credentials needed
</Step>
<Step>
**Markdown formatting** — Use proper headers, code blocks, and formatting for readability
</Step>
</Steps>
---
## Examples {{ id: 'examples' }}
### Minimal SKILL.md
````markdown
---
name: Hello World
short_description: A simple hello world skill.
tags: [example]
---

# Hello World

Returns a greeting message.

## Inputs

- **name** (string) — Name to greet

## Outputs

Returns a greeting message.

## Example

```python
result = execute_skill(
    name="hello.world",
    args={"name": "Alice"},
)
```
````
### Complete SKILL.md

````markdown
---
name: Salesforce Lead Sync
short_description: Sync leads from a sheet blob into Salesforce.
tags: [crm, salesforce, leads, sync]
---

# Salesforce Lead Sync

Syncs lead data from a CSV blob into your Salesforce organization.
Handles validation, duplicate detection, and provides detailed error reporting.

## Requirements

- Active Salesforce org with API access
- Valid API key and org ID
- CSV data matching the standard Salesforce lead format

## Inputs

- **sheet_blob** (blob, required) — Blob ID containing CSV lead data
- **env** (string, required) — Target environment: "dev", "staging", or "prod"
- **batch_size** (integer, optional) — Records to process per batch. Default: 100. Max: 1000.

## Outputs

Returns a JSON object containing:

- **synced_count** (integer) — Number of successfully synced leads
- **failed_count** (integer) — Number of failed records
- **errors_blob** (string) — Blob ID containing detailed error information

## Environment Variables

The following environment variables must be set:

- `SALESFORCE_API_KEY` — API key for Salesforce
- `SALESFORCE_ORG_ID` — Target org ID

## Error Handling

If the skill fails:

- Check that your API credentials are valid
- Verify the CSV format matches Salesforce requirements
- Review the errors_blob for detailed failure reasons

## Example 1: Sync to production

Sync a CSV blob to your production Salesforce org:

```python
result = execute_skill(
    name="salesforce.leads.sync",
    args={
        "sheet_blob": "blob:leads-q4-abc123",
        "env": "prod",
        "batch_size": 500,
    },
)

if result["status"] == "completed":
    output = result["output"]
    print(f"Synced {output['synced_count']} leads")
    if output["failed_count"] > 0:
        print(f"Failed: {output['failed_count']}")
```

## Example 2: Read and sync with custom processing

Use `run_code` to read the CSV, apply custom logic, then sync:

```python
code = """
from skills.salesforce.leads import sync_leads
from runtime import blobs

def main(args):
    # Read CSV data
    csv_content = blobs.read_text(args['sheet_blob'])

    # Custom validation/filtering here
    filtered_csv = filter_inactive_leads(csv_content)

    # Write filtered data to new blob
    filtered_blob = blobs.write_text(filtered_csv)

    # Sync
    result = sync_leads(filtered_blob, env=args['env'])
    return result
"""
```

## Related Skills

- `salesforce.accounts.export` — Export Salesforce accounts to CSV
- `gdrive.sheets.read` — Read Google Sheets
````
---
## Next Steps
<div className="not-prose my-6 flex gap-3">
<Button href="/execution-methods" arrow="right">
<>Execution Methods</>
</Button>
<Button href="/skill-manifest" variant="outline">
<>Skill Manifest</>
</Button>
</div>