Filmwork Pipeline
Filmwork is Narrative Lion's shot production system. It stores a storyboard (markdown) alongside structured per-shot data (direction, prompts, dialogue, model config).
Creating a project: Use Film Director (AI storyboard from a concept), Reel Coach (short-form script from notes), or direct creation below. This page covers direct creation and the Filmwork data model.
Direct creation (zero credit if format is valid)
If you already have a storyboard, create the note directly via GraphQL. A format gate validates your storyboard labels automatically:
| Scenario | Cost | Behavior |
|---|---|---|
| Labels parse correctly | 0 credits | Saved directly |
| Labels malformed but repairable | 1 credit | Auto-repaired by AI, then saved |
| No shot content found | Rejected | INVALID_STORYBOARD_FORMAT error |
mutation {
createGeneralNote(
noteType: "filmwork"
skipAi: true
content: "# My Film\n\n**Summary:** 1 scene, 6 shots, ~30s.\n\n## Scene 1: Night (30s)\n\n**01A** (4s) — Wide establishing\nCity street at night.\n\n**01B** (5s) — Low tracking\nCamera follows robot."
) {
id
title
}
}
The skipAi flag is ignored for filmwork — the format gate always runs. To get 0-credit creation, ensure your labels match the format below.
mutation {
createFilmworkShot(
noteId: "note-uuid"
shotId: "01A"
scene: "Wide establishing"
sequenceOrder: 1
targetDurationSec: 4
directionJson: "{\"framing\":\"Wide\",\"camera\":\"Slow push-in\",\"blocking\":\"Robot enters frame\",\"keyPose\":\"Walking\",\"editTiming\":\"Hold 2s\",\"compositeNote\":\"\"}"
promptsJson: "[{\"version\":1,\"modelTarget\":\"video\",\"body\":\"City street at night...\",\"negativePrompt\":\"morphing, jitter\",\"isActive\":true}]"
modelConfigJson: "{\"primaryModel\":\"kling\",\"aspectRatio\":\"16:9\",\"duration\":4,\"resolution\":\"1080p\",\"lipSync\":\"none\"}"
) { id shotId status }
}
Call once per shot. All JSON fields are stringified (double-encoded). The agent should build directionJson, promptsJson, and modelConfigJson from its own structured data — no LLM call needed.
directionJson: { framing?, camera?, blocking?, keyPose?, editTiming?, compositeNote? }
promptsJson: [{ version, modelTarget?, body, negativePrompt?, isActive }]
modelConfigJson: { primaryModel?, fallbackModel?, duration?, aspectRatio?, resolution?, lipSync? }
dialogue: [{ lineId?, speaker, text, type, emotion? }] — or [] for silent shots
relationsJson: { mirrorOf?, prevCut?, nextCut?, arcRole? }
blockerJson: { description, action, createdAt } — used with updateShotStatus
createGeneralNote does NOT create shots. Call createFilmworkShot per shot, then upload assets, then rolls. Film Director and Reel Coach persist endpoints create note + shots together.
JSON field schemas
All JSON fields are passed as stringified JSON (double-encoded strings). The backend validates each field against a strict Zod schema — malformed data returns a schema validation error. Empty/unused fields should be omitted or set to null, not empty strings.
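For example, here is a minimal TypeScript sketch of the double-encoding: build each field as a plain object or array, then JSON.stringify it before passing it as a string argument. The function name and values are illustrative, not part of the API.
// Sketch only: buildShotArgs is a hypothetical helper; the values are placeholders.
function buildShotArgs() {
  const direction = { framing: "Medium close-up", camera: "Static", blocking: "Claire alone in elevator" };
  const prompts = [{ version: 1, modelTarget: "video", body: "A young woman stands alone in an elevator...", negativePrompt: "cartoon, morphing", isActive: true }];
  const modelConfig = { primaryModel: "omnihuman-1.5", duration: 5, aspectRatio: "16:9", resolution: "1080p", lipSync: "none" };
  return {
    noteId: "note-uuid",
    shotId: "01B",
    sequenceOrder: 2,
    targetDurationSec: 5,
    // Each JSON field is passed as a stringified JSON string, not a raw object:
    directionJson: JSON.stringify(direction),
    promptsJson: JSON.stringify(prompts),
    modelConfigJson: JSON.stringify(modelConfig),
    dialogue: JSON.stringify([]), // "[]" for silent shots, never an empty string
  };
}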
directionJson
{
"framing": string | null, // e.g. "Medium close-up", "Wide locked"
"camera": string | null, // e.g. "Static", "Slow dolly forward"
"blocking": string | null, // e.g. "Claire enters from screen-left"
"keyPose": string | null, // e.g. "Arms crossed, looking down"
"editTiming": string | null, // e.g. "Cut on action at 2.5s"
"compositeNote": string | null // e.g. "Layer soundwave overlay in post"
}
// All fields optional. At least one should be set for the direction to be useful.
promptsJson
[
{
"version": number, // integer >= 1, increment on each revision
"modelTarget": string | null, // "video" | "image" | "audio"
"body": string, // REQUIRED — the generation prompt
"negativePrompt": string | null, // what to avoid (model-dependent)
"isActive": boolean // REQUIRED — only one entry should be true
}
]
// Array of prompt versions. Keep old versions (isActive: false) for history.
// The body format depends on the target model — see model identifiers below.
modelConfigJson
{
"primaryModel": string | null, // see "Valid model identifiers" below
"fallbackModel": string | null, // fallback if primary fails
"duration": number | null, // target seconds for this shot
"aspectRatio": string | null, // e.g. "16:9", "9:16", "1:1"
"resolution": string | null, // e.g. "1080p", "720p"
"lipSync": "internal" | "separate" | "none" | null,
"compositeNote": string | null // post-production notes
}
dialogue
Must be a JSON array of dialogue lines. Use [] (empty array) for silent shots, never an empty string.
[
{
"lineId": string?, // optional identifier
"speaker": string, // REQUIRED, min 1 char — e.g. "Claire", "Employee A"
"text": string, // REQUIRED, min 1 char — the spoken/thought text
"type": "dialogue" | "os" | "sfx_cue", // REQUIRED
// "dialogue" = spoken aloud
// "os" = internal monologue (over-screen)
// "sfx_cue" = sound effect direction
"emotion": string? // optional — e.g. "nervous self-talk", "dry"
}
]
relationsJson
{
"mirrorOf": string | null, // shot label this mirrors, e.g. "01A"
"prevCut": string | null, // previous shot label in edit sequence
"nextCut": string | null, // next shot label in edit sequence
"arcRole": string | null // narrative role, e.g. "inciting_incident"
}
blockerJson
Used with updateShotStatus(status: "blocked"). All three fields are required.
{
"description": string, // what is blocking — e.g. "Start frame direction mismatch"
"action": string, // required next step — e.g. "Regenerate start frame"
"createdAt": string // ISO 8601 timestamp — e.g. "2026-05-04T00:00:00Z"
}
Valid model identifiers
Use these exact strings in modelConfigJson.primaryModel and fallbackModel. (A prompt's modelTarget takes the media type: "video", "image", or "audio", not a model ID.)
| Model ID | Best for | Prompt style |
|---|---|---|
| omnihuman-1.5 | Close-ups, portraits, dialogue-heavy shots with subtle facial expression | Prose paragraphs with emotion words. No physical quantification. |
| kling-2.6 | Wide/medium shots, camera motion, action, multi-subject blocking | Structured: Camera / Motion / Endpoint sections. Film terminology. |
| wan-2.5-flf | First/last frame interpolation | Structured. Requires start + end frame assets. |
Shot status lifecycle
Status values: "not_started" | "asset_prep" | "ready" | "generating" | "review" | "done" | "blocked"
Typical flow:
not_started → asset_prep → ready → generating → review → done
A shot can also drop to "blocked" along the way (set blockerJson when it does) and return to the appropriate state once the blocker is resolved.
Use updateShotStatus to transition:
mutation {
updateShotStatus(
shotId: "uuid-of-shot"
status: "blocked"
blockerJson: "{\"description\":\"Start frame facing wrong direction\",\"action\":\"Regenerate start frame with side angle\",\"createdAt\":\"2026-05-04T00:00:00Z\"}"
) { id status blockerJson }
}
mutation {
updateShotStatus(
shotId: "uuid-of-shot"
status: "asset_prep"
) { id status }
}
Complete createFilmworkShot example
A fully populated shot with all JSON fields. All JSON values are stringified.
mutation {
createFilmworkShot(
noteId: "note-uuid"
shotId: "01B"
scene: "Elevator ride"
sequenceOrder: 2
targetDurationSec: 5
dialogue: "[{\"speaker\":\"Claire\",\"text\":\"First day. Don't be weird. Just look normal.\",\"type\":\"os\",\"emotion\":\"nervous self-talk\"}]"
directionJson: "{\"framing\":\"Medium close-up\",\"camera\":\"Static\",\"blocking\":\"Claire alone in elevator, eyes forward\",\"keyPose\":\"Hands clasped, jaw tight\",\"editTiming\":\"Hold full duration\"}"
promptsJson: "[{\"version\":1,\"modelTarget\":\"video\",\"body\":\"A young woman stands alone in an elevator. Her posture is rigid, hands clasped at her waist. Her eyes stare at the closed doors. Subtle tension in her jaw. Soft fluorescent light from above. The moment before the doors open.\",\"negativePrompt\":\"cartoon, deformed, extra limbs, morphing\",\"isActive\":true}]"
modelConfigJson: "{\"primaryModel\":\"omnihuman-1.5\",\"duration\":5,\"aspectRatio\":\"16:9\",\"resolution\":\"1080p\",\"lipSync\":\"none\"}"
) { id shotId status }
}
mutation {
createFilmworkShot(
noteId: "note-uuid"
shotId: "01A"
scene: "Street approach"
sequenceOrder: 1
targetDurationSec: 4
dialogue: "[]"
directionJson: "{\"framing\":\"Wide establishing\",\"camera\":\"Slow push-in\",\"blocking\":\"Building exterior, Claire walks toward entrance\",\"editTiming\":\"Cut on footstep at 3.5s\"}"
promptsJson: "[{\"version\":1,\"modelTarget\":\"video\",\"body\":\"Camera: Wide establishing, 16:9. Slow push-in toward glass office building entrance.\\nMotion: A woman in business attire walks toward the revolving door, briefcase in hand. Morning light. Urban sidewalk.\\nEndpoint: She reaches the door handle.\",\"negativePrompt\":\"jitter, morphing, extra people\",\"isActive\":true}]"
modelConfigJson: "{\"primaryModel\":\"kling-2.6\",\"duration\":4,\"aspectRatio\":\"16:9\",\"resolution\":\"1080p\",\"lipSync\":\"none\"}"
) { id shotId status }
}
After creating, use updateFilmworkShot to add relationsJson, or updateShotStatus to set status and blockerJson.
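For example, a sketch of adding relationsJson after creation (the UUID and relation values are placeholders):
mutation {
  updateFilmworkShot(
    shotId: "uuid-of-01B"
    relationsJson: "{\"prevCut\":\"01A\",\"nextCut\":\"02A\",\"arcRole\":\"setup\"}"
  ) { id shotId relationsJson }
}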
Storyboard format
The storyboard markdown must follow this label format for shot parsing to work:
**{NN}{Letter}** ({duration}s) — Title
Description body text.
Examples:
**01A** (4s) — Wide establishing shot
City street at night. Rain reflects neon. A small robot enters frame.
**01B** (5s) — Low tracking shot, 9:16 framing
Camera follows the robot at wheel level.
**02A** (3.5s) — Close-up, static
Robot's face illuminated by passing lights.
Label regex: **{2-3 digits}{one letter}{optional digit}**. Valid: 01A, 02B, 10C2. Both the Film Director / Reel Coach persist endpoints and createGeneralNote auto-repair format issues (1 credit). Write correct labels to avoid the repair cost.
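If you generate storyboards programmatically, a pattern like the following TypeScript sketch can pre-validate labels before submission. The backend's actual regex is not published; this is only an approximation of the documented format.
// Approximation of the documented label line format (not the backend's exact regex).
const SHOT_LINE = /^\*\*(\d{2,3}[A-Z]\d?)\*\*\s+\(([\d.]+)s\)\s+—\s+(.+)$/;

const m = "**01A** (4s) — Wide establishing shot".match(SHOT_LINE);
// m?.[1] === "01A", m?.[2] === "4", m?.[3] === "Wide establishing shot"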
Chat-based shot editing
Edit shots through the chat stream. Set the thread's active note first, then send a message with noteTypeScope.
curl -X POST https://narrativelion.com/api/threads/your-thread-uuid/active-note \
-H "Authorization: Bearer nlk_your_key" \
-H "Content-Type: application/json" \
-d '{ "noteId": "filmwork-note-uuid" }'curl -N https://narrativelion.com/api/chat/stream \
-H "Authorization: Bearer nlk_your_key" \
-H "Content-Type: application/json" \
-d '{
"threadId": "your-thread-uuid",
"actionId": "unique-action-uuid",
"event": {
"type": "user_text",
"payload": {
"text": "Make shots 01A and 01B more dramatic with rain VFX",
"noteTypeScope": ["filmwork"]
}
}
}'
The complete event includes a filmworkUpdate artifact. noteTypeScope: ["filmwork"] routes to the filmwork edit skill instead of general chat.
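A minimal TypeScript sketch of consuming the stream from code (the event payload shapes are not specified here; this just reads the raw stream so you can watch for the final filmworkUpdate artifact):
// Sketch only: reads the raw chat stream incrementally.
const res = await fetch("https://narrativelion.com/api/chat/stream", {
  method: "POST",
  headers: { Authorization: "Bearer nlk_your_key", "Content-Type": "application/json" },
  body: JSON.stringify({
    threadId: "your-thread-uuid",
    actionId: "unique-action-uuid",
    event: { type: "user_text", payload: { text: "Make shots 01A and 01B more dramatic with rain VFX", noteTypeScope: ["filmwork"] } },
  }),
});
const reader = res.body!.getReader();
const decoder = new TextDecoder();
for (;;) {
  const { done, value } = await reader.read();
  if (done) break;
  console.log(decoder.decode(value, { stream: true })); // look for the filmworkUpdate artifact in the complete event
}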
Filmwork GraphQL operations
All filmwork operations use existing scopes: notes:read for reads and media streaming, notes:write for writes, uploads, review actions, agent holds, and deletes.
Queries
# Get Filmwork project overview (shots, assets, rolls, status counts)
filmworkOverview(noteId: String!) → { noteId, title, totalShots, rollScoreRubricJson, statusCounts, shots[], linkedNotes[] }
# Get a single shot by its internal UUID
# NOTE: shotId here is the UUID (id field), NOT the label like "01A"
filmworkShot(shotId: String!) → { ..., assets[], rolls[], goldenRoll, preflightStatus, assetCounts, rollSummary }
# Get a shot by its human-readable label (e.g. "01A") — returns the same fields as filmworkShot
filmworkShotByLabel(noteId: String!, shotLabel: String!) → { id, shotId, ... }
# Get decision log
filmworkDecisions(noteId: String!, shotId: String, limit: Int, offset: Int) → [{ id, shotId, actor, action, reason, outcome, createdAt }]
# Get insights
filmworkInsights(noteId: String, category: String, tag: String, limit: Int, offset: Int) → [{ id, noteId, category, tagsJson, title, detail, createdAt }]
# Get note links
noteLinks(noteId: String!, limit: Int, offset: Int) → [{ id, targetNoteId, targetNoteTitle, linkType }]
# Provenance — how was this asset made?
assetProvenance(assetId: String!) → { assetId, method, model, prompt, modelParamsJson, userNote, parents[] } | null
assetLineageTree(assetId: String!, maxDepth: Int) → [{ id, childAssetId, parentAssetId, parentExternalRef, role }]
rollInputSnapshot(rollId: String!) → [{ assetId, assetType, version }]
# One-shot roll debug — resolves prompt, input assets, and provenance in a single query
rollContext(rollId: String!) → { id, rollNumber, shotId, shotLabel, seed, modelUsed, promptVersion, totalScore, verdict, isGolden, promptBody, promptNegative, inputs[] }
ID disambiguation: In filmworkOverview responses, the shotId field on each shot is the human-readable label (e.g. "01A"). To query or mutate a shot, either use filmworkShotByLabel(noteId, shotLabel) with the label, or use filmworkShot(shotId) with the UUID from the shot's id field.
Shot mutations
# Create a filmwork project note
createGeneralNote(noteType: "filmwork", content: String!) → { id, title }
# Create a shot record (noteId, shotId: String!; sequenceOrder: Int!; targetDurationSec: Float; remaining args are optional strings)
createFilmworkShot(
noteId: String!, shotId: String!, scene: String,
sequenceOrder: Int!, targetDurationSec: Float,
dialogue: String, # stringified DialogueLine[] — use "[]" for silent shots
directionJson: String, # stringified Direction object
promptsJson: String, # stringified PromptEntry[]
modelConfigJson: String # stringified ModelConfig object
) → { id, shotId, status }
# Update shot fields (shotId is the UUID, not the label)
updateFilmworkShot(
shotId: String!,
dialogue: String, relationsJson: String, directionJson: String,
promptsJson: String, modelConfigJson: String, targetDurationSec: Float
) → FilmworkShot
# Update shot status (use blockerJson when status = "blocked")
updateShotStatus(shotId: String!, status: String!, blockerJson: String) → FilmworkShot
Important: shotId in mutations refers to the shot's UUID (the id field from creation), not the human-readable label like "01A". Use filmworkShotByLabel to resolve a label to a UUID.
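For example (placeholder IDs): resolve the label first, then mutate with the returned UUID.
# 1. Resolve the label to the shot's UUID
{ filmworkShotByLabel(noteId: "note-uuid", shotLabel: "01B") { id shotId status } }

# 2. Mutate using the id returned in step 1
mutation {
  updateShotStatus(shotId: "uuid-from-step-1", status: "ready") { id status }
}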
Asset mutations
# Upload flow: requestUploadUrl → PUT file → confirmAssetUpload
# PUT requires the same Authorization: Bearer header as GraphQL calls.
requestUploadUrl(shotId: String!, assetType: String!, filename: String!) → { uploadUrl, assetKey }
# After PUT upload succeeds:
confirmAssetUpload(shotId: String!, assetKey: String!, assetType: String!, label: String, metadataJson: String, provenanceJson: String) → { id, url, status, version, isGolden }
# Manage assets
setGoldenAsset(assetId) → { id, isGolden }
updateAssetStatus(assetId, status, regenNotesJson?) → { id, status }
setAssetAgentHold(assetId, hold) → { id, agentHold }
deleteAsset(assetId) → Boolean
Roll mutations
# Upload flow: requestRollUploadUrl → PUT video → confirmRollUpload
# PUT requires the same Authorization: Bearer header as GraphQL calls.
requestRollUploadUrl(shotId: String!, filename: String!) → { uploadUrl, rollKey }
# After PUT upload succeeds (seed: Int, promptVersion: Int — not String):
confirmRollUpload(shotId: String!, rollKey: String!, seed: Int, modelUsed: String, promptVersion: Int, scorecardJson: String, issues: String) → { id, rollNumber, url, totalScore, verdict }
# Review and manage rolls
scoreRoll(rollId, scorecardJson, totalScore, issues?) → { id, totalScore }
updateRollVerdict(rollId, verdict) → { id, verdict }
setGoldenRoll(rollId) → { id, isGolden }
unsetGoldenRoll(rollId) → { id, isGolden }
setRollAgentHold(rollId, hold) → { id, agentHold }
deleteRoll(rollId) → Boolean
Scorecard rubric
scorecardJson: { "rubricVersion": 1, "scores": { faceLikeness, expression, motionNatural, stability, styleMatch } }
// All scores: integer 1-5. Weights: faceLikeness=3, expression=3, motionNatural=2, stability=2, styleMatch=1
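// Worked example (hypothetical scores): faceLikeness 4, expression 5, motionNatural 4, stability 3, styleMatch 4
// → 4*3 + 5*3 + 4*2 + 3*2 + 4*1 = 45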
// totalScore = sum(score * weight), range 11-55. Pass as separate arg to scoreRoll.
Collaboration mutations
# Log a decision
addDecision(noteId, shotId, actor, action, reason, outcome) → { id, createdAt }
# actor: "human" | "agent"
# Link a reference note
addNoteLink(sourceNoteId, targetNoteId, linkType) → { id, targetNoteTitle }
removeNoteLink(linkId) → Boolean
# linkType: "character" | "setting" | "story" | "continuation"
# Add an insight
addInsight(noteId, category, tagsJson, title, detail, sourceShotsJson?) → { id, category, tagsJson, title, createdAt }
# category: "image" | "video" | "voice"
# tagsJson: JSON array of tags from whitelist (required, min 1)
Provenance queries
# How was this asset made? Returns method, model, prompt, and parent inputs.
assetProvenance(assetId: String!) → AssetProvenance | null
# Full lineage DAG — walks parent edges up to maxDepth levels (default 5, max 10).
# Each edge includes childAssetId + parentAssetId so the tree can be reconstructed.
assetLineageTree(assetId: String!, maxDepth: Int) → [LineageEdge!]!
# What asset versions were used when this roll was generated? (auto-captured on roll upload)
rollInputSnapshot(rollId: String!) → [RollInputSnapshot!]!
# One-shot roll debug context — resolves prompt text, input asset snapshot with provenance.
# Best starting point for debugging a roll.
rollContext(rollId: String!) → RollContext
Provenance mutations
# Set or update provenance for an existing asset (upsert).
# Use this for after-the-fact recording or corrections.
setAssetProvenance(
assetId: String!, method: String!,
model: String, prompt: String, modelParamsJson: String, userNote: String,
parents: [ProvenanceInputArg!]
) → AssetProvenance!
# Remove provenance record.
deleteAssetProvenance(assetId: String!) → Boolean!
# Inline provenance: pass provenanceJson to confirmAssetUpload (preferred — single API call).
# provenanceJson is a stringified JSON object:
# { "method": "ai_generated", "model": "gpt-image-2", "prompt": "...",
# "parents": [{ "assetId": "...", "role": "base" }, { "externalRef": "file.png", "role": "reference" }] }
input ProvenanceInputArg {
assetId: String # system asset ID (for tracked parents)
externalRef: String # description for non-system inputs (URL, filename, etc.)
role: String! # base | reference | style | mask | composition | audio
}
Provenance is always optional and never blocks uploads. rollInputSnapshot is captured automatically on roll upload — no action needed.
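For example, a sketch of recording provenance inline at upload time (IDs and values are placeholders):
mutation {
  confirmAssetUpload(
    shotId: "uuid-of-shot"
    assetKey: "key-from-requestUploadUrl"
    assetType: "start_frame"
    provenanceJson: "{\"method\":\"ai_generated\",\"model\":\"gpt-image-2\",\"prompt\":\"Elevator interior, morning light\",\"parents\":[{\"externalRef\":\"claire_ref.png\",\"role\":\"reference\"}]}"
  ) { id url status version isGolden }
}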
Enum reference
# Asset types (for requestUploadUrl / confirmAssetUpload)
assetType: "start_frame" | "end_frame" | "keyframe" | "dialogue" | "sfx" | "padded_audio" | "ref_video" | "ref_image"
# Cardinality: singleton vs collection
# Singleton (one active per shot — uploading a new one retires the old):
# start_frame, end_frame, dialogue, padded_audio
# Collection (multiple can coexist — use label to distinguish):
# keyframe, sfx, ref_video, ref_image
# Asset status
assetStatus: "pending" | "ready" | "approved" | "retired"
# Roll verdict
verdict: "pending" | "approved" | "rejected"
# Provenance methods
method: "ai_generated" | "user_upload" | "manual_edit" | "derived"
# Lineage roles (how a parent contributed to the child asset)
role: "base" | "reference" | "style" | "mask" | "composition" | "audio"Recommended query patterns
Use assetCounts and rollSummary for compact overview queries. Drill into full assets / rolls only when you need IDs for mutations.
{
filmworkOverview(noteId: "...") {
statusCounts { notStarted assetPrep ready generating review done blocked }
shots {
shotId status
assetCounts { startFrame endFrame keyframe dialogue sfx paddedAudio refVideo refImage total }
rollSummary { total pending approved rejected bestScore goldenRollId }
preflightStatus { ready }
}
}
}
{
filmworkShot(shotId: "...") {
shotId status dialogue directionJson promptsJson modelConfigJson blockerJson relationsJson
preflightStatus { ready checks { name passed detail } }
assets { id assetType label status version isGolden agentHold }
rolls { id rollNumber seed modelUsed promptVersion totalScore verdict isGolden agentHold }
goldenRoll { id rollNumber totalScore }
}
}
type AssetCounts {
startFrame: Int! # count of start_frame assets
endFrame: Int!
keyframe: Int!
dialogue: Int!
sfx: Int!
paddedAudio: Int!
refVideo: Int!
refImage: Int!
total: Int! # sum of all above
}
type RollSummary {
total: Int!
pending: Int! # verdict = "pending"
approved: Int! # verdict = "approved"
rejected: Int! # verdict = "rejected"
bestScore: Int # highest totalScore across all rolls (null if none scored)
goldenRollId: String # ID of the golden roll (null if none set)
}