Saving and Loading Jobs¶
Jobs and CompositeJobs can be persisted to JSON files and restored in a later session with a fresh SDK client. This is useful for:

- Resuming work after a process restart or crash
- Archiving execution results alongside the program that produced them
- Sharing job state between scripts or services
All execution history, results, errors, and compiler configuration are preserved faithfully. The live HTTP client and asyncio lock are the only things that are not persisted — both are re-created fresh when a snapshot is loaded.
Note

Persistence requires the pydantic package, which is a core dependency and is always installed.
Saving a Job¶
Call save() on any Job or CompositeJob to write its full state to a JSON file:

```python
job = sdk.create_job(program=qasm, qpu_id="qpu:uk:2:d865b5a184")
await job.execute()
job.save("my_job.json")
```
The file can be reloaded in any later session — even after restarting Python or switching machines — as long as the SDK version matches.
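Because the file is plain JSON, it can also be inspected with standard tooling. A minimal stand-alone sketch of the idea (the real file is produced by `job.save()` and its full key layout is an SDK detail; only `snapshot_kind` and `schema_version` are documented in later sections, and the other values here are made up for illustration):

```python
import json
from pathlib import Path

# Stand-in for a file written by job.save(). Only snapshot_kind and
# schema_version are documented fields; "state" is illustrative.
Path("my_job.json").write_text(json.dumps({
    "snapshot_kind": "job",
    "schema_version": "2",
    "state": "COMPLETED",
}))

# Any JSON tool can read the snapshot back.
loaded = json.loads(Path("my_job.json").read_text())
print(loaded["snapshot_kind"])  # job
```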
Loading a Job¶
Use sdk.load() to restore a job from a file. The SDK automatically
detects whether the file contains a single Job or a CompositeJob:
```python
async with OqcSdk(url=..., authentication_token=...) as sdk:
    restored = sdk.load("my_job.json")

    # All state is restored
    print(restored.state)          # e.g. JobState.COMPLETED
    print(restored.result)         # previous result is available
    print(restored.history.all())  # full execution history

    # The restored job is re-submittable
    await restored.execute()
```
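The detection can be pictured as reading the snapshot's `snapshot_kind` field and branching on it. This is a rough sketch of the idea, not the SDK's actual implementation: 'job' is the documented kind for a single Job (see the doctest in the next section), while the value used for composites is an assumption here.

```python
import json
import tempfile

def detect_snapshot_kind(path: str) -> str:
    # Sketch only: the real sdk.load() also validates the schema version
    # and rebuilds the full Job or CompositeJob object.
    with open(path) as f:
        return json.load(f)["snapshot_kind"]

# Demo with a hand-written file standing in for one written by save().
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"snapshot_kind": "job"}, f)

kind = detect_snapshot_kind(f.name)
print(kind)  # job
```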
Programmatic Use¶
If you want to handle the snapshot in memory — to inspect it, transmit it, or
store it in a database — use to_snapshot() and sdk.from_snapshot()
instead of the file-based methods.
Both operations are synchronous and require no network access:
```python
>>> from oqc_qcaas_sdk import OqcSdk, JobSnapshot
>>> from oqc_qcaas_sdk.job_snapshot import CURRENT_SCHEMA_VERSION
>>> sdk = OqcSdk(url="https://example.com", authentication_token="dummy")
>>> job = sdk.create_job(program="OPENQASM 2.0;", qpu_id="qpu:uk:2:test")
>>> # Outbound: capture full state as a Pydantic model
>>> snap = job.to_snapshot()
>>> snap.snapshot_kind
'job'
>>> snap.state
'CREATED'
>>> snap.schema_version == CURRENT_SCHEMA_VERSION
True
>>> # Serialise to a JSON string yourself if needed
>>> json_str = snap.model_dump_json()
>>> isinstance(json_str, str)
True
>>> # Inbound: rehydrate from a model
>>> restored = sdk.from_snapshot(snap)
>>> restored.program == job.program
True
>>> restored.qpu_id == job.qpu_id
True
```
Both JobSnapshot and CompositeJobSnapshot are exported from
oqc_qcaas_sdk and are standard Pydantic v2 models.
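Storing a snapshot in a database, for instance, is just a matter of treating the `model_dump_json()` output as text. A self-contained sketch using SQLite, with a plain dict standing in for the snapshot model so the example runs without the SDK:

```python
import json
import sqlite3

# Stand-in for snap.model_dump_json(); with the SDK this would be the
# snapshot's real JSON string.
json_str = json.dumps({"snapshot_kind": "job", "schema_version": "2"})

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE snapshots (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute("INSERT INTO snapshots (body) VALUES (?)", (json_str,))

# Later: fetch the text and rehydrate. With the SDK this would be
# JobSnapshot.model_validate_json(body) followed by sdk.from_snapshot(...).
(body,) = conn.execute("SELECT body FROM snapshots").fetchone()
restored = json.loads(body)
print(restored["snapshot_kind"])  # job
```

Because the snapshot types are standard Pydantic v2 models, `model_validate_json()` gives you the same schema validation on the way back in that `sdk.load()` applies to files.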
CompositeJob Persistence¶
CompositeJob works identically — the snapshot contains the full state of
every child job:
```python
composite = sdk.create_job(
    program=[prog_a, prog_b, prog_c],
    qpu_id="qpu:uk:2:d865b5a184",
)
await composite.execute()
composite.save("my_composite.json")

# Later:
restored = sdk.load("my_composite.json")
print(len(restored.jobs))       # 3
print(restored.completed_jobs)  # children that succeeded
for item in restored.outputs.all():
    print(item.data)            # each child's measurement counts
```
Schema Versioning¶
Every snapshot file contains a schema_version field. If a future SDK
release changes the snapshot format, attempting to load an old file will raise
a ValueError with a clear message rather than silently loading incorrect
data:
```python
# File saved with schema_version "1", loaded with SDK that expects "2":
# ValueError: Unsupported snapshot schema version: '1'. This SDK supports version '2'.
```
Migration guides will be provided in the changelog when schema versions change.
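The guard amounts to a simple equality check on the stored version. A sketch of the idea, assuming a hypothetical current version of "2" to mirror the error message above (this is illustrative, not the SDK's actual code):

```python
CURRENT_SCHEMA_VERSION = "2"  # stand-in; the SDK exports the real value
                              # from oqc_qcaas_sdk.job_snapshot

def check_schema_version(snapshot: dict) -> None:
    # Illustrative version of the check load() performs on every file.
    found = snapshot.get("schema_version")
    if found != CURRENT_SCHEMA_VERSION:
        raise ValueError(
            f"Unsupported snapshot schema version: {found!r}. "
            f"This SDK supports version {CURRENT_SCHEMA_VERSION!r}."
        )

check_schema_version({"schema_version": "2"})  # passes silently
try:
    check_schema_version({"schema_version": "1"})
except ValueError as exc:
    print(exc)
```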
Non-Terminal State¶
If a job was saved while still in-flight (e.g. SUBMITTED or RUNNING),
its state is preserved faithfully on load. If the underlying task is still
active on the server, calling await job.refresh() will resume polling it
normally. If the task has since been scrubbed from the server, refresh()
will surface a clear server error rather than producing a confusing internal
failure.
Stale Execution State¶
sdk.load() only validates the file format — it does not contact the
server. This means a restored job may reference server-side state that has
changed since the snapshot was saved. Common scenarios:
- Task ID purged — historical task records are scrubbed from the server after a retention period. await job.refresh() will surface a 404 error.
- QPU removed — the target QPU is decommissioned or renamed. Submitting will be rejected by the server.
- Compiler config no longer supported — the QPU's supported compiler options have changed. Submission will be rejected.
- Qubit count reduced — the QPU's available qubits are reduced and the saved program requests more than are now available. Submission will be rejected.
In every case the SDK handles the failure gracefully: the error is stored in
job.error and no exception is raised (unless you call
job.raise_for_status()).
```python
restored = sdk.load("my_job.json")
await restored.execute()  # safe even if server state has changed

if restored.error:
    print(f"Re-execution failed: {restored.error.error_message}")
else:
    print(f"Re-execution succeeded: {restored.result.data}")
```
The previously saved results and history remain available on the job object regardless of whether the re-execution succeeds.