Logs & retries

Scenario runs produce a live log. You watch it as the scenario runs; you can replay it later from the run history. This page is the operational guide.

Live log

The Scenarios view shows a console panel below the editor while a run is in progress. Output appears as it’s emitted — print() calls, dm.log.* calls, exception tracebacks, and platform-level messages (e.g. POST /A_BusinessPartner → 201).

The same stream is available via:

  • The chat panel that triggered the run (when run from chat).
  • GET /scenarios/{id}/runs/{run_id}/logs?follow=true (Server-Sent Events, useful from CI).
  • The MCP get_scenario_run_logs tool.
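The follow endpoint speaks Server-Sent Events, so a CI job only needs a small SSE parser on top of a streaming HTTP client. A minimal sketch of the parsing side (the event/field layout below is an illustrative assumption, not DataMaker's documented wire format):

```python
def parse_sse(stream_lines):
    """Parse Server-Sent Events into (event, data) pairs.

    Minimal parser: handles `event:` and `data:` fields and the
    blank-line event separator; ignores comments and other fields.
    """
    event, data = "message", []
    for line in stream_lines:
        line = line.rstrip("\n")
        if line == "":                      # blank line ends the event
            if data:
                yield event, "\n".join(data)
            event, data = "message", []
        elif line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data.append(line[len("data:"):].strip())

# Two events as they might arrive from the follow endpoint
raw = [
    "event: log\n",
    "data: POST /A_BusinessPartner -> 201\n",
    "\n",
    "data: seeded 500 rows in customers\n",
    "\n",
]
events = list(parse_sse(raw))
# events[0] == ("log", "POST /A_BusinessPartner -> 201")
```

In CI you would feed the parser the line iterator of a streaming GET against the logs?follow=true URL.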

Retention

Plan         Run history retained    Log lines per run
Free         14 days                 5,000
Pro          90 days                 100,000
Enterprise   per contract            per contract (unlimited)

After retention expires, the run record stays (you can see it ran, when, the outcome) but the log body is dropped.

Structured vs free-text

print(...) produces a free-text line, which is fine during development. For production scenarios, prefer the structured logger:

dm.log.info("seeded %d rows in %s", len(rows), table)
dm.log.warn("country %s has no template; falling back", country)
dm.log.error("retry exhausted for batch %d", i, exc_info=True)

dm.log.* writes structured records (JSON-shaped behind the scenes). The chat / UI renders them like text, but the exporter pulls structured fields if you want to ship logs to an external SIEM.
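The exact record schema isn't documented here, but working with the NDJSON export looks roughly like this (the field names ts, level, msg, and args are assumptions for illustration):

```python
import json

# Hypothetical NDJSON lines as the exporter might emit them; the exact
# field names (ts, level, msg, args) are assumptions, not a documented schema.
ndjson = """\
{"ts": "2024-05-01T12:00:00Z", "level": "info", "msg": "seeded %d rows in %s", "args": [500, "customers"]}
{"ts": "2024-05-01T12:00:01Z", "level": "warn", "msg": "country %s has no template; falling back", "args": ["XK"]}
"""

records = [json.loads(line) for line in ndjson.splitlines()]

# Structured fields let a SIEM filter by level instead of grepping text...
warnings = [r for r in records if r["level"] == "warn"]

# ...while the message can still be rendered the way the UI shows it.
rendered = [r["msg"] % tuple(r["args"]) for r in records]
```

Keeping the format string and arguments separate is what makes the records filterable; a plain print() would have collapsed them into one opaque string.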

Sensitive values

Anything generated from a sensitive field is redacted in logs by default. The redaction is keyed on the field name in the row dict, so:

for row in rows:
    print(row)                        # fully redacted on sensitive keys
    print(row["tax_id"])              # also redacted
    raw = decode(row["tax_id_blob"])
    print(raw)                        # ← this isn't tracked; be careful

If you decode or transform a sensitive value you’re responsible for the result.
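Name-keyed redaction could work roughly like the sketch below, which also shows why a decoded copy under a different key escapes it (the sensitive-field set and mask string are illustrative assumptions):

```python
SENSITIVE = {"tax_id", "ssn", "iban"}   # assumed example set of sensitive field names

def redact(value, key=None):
    """Mask a value when its dict key is marked sensitive."""
    if key in SENSITIVE:
        return "***REDACTED***"
    if isinstance(value, dict):
        return {k: redact(v, key=k) for k, v in value.items()}
    return value

row = {"name": "Ada", "tax_id": "DE811907980"}
print(redact(row))          # {'name': 'Ada', 'tax_id': '***REDACTED***'}

# A decoded copy stored under a different key slips through:
print(redact({"raw": "DE811907980"}))   # not masked — the key isn't sensitive
```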

Retries

Three layers, from least to most explicit:

1. Built-in connection retries

Database and HTTP calls retry transient errors automatically (429, 5xx, network errors). See REST endpoints.

2. The @dm.retry decorator

@dm.retry(max_attempts=3, backoff="exponential", on=(ConnectionError, TimeoutError))
def push_batch(rows):
    sap.post(entity="A_BusinessPartner", rows=rows, mode="batch")

Backoff strategies: "none", "linear", "exponential" (default), or pass a callable returning a number of seconds.
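The strategy names suggest delay schedules along these lines (a sketch of plausible semantics, not DataMaker's actual implementation; the 1-second base delay is an assumption):

```python
def delays(strategy, attempts, base=1.0):
    """Yield the sleep duration before each retry for a given backoff strategy."""
    for n in range(attempts - 1):          # no sleep after the final attempt
        if strategy == "none":
            yield 0.0
        elif strategy == "linear":
            yield base * (n + 1)           # 1s, 2s, 3s, ...
        elif strategy == "exponential":
            yield base * (2 ** n)          # 1s, 2s, 4s, ...
        elif callable(strategy):
            yield strategy(n)              # custom: retry index -> seconds
        else:
            raise ValueError(strategy)

print(list(delays("exponential", 4)))      # [1.0, 2.0, 4.0]
print(list(delays(lambda n: 0.5, 3)))      # [0.5, 0.5]
```

Exponential backoff is the usual default for transient upstream errors because it backs off fastest exactly when the target is struggling most.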

3. Re-runs

A failed scenario can be re-run from the run-detail page. By default a re-run starts from the top — there’s no automatic checkpointing. Build idempotency in if you need it:

if dm.connection("pg").execute("SELECT 1 FROM customers WHERE id = %s", [cid]):
    return  # already seeded; skip
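End to end, the skip-if-exists pattern makes a full re-run a no-op for rows that already landed. A self-contained sketch using an in-memory set as a stand-in for the database check:

```python
# In-memory stand-in for the customers table, to show the skip-if-exists
# pattern without a real database connection.
seeded = set()

def seed_customer(cid, rows_written):
    if cid in seeded:               # stands in for the SELECT 1 existence check
        return "skipped"
    seeded.add(cid)
    rows_written.append(cid)        # stands in for the actual insert
    return "inserted"

written = []
for cid in ["c1", "c2", "c1"]:      # "c1" repeats, as a re-run would repeat it
    seed_customer(cid, written)

print(written)                      # ['c1', 'c2'] — the repeat was a no-op
```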

Idempotency keys

For scenarios that POST to external systems, attach an idempotency key so duplicate runs don’t double-write:

sap.post(
    entity="A_BusinessPartner",
    rows=records,
    idempotency_key=f"{dm.run_id}:{batch_index}",
)

DataMaker passes this through as an Idempotency-Key header (REST) or as a deduplication field (when the target supports one). For SAP OData, idempotency is the caller’s responsibility — usually via a unique BusinessPartner ID per row.
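On the receiving side, header-based deduplication typically means the first request with a given key is processed and later requests with the same key replay the cached response. A sketch of that server-side behaviour (illustrative, not any specific target's implementation):

```python
# First request with a key is processed; replays return the cached result.
results = {}

def handle_post(idempotency_key, payload, writes):
    if idempotency_key in results:
        return results[idempotency_key]     # replay: no second write
    writes.append(payload)                  # the one real write
    results[idempotency_key] = {"status": 201, "key": idempotency_key}
    return results[idempotency_key]

writes = []
handle_post("run-42:0", {"name": "Ada"}, writes)
handle_post("run-42:0", {"name": "Ada"}, writes)   # duplicate run, same key
print(len(writes))   # 1 — the duplicate didn't double-write
```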

Cancelling a run

From the UI: Stop. From the API: DELETE /scenarios/{id}/runs/{run_id}. Cancellation is cooperative: DataMaker raises a KeyboardInterrupt in the worker, so cleanup code in finally: blocks still runs.
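Because cancellation arrives as an exception, the usual Python unwinding rules apply, which is why finally: cleanup survives a Stop. A small demonstration of that mechanism:

```python
# Cooperative cancellation: an injected KeyboardInterrupt unwinds the
# scenario, and finally: blocks still get to clean up.
cleaned_up = []

def scenario_body():
    try:
        raise KeyboardInterrupt      # stands in for DataMaker's cancel signal
    finally:
        cleaned_up.append("closed connections")   # cleanup still runs

try:
    scenario_body()
except KeyboardInterrupt:
    pass

print(cleaned_up)    # ['closed connections']
```

The flip side: code that swallows KeyboardInterrupt (a bare except:, for instance) can make a run impossible to stop cleanly.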

Exporting logs

For audit / compliance:

  • GET /scenarios/{id}/runs/{run_id}/logs.txt — plain text.
  • GET /scenarios/{id}/runs/{run_id}/logs.ndjson — newline-delimited JSON of the structured records.
  • Workspace-level export: Settings → Audit log → Export for a CSV of all runs in a date range.