Unlike other workflow engines that are purely ephemeral, dataflows builds a persistent "Context" of your company's data. This allows for historical analysis, time-travel debugging, and AI context generation.
The core schema is designed to be flexible yet strict where it matters.
- `service_clients`: Manages the identity of external systems.
- `connections`: Stores the actual credentials (tokens) linking a User or Account to a Service Client.
- `executions`: The audit log of everything that happens in the system.
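As a rough sketch, the three core tables might look like the following in PostgreSQL. The column names and types here are illustrative assumptions, not the actual schema:

```sql
-- Hypothetical DDL for the core tables; real columns may differ.
CREATE TABLE service_clients (
  id         uuid PRIMARY KEY DEFAULT gen_random_uuid(),
  name       text NOT NULL,           -- e.g. 'clickup'
  created_at timestamptz NOT NULL DEFAULT now()
);

CREATE TABLE connections (
  id                uuid PRIMARY KEY DEFAULT gen_random_uuid(),
  service_client_id uuid NOT NULL REFERENCES service_clients(id),
  owner_id          uuid NOT NULL,    -- the User or Account that owns the token
  access_token      text NOT NULL,    -- stored credential
  created_at        timestamptz NOT NULL DEFAULT now()
);

CREATE TABLE executions (
  id          uuid PRIMARY KEY DEFAULT gen_random_uuid(),
  workflow_id text NOT NULL,          -- e.g. 'sync-clickup-task'
  status      text NOT NULL,          -- 'running', 'succeeded', or 'failed'
  started_at  timestamptz NOT NULL DEFAULT now(),
  finished_at timestamptz
);
```

The foreign key from `connections` to `service_clients` reflects the description above: each stored token is tied to exactly one external system identity.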
dataflows is designed to synchronize data from external systems into its own warehouse tables.
```ts
// Example: Syncing ClickUp Tasks to the local warehouse
export const syncClickUpTask = defineWorkflow({
  id: 'sync-clickup-task',
  trigger: clickup.onTaskUpdated(),
  async run({ event, step }) {
    const task = event.payload
    // Each upsert is a durable step — retried on failure,
    // skipped on replay if it already succeeded.
    const project = await step.run('upsert-project', () =>
      db.projects.upsert({
        clickup_id: task.project.id,
        name: task.project.name
      })
    )
    await step.run('upsert-task', () =>
      db.tasks.upsert({
        clickup_id: task.id,
        project_id: project.id,
        name: task.name,
        status: task.status
      })
    )
  }
})
```
This lets you query your data with standard SQL, even when the source API is slow or unavailable.
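For instance, once the sync workflow above has populated the warehouse, a plain SQL query is served entirely from local tables. The table and column names follow the upsert calls in the example; treat the exact status value as an assumption:

```sql
-- Open tasks per project, answered without touching the ClickUp API.
SELECT p.name AS project, COUNT(*) AS open_tasks
FROM tasks t
JOIN projects p ON p.id = t.project_id
WHERE t.status <> 'closed'           -- assumed status value
GROUP BY p.name
ORDER BY open_tasks DESC;
```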
The true power of the dataflows warehouse is its ability to turn raw API data into actionable business intelligence.
We don't just dump JSON into the database. We create structured SQL Views that join data across different services.
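To illustrate, a cross-service view could join the synced ClickUp tasks against data synced from a second service. The `github_pull_requests` table and its columns are hypothetical, included only to show the shape of such a view:

```sql
-- Hypothetical view joining ClickUp tasks to a second synced service.
CREATE VIEW task_delivery_status AS
SELECT
  t.name   AS task,
  t.status AS task_status,
  pr.state AS pr_state,
  pr.merged_at
FROM tasks t
LEFT JOIN github_pull_requests pr
  ON pr.clickup_task_id = t.clickup_id;
```

Consumers then query `task_delivery_status` like any ordinary table, without knowing which services the underlying rows came from.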
Because the data is in standard PostgreSQL, you can connect any visualization tool: