This page defines the building blocks every later page in this section refers to.
Data flows through five stages from the external system to SalesDash:
External system
│
▼
Provider ← one workspace in the external system
│
▼
Source ← one way of bringing records from that workspace into SalesDash
│
▼
Source Model ← the shape of one record produced by that source
│ ← optionally extended with Enrichments
▼
Processor ← the spec that turns each source model into SalesDash data
│
▼
SalesDash records ← agents, activities, sales, calls, periods, identities

Each stage is a separate, configurable thing. You can have multiple sources on one provider, one source can execute multiple processors, and one processor can use multiple nested enrichments.
A Provider represents one workspace in one external system — for example, one Teamleader workspace, one GoHighLevel location, or one Aircall company. Different external systems use different names for this; "workspace" is the umbrella term used throughout these docs.
If your team uses two separate Teamleader workspaces, that's two providers, each with its own credentials. A provider on its own does nothing — it's the credential record that sources hang off.
A Source is one configured way of bringing records from a provider into SalesDash. A source produces source models of exactly one type.
Two kinds of source exist: poll sources, which fetch records from the external system on a configured interval, and webhook sources, which receive records the external system pushes to SalesDash.
A provider can have many sources. For example, a GoHighLevel provider might have an opportunities poll source, an opportunity webhook source, and a calendar event webhook source all at once.
Example
The opportunities poll source on a GoHighLevel provider runs on the interval you configure on the source. On each run it asks GoHighLevel "give me every opportunity updated since the last run" and produces one source model for each — even if the same opportunity has been seen before. The processor decides what to do with each source model.
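The poll cycle can be sketched in a few lines. This is an illustration only: `fetch_updated_since` and the source-model shape here are hypothetical stand-ins, not SalesDash APIs.

```python
from datetime import datetime, timezone

def run_poll(fetch_updated_since, last_run):
    """One poll run: ask the external system for every record updated
    since the previous run and emit one source model per record,
    even for records that were already seen on earlier runs."""
    now = datetime.now(timezone.utc)
    records = fetch_updated_since(last_run)
    source_models = [{"type": "ghl_opportunity", "data": r} for r in records]
    # `now` is stored and becomes `last_run` for the next scheduled run.
    return source_models, now
```

Deduplication is deliberately not the source's job: the source emits everything it fetched, and the processor decides what each source model means.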
A Source Model is the typed shape of one record produced by a source. Each source model has an identifier — for example ghl_opportunity, teamleader_deal, payload (the generic webhook payload).
Each source model declares its field schema: the field names and their types (string, integer, datetime, boolean, etc.). Fields known to contain end-customer personal data are marked as such in the schema, and those fields are blocked from being referenced anywhere in a processor spec (see Variables and types).
The field schema is what the Spec Builder uses to populate variable autocomplete and to flag type mismatches.
Example
A ghl_opportunity source model has fields like id, name, status, pipelineStageId, monetaryValue, assignedTo (the GHL user ID of the assigned agent), and lastStatusChangeAt. It also exposes a contact object with the customer's name, email, and phone — those fields are marked as end-customer personal data and cannot be referenced anywhere in a processor spec.
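As a rough sketch of the schema idea (the field names come from the example above, but the dict structure and the `personal_data` flag are invented for this illustration, not SalesDash's real schema format), the rule that personal-data fields can never be referenced might look like:

```python
# Hypothetical schema sketch; the real schemas live inside SalesDash.
GHL_OPPORTUNITY_SCHEMA = {
    "id": {"type": "string"},
    "name": {"type": "string"},
    "status": {"type": "string"},
    "pipelineStageId": {"type": "string"},
    "monetaryValue": {"type": "float"},
    "assignedTo": {"type": "string"},
    "lastStatusChangeAt": {"type": "datetime"},
    "contact.name": {"type": "string", "personal_data": True},
    "contact.email": {"type": "string", "personal_data": True},
    "contact.phone": {"type": "string", "personal_data": True},
}

def referencable_fields(schema):
    """Fields a processor spec may reference: everything that is not
    marked as end-customer personal data."""
    return [f for f, meta in schema.items() if not meta.get("personal_data")]
```

This is also, conceptually, what feeds the Spec Builder's variable autocomplete: only the referencable fields are offered.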
An Enrichment is an extra fetch that runs after a source delivers its source models, attaching related data before the processor sees them.
Example: the GoHighLevel opportunity source delivers opportunity source models that reference a pipeline stage by ID but include no other information about it. The opportunity → pipeline stage enrichment fetches each referenced stage and attaches it to its opportunity, so the processor can read the stage's name to set the activity type without writing any HTTP code.
Enrichments are declared on the processor (in the Spec Builder's left panel). When added, they appear as nested fields on the source model, accessible from every instruction in the spec.
Enrichments can be nested. An enrichment produces a source model of its own type, which can itself have enrichments — so you can chain opportunity → contact → contact owner and reach all three from your spec.
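A minimal sketch of how such a chain could resolve, assuming a hypothetical declaration shape (`fetch`, `via`, and `attach_as` are invented names for this illustration, not SalesDash's configuration format):

```python
def apply_enrichments(model, enrichments, fetchers):
    """Resolve each declared enrichment, attach the fetched model as a
    nested field, then recurse into the fetched model's own enrichments
    so chains like opportunity -> contact -> contact owner all resolve."""
    for e in enrichments:
        fetched = fetchers[e["fetch"]](model[e["via"]])
        model[e["attach_as"]] = fetched
        apply_enrichments(fetched, e.get("enrichments", []), fetchers)
    return model

# Toy fetchers standing in for the real HTTP calls SalesDash makes.
fetchers = {
    "contact": lambda cid: {"id": cid, "ownerId": "u1"},
    "user": lambda uid: {"id": uid, "name": "Ada"},
}
chain = [{
    "fetch": "contact", "via": "contactId", "attach_as": "contact",
    "enrichments": [
        {"fetch": "user", "via": "ownerId", "attach_as": "owner"},
    ],
}]
opportunity = apply_enrichments({"id": "o1", "contactId": "c9"}, chain, fetchers)
```

After resolution, the spec can read the nested fields directly, e.g. the contact owner's name from the opportunity.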
A Processor is the unit of logic that decides what to do with each source model from a source. It owns a single document called the spec.
A processor is bound to one source-model type. It can be linked to one or more sources that produce source models of that type. Every time one of those sources delivers source models, the processor's spec runs once per source model.
Processors are defined under Providers → Processors in the tenant admin.
Example: a tiny processor
A processor on the ghl_user source model that does one thing: register each GHL user as a SalesDash agent.
The spec has a single instruction:
upsert_agent_external_identity with source_model.id as the external_id, source_model.email as the email, and the user's first and last name (joined into one string with an implode_with_spaces expression) as the external_name.

Each time this processor runs against one GHL user, that user becomes (or stays) a SalesDash agent identity. There's no branching, no activity creation, and no enrichments — just identity registration. Most real processors are larger, but they all start from this shape: a source model in, a few instructions, one or more SalesDash records out.
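The real spec is composed in the Spec Builder, not written by hand, but as a mental model the one-instruction spec above might serialize roughly like this (every field name in this sketch is illustrative, not SalesDash's actual format):

```python
# Hypothetical serialized form of the one-instruction spec.
SPEC = {
    "source_model": "ghl_user",
    "instructions": [
        {
            "instruction": "upsert_agent_external_identity",
            "external_id": {"var": "source_model.id"},
            "email": {"var": "source_model.email"},
            "external_name": {
                "expression": "implode_with_spaces",
                "args": [
                    {"var": "source_model.firstName"},
                    {"var": "source_model.lastName"},
                ],
            },
        },
    ],
}
```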
External IDs decide what gets overwritten
Every upsert operation has an external_id field that uniquely identifies the SalesDash record it produces. When SalesDash sees the same external_id again, it updates the existing record instead of creating a new one. So the way you compose external_id decides whether each new run produces a new SalesDash record or overwrites an existing one.
This is the most important design decision when writing a processor. Suppose you're processing deal records that move through pipeline stages and eventually close (won or lost). Two reasonable choices give very different results:
The deal's id joined with the current stage name: one activity per stage transition, so every stage the deal moves through leaves its own activity behind, and you see the deal's full history.

The deal's id joined with the fixed string "CLOSED" (joined with a concat expression): one activity per deal, shared by every closed state. If a deal flips from "won" to "lost," the same activity is overwritten and the activity_type updates with it. You always see the deal's current end-state and nothing else.

Neither is "right" — it depends on whether you want to count outcomes (one activity per deal) or stage activity (one activity per transition). The standard separator for composite ids is " | " (space-pipe-space).
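The overwrite-versus-create behavior is easy to demonstrate with a toy upsert, using a plain dict as a stand-in for SalesDash's record store:

```python
def upsert(store, external_id, record):
    """Same external_id: the existing record is overwritten.
    New external_id: a new record is created."""
    store[external_id] = record

store = {}
# One activity per deal: both closed states share one external_id,
# so the later "lost" overwrites the earlier "won".
upsert(store, "deal-42 | CLOSED", {"activity_type": "won"})
upsert(store, "deal-42 | CLOSED", {"activity_type": "lost"})
# One activity per transition: each stage gets its own external_id,
# so every transition survives as its own record.
upsert(store, "deal-42 | qualified", {"activity_type": "stage_change"})
upsert(store, "deal-42 | proposal", {"activity_type": "stage_change"})
```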
The spec is the actual configuration of a processor. It declares the enrichments to attach to incoming source models and the ordered list of instructions to run against each one.
Instructions are the verbs of the spec language: upsert_activity, upsert_sale, if_else, for_each, define_variable, and so on. They read variables from the current scope, optionally combine them with expressions (concat, parse_datetime, equals, …), and produce results.
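Conceptually, an expression is a small tree that the runtime evaluates against the current scope. A toy evaluator, covering only concat and equals and using an invented node shape, might look like:

```python
def eval_expr(node, scope):
    """Toy evaluator for expression trees. Node shapes here are
    invented for illustration; the real spec language is richer."""
    if "var" in node:
        return scope[node["var"]]        # read a variable from scope
    if "value" in node:
        return node["value"]             # literal value
    args = [eval_expr(a, scope) for a in node["args"]]
    if node["expression"] == "concat":
        return "".join(str(a) for a in args)
    if node["expression"] == "equals":
        return args[0] == args[1]
    raise ValueError(f"unknown expression {node['expression']}")
```

An instruction like upsert_activity would evaluate such trees to produce its external_id, its activity_type, and so on.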
You compose specs in the Spec Builder — there's nothing to write by hand.
Every save that changes the spec creates a new immutable spec version. Versions aren't browsable in the UI yet — their main role today is to give each historical execution a record of the spec it ran against, so the Trace viewer can show traces against their original spec.
An Execution is one run of one processor against one source model. It captures the source model that was processed, the spec version it ran against, a trace of what each instruction did, and whether the run succeeded or failed.
Processor executions are listed under Providers → Processor Executions; opening one takes you into the Trace viewer. When something looks wrong in production, the trace viewer is where you start.
There are two layers of execution records: the source execution, covering one delivery run of a source, and the processor executions nested inside it, one per source model per linked processor.
A failure in one processor execution doesn't fail the parent source execution; the next source model continues.
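The isolation between the two layers can be sketched as a loop that records failures instead of raising them (illustrative only, not SalesDash internals):

```python
def run_source_execution(source_models, run_processor):
    """One source execution: run the processor once per source model.
    A failing processor execution is recorded, not propagated, so the
    remaining source models still run."""
    results = []
    for sm in source_models:
        try:
            results.append(("ok", run_processor(sm)))
        except Exception as exc:
            results.append(("failed", str(exc)))
    return results
```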
Read The Spec Builder next to see how processors are actually composed.