--- introduction.mdx ---
Zero is a new kind of sync engine powered by queries.
Rather than syncing entire tables to the client, or using static rules to carefully specify what to sync, you just write queries directly in your client code. Queries can access the entire backend database.
Zero caches the data for queries locally on the device, and reuses that data automatically to answer future queries whenever possible.
For typical applications, the result is that almost all queries are answered locally, instantly. It feels like you have access to the entire backend database directly from the client in memory. Occasionally, when you do a more specific query, Zero falls back to the server. But this happens automatically without any extra work required.
Zero is made possible by a custom streaming query engine we built called [ZQL](reading-data), which uses [Incremental View Maintenance](https://www.vldb.org/pvldb/vol16/p1601-budiu.pdf) on both client and server to efficiently keep large, complex queries up to date.
## Status
Zero is in alpha. There are still some rough edges, and to run it, you need to [deploy it yourself](deployment) to AWS or similar.
Even so, Zero is already quite fun to work with. We are using it ourselves to build our own [Linear-style bug tracker](https://bugs.rocicorp.dev/). We find Zero already much more productive than the alternatives, even when we occasionally have to work around a missing feature.
If you are building a new web app that needs to be fast and reactive, and can do the deployment yourself, it's a great time to get started with Zero. We're working toward a [beta release and full production readiness](roadmap) this year.
--- quickstart.mdx ---
## Prerequisites
- Docker
- Node 20+
This quickstart uses React, but Zero is also available for SolidJS. See [SolidJS](/docs/solidjs).
## Run
In one terminal, install and start the database:
```bash
git clone https://github.com/rocicorp/hello-zero.git
cd hello-zero
npm install
npm run dev:db-up
```
**Not using npm?** Zero's server component depends on `@rocicorp/zero-sqlite3`, which contains a
binary that requires running a postinstall script. Most alternative package
managers (non-npm) disable these scripts by default for security reasons. Here's
how to enable installation for common alternatives:
### pnpm
For [pnpm](https://pnpm.io/), either:
- Run `pnpm approve-builds` to approve all build scripts, or
- Add the specific dependency to your `package.json`:
```json
"pnpm": {
"onlyBuiltDependencies": [
"@rocicorp/zero-sqlite3"
]
}
```
### Bun
For [Bun](https://bun.sh/), add the dependency to your trusted dependencies
list:
```json
"trustedDependencies": [
"@rocicorp/zero-sqlite3"
],
```
In a second terminal, start `zero-cache`:
```bash
cd hello-zero
npm run dev:zero-cache
```
In a final terminal, start the UI:
```bash
cd hello-zero
npm run dev:ui
```
## Quick Overview
`hello-zero` is a demo app that allows querying over a small dataset of fake messages between early Zero users.
Here are some things to try:
- Press the **Add Messages** button to add messages to the UI. Any logged-in or anonymous users are allowed to add messages.
- Press the **Remove Messages** button to remove messages. Only logged-in users are allowed to remove messages. You can **hold shift** to bypass the UI warning and see that write access control is enforced server-side: the UI flickers as the optimistic write happens instantly and is then reverted by the server. Press **login** to log in as a random user; the remove button will then work.
- Open two different browsers and see how fast sync propagates changes.
- Add a filter using the **From** and **Contains** controls. Notice that filters are fully dynamic and synced.
- Edit a message by pressing the **pencil icon**. You can only edit messages from the user you’re logged in as. As before, you can attempt to bypass this by holding shift.
- Check out the SQL schema for this database in `seed.sql`.
- Log in to the database with `psql postgresql://user:password@127.0.0.1:5430/postgres` (or any other pg viewer) and delete or alter a row. Observe that the UI updates automatically.
## Deployment
You can deploy Zero apps to most cloud providers that support Docker and Postgres. See [Deployment](/docs/deployment) for more information.
--- add-to-existing-project.mdx ---
Zero integrates easily into most JavaScript or TypeScript projects, whether
you're using React, Vue, Svelte, Solid, or vanilla JavaScript.
## Prerequisites
- A PostgreSQL database with Write-Ahead Logging (WAL) enabled. See [Connecting
to Postgres](connecting-to-postgres) for setup instructions.
## Installation
Install the Zero package:
```bash
npm install @rocicorp/zero
```
**Note:** If you're using [pnpm](https://pnpm.io) or [bun](https://bun.sh),
additional steps are required to install native binaries. Refer to [Not using
npm?](quickstart#not-npm) for details.
## Environment Variables
Configure Zero by creating a `.env` file in your project root:
```bash
ZERO_UPSTREAM_DB="postgresql://user:password@127.0.0.1/postgres"
ZERO_REPLICA_FILE="/tmp/sync-replica.db"
```
Replace the placeholders with your database connection details. For more
options, see [configuration options](zero-cache-config).
## Starting the Server
Start the Zero server using the CLI:
```bash
npx zero-cache
```
The server runs on port 4848 by default. To verify, open `http://localhost:4848`
in your browser. If everything is configured correctly, you'll see "OK".
## Defining Your Schema
Define your data model schema as described in the [Zero schema
documentation](zero-schema).
Example:
```ts
// schema.ts
import {createSchema, table, string} from '@rocicorp/zero';

const message = table('message')
  .columns({
    id: string(),
    body: string(),
  })
  .primaryKey('id');

export const schema = createSchema({
  tables: [message],
});

export type Schema = typeof schema;
```
If you're using [Prisma](https://www.prisma.io/) or
[Drizzle](https://orm.drizzle.team/), you can convert their schemas to Zero
schemas using tools listed in the [community
section](./community#database-tools).
### Permissions
Update `schema.ts` to include permissions for your tables. For example, to allow
all users to read and write to the `message` table, add the following:
```ts
// schema.ts
import {ANYONE_CAN_DO_ANYTHING, definePermissions} from '@rocicorp/zero';

export const permissions = definePermissions(schema, () => ({
  message: ANYONE_CAN_DO_ANYTHING,
}));
```
For more details, see [permissions](permissions).
## Creating a Zero Instance
To create a Zero client instance:
```js
import {Zero} from '@rocicorp/zero';
import {schema} from './schema';

const z = new Zero({
  userID: 'anon',
  server: 'http://localhost:4848',
  schema,
});
```
In production, avoid hardcoding the server URL. Use environment variables like
`import.meta.env.VITE_PUBLIC_SERVER` or `process.env.NEXT_PUBLIC_SERVER`.
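For example, with Vite that might look like this (a minimal sketch; `VITE_PUBLIC_SERVER` is an assumed variable name):
```js
// Sketch: read the server URL from the environment instead of hardcoding it.
// VITE_PUBLIC_SERVER is an assumed variable name for this example.
const z = new Zero({
  userID: 'anon',
  server: import.meta.env.VITE_PUBLIC_SERVER,
  schema,
});
```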
## Reading Data
To read data, use the `materialize` method on a `Query` from the `Zero`
instance. This creates a materialized view that listens for real-time updates to
the data:
```js
const view = z.query.message.materialize();
view.addListener(data => console.log('Data updated:', data));
```
When the view is no longer needed, ensure you clean up by destroying it:
```js
view.destroy();
```
For more details, see [Reading Data with ZQL](reading-data).
### React
React developers can use the `useZero` hook for seamless integration. See
[Integrations React](react) for more details.
### SolidJS
For SolidJS, use the `createZero` function instead of `new Zero`.
Refer to [Integrations SolidJS](solidjs) for additional information.
### Other Frameworks
For other frameworks, see the [UI frameworks](community#ui-frameworks)
documentation.
## Writing Data
Zero supports both simple and advanced data mutations. For basic use cases,
use the [CRUD mutator](writing-data):
```ts
z.mutate.message.insert({id: nanoid(), body: 'Hello World!'});
```
For more complex scenarios, such as custom business logic, use
[custom mutators](custom-mutators) to define tailored mutation behavior.
## Server-Side Rendering (SSR)
Zero does not yet support SSR. See [SSR](zql-on-the-server#ssr) for details on disabling SSR for your framework.
## Deployment
Ensure all `.env` variables are set in the production environment. For Zero
cache deployment, see [Deployment](deployment). For frontend deployment, consult
your framework's documentation.
--- samples.mdx ---
## zbugs
[zbugs](https://bugs.rocicorp.dev/) is a complete issue tracker in the style of Linear built with Zero.
Not just a demo app, this is Rocicorp’s actual issue tracker. We use it every day and depend on it. When Zero launches publicly, this will be our public issue tracker, not GitHub.
**Stack:** Vite/Fastify/React
**Live demo:** https://bugs.rocicorp.dev/ (password: `zql`)
**Source:** https://github.com/rocicorp/mono/tree/main/apps/zbugs
### Features
- Instant reads and writes with realtime updates throughout
- GitHub auth
- Write permissions (anyone can create a bug, but only the creator can edit their own bug, etc.)
- Read permissions (only admins can see internal issues and comments on those issues)
- Complex filters
- Unread indicators
- Basic text search
- Emojis
- Short numeric bug IDs rather than cryptic hashes
## hello-zero
A quickstart showing off the key features of Zero.
**Stack:** Vite/Hono/React
**Source:** https://github.com/rocicorp/hello-zero
**Docs:** [Quickstart](/docs/quickstart)
### Features
- Instant reads and writes with realtime updates throughout
- 60fps (ish) mutations and sync
- Hard-coded auth + write permissions
- Write permissions (only logged in users can remove messages, only creating user can edit a message)
- Complex filters
- Basic text search
## hello-zero-solid
Same as `hello-zero`, but in Solid. See [SolidJS](/docs/solidjs).
## hello-zero-do
Same as `hello-zero` and `hello-zero-solid` above, but demonstrates Cloudflare Durable Objects integration.
This sample runs `zero-client` within a Durable Object and monitors changes to a Zero query. This can be used to do things like send notifications, update external services, etc.
**Stack:** Vite/Hono/React/Cloudflare Workers
**Source:** [https://github.com/rocicorp/hello-zero-do](https://github.com/rocicorp/hello-zero-do)
--- connecting-to-postgres.mdx ---
In the future, Zero will work with many different backend databases. Today only Postgres is supported. Specifically, Zero requires Postgres v15.0 or higher with [logical replication](https://www.postgresql.org/docs/current/logical-replication.html) enabled.
Here are some common Postgres options and what we know about their support level:
| Postgres | Support Status |
| --------------------------------- | ----------------------------------------------------------------------------------------------------------- |
| Postgres.app | ✅ |
| postgres:16.2-alpine docker image | ✅ |
| AWS RDS | ✅ |
| AWS Aurora | ✅ v15.6+ |
| Google Cloud SQL | ✅ See [notes below](#google-cloud-sql) |
| [Fly.io](http://Fly.io) Postgres | ✅ |
| Supabase, Neon, Render, Heroku | 🤷‍♂️ Partial support with autoreset. See [Schema Changes](#schema-changes) and provider-specific notes below. |
## Schema Changes
Zero uses Postgres “[Event Triggers](https://www.postgresql.org/docs/current/sql-createeventtrigger.html)” when possible to implement high-quality, efficient [schema migration](zero-schema/#migrations).
Some hosted Postgres providers don’t provide access to Event Triggers.
Zero still works out of the box with these providers, but for correctness, any schema change triggers a full reset of all server-side and client-side state. For small databases (< 10GB) this can be OK, but for bigger databases we recommend choosing a provider that grants access to Event Triggers.
## Configuration
The Postgres `wal_level` config parameter has to be set to `logical`. You can check the current level with this command:
```bash
psql -c 'SHOW wal_level'
```
If it doesn’t output `logical` then you need to change the wal level. To do this, run:
```bash
psql -c "ALTER SYSTEM SET wal_level = 'logical';"
```
Then restart Postgres. On most pg systems you can do this like so:
```bash
data_dir=$(psql -t -A -c 'SHOW data_directory')
pg_ctl -D "$data_dir" restart
```
After your server restarts, show the `wal_level` again to ensure it has changed:
```bash
psql -c 'SHOW wal_level'
```
## SSL Mode
Some Postgres providers (notably Fly.io, so far) do not support TLS on their internal networks. You can disable
attempting to use it by adding the `sslmode=disable` query parameter to your connection strings from `zero-cache`.
## Provider-Specific Notes
### Google Cloud SQL
To use Google Cloud SQL you must [manually create a `PUBLICATION`](/docs/postgres-support#limiting-replication)
and specify that publication name in the [App Publications](/docs/zero-cache-config#app-publications)
option when running `zero-cache`.
(Google Cloud SQL does not provide sufficient permissions for `zero-cache` to create its default publication.)
### Supabase
To connect to Supabase you must use the "Direct Connection" style connection string, not the pooler. This is because Zero sets up a logical replication slot, which is only supported with a direct connection.
Additionally, you'll likely need to assign an IPv4 address to your Supabase instance. This is not supported on the free Supabase tier and costs an extra $4/mo.
--- postgres-support.mdx ---
Postgres has a massive feature set, of which Zero supports a growing subset.
## Object Names
- Table and column names must begin with a letter or underscore
- This can be followed by letters, numbers, underscores, and hyphens
- Regex: `/^[A-Za-z_]+[A-Za-z0-9_-]*$/`
- The column name `_0_version` is reserved for internal use
## Object Types
- Tables are synced
- Views are not synced
- `identity` generated columns are synced
- All other generated columns are not synced
- Indexes aren’t _synced_ per se, but we do implicitly add indexes to the replica that match the upstream indexes. In the future this will be customizable.
## Column Types
| Postgres Type | Type to put in `schema.ts` | Resulting JS/TS Type |
| --------------------------------- | -------------------------- | -------------------- |
| All numeric types | `number` | `number` |
| `char`, `varchar`, `text`, `uuid` | `string` | `string` |
| `bool` | `boolean` | `boolean` |
| `date`, `timestamp`, `timestamptz` | `number` | `number` |
| `json`, `jsonb` | `json` | `JSONValue` |
| `enum` | `enumeration` | `string` |
Other Postgres column types aren’t supported. They will be ignored when replicating (the synced data will be missing that column) and you will get a warning when `zero-cache` starts up.
If your schema has a pg type not listed here, you can support it in Zero by using a trigger to map it to some type that Zero can support. For example, if you have an [enum type](https://www.postgresql.org/docs/current/datatype-enum.html#DATATYPE-ENUM) `mood` used by a column `user_mood mood`, you can use a trigger to map it to a `user_mood_text text` column. You would then use another trigger to map changes to `user_mood_text` back to `user_mood` so that the data can be updated by Zero.
Let us know if the lack of a particular column type is hindering your use of Zero. It can likely be added.
## Column Defaults
Default values are allowed in the Postgres schema, but there is currently no way to use them from a Zero app. The create mutation requires all columns to be specified, except when columns are nullable (in which case, they default to null). Since there is no way to leave non-nullable columns off the insert, there is no way for Postgres to apply the default. This is a known issue and will be fixed in the future.
## IDs
It is strongly recommended that primary keys be client-generated random strings like [uuid](https://www.npmjs.com/package/uuid), [ulid](https://www.npmjs.com/package/ulid), [nanoid](https://www.npmjs.com/package/nanoid), etc. This makes optimistic creation and updates much easier.
Imagine that the PK of your table is an auto-incrementing integer. If you optimistically create an entity of this type, you will have to give it some ID – the type will require it locally, but also if you want to optimistically create relationships to this row you’ll need an ID.
You could sync the highest value seen for that table, but there are race conditions and it is possible for that ID to be taken by the time the creation makes it to the server. Your database can resolve this and assign the next ID, but now the relationships you created optimistically will be against the wrong row. Blech.
GUIDs make a lot more sense in synced applications.
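For example, with client-generated IDs you can optimistically create a row and rows that relate to it in one go (a sketch assuming hypothetical `issue` and `comment` tables):
```ts
import {nanoid} from 'nanoid';

// The ID is known immediately on the client, so related rows can be
// created optimistically without waiting for the server.
const issueID = nanoid();
z.mutate.issue.insert({id: issueID, title: 'New issue'});
z.mutate.comment.insert({id: nanoid(), issueID, body: 'First comment'});
```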
If your table has a natural key you can use that, and it has fewer problems. But there is still the chance of a conflict. Imagine you are modeling orgs and you choose `domainName` as the natural key. It is possible for a race to happen: by the time the creation gets to the server, somebody has already chosen that domain name. In that case, the best thing to do is reject the write and show the user an error.
If you want a short auto-incrementing numeric ID for UX reasons (i.e., a bug number), that is possible – see [Demo Video](https://discord.com/channels/830183651022471199/1288232858795769917/1298114323272568852)!
## Primary Keys
Each table synced with Zero must have either a primary key or at least one unique index.
This is needed so that Zero can identify rows during sync, to distinguish between an edit and a remove/add.
Multi-column primary and foreign keys are supported.
## Limiting Replication
You can use [Permissions](permissions) to limit tables and rows from replicating to Zero. In the near future you’ll also be able to use Permissions to limit individual columns.
Until then, a workaround is to use the Postgres [_publication_](https://www.postgresql.org/docs/current/sql-createpublication.html) feature to control the tables and columns that are replicated into `zero-cache`.
In your pg schema setup, create a Postgres `publication` with the tables and columns you want:
```sql
CREATE PUBLICATION zero_data FOR TABLE users (col1, col2, col3, ...), issues, comments;
```
Then, specify this publication in the [App Publications](/docs/zero-cache-config#app-publications) `zero-cache` option. (By default, Zero creates a publication that publishes the entire public schema.)
To limit what is synced from the `zero-cache` replica to actual clients (e.g., web browsers) you can use [read permissions](/docs/permissions#select-permissions).
## Schema Changes
Most Postgres schema changes are supported as is.
Two cases require special handling:
### Adding columns
Adding a column with a non-constant `DEFAULT` value is not supported.
This includes any expression with parentheses, as well as the special functions `CURRENT_TIME`, `CURRENT_DATE`, and `CURRENT_TIMESTAMP`
(due to a [constraint of SQLite](https://www.sqlite.org/lang_altertable.html#altertabaddcol)).
However, the `DEFAULT` value of an _existing_ column can be changed to any value, including non-constant expressions. To achieve the desired column default:
- Add the column with no `DEFAULT` value
- Backfill the column with desired values
- Set the column's `DEFAULT` value
```sql
BEGIN;
ALTER TABLE foo ADD bar ...; -- without a DEFAULT value
UPDATE foo SET bar = ...;
ALTER TABLE foo ALTER bar SET DEFAULT ...;
COMMIT;
```
### Changing publications
Postgres allows you to change published tables/columns with an `ALTER PUBLICATION` statement. Zero automatically adjusts the table schemas on the replica, but it does not receive the pre-existing data.
To stream the pre-existing data to Zero, make an innocuous `UPDATE` after adding the tables/columns to the publication:
```sql
BEGIN;
ALTER PUBLICATION zero_data ADD TABLE foo;
ALTER TABLE foo REPLICA IDENTITY FULL;
UPDATE foo SET id = id; -- For some column "id" in "foo"
ALTER TABLE foo REPLICA IDENTITY DEFAULT;
COMMIT;
```
## Self-Referential Relationships
See [zero-schema](/docs/zero-schema#self-referential-relationships)
--- zero-schema.mdx ---
Zero applications have both a _database schema_ (the normal backend database schema that all web apps have) and a _Zero schema_. The purpose of the Zero schema is to:
1. Provide typesafety for ZQL queries
2. Define first-class relationships between tables
3. Define permissions for access control
[Community-contributed converters](./community#database-tools) exist for
Prisma and Drizzle that generate the tables and relationships. It is good to
know how the underlying Zero schemas work, however, for debugging and
conceptual understanding.
This page describes using the schema to define your tables, columns, and relationships.
## Defining the Zero Schema
The Zero schema is encoded in a TypeScript file that is conventionally called `schema.ts`. For example, see [the schema file for `hello-zero`](https://github.com/rocicorp/hello-zero/blob/main/src/schema.ts).
## Table Schemas
Use the `table` function to define each table in your Zero schema:
```tsx
import {table, string, boolean} from '@rocicorp/zero';

const user = table('user')
  .columns({
    id: string(),
    name: string(),
    partner: boolean(),
  })
  .primaryKey('id');
```
Column types are defined with the `boolean()`, `number()`, `string()`, `json()`, and `enumeration()` helpers. See [Column Types](/docs/postgres-support#column-types) for how database types are mapped to these types.
Currently, if the database type doesn’t map correctly to the Zero type,
replication will continue and succeed but the data won't match the TypeScript
type. This is a bug – in the future, this will be an error. See
https://bugs.rocicorp.dev/issue/3112.
### Name Mapping
Use `from()` to map a TypeScript table or column name to a different database name:
```ts
const userPref = table('userPref')
  // Map TS "userPref" to DB name "user_pref"
  .from('user_pref')
  .columns({
    id: string(),
    // Map TS "orgID" to DB name "org_id"
    orgID: string().from('org_id'),
  });
```
### Multiple Schemas
You can also use `from()` to access other Postgres schemas:
```ts
// Sync the "event" table from the "analytics" schema.
const event = table('event').from('analytics.event');
```
### Optional Columns
Columns can be marked _optional_. This corresponds to the SQL concept `nullable`.
```tsx
const user = table('user')
  .columns({
    id: string(),
    name: string(),
    nickName: string().optional(),
  })
  .primaryKey('id');
```
An optional column can store a value of the specified type or `null` to mean _no value_.
Note that `null` and `undefined` mean different things when working with Zero rows.
- When reading, if a column is `optional`, Zero can return `null` for that field. `undefined` is not used at all when reading from Zero.
- When writing, you can specify `null` for an optional field to explicitly write `null` to the datastore, unsetting any previous value.
- For `create` and `upsert` you can set optional fields to `undefined` (or leave the field off completely) to take the default value as specified by the backend schema for that column. For `update` you can set any non-PK field to `undefined` to leave the previous value unmodified. See the sketch below.
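Here is a short sketch of those semantics, using the `user` table above with its optional `nickName` column:
```tsx
// create: omitting an optional field (or passing undefined) takes the
// backend default; null explicitly stores null.
z.mutate.user.insert({id: '1', name: 'Ada'}); // nickName = backend default
z.mutate.user.insert({id: '2', name: 'Bo', nickName: null}); // nickName = null

// update: undefined leaves the previous value unmodified.
z.mutate.user.update({id: '2', name: 'Beau', nickName: undefined});
```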
### Enumerations
Use the `enumeration` helper to define a column that can only take on a specific set of values. This is most often used alongside an [`enum` Postgres column type](postgres-support#column-types).
```tsx
import {table, string, enumeration} from '@rocicorp/zero';

const user = table('user')
  .columns({
    id: string(),
    name: string(),
    mood: enumeration<'happy' | 'sad' | 'taco'>(),
  })
  .primaryKey('id');
```
### Custom JSON Types
Use the `json` helper to define a column that stores a JSON-compatible value:
```tsx
import {table, string, json} from '@rocicorp/zero';

const user = table('user')
  .columns({
    id: string(),
    name: string(),
    settings: json<{theme: 'light' | 'dark'}>(),
  })
  .primaryKey('id');
```
### Compound Primary Keys
Pass multiple columns to `primaryKey` to define a compound primary key:
```ts
const user = table('user')
  .columns({
    orgID: string(),
    userID: string(),
    name: string(),
  })
  .primaryKey('orgID', 'userID');
```
## Relationships
Use the `relationships` function to define relationships between tables. Use the `one` and `many` helpers to define singular and plural relationships, respectively:
```ts
const messageRelationships = relationships(message, ({one, many}) => ({
  sender: one({
    sourceField: ['senderID'],
    destField: ['id'],
    destSchema: user,
  }),
  replies: many({
    sourceField: ['id'],
    destSchema: message,
    destField: ['parentMessageID'],
  }),
}));
```
This creates "sender" and "replies" relationships that can later be queried with the [`related` ZQL clause](./reading-data#relationships):
```ts
const messagesWithSenderAndReplies = z.query.message
  .related('sender')
  .related('replies');
```
This will return an object for each message row. Each message will have a `sender` field that is a single `User` object or `null`, and a `replies` field that is an array of `Message` objects.
### Many-to-Many Relationships
You can create many-to-many relationships by chaining the relationship definitions. Assuming `issue` and `label` tables, along with an `issueLabel` junction table, you can define a `labels` relationship like this:
```ts
const issueRelationships = relationships(issue, ({many}) => ({
  labels: many(
    {
      sourceField: ['id'],
      destSchema: issueLabel,
      destField: ['issueID'],
    },
    {
      sourceField: ['labelID'],
      destSchema: label,
      destField: ['id'],
    },
  ),
}));
```
Currently only two levels of chaining are supported for
`relationships`. See https://bugs.rocicorp.dev/issue/3454.
### Compound Keys Relationships
Relationships can traverse compound keys. Imagine a `user` table with a compound primary key of `orgID` and `userID`, and a `message` table with a related `senderOrgID` and `senderUserID`. This can be represented in your schema with:
```ts
const messageRelationships = relationships(message, ({one}) => ({
  sender: one({
    sourceField: ['senderOrgID', 'senderUserID'],
    destSchema: user,
    destField: ['orgID', 'userID'],
  }),
}));
```
### Circular Relationships
Circular relationships are fully supported:
```tsx
const commentRelationships = relationships(comment, ({one}) => ({
  parent: one({
    sourceField: ['parentID'],
    destSchema: comment,
    destField: ['id'],
  }),
}));
```
## Database Schemas
Use `createSchema` to define the entire Zero schema:
```tsx
import {createSchema} from '@rocicorp/zero';

export const schema = createSchema({
  tables: [user, medium, message],
  relationships: [
    userRelationships,
    mediumRelationships,
    messageRelationships,
  ],
});
```
## Migrations
Zero uses TypeScript-style structural typing to detect schema changes and implement smooth migrations.
### How it Works
When the Zero client connects to `zero-cache` it sends a copy of the schema it was constructed with. `zero-cache` compares this schema to the one it has, and rejects the connection with a special error code if the schema is incompatible.
By default, the Zero client handles this error code by calling `location.reload()`. The intent is to get a newer version of the app that has been updated to handle the new server schema.
It's important to update the database schema first, then the app. Otherwise a reload loop will occur.
If a reload loop does occur, Zero uses exponential backoff to avoid overloading the server.
If you want to delay this reload, you can do so by providing the `onUpdateNeeded` constructor parameter:
```ts
const z = new Zero({
  onUpdateNeeded: reason => {
    if (reason.type === 'SchemaVersionNotSupported') {
      // Do something custom here, like show a banner.
      // When you're ready, call `location.reload()`.
    }
  },
});
```
If the schema changes while a client is running in a compatible way, `zero-cache` syncs the schema change to the client so that it's ready when the app reloads and gets new code that needs it. If the schema changes while a client is running in an incompatible way, `zero-cache` will close the client connection with the same error code as above.
### Schema Change Process
Like other database-backed applications, Zero schema migrations generally follow an “expand/migrate/contract” pattern:
1. Implement and run an “expand” migration on the backend that is backwards compatible with existing schemas. Add new columns and tables, as well as any defaults and triggers needed for backwards compatibility.
2. Add any new permissions required for the new tables/columns by running [`zero-deploy-permissions`](/docs/permissions#permission-deployment).
3. Update and deploy the client app to use the new schema.
4. Optionally, after some grace period, implement and run a “contract” migration on the backend, deleting any obsolete columns/tables.
Steps 1-3 can generally be done as part of one deploy by your CI pipeline, but step 4 would be weeks later when most open clients have refreshed and gotten new code.
Certain schema changes require special handling in Postgres. See [Schema
Changes](/docs/postgres-support#schema-changes) for details.
--- reading-data.mdx ---
ZQL is Zero’s query language.
Inspired by SQL, ZQL is expressed in TypeScript with heavy use of the builder pattern. If you have used [Drizzle](https://orm.drizzle.team/) or [Kysely](https://kysely.dev/), ZQL will feel familiar.
ZQL queries are composed of one or more _clauses_ that are chained together into a _query_.
Unlike queries in classic databases, the result of a ZQL query is a _view_ that updates automatically and efficiently as the underlying data changes. You can call a query’s `materialize()` method to get a view, but more typically you run queries via some framework-specific bindings. For example see `useQuery` for [React](react) or [SolidJS](solidjs).
Because ZQL caches values and returns them multiple times, you should not modify data returned from queries. If you modify a value returned from ZQL, you will modify it everywhere it is used, which can lead to subtle bugs. Instead, clone the data and modify the clone.
JavaScript and TypeScript lack true immutable types so we use `readonly` to help enforce it. But it's easy to cast away the `readonly` accidentally.
In the future, we'll [`freeze`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/freeze) all returned data in `dev` mode to help prevent this.
## Select
ZQL queries start by selecting a table. There is no way to select a subset of columns; ZQL queries always return the entire row (modulo column permissions).
```tsx
const z = new Zero(...);
// Returns a query that selects all rows and columns from the issue table.
z.query.issue;
```
This is a design tradeoff that allows Zero to better reuse the row locally for future queries. This also makes it easier to share types between different parts of the code.
## Ordering
You can sort query results by adding an `orderBy` clause:
```tsx
z.query.issue.orderBy('created', 'desc');
```
Multiple `orderBy` clauses can be present, in which case the data is sorted by those clauses in order:
```tsx
// Order by priority descending. For any rows with same priority,
// then order by created desc.
z.query.issue.orderBy('priority', 'desc').orderBy('created', 'desc');
```
All queries in ZQL have a default final order of their primary key. Assuming the `issue` table has a primary key on the `id` column, then:
```tsx
// Actually means: z.query.issue.orderBy('id', 'asc');
z.query.issue;
// Actually means: z.query.issue.orderBy('priority', 'desc').orderBy('id', 'asc');
z.query.issue.orderBy('priority', 'desc');
```
## Limit
You can limit the number of rows to return with `limit()`:
```tsx
z.query.issue.orderBy('created', 'desc').limit(100);
```
## Paging
You can start the results at or after a particular row with `start()`:
```tsx
let start: IssueRow | undefined;
while (true) {
  let q = z.query.issue.orderBy('created', 'desc').limit(100);
  if (start) {
    q = q.start(start);
  }
  const batch = await q.run();
  console.log('got batch', batch);
  if (batch.length < 100) {
    break;
  }
  start = batch[batch.length - 1];
}
```
By default `start()` is _exclusive_ - it returns rows starting **after** the supplied reference row. This is what you usually want for paging. If you want _inclusive_ results, you can do:
```tsx
z.query.issue.start(row, {inclusive: true});
```
## Uniqueness
If you want exactly zero or one results, use the `one()` clause. This causes ZQL to return `Row|undefined` rather than `Row[]`.
```tsx
const result = await z.query.issue.where('id', 42).one().run();
if (!result) {
  console.error('not found');
}
```
`one()` overrides any `limit()` clause that is also present.
## Relationships
You can query related rows using _relationships_ that are defined in your [Zero schema](/docs/zero-schema).
```tsx
// Get all issues and their related comments
z.query.issue.related('comments');
```
Relationships are returned as hierarchical data. In the above example, each row will have a `comments` field which is itself an array of the corresponding comments row.
You can fetch multiple relationships in a single query:
```tsx
z.query.issue.related('comments').related('reactions').related('assignees');
```
### Refining Relationships
By default all matching relationship rows are returned, but this can be refined. The `related` method accepts an optional second function which is itself a query.
```tsx
z.query.issue.related(
  'comments',
  // It is common to use the 'q' shorthand variable for this parameter,
  // but it is a _comment_ query in particular here, exactly as if you
  // had done z.query.comment.
  q => q.orderBy('modified', 'desc').limit(100).start(lastSeenComment),
);
```
This _relationship query_ can have all the same clauses that top-level queries can have.
### Nested Relationships
You can nest relationships arbitrarily:
```tsx
// Get all issues, the first 100 comments for each (ordered by modified desc),
// and for each comment all of its reactions.
z.query.issue.related('comments', q =>
  q.orderBy('modified', 'desc').limit(100).related('reactions'),
);
```
## Where
You can filter a query with `where()`:
```tsx
z.query.issue.where('priority', '=', 'high');
```
The first parameter is always a column name from the table being queried. Intellisense will offer available options (sourced from your [Zero Schema](/docs/zero-schema)).
### Comparison Operators
Where supports the following comparison operators:
| Operator | Allowed Operand Types | Description |
| ---------------------------------------- | ----------------------------- | ------------------------------------------------------------------------ |
| `=` , `!=` | boolean, number, string | JS strict equal (===) semantics |
| `<` , `<=`, `>`, `>=` | number | JS number compare semantics |
| `LIKE`, `NOT LIKE`, `ILIKE`, `NOT ILIKE` | string | SQL-compatible `LIKE` / `ILIKE` |
| `IN` , `NOT IN` | boolean, number, string | RHS must be an array. Returns true if RHS contains LHS by JS strict equals. |
| `IS` , `IS NOT` | boolean, number, string, null | Same as `=` but also works for `null` |
TypeScript will restrict you from using operators with types that don’t make sense – you can’t use `>` with `boolean` for example.
If you don’t see the comparison operator you need, let us know, many are easy
to add.
### Equals is the Default Comparison Operator
Because comparing by `=` is so common, you can leave it out and `where` defaults to `=`.
```tsx
z.query.issue.where('priority', 'high');
```
### Comparing to `null`
As in SQL, ZQL’s `null` is not equal to itself (`null ≠ null`).
This is required to make join semantics work: if you’re joining `employee.orgID` on `org.id` you do **not** want an employee in no organization to match an org that hasn’t yet been assigned an ID.
When you purposely want to compare to `null` ZQL supports `IS` and `IS NOT` operators that work just like in SQL:
```tsx
// Find employees not in any org.
z.query.employee.where('orgID', 'IS', null);
```
TypeScript will prevent you from comparing to `null` with other operators.
### Compound Filters
The argument to `where` can also be a callback that returns a complex expression:
```tsx
// Get all issues that have priority 'critical' or else have both
// priority 'medium' and not more than 100 votes.
z.query.issue.where(({cmp, and, or, not}) =>
  or(
    cmp('priority', 'critical'),
    and(cmp('priority', 'medium'), not(cmp('numVotes', '>', 100))),
  ),
);
```
`cmp` is short for _compare_ and works the same as `where` at the top-level except that it can’t be chained and it only accepts comparison operators (no relationship filters – see below).
Note that chaining `where()` calls is equivalent to a one-level `and`:
```tsx
// Find issues with priority 3 or higher, owned by aa
z.query.issue.where('priority', '>=', 3).where('owner', 'aa');
```
### Relationship Filters
Your filter can also test properties of relationships. Currently the only supported test is existence:
```tsx
// Find all orgs that have at least one employee
z.query.organization.whereExists('employees');
```
The argument to `whereExists` is a relationship, so just like other relationships it can be refined with a query:
```tsx
// Find all orgs that have at least one cool employee
z.query.organization.whereExists('employees', q =>
  q.where('location', 'Hawaii'),
);
```
As with querying relationships, relationship filters can be arbitrarily nested:
```tsx
// Get all issues that have comments that have reactions
z.query.issue.whereExists('comments', q => q.whereExists('reactions'));
```
The `exists` helper is also provided which can be used with `and`, `or`, `cmp`, and `not` to build compound filters that check relationship existence:
```tsx
// Find issues that have at least one comment or are high priority
z.query.issue.where(({cmp, or, exists}) =>
  or(
    cmp('priority', 'high'),
    exists('comments'),
  ),
);
```
## Data Lifetime and Reuse
Zero reuses data synced from prior queries to answer new queries when possible. This is what enables instant UI transitions.
But what controls the lifetime of this client-side data? How can you know whether any particular query will return instant results? How can you know whether those results will be up to date or stale?
The answer is that the data on the client is simply the union of rows returned from queries which are currently syncing. Once a row is no longer returned by any syncing query, it is removed from the client. Thus, there is never any stale data in Zero.
So when you are thinking about whether a query is going to return results instantly, you should think about _what other queries are syncing_, not about what data is local. Data exists locally if and only if there is a query syncing that returns that data.
This is why we often say that despite the name `zero-cache`, Zero is not technically a cache. It's a *replica*.
A cache has a random set of rows with a random set of versions. There is no expectation that the cache contains any particular rows, or that the rows have matching versions. Rows are simply updated as they are fetched.
A replica by contrast is eagerly updated, whether or not any client has requested a row. A replica is always very close to up-to-date, and always self-consistent.
Zero is a _partial_ replica because it only replicates rows that are returned by syncing queries.
## Query Lifecycle
Queries can be either _active_ or _backgrounded_. An active query is one that is currently being used by the application. Backgrounded queries are not currently in use, but continue syncing in case they are needed again soon.
Active queries are created one of three ways:
1. The app calls `q.materialize()` to get a `View`.
2. The app uses a platform binding like React's `useQuery(q)`.
3. The app calls [`preload()`](#preloading) to sync larger queries without a view.
Active queries sync until they are _deactivated_. The way this happens depends on how the query was created:
1. For `materialize()` queries, the UI calls `destroy()` on the view.
2. For `useQuery()`, the UI unmounts the component (which calls `destroy()` under the covers).
3. For `preload()`, the UI calls `cleanup()` on the return value of `preload()` (see the sketch below).
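A minimal sketch of these lifecycles (the `issue` table is hypothetical):
```ts
// 1. materialize(): deactivate by destroying the view.
const view = z.query.issue.materialize();
// ...later
view.destroy();

// 2. useQuery(): deactivates automatically when the component unmounts.

// 3. preload(): deactivate via the returned cleanup function.
const {cleanup} = z.query.issue.limit(1000).preload();
// ...later
cleanup();
```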
### Background Queries
By default a deactivated query stops syncing immediately.
But it's often useful to keep queries syncing beyond deactivation in case the UI needs the same or a similar query in the near future. This is accomplished with the `ttl` parameter:
```ts
const [user] = useQuery(z.query.user.where('id', userId), {ttl: '1d'});
```
The `ttl` parameter specifies how long the app developer wishes the query to run in the background. The following formats are allowed (where `%d` is a positive integer):
| Format | Meaning |
| --------- | ------------------------------------------------------------------------------------ |
| `none` | No backgrounding. Query will immediately stop when deactivated. This is the default. |
| `%ds` | Number of seconds. |
| `%dm` | Number of minutes. |
| `%dh` | Number of hours. |
| `%dd` | Number of days. |
| `%dy` | Number of years. |
| `forever` | Query will never be stopped. |
If the UI re-requests a background query, it becomes an active query again. Since the query was syncing in the background, the very first synchronous result that the UI receives after reactivation will be up-to-date with the server (i.e., it will have `resultType` of `complete`).
Just like other types of queries, the data from background queries is available for use by new queries. A common pattern is to [preload](#preloading) a subset of the most commonly needed data with `{ttl: 'forever'}` and then do more specific queries from the UI with, e.g., `{ttl: '1d'}`. Most often the preloaded data will be able to answer user queries, but if not, the new query will be answered by the server and backgrounded for a day in case the user revisits it.
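A sketch of that pattern (table and field names are hypothetical, and we assume `preload()` accepts the same `ttl` option):
```ts
// Keep the most commonly needed data syncing indefinitely.
z.query.issue
  .orderBy('created', 'desc')
  .limit(1000)
  .preload({ttl: 'forever'});

// More specific UI queries keep syncing for a day after deactivation.
const [issues] = useQuery(
  z.query.issue.where('assigneeID', userID),
  {ttl: '1d'},
);
```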
### Client Capacity Management
Zero has a default soft limit of 20,000 rows on the client-side, or about 20MB of data assuming 1KB rows.
This limit can be increased with the [`--target-client-row-count`](./zero-cache-config#target-client-row-count) flag, but we do not recommend setting it higher than 100,000.
Contrary to the design of other sync engines, we believe that storing tons of data client-side doesn't make sense. Here are some reasons why:
- Initial sync will be slow, slowing down initial app load.
- Because storage in browser tabs is unreliable, initial sync can occur surprisingly often.
- We want to answer queries _instantly_ as often as possible. This requires client-side data in memory on the main thread. If we have to page to disk, we may as well go to the network and reduce complexity.
- Even though Zero's queries are very efficient, they do still have some cost, especially hydration. Massive client-side storage would result in hydrating tons of queries that are unlikely to be used every time the app starts.
Most importantly, no matter how much data you store on the client, there will be cases where you have to fall back to the server:
- Some users might have huge amounts of data.
- Some users might have tiny amounts of available client storage.
- You will likely want the app to start fast and sync in the background.
Because you have to be able to fall back to the server, the question becomes _what is the **right** amount of data to store on the client?_, not _how can I store the absolute max possible data on the client?_
The goal with Zero is to answer 99% of queries on the client from memory. The remaining 1% of queries can fall back gracefully to the server. 20,000 rows was chosen somewhat arbitrarily as a number of rows likely to achieve this for many applications.
There is no hard limit at 20,000 or 100,000. Nothing terrible happens if you go above. The thing to keep in mind is that:
1. All those queries will revalidate every time your app boots.
2. All data synced to the client is in memory in JS.
Here is how this limit is managed:
1. Active queries are never destroyed, even if the limit is exceeded. Developers are expected to keep active queries well under the limit.
2. The `ttl` value counts from the moment a query deactivates. Backgrounded queries are destroyed immediately when the `ttl` is reached, even if the limit hasn't been reached.
3. If the client exceeds its limit, Zero will destroy backgrounded queries, least-recently-used first, until the store is under the limit again.
### Thinking in Queries
Although IVM is a very efficient way to keep queries up to date relative to re-running them, it isn't free. You still need to think about how many queries you are creating, how long they are kept alive, and how expensive they are.
This is why Zero defaults to _not_ backgrounding queries and doesn't try to aggressively fill its client datastore to capacity. You should put some thought into what queries you want to run in the background, and for how long.
Zero currently provides a few basic tools to understand the cost of your queries:
- The client logs a warning for slow query materializations. Look for `Slow query materialization` in your logs. The default threshold is `5s` (including network) but this is configurable with the `slowMaterializeThreshold` parameter.
- The client logs the materialization time of all queries at the `debug` level. Look for `Materialized query` in your logs.
- The server logs a warning for slow query materializations. Look for `Slow query materialization` in your logs. The default threshold is `5s` but this is configurable with the `log-slow-materialize-threshold` configuration parameter.
We will be adding more tools over time.
## Completeness
Zero returns whatever data it has on the client immediately for a query, then falls back to the server for any missing data. Sometimes it's useful to know the difference between these two types of results. To do so, use the `result` from `useQuery`:
```tsx
const [issues, issuesResult] = useQuery(z.query.issue);
if (issuesResult.type === 'complete') {
  console.log('All data is present');
} else {
  console.log('Some data is missing');
}
```
The possible values of `result.type` are currently `complete` and `unknown`.
The `complete` value is currently only returned when Zero has received the server result. But in the future, Zero will be able to return this result type when it _knows_ that all possible data for this query is already available locally. Additionally, we plan to add a `prefix` result for when the data is known to be a prefix of the complete result. See [Consistency](#consistency) for more information.
## Preloading
Almost all Zero apps will want to preload some data in order to maximize the feel of instantaneous UI transitions.
In Zero, preloading is done via queries – the same queries you use in the UI and for auth.
However, because preload queries are usually much larger than a screenful of UI, Zero provides a special `preload()` helper to avoid the overhead of materializing the result into JS objects:
```tsx
// Preload the first 1k issues + their creator, assignee, labels, and
// the view state for the active user.
//
// There's no need to render this data, so we don't use `useQuery()`:
// this avoids the overhead of pulling all this data into JS objects.
z.query.issue
  .related('creator')
  .related('assignee')
  .related('labels')
  .related('viewState', q => q.where('userID', z.userID).one())
  .orderBy('created', 'desc')
  .limit(1000)
  .preload();
```
## Running Queries Once
Usually subscribing to a query is what you want in a reactive UI, but every so often you'll need to run a query just once. To do this, use the `run()` method:
```tsx
const results = await z.query.issue.where('foo', 'bar').run();
```
By default, `run()` only returns results that are currently available on the client. That is, it returns the data that would be given for [`result.type === 'unknown'`](#completeness).
If you want to wait for the server to return results, pass `{type: 'complete'}` to `run`:
```tsx
const results = await z.query.issue
  .where('foo', 'bar')
  .run({type: 'complete'});
```
As a convenience you can also directly await queries:
```ts
await z.query.issue.where('foo', 'bar');
```
This is the same as saying `run()` or `run({type: 'unknown'})`.
## Consistency
Zero always syncs a consistent partial replica of the backend database to the client. This avoids many common consistency issues that come up in classic web applications. But there are still some consistency issues to be aware of when using Zero.
For example, imagine that you have a bug database with 10k issues. You preload the first 1k issues sorted by created.
The user then does a query of issues assigned to themselves, sorted by created. Among the 1k issues that were preloaded imagine 100 are found that match the query. Since the data we preloaded is in the same order as this query, we are guaranteed that any local results found will be a _prefix_ of the server results.
The resulting UX is nice: the user will see initial results for the query instantly. If more results are found server-side, those results are guaranteed to sort below the local results. There's no shuffling of results when the server response comes in.
Now imagine that the user switches the sort to ‘sort by modified’. This new query will run locally, and will again find some local matches. But it is now unlikely that the local results found are a prefix of the server results. When the server result comes in, the user will probably see the results shuffle around.
To avoid this annoying effect, in this example you should also preload the first 1k issues sorted by modified desc. In general, for each query shape you intend to use, you should preload the first `n` results for that shape with no filters, in each sort you intend to use.
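In this example, that would look something like:
```ts
// Preload the same query shape in each sort order the UI offers.
z.query.issue.orderBy('created', 'desc').limit(1000).preload();
z.query.issue.orderBy('modified', 'desc').limit(1000).preload();
```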
Zero will not sync duplicate copies of rows that show up in multiple queries. Zero syncs the *union* of all active queries' results.
So you don't have to worry about syncing many sorts of the same query when it's likely the results will overlap heavily.
In the future, we will be implementing a consistency model that fixes these issues automatically. We will prevent Zero from returning local data when that data is not known to be a prefix of the server result. Once the consistency model is implemented, preloading can be thought of as purely a performance thing, and not required to avoid unsightly flickering.
--- writing-data.mdx ---
Zero generates basic CRUD mutators for every table you sync. Mutators are available at `zero.mutate.<tablename>`:
```tsx
const z = new Zero(...);
z.mutate.user.insert({
  id: nanoid(),
  username: 'abby',
  language: 'en-us',
});
```
To build mutators with more complex logic or server-specific behavior, see the
new [Custom Mutators API](./custom-mutators).
## Insert
Create new records with `insert`:
```tsx
z.mutate.user.insert({
  id: nanoid(),
  username: 'sam',
  language: 'js',
});
```
Optional fields can be set to `null` to explicitly set the new field to `null`. They can also be set to `undefined` to take the default value (which is often `null` but can also be some generated value server-side).
```tsx
// schema.ts
import {createSchema, table, string} from '@rocicorp/zero';

const user = table('user')
  .columns({
    id: string(),
    username: string(),
    language: string().optional(),
  })
  .primaryKey('id');

export const schema = createSchema({
  tables: [user],
});

// app.tsx
// Sets language to `null` specifically
z.mutate.user.insert({
  id: nanoid(),
  username: 'sam',
  language: null,
});

// Sets language to the default server-side value. Could be null, or some
// generated or constant default value too.
z.mutate.user.insert({
  id: nanoid(),
  username: 'sam',
});

// Same as above
z.mutate.user.insert({
  id: nanoid(),
  username: 'sam',
  language: undefined,
});
```
## Upsert
Create new records or update existing ones with `upsert`:
```tsx
z.mutate.user.upsert({
  id: samID,
  username: 'sam',
  language: 'ts',
});
```
`upsert` supports the same `null` / `undefined` semantics for optional fields that `insert` does (see above).
## Update
Update an existing record. Does nothing if the specified record (by PK) does not exist.
You can pass a partial, leaving out fields that you don’t want to change. For example, here we leave the username the same:
```tsx
// Leaves username field at its previous value.
z.mutate.user.update({
  id: samID,
  language: 'golang',
});

// Same as above
z.mutate.user.update({
  id: samID,
  username: undefined,
  language: 'haskell',
});

// Reset language field to `null`
z.mutate.user.update({
  id: samID,
  language: null,
});
```
## Delete
Delete an existing record. Does nothing if specified record does not exist.
```tsx
z.mutate.user.delete({
  id: samID,
});
```
## Batch Mutate
You can do multiple CRUD mutates in a single _batch_. If any of the mutations fails, none are applied. The mutations also appear together atomically in a single transaction to other clients.
```tsx
z.mutateBatch(async tx => {
  const samID = nanoid();
  tx.user.insert({
    id: samID,
    username: 'sam',
  });

  const langID = nanoid();
  tx.language.insert({
    id: langID,
    userID: samID,
    name: 'js',
  });
});
```
--- custom-mutators.mdx ---
_Custom Mutators_ are a new way to write data in Zero that is much more powerful than the original ["CRUD" mutator API](./writing-data).
Instead of having only the few built-in `insert`/`update`/`delete` write operations for each table, custom mutators allow you to _create your own write operations_ using arbitrary code. This makes it possible to do things that are impossible or awkward with other sync engines.
For example, you can create custom mutators that:
- Perform arbitrary server-side validation
- Enforce fine-grained permissions
- Send email notifications
- Query LLMs
- Use Yjs for collaborative editing
- … and much, _much_ more – custom mutators are just code, and they can do anything code can do!
Despite their increased power, custom mutators still participate fully in sync. They execute instantly on the local device, immediately updating all active queries. They are then synced in the background to the server and to other clients.
We're still refining the design of custom mutators. During this phase, the old
CRUD mutators will continue to work. But we do want to deprecate CRUD
mutators, and eventually remove them. So please try out custom mutators and
[let us know](https://discord.rocicorp.dev/) how they work for you, and what
improvements you need before the cutover.
## Understanding Custom Mutators
### Architecture
Custom mutators introduce a new _server_ component to the Zero architecture.

This server is implemented by you, the developer. It's typically just your existing backend, where you already put auth or other server-side functionality.
The server can be a serverless function, a microservice, or a full stateful server. The only real requirement is that it expose a special _push endpoint_ that `zero-cache` can call to process mutations. This endpoint implements the [push protocol](#custom-push-implementation) and contains your custom logic for each mutation.
Zero provides utilities in `@rocicorp/zero` that make it really easy to implement this endpoint in TypeScript. But you can also implement it yourself if you want. As long as your endpoint fulfills the push protocol, `zero-cache` doesn't care. You can even write it in a different programming language.
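For a sense of the shape, here is a rough sketch of such an endpoint using Express. The route path and handler names are assumptions, and `processor` stands in for the `PushProcessor` helper described below (its construction, and the exact `process()` signature, may differ by Zero version):
```ts
// Rough sketch of a push endpoint (Express-style). `processor` is assumed
// to be an instance of the PushProcessor helper mentioned later on this
// page; `mutators` is your map of server mutator implementations.
app.post('/api/zero/push', async (req, res) => {
  // Runs each mutation in the request in a database transaction and
  // records that it was applied, per the push protocol.
  const response = await processor.process(mutators, req);
  res.json(response);
});
```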
### What Even is a Mutator?
Zero's custom mutators are based on [_server reconciliation_](https://www.gabrielgambetta.com/client-side-prediction-server-reconciliation.html) – a technique for robust sync that has been used by the video game industry for decades.
Our previous sync engine, [Replicache](https://replicache.dev/), also used
server reconciliation. The ability to implement arbitrary mutators was one of
Replicache's most popular features. Custom mutators bring this same power to
Zero, but with a much better developer experience.
A custom mutator is just a function that runs within a database transaction, and which can read and write to the database. Here's an example of a very simple custom mutator written in TypeScript:
```ts
async function updateIssue(
  tx: Transaction,
  {id, title}: {id: string; title: string},
) {
  // Validate title length.
  if (title.length > 100) {
    throw new Error(`Title is too long`);
  }
  await tx.mutate.issue.update({id, title});
}
```
Each custom mutator gets **two implementations**: one on the client and one on the server.
The client implementation must be written in TypeScript against the Zero `Transaction` interface, using [ZQL](#read-data-on-the-client) for reads and a [CRUD-style API](#write-data-on-the-client) for writes.
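For example, a client mutator can read with ZQL before writing (a sketch; the `issue` table, its fields, and the mutator itself are hypothetical):
```ts
async function closeStaleIssues(tx: Transaction, {before}: {before: number}) {
  // Read with ZQL...
  const stale = await tx.query.issue.where('modified', '<', before).run();
  // ...then write with the CRUD-style API.
  for (const issue of stale) {
    await tx.mutate.issue.update({id: issue.id, status: 'closed'});
  }
}
```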
The server implementation runs on your server, in your push endpoint, against your database. In principle, it can be written in any language and use any data access library. For example you could have the following Go-based server implementation of the same mutator:
```go
func updateIssueOnServer(tx *sql.Tx, id string, title string) error {
    // Validate title length.
    if len(title) > 100 {
        return errors.New("Title is too long")
    }
    _, err := tx.Exec("UPDATE issue SET title = $1 WHERE id = $2", title, id)
    return err
}
```
In practice, however, most Zero apps use TypeScript on the server. For these users we provide a handy `ServerTransaction` that implements ZQL against Postgres, so that you can share code between client and server mutators naturally.
So on a TypeScript server, that server mutator can just be:
```ts
async function updateIssueOnServer(
  tx: ServerTransaction,
  {id, title}: {id: string; title: string},
) {
  // Delegate to client mutator.
  // The `ServerTransaction` here has a different implementation
  // that runs the same ZQL queries against Postgres!
  await updateIssue(tx, {id, title});
}
```
Even in TypeScript, you can do as little or as much code sharing as you like. In your server mutator, you can [use raw SQL](#dropping-down-to-raw-sql), use any data access libraries you prefer, or add as much extra server-specific logic as you need.
Reusing ZQL on the server is a handy – and we expect frequently used – option, but not a requirement.
### Server Authority
You may be wondering what happens if the client and server mutator implementations don't match.
Zero is an example of a _server-authoritative_ sync engine. This means that the server mutator always takes precedence over the client mutator. The result from the client mutator is considered _speculative_ and is discarded as soon as the result from the server mutator is known. This is a very useful feature: it enables server-side validation, permissions, and other server-specific logic.
Imagine that you wanted to use an LLM to detect whether an issue update is spammy, rather than a simple length check. We can just add that to our server mutator:
```ts
async function updateIssueOnServer(
  tx: ServerTransaction,
  {id, title}: {id: string; title: string},
) {
  const response = await llamaSession.prompt(
    `Is this title update likely spam?\n\n${title}\n\nRespond "yes" or "no"`,
  );
  if (/yes/i.test(response)) {
    throw new Error(`Title is likely spam`);
  }
  // delegate rest of implementation to client mutator
  await updateIssue(tx, {id, title});
}
```
If the server detects that the mutation is spammy, the client will see the error message and the mutation will be rolled back. If the server mutator succeeds, the client mutator will be rolled back and the server result will be applied.
### Life of a Mutation
Now that we understand what client and server mutations are, let's walk through how they work together with Zero to sync changes from a source client to the server and then to other clients:
1. When you call a custom mutator on the client, Zero runs your client-side mutator immediately on the local device, updating all active queries instantly.
2. In the background, Zero then sends a _mutation_ (a record of the mutator having run with certain arguments) to your server's push endpoint.
3. Your push endpoint runs the [push protocol](#custom-push-implementation), executing the server-side mutator in a transaction against your database and recording the fact that the mutation ran. You can optionally use our `PushProcessor` class to handle this for you, or implement the protocol yourself.
4. The changes to the database are replicated to `zero-cache` as normal.
5. `zero-cache` calculates the updates to active queries and sends rows that have changed to each client. It also sends information about the mutations that have been applied to the database.
6. Clients receive row updates and apply them to their local cache. Any pending mutations which have been applied to the server have their local effects rolled back.
7. Client-side queries are updated and the user sees the changes.
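From the calling code's point of view, this whole flow is exposed through the mutator's return value (a minimal sketch; `.client` and `.server` are covered in [Waiting for Mutator Result](#waiting-for-mutator-result)):
```ts
// Step 1: runs the client mutator and updates local queries instantly.
const result = zero.mutate.issue.update({id: 'issue-123', title: 'New title'});

// Steps 2–7 happen in the background; await either leg if you need to:
await result.client; // the local, speculative write completed
await result.server; // the server mutator ran and its result synced back
```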
## Using Custom Mutators
### Registering Client Mutators
By convention, the client mutators are defined with a function called `createMutators` in a file called `mutators.ts`:
```ts
// mutators.ts
import {CustomMutatorDefs} from '@rocicorp/zero';
import {schema} from './schema';

export function createMutators() {
  return {
    issue: {
      update: async (tx, {id, title}: {id: string; title: string}) => {
        // Validate title length.
        if (title.length > 100) {
          throw new Error(`Title is too long`);
        }
        await tx.mutate.issue.update({id, title});
      },
    },
  } as const satisfies CustomMutatorDefs;
}
```
The `mutators.ts` convention allows mutator implementations to be easily [reused server-side](#setting-up-the-server). The `createMutators` function convention is used so that we can pass authentication information in to [implement permissions](#permissions).
You are free to make different code layout choices – the only real requirement is that you register your map of mutators in the `Zero` constructor:
```ts
// main.tsx
import {Zero} from '@rocicorp/zero';
import {schema} from './schema';
import {createMutators} from './mutators';

const zero = new Zero({
  schema,
  mutators: createMutators(),
});
```
### Write Data on the Client
The `Transaction` interface passed to client mutators exposes the same `mutate` API as the existing [CRUD-style mutators](./writing-data):
```ts
async function myMutator(tx: Transaction) {
  // Insert a new issue
  await tx.mutate.issue.insert({
    id: 'issue-123',
    title: 'New title',
    description: 'New description',
  });

  // Upsert a new issue
  await tx.mutate.issue.upsert({
    id: 'issue-123',
    title: 'New title',
    description: 'New description',
  });

  // Update an issue
  await tx.mutate.issue.update({
    id: 'issue-123',
    title: 'New title',
  });

  // Delete an issue
  await tx.mutate.issue.delete({
    id: 'issue-123',
  });
}
```
See [the CRUD docs](./writing-data) for detailed semantics on these methods.
### Read Data on the Client
You can read data within a client mutator using [ZQL](./reading-data):
```ts
export function createMutators() {
  return {
    issue: {
      update: async (tx, {id, title}: {id: string; title: string}) => {
        // Read the existing issue
        const prev = await tx.query.issue.where('id', id).one();

        // Validate title length. Legacy issues are exempt.
        if (!prev?.isLegacy && title.length > 100) {
          throw new Error(`Title is too long`);
        }

        await tx.mutate.issue.update({id, title});
      },
    },
  } as const satisfies CustomMutatorDefs;
}
```
You have the full power of ZQL at your disposal, including relationships, filters, ordering, and limits.
Reads and writes within a mutator are transactional, meaning that the datastore is guaranteed to not change while your mutator is running. And if the mutator throws, the entire mutation is rolled back.
When a mutator runs on the client (`tx.location === "client"`), ZQL reads only return data already cached on the client. When mutators run on the server (`tx.location === "server"`), ZQL reads always return all data.
Outside of mutators, the `run()` method has a [`type` parameter](reading-data#running-queries-once) that can be used to wait for server results. This parameter isn't supported within mutators: waiting for server results makes no sense inside an optimistic mutation – it defeats the purpose of running optimistically in the first place. You can still call `run()` within custom mutators, but the `type` argument does nothing. In the future, passing `type` inside a mutator will throw an error.
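For example (a sketch; the query shape is illustrative – see [running queries once](reading-data#running-queries-once) for the exact API):
```ts
// Outside a mutator: wait until the server has confirmed the result.
const issues = await zero.query.issue
  .where('open', true)
  .run({type: 'complete'});

// Inside a mutator: run() works, but `type` is ignored and reads resolve
// against locally-cached data only (when tx.location === 'client').
```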
### Invoking Client Mutators
Once you have registered your client mutators, you can call them from your client-side application:
```ts
zero.mutate.issue.update({
  id: 'issue-123',
  title: 'New title',
});
```
The result of a call to a mutator is a `Promise`. You do not usually need to `await` this promise as Zero mutators run very fast, usually completing in a tiny fraction of one frame.
However, because mutators occasionally need to access browser storage, they are technically `async`. Reading a row immediately after a mutator writes it may not return the new data, because the mutator may not have finished writing to storage yet.
### Waiting for Mutator Result
We typically recommend that you "fire and forget" mutators.
Optimistic mutations make sense when the common case is that a mutation succeeds. If a mutation frequently fails, then showing the user an optimistic result doesn't make sense, because it will likely be wrong.
That said, there are cases where it is useful to know when a write succeeded on either the client or the server.
One example is if you need to read a row directly after writing it. Zero's local writes are very fast (almost always < 1 frame), but because Zero is backed by IndexedDB, writes are still *technically* asynchronous and reads directly after a write may not return the new data.
You can use the `.client` promise in this case to wait for a write to complete on the client side:
```ts
try {
  const write = zero.mutate.issue.update({
    id: 'issue-123',
    title: 'New title',
  });

  // issue-123 not guaranteed to be present here. read1 may be undefined.
  const read1 = await zero.query.issue.where('id', 'issue-123').one();

  // Await client write – almost always less than 1 frame, and same
  // macrotask, so no browser paint will occur here.
  await write.client;

  // issue-123 definitely can be read now.
  const read2 = await zero.query.issue.where('id', 'issue-123').one();
} catch (e) {
  console.error("Mutator failed on client", e);
}
```
You can also wait for the server write to succeed:
```ts
try {
  await zero.mutate.issue.update({
    id: 'issue-123',
    title: 'New title',
  }).server;
  // issue-123 is written to server
} catch (e) {
  console.error("Mutator failed on client or server", e);
}
```
If the client-side mutator fails, the `.server` promise is also rejected with the same error. You don't have to listen to both promises; the `.server` promise covers both cases.
There is not yet a way to return data from mutators in the success case – the type of `.client` and `.server` is always `Promise<void>`. [Let us know](https://discord.rocicorp.dev/) if you need this.
### Setting Up the Server
You will need a server somewhere you can run an endpoint on. This is typically a serverless function on a platform like Vercel or AWS, but it can really be anything.
Set the push URL with the [`ZERO_PUSH_URL` env var or `--push-url`](./zero-cache-config#push-url).
If there is per-client configuration you need to send to the push endpoint, you can do that with `push.queryParams`:
```ts
const z = new Zero({
  push: {
    queryParams: {
      workspaceID: "42",
    },
  },
});
```
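On the server, these arrive as ordinary query parameters on the push request. A minimal sketch of reading them in a [Hono](https://hono.dev/) handler like the one shown below (what you do with `workspaceID` is up to your application):
```ts
app.post('/push', async c => {
  // Sent by the client via push.queryParams.
  const workspaceID = c.req.query('workspaceID');
  // ...choose per-workspace mutators/config, then run the PushProcessor
  // as shown in the example below.
  return c.json(await processor.process(createMutators(), c.req.raw));
});
```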
The push endpoint receives a `PushRequest` as input describing one or more mutations to apply to the backend, and must return a `PushResponse` describing the results of those mutations.
If you are implementing your server in TypeScript, you can use the `PushProcessor` class to trivially implement this endpoint. Here’s an example in a [Hono](https://hono.dev/) app:
```ts
import {Hono} from 'hono';
import {handle} from 'hono/vercel';
import {
  PushProcessor,
  ZQLDatabase,
  PostgresJSConnection,
} from '@rocicorp/zero/pg';
import postgres from 'postgres';
import {schema} from '../shared/schema';
import {createMutators} from '../shared/mutators';

// PushProcessor is provided by Zero to encapsulate a standard
// implementation of the push protocol.
const processor = new PushProcessor(
  new ZQLDatabase(
    new PostgresJSConnection(postgres(process.env.ZERO_UPSTREAM_DB! as string)),
    schema,
  ),
);

export const app = new Hono().basePath('/api');

app.post('/push', async c => {
  const result = await processor.process(createMutators(), c.req.raw);
  return await c.json(result);
});

export default handle(app);
```
`PushProcessor` depends on an abstract `Database`. This allows it to implement the push algorithm against any database.
`@rocicorp/zero/pg` includes a `ZQLDatabase` implementation of this interface backed by Postgres. The implementation allows the same mutator functions to run on client and server, by providing an implementation of the ZQL APIs that custom mutators run on the client.
`ZQLDatabase` in turn relies on an abstract `DBConnection` that provides raw access to a Postgres database. This allows you to use any Postgres library you like, as long as you provide a `DBConnection` implementation for it. The `PostgresJSConnection` class implements `DBConnection` for the excellent [`postgres.js`](https://www.npmjs.com/package/postgres) library to connect to Postgres.
To reuse the client mutators exactly as-is on the server, just pass the result of the same `createMutators` function to `PushProcessor`.
### Server-Specific Code
To implement server-specific code, just run different mutators in your push endpoint!
An approach we like is to create a separate `server-mutators.ts` file that wraps the client mutators:
```ts
// server-mutators.ts
import {CustomMutatorDefs} from '@rocicorp/zero';
import {schema} from './schema';

export function createMutators(clientMutators: CustomMutatorDefs) {
  return {
    // Reuse all client mutators except the ones in `issue`
    ...clientMutators,

    issue: {
      // Reuse all issue mutators except `update`
      ...clientMutators.issue,

      update: async (tx, {id, title}: {id: string; title: string}) => {
        // Call the shared mutator first
        await clientMutators.issue.update(tx, {id, title});

        // Record a history of this operation happening in an audit
        // log table.
        await tx.mutate.auditLog.insert({
          // Assuming you have an audit log table with fields for
          // `issueId`, `action`, and `timestamp`.
          issueId: id,
          action: 'update-title',
          timestamp: new Date().toISOString(),
        });
      },
    },
  } as const satisfies CustomMutatorDefs;
}
```
For simple things, we also expose a `location` field on the transaction object that you can use to branch your code:
```ts
myMutator: (tx) => {
  if (tx.location === 'client') {
    // Client-side code
  } else {
    // Server-side code
  }
},
```
### Permissions
Because custom mutators are just arbitrary TypeScript functions, there is no need for a special permissions system. Therefore, you won't use Zero's [write permissions](./permissions) when you use custom mutators.
When using custom mutators you will have no [`insert`](permissions#insert-permissions), [`update`](permissions#update-permissions), or [`delete`](permissions#delete-permissions) permissions. You will still have [`select`](permissions#select-permissions) permissions, however.
We hope to build [custom queries](https://bugs.rocicorp.dev/issue/3453) next – a read analog to custom mutators. If we succeed, Zero's permission system will go away completely 🤯.
In order to do permission checks, you'll need to know which user is making the request. You can pass this information to your mutators by adding an `AuthData` parameter to the `createMutators` function:
```ts
type AuthData = {
  sub: string;
};

export function createMutators(authData: AuthData | undefined) {
  return {
    issue: {
      launchMissiles: async (tx, args: {target: string}) => {
        if (!authData) {
          throw new Error('Users must be logged in to launch missiles');
        }
        const hasPermission = await tx.query.user
          .where('id', authData.sub)
          .whereExists('permissions', q => q.where('name', 'launch-missiles'))
          .one();
        if (!hasPermission) {
          throw new Error('User does not have permission to launch missiles');
        }
      },
    },
  } as const satisfies CustomMutatorDefs;
}
```
The `AuthData` parameter can be any data required for authorization, but is typically just the decoded JWT:
```ts
// app.tsx
const zero = new Zero({
  schema,
  auth: encodedJWT,
  mutators: createMutators(decodedJWT),
});

// hono-server.ts
const processor = new PushProcessor(
  new ZQLDatabase(
    new PostgresJSConnection(postgres(process.env.ZERO_UPSTREAM_DB as string)),
    schema,
  ),
);
const result = await processor.process(createMutators(decodedJWT), c.req.raw);
```
### Dropping Down to Raw SQL
On the server, you can use raw SQL in addition to, or instead of, ZQL. This is useful for complex queries, or for using Postgres features that Zero doesn't support yet:
```ts
async function markAllAsRead(tx: Transaction, {userId}: {userId: string}) {
  await tx.dbTransaction.query(
    `
      UPDATE notification
      SET read = true
      WHERE user_id = $1
    `,
    [userId],
  );
}
```
### Notifications and Async Work
It is bad practice to hold database transactions open while talking over the network, for example to send notifications. Instead, let the database transaction commit and do the work asynchronously.
There is no specific support for this in custom mutators, but since mutators are just code, it’s easy to do:
```ts
// server-mutators.ts
export function createMutators(
  authData: AuthData,
  asyncTasks: Array<() => Promise<void>>,
) {
  return {
    issue: {
      update: async (tx, {id, title}: {id: string; title: string}) => {
        await tx.mutate.issue.update({id, title});
        asyncTasks.push(async () => {
          await sendEmailToSubscribers(id);
        });
      },
    },
  } as const satisfies CustomMutatorDefs;
}
```
Then in your push handler:
```ts
app.post('/push', async c => {
  const asyncTasks: Array<() => Promise<void>> = [];
  const result = await processor.process(
    createMutators(authData, asyncTasks),
    c.req.raw,
  );
  await Promise.all(asyncTasks.map(task => task()));
  return await c.json(result);
});
```
### Custom Database Connections
You can implement an adapter to a different Postgres library, or even a different database entirely.
To do so, provide a custom [`DBConnection`](https://github.com/rocicorp/mono/blob/1a3741fbdad6dbdd56aa1f48cc2cc83938a61b16/packages/zql/src/mutate/custom.ts#L67) implementation to `ZQLDatabase`. For an example implementation, [see the `postgres` implementation](https://github.com/rocicorp/mono/blob/1a3741fbdad6dbdd56aa1f48cc2cc83938a61b16/packages/zero-pg/src/postgres-connection.ts#L4).
### Custom Push Implementation
You can manually implement the push protocol in any programming language.
This will be documented in the future, but you can refer to the [PushProcessor](https://github.com/rocicorp/mono/blob/1a3741fbdad6dbdd56aa1f48cc2cc83938a61b16/packages/zero-pg/src/web.ts#L33) source code for an example for now.
## Examples
- Zbugs uses [custom mutators](https://github.com/rocicorp/mono/blob/a76c9a61670cc09e1a9fe7ab795749f3eef25577/apps/zbugs/shared/mutators.ts) for all mutations, [write permissions](https://github.com/rocicorp/mono/blob/a76c9a61670cc09e1a9fe7ab795749f3eef25577/apps/zbugs/shared/mutators.ts#L61), and [notifications](https://github.com/rocicorp/mono/blob/a76c9a61670cc09e1a9fe7ab795749f3eef25577/apps/zbugs/server/server-mutators.ts#L35).
- `hello-zero-solid` uses custom mutators for all [mutations](TODO), and for [permissions](TODO).
--- auth.mdx ---
Zero uses a [JWT](https://jwt.io/)-based flow to authenticate connections to zero-cache.
## Frontend
During login:
1. Your API server creates a `JWT` and sends it to your client.
2. Your client constructs a `Zero` instance with this token by passing it to the `auth` option.
When you set the `auth` option you must set the `userID` option to the same
value that is present in the `sub` field of the token.
```ts
const zero = new Zero({
  ...,
  auth: token, // your JWT
  userID, // this must match the `sub` field from `token`
});
```
## Server
For `zero-cache` to be able to verify the JWT, one of the following environment variables needs to be set:
1. `ZERO_AUTH_SECRET` - If your API server uses a symmetric key (secret) to create JWTs then this is that same key.
2. `ZERO_AUTH_JWK` - If your API server uses a private key to create JWTs then this is the corresponding public key, in [JWK](https://datatracker.ietf.org/doc/html/rfc7517) format.
3. `ZERO_AUTH_JWKS_URL` - Many auth providers host the public keys used to verify the JWTs they create at a public URL. If you use a provider that does this, or you publish your own keys publicly, set this to that URL.
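For example, with `ZERO_AUTH_SECRET`, your API server signs tokens with the same symmetric key. A minimal sketch using the [`jose`](https://github.com/panva/jose) library (the library choice and claim values are illustrative; any JWT library works):
```ts
import {SignJWT} from 'jose';

async function createToken(userID: string, secret: string): Promise<string> {
  return new SignJWT({role: 'user'}) // extra claims usable in permission rules
    .setProtectedHeader({alg: 'HS256'})
    .setSubject(userID) // becomes the `sub` claim; must match Zero's userID
    .setExpirationTime('30d')
    .sign(new TextEncoder().encode(secret));
}
```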
## Refresh
The `auth` parameter to Zero can also be a function:
```ts
const zero = new Zero({
  ...,
  auth: async () => {
    const token = await fetchNewToken();
    return token;
  },
  userID,
});
```
In this case, Zero will call this function to get a new JWT if verification fails.
## Permissions
Any data placed into your JWT (claims) can be used by permission rules on the backend.
```ts
const isAdminRule = (decodedJWT, {cmp}) => cmp(decodedJWT.role, '=', 'admin');
```
See the [permissions](permissions) section for more details.
## Examples
See [zbugs](samples#zbugs) or [hello-zero](samples#hello-zero).
--- permissions.mdx ---
Permissions are expressed using [ZQL](reading-data) and run automatically with every read and write.
## Define Permissions
Permissions are defined in [`schema.ts`](/docs/zero-schema) using the `definePermissions` function.
Here's an example of limiting deletes to only the creator of an issue:
```ts
// The decoded value of your JWT.
type AuthData = {
  // The logged-in user.
  sub: string;
};

export const permissions = definePermissions(schema, () => {
  const allowIfIssueCreator = (
    authData: AuthData,
    {cmp}: ExpressionBuilder,
  ) => cmp('creatorID', authData.sub);

  return {
    issue: {
      row: {
        delete: [allowIfIssueCreator],
      },
    },
  } satisfies PermissionsConfig;
});
```
`definePermissions` returns a _policy_ object for each table in the schema. Each policy defines a _ruleset_ for the _operations_ that are possible on a table: `select`, `insert`, `update`, and `delete`.
## Access is Denied by Default
If you don't specify any rules for an operation, it is denied by default. This is an important safety feature that helps ensure data isn't accidentally exposed.
To enable full access to an action (e.g., during development) use the `ANYONE_CAN` helper:
```ts
import {ANYONE_CAN} from '@rocicorp/zero';

const permissions = definePermissions(schema, () => {
  return {
    issue: {
      row: {
        select: ANYONE_CAN,
        // Other operations are denied by default.
      },
    },
    // Other tables are denied by default.
  } satisfies PermissionsConfig;
});
```
To do this for all actions, use `ANYONE_CAN_DO_ANYTHING`:
```ts
import {ANYONE_CAN_DO_ANYTHING} from '@rocicorp/zero';

const permissions = definePermissions(schema, () => {
  return {
    // All operations on issue are allowed to all users.
    issue: ANYONE_CAN_DO_ANYTHING,
    // Other tables are denied by default.
  } satisfies PermissionsConfig;
});
```
## Permission Evaluation
Zero permissions are "compiled" into a JSON-based format at build time. The result is stored in the `{ZERO_APP_ID}.permissions` table of your upstream database. Like other tables, it replicates live down to `zero-cache`. `zero-cache` then parses this data and applies the encoded rules to every read and write operation.
The compilation process is very simple-minded (read: dumb). Despite looking like normal TypeScript functions that receive an `AuthData` parameter, rule functions are not actually invoked at runtime. Instead, they are invoked with a "placeholder" `AuthData` at build time. We track which fields of this placeholder are accessed and construct a ZQL expression that accesses the right field of `AuthData` at runtime.
The end result is that you can't really use most features of JS in these rules. Specifically, you cannot:
- Iterate over properties or array elements in the auth token
- Use any JS features beyond property access of `AuthData`
- Use conditionals or global state
Basically only property access is allowed. This is really confusing and we're working on a better solution.
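For example (a sketch using the doc's `AuthData` shape; the second rule is intentionally invalid):
```ts
// OK: plain property access on the auth token compiles fine.
const allowIfSelf = (authData: AuthData, {cmp}: ExpressionBuilder) =>
  cmp('id', authData.sub);

// NOT OK: conditionals, method calls, and iteration over the token do not
// survive compilation and will not behave as written.
// const broken = (authData: AuthData, {cmp}: ExpressionBuilder) =>
//   cmp('id', authData.roles.includes('admin') ? 'anything' : authData.sub);
```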
## Permission Deployment
During development, permissions are compiled and uploaded to your database completely automatically as part of the `zero-cache-dev` script.
For production, you need to call `npx zero-deploy-permissions` within your app to update the permissions in the production database whenever they change. You would typically do this as part of your normal schema migration or CI process. For example, the SST deployment script for [zbugs](/docs/samples#zbugs) looks like this:
```ts
new command.local.Command(
  'zero-deploy-permissions',
  {
    create: `npx zero-deploy-permissions -p ../../src/schema.ts`,
    // Run the Command on every deploy ...
    triggers: [Date.now()],
    environment: {
      ZERO_UPSTREAM_DB: commonEnv.ZERO_UPSTREAM_DB,
      // If the application has a non-default App ID ...
      ZERO_APP_ID: commonEnv.ZERO_APP_ID,
    },
  },
  // after the view-syncer is deployed.
  {dependsOn: viewSyncer},
);
```
See the [SST Deployment Guide](deployment#guide-multi-node-on-sstaws) for more details.
## Rules
Each operation on a policy has a _ruleset_ containing zero or more _rules_.
A rule is just a TypeScript function that receives the logged-in user's `AuthData` and generates a ZQL [where expression](reading-data#compound-filters). At least one rule in a ruleset must return a row for the operation to be allowed.
## Select Permissions
You can limit the data a user can read by specifying a `select` ruleset.
Select permissions act like filters. If a user does not have permission to read a row, it will be filtered out of the result set. It will not generate an error.
For example, imagine a select permission that restricts reads to only issues created by the user:
```ts
definePermissions(schema, () => {
  const allowIfIssueCreator = (
    authData: AuthData,
    {cmp}: ExpressionBuilder,
  ) => cmp('creatorID', authData.sub);

  return {
    issue: {
      row: {
        select: [allowIfIssueCreator],
      },
    },
  } satisfies PermissionsConfig;
});
```
If the issue table has two rows, one created by the user and one by someone else, the user will only see the row they created in any queries.
## Insert Permissions
You can limit what rows can be inserted and by whom by specifying an `insert` ruleset.
Insert rules are evaluated after the entity is inserted. So if they query the database, they will see the inserted row present. If any rule in the insert ruleset returns a row, the insert is allowed.
Here's an example of an insert rule that disallows inserting users that have the role 'admin':
```ts
definePermissions(schema, () => {
  const allowIfNonAdmin = (
    authData: AuthData,
    {cmp}: ExpressionBuilder,
  ) => cmp('role', '!=', 'admin');

  return {
    user: {
      row: {
        insert: [allowIfNonAdmin],
      },
    },
  } satisfies PermissionsConfig;
});
```
## Update Permissions
There are two types of update rulesets: `preMutation` and `postMutation`. Both rulesets must pass for an update to be allowed.
`preMutation` rules see the version of a row _before_ the mutation is applied. This is useful for things like checking whether a user owns an entity before editing it.
`postMutation` rules see the version of a row _after_ the mutation is applied. This is useful for things like ensuring a user can only mark themselves as the creator of an entity and not other users.
Like other rulesets, `preMutation` and `postMutation` default to `NOBODY_CAN`. This means that every table must define both these rulesets in order for any updates to be allowed.
For example, the following ruleset allows an issue's owner to edit, but **not** re-assign the issue. The `postMutation` rule enforces that the current user still owns the issue after the edit.
```ts
definePermissions(schema, () => {
  const allowIfIssueOwner = (
    authData: AuthData,
    {cmp}: ExpressionBuilder,
  ) => cmp('ownerID', authData.sub);

  return {
    issue: {
      row: {
        update: {
          preMutation: [allowIfIssueOwner],
          postMutation: [allowIfIssueOwner],
        },
      },
    },
  } satisfies PermissionsConfig;
});
```
This ruleset allows an issue's owner to edit and re-assign the issue:
```ts
definePermissions(schema, () => {
  const allowIfIssueOwner = (
    authData: AuthData,
    {cmp}: ExpressionBuilder,
  ) => cmp('ownerID', authData.sub);

  return {
    issue: {
      row: {
        update: {
          preMutation: [allowIfIssueOwner],
          postMutation: ANYONE_CAN,
        },
      },
    },
  } satisfies PermissionsConfig;
});
```
And this allows anyone to edit an issue, but only if they also assign it to themselves. Useful for enforcing _"patches welcome"_? 🙃
```ts
definePermissions(schema, () => {
  const allowIfIssueOwner = (
    authData: AuthData,
    {cmp}: ExpressionBuilder,
  ) => cmp('ownerID', authData.sub);

  return {
    issue: {
      row: {
        update: {
          preMutation: ANYONE_CAN,
          postMutation: [allowIfIssueOwner],
        },
      },
    },
  } satisfies PermissionsConfig;
});
```
## Delete Permissions
Delete permissions work in the same way as `insert` permissions except they run _before_ the delete is applied. So if a delete rule queries the database, it will see that the deleted row is present. If any rule in the ruleset returns a row, the delete is allowed.
## Debugging
See [Debugging Permissions](./debug/permissions).
## Examples
See [hello-zero](https://github.com/rocicorp/hello-zero/blob/main/src/schema.ts) for a simple example of write auth and [zbugs](https://github.com/rocicorp/mono/blob/main/apps/zbugs/shared/schema.ts#L217) for a much more involved one.
--- zql-on-the-server.mdx ---
[Custom Mutators](custom-mutators) use ZQL on the server as an implementation detail, but you can also use ZQL on the server directly, outside of Custom Mutators.
This is useful for a variety of reasons:
* You can use ZQL to implement standard REST endpoints, allowing you to share code with custom mutators.
* You can use ZQL as part of schema migrations.
* In the future ([but not yet implemented](#ssr)), it can support server-side rendering.
Here's a basic example:
```ts
import {
  ZQLDatabase,
  PostgresJSConnection,
  TransactionProviderInput,
} from "@rocicorp/zero/pg";
import postgres from "postgres";
import {schema} from "./schema";

const upstreamDB = process.env.ZERO_UPSTREAM_DB;
if (!upstreamDB) {
  throw new Error("required env var ZERO_UPSTREAM_DB");
}

const db = new ZQLDatabase(
  new PostgresJSConnection(postgres(upstreamDB)),
  schema,
);

// This is needed temporarily and will be cleaned up in the future.
const dummyTransactionInput: TransactionProviderInput = {
  clientGroupID: "unused",
  clientID: "unused",
  mutationID: 42,
  upstreamSchema: "unused",
};

await db.transaction(
  async tx => {
    // await tx.mutate...
    // await tx.query...
    // await myMutator(tx, ...args);
  },
  dummyTransactionInput,
);
```
If ZQL does not have the features you need, you can use `tx.dbTransaction` to [drop down to raw SQL](custom-mutators#dropping-down-to-raw-sql).
`ZQLDatabase` currently does a read of your postgres schema before every transaction. This is fine for most usages, but for high scale it may become a problem. [Let us know](https://bugs.rocicorp.dev/issue/3799) if you need a fix for this.
## SSR
Although you can run ZQL on the server, Zero does not yet have the wiring setup in its bindings layers to support server-side rendering ([patches welcome though!](https://bugs.rocicorp.dev/issue/3491)).
For now, you should use your framework's recommended pattern to prevent SSR execution.
### Next.js
Add the `use client`
[directive](https://nextjs.org/docs/app/api-reference/directives/use-client).
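For example (a sketch; the file path and `IssueList` component are hypothetical):
```tsx
// app/issues/page.tsx
'use client';

import {IssueList} from '../components/issue-list'; // hypothetical component using Zero

export default function IssuesPage() {
  return <IssueList />;
}
```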
### SolidStart
Wrap components that use Zero with the
[`clientOnly`](https://docs.solidjs.com/solid-start/reference/client/client-only)
higher-order component.
The standard `clientOnly` pattern uses dynamic imports, but note that this
approach (similar to [React's `lazy`](https://react.dev/reference/react/lazy))
works with any function returning a `Promise<{default: () => JSX.Element}>`. If
code splitting is unnecessary, you can skip the dynamic import.
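A sketch of both variants (the `./IssueList` module is hypothetical):
```tsx
import {clientOnly} from '@solidjs/start';
import IssueListComponent from './IssueList'; // hypothetical component using Zero

// With code splitting: the standard dynamic-import form.
const IssueListLazy = clientOnly(() => import('./IssueList'));

// Without code splitting: any function returning a
// Promise<{default: () => JSX.Element}> works.
const IssueList = clientOnly(() =>
  Promise.resolve({default: IssueListComponent}),
);
```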
### TanStack Start
Use [React's `lazy`](https://react.dev/reference/react/lazy) for dynamic
imports.
--- deployment.mdx ---
To deploy a Zero app, you need to:
1. Deploy your backend database. Most standard Postgres hosts [work with Zero](connecting-to-postgres).
1. Deploy `zero-cache`. We provide a [Docker image](https://hub.docker.com/r/rocicorp/zero) that can work with most Docker hosts.
1. Deploy your frontend. You can use any hosting service like Vercel or Netlify.
This page describes how to deploy `zero-cache`.
## Architecture
`zero-cache` is a horizontally scalable, stateful web service that maintains a SQLite replica of your Postgres database. It uses this replica to sync ZQL queries to clients over WebSockets.
You don't have to know the details of how `zero-cache` works to run it, but it helps to know the basic structure.
A running `zero-cache` is composed of a single `replication-manager` node and multiple `view-syncer` nodes. It also depends on Postgres, S3, and attached SSD storage.
**Upstream:** Your application's Postgres database.
**Change DB:** A Postgres DB used by Zero to store a recent subset of the Postgres replication log.
**CVR DB:** A Postgres DB used by Zero to store Client View Records (CVRs). CVRs track the state of each synced client.
The Change DB and CVR DBs are typically the same physical Postgres database as the Upstream DB. Zero stores their tables in separate Postgres schemas so they won't conflict with your application data.
We allow separate DBs so that they can be scaled and tuned independently if desired.
**S3:** Stores a canonical copy of the SQLite replica.
**File System:** Used by both node types to store local copies of the SQLite replica. Can be ephemeral – Zero will re-initialize from S3 on startup. Recommended to use attached SSD storage for best performance.
**Replication Manager:** Serves as the single consumer of the Postgres replication log. Stores a recent subset of the Postgres changelog in the _Change DB_ for catching up View Syncers when they initialize. Also maintains the canonical replica, which View Syncers initialize from.
**View Syncers:** Handle WebSocket connections from clients and run ZQL queries. They update the CVR DB with the latest state of each client as queries run, and use the CVR DB on client connection to compute the initial diff that catches clients up.
## Topology
You should deploy `zero-cache` close to your database because the mutation implementation is chatty.
In the future, mutations will [move out of `zero-cache`](https://bugs.rocicorp.dev/issue/3045#comment-5a3BKxP8RfJ9njHLgx5e3).
When that happens you can deploy `zero-cache` geographically distributed and it will double as a read-replica.
## Updating
When run with multiple View Syncer nodes, `zero-cache` supports rolling, downtime-free updates. A new Replication Manager takes over the replication stream from the old Replication Manager, and connections from the old View Syncers are gradually drained and absorbed by active View Syncers.
## Client/Server Version Compatibility
Servers are compatible with any client of the same major version, and with clients one major version back. So for example:
- Server `0.2.*` is compatible with client `0.2.*`
- Server `0.2.*` is compatible with client `0.1.*`
- Server `2.*.*` is compatible with client `2.*.*`
- Server `2.*.*` is compatible with client `1.*.*`
To upgrade Zero to a new major version, first deploy the new zero-cache, then the new frontend.
## Configuration
The `zero-cache` image is configured via environment variables. See [zero-cache Config](./zero-cache-config) for available options.
## Guide: Multi-Node on SST+AWS
[SST](https://sst.dev/) is our recommended way to deploy Zero.
The setup below costs about $35/month. You can scale it up or down as needed by adjusting the number of vCPUs and the amount of memory in each task.
### Setup Upstream
Create an upstream Postgres database server somewhere. See [Connecting to Postgres](connecting-to-postgres) for details. Populate the schema and any initial data for your application.
### Setup AWS
See [AWS setup guide](https://v2.sst.dev/setting-up-aws). The end result should be that you have a dev profile and SSO session defined in your `~/.aws/config` file.
### Initialize SST
```bash
npx sst init --yes
```
Choose "aws" for where to deploy.
Then overwrite `sst.config.ts` with the following code:
```ts
/* eslint-disable */
/// <reference path="./.sst/platform/config.d.ts" />

import {execSync} from 'child_process';

export default $config({
  app(input) {
    return {
      name: 'hello-zero',
      removal: input?.stage === 'production' ? 'retain' : 'remove',
      home: 'aws',
      region: process.env.AWS_REGION || 'us-east-1',
      providers: {
        command: true,
      },
    };
  },
  async run() {
    const zeroVersion = execSync('npm show @rocicorp/zero version')
      .toString()
      .trim();

    // S3 Bucket
    const replicationBucket = new sst.aws.Bucket(`replication-bucket`);

    // VPC Configuration
    const vpc = new sst.aws.Vpc(`vpc`, {
      az: 2,
    });

    // ECS Cluster
    const cluster = new sst.aws.Cluster(`cluster`, {
      vpc,
    });

    const conn = new sst.Secret('PostgresConnectionString');
    const zeroAuthSecret = new sst.Secret('ZeroAuthSecret');

    // Common environment variables
    const commonEnv = {
      ZERO_UPSTREAM_DB: conn.value,
      ZERO_CVR_DB: conn.value,
      ZERO_CHANGE_DB: conn.value,
      ZERO_AUTH_SECRET: zeroAuthSecret.value,
      ZERO_REPLICA_FILE: 'sync-replica.db',
      ZERO_LITESTREAM_BACKUP_URL: $interpolate`s3://${replicationBucket.name}/backup`,
      ZERO_IMAGE_URL: `rocicorp/zero:${zeroVersion}`,
      ZERO_CVR_MAX_CONNS: '10',
      ZERO_UPSTREAM_MAX_CONNS: '10',
    };

    // Replication Manager Service
    const replicationManager = cluster.addService(`replication-manager`, {
      cpu: '0.5 vCPU',
      memory: '1 GB',
      architecture: 'arm64',
      image: commonEnv.ZERO_IMAGE_URL,
      link: [replicationBucket],
      health: {
        command: ['CMD-SHELL', 'curl -f http://localhost:4849/ || exit 1'],
        interval: '5 seconds',
        retries: 3,
        startPeriod: '300 seconds',
      },
      environment: {
        ...commonEnv,
        ZERO_CHANGE_MAX_CONNS: '3',
        ZERO_NUM_SYNC_WORKERS: '0',
      },
      loadBalancer: {
        public: false,
        ports: [
          {
            listen: '80/http',
            forward: '4849/http',
          },
        ],
      },
      transform: {
        loadBalancer: {
          idleTimeout: 3600,
        },
        target: {
          healthCheck: {
            enabled: true,
            path: '/keepalive',
            protocol: 'HTTP',
            interval: 5,
            healthyThreshold: 2,
            timeout: 3,
          },
        },
      },
    });

    // View Syncer Service
    const viewSyncer = cluster.addService(`view-syncer`, {
      cpu: '1 vCPU',
      memory: '2 GB',
      architecture: 'arm64',
      image: commonEnv.ZERO_IMAGE_URL,
      link: [replicationBucket],
      health: {
        command: ['CMD-SHELL', 'curl -f http://localhost:4848/ || exit 1'],
        interval: '5 seconds',
        retries: 3,
        startPeriod: '300 seconds',
      },
      environment: {
        ...commonEnv,
        ZERO_CHANGE_STREAMER_URI: replicationManager.url,
      },
      logging: {
        retention: '1 month',
      },
      loadBalancer: {
        public: true,
        rules: [{listen: '80/http', forward: '4848/http'}],
      },
      transform: {
        target: {
          healthCheck: {
            enabled: true,
            path: '/keepalive',
            protocol: 'HTTP',
            interval: 5,
            healthyThreshold: 2,
            timeout: 3,
          },
          stickiness: {
            enabled: true,
            type: 'lb_cookie',
            cookieDuration: 120,
          },
          loadBalancingAlgorithmType: 'least_outstanding_requests',
        },
      },
    });

    // Permissions deployment
    // Note: this setup requires your CI/CD pipeline to have access to your
    // Postgres database. If you do not want to do this, you can also use
    // `npx zero-deploy-permissions --output-format=sql` during build to
    // generate a permissions.sql file, then run that file as part of your
    // deployment within your VPC. See hello-zero-solid for an example:
    // https://github.com/rocicorp/hello-zero-solid/blob/main/sst.config.ts#L141
    new command.local.Command(
      'zero-deploy-permissions',
      {
        create: `npx zero-deploy-permissions -p ../../src/schema.ts`,
        // Run the Command on every deploy ...
        triggers: [Date.now()],
        environment: {
          ZERO_UPSTREAM_DB: commonEnv.ZERO_UPSTREAM_DB,
        },
      },
      // after the view-syncer is deployed.
      {dependsOn: viewSyncer},
    );
  },
});
```
### Set SST Secrets
Configure SST with your Postgres connection string and [Zero Auth Secret](/docs/auth#server).
Note that if you use JWT-based auth, you'll need to change the environment variables in the `sst.config.ts` file above, then set a different secret here.
```bash
npx sst secret set PostgresConnectionString "YOUR-PG-CONN-STRING"
npx sst secret set ZeroAuthSecret "YOUR-ZERO-AUTH-SECRET"
```
### Deploy
```bash
npx sst deploy
```
This takes about 5-10 minutes.
If successful, you should see a URL for the `view-syncer` service. This is the URL to pass to the `server` parameter of the `Zero` constructor on the client.
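For example (the URL is a placeholder; use the one printed by `npx sst deploy`):
```ts
import {Zero} from '@rocicorp/zero';
import {schema} from './schema';

const zero = new Zero({
  schema,
  userID: 'anon', // or your logged-in user's ID
  // Placeholder – use the view-syncer URL printed by `npx sst deploy`.
  server: 'https://view-syncer-xxxxxx.elb.us-east-1.amazonaws.com',
});
```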
If unsuccessful, you can get detailed logs with `npx sst deploy --verbose`. [Come find us on Discord](https://discord.rocicorp.dev/) and we'll help get you sorted out.
## Guide: Single-Node on Fly.io
Let's deploy the [Quickstart](quickstart) app to [Fly.io](https://fly.io). We'll use Fly.io for both the database and `zero-cache`.
### Setup Quickstart
Go through the [Quickstart](quickstart) guide to get the app running locally.
### Setup Fly.io
Create an account on [Fly.io](https://fly.io) and [install the Fly CLI](https://fly.io/docs/flyctl/install/).
### Create Postgres app
**Note:** Fly.io requires app names to be unique across all Fly.io users.
Change the `INITIALS` environment variable below to something unique.
```bash
INITIALS=aa
PG_APP_NAME=$INITIALS-zstart-pg
PG_PASSWORD="$(head -c 256 /dev/urandom | od -An -t x1 | tr -d ' \n' | tr -dc 'a-zA-Z' | head -c 16)"
fly postgres create \
--name $PG_APP_NAME \
--region lax \
--initial-cluster-size 1 \
--vm-size shared-cpu-2x \
--volume-size 40 \
--password=$PG_PASSWORD
```
### Seed Upstream database
Populate the database with initial data and set its `wal_level` to `logical` to support replication to `zero-cache`. Then restart the database to apply the changes.
```bash
(cat ./docker/seed.sql; echo "\q") | fly pg connect -a $PG_APP_NAME
echo "ALTER SYSTEM SET wal_level = logical; \q" | fly pg connect -a $PG_APP_NAME
fly postgres restart --app $PG_APP_NAME
```
### Create `zero-cache` Fly.io app
```bash
CACHE_APP_NAME=$INITIALS-zstart-cache
fly app create $CACHE_APP_NAME
```
### Publish `zero-cache`
Create a `fly.toml` file.
```bash
CONNECTION_STRING="postgres://postgres:$PG_PASSWORD@$PG_APP_NAME.flycast:5432"
ZERO_VERSION=$(npm list @rocicorp/zero | grep @rocicorp/zero | cut -f 3 -d @)
cat <<EOF > fly.toml
app = "$CACHE_APP_NAME"
primary_region = 'lax'
[build]
  image = "registry.hub.docker.com/rocicorp/zero:${ZERO_VERSION}"

[http_service]
  internal_port = 4848
  force_https = true
  auto_stop_machines = 'off'
  min_machines_running = 1

  [[http_service.checks]]
    grace_period = "10s"
    interval = "30s"
    method = "GET"
    timeout = "5s"
    path = "/"

[[vm]]
  memory = '2gb'
  cpu_kind = 'shared'
  cpus = 2

[mounts]
  source = "sqlite_db"
  destination = "/data"

[env]
  ZERO_REPLICA_FILE = "/data/sync-replica.db"
  ZERO_UPSTREAM_DB = "${CONNECTION_STRING}/zstart?sslmode=disable"
  ZERO_CVR_DB = "${CONNECTION_STRING}/zstart_cvr?sslmode=disable"
  ZERO_CHANGE_DB = "${CONNECTION_STRING}/zstart_cdb?sslmode=disable"
  ZERO_AUTH_SECRET = "secretkey"
  LOG_LEVEL = "debug"
EOF
```
Then publish `zero-cache`:
```bash
fly deploy
```
### Deploy Permissions
Now `zero-cache` is running on Fly.io, but there are no permissions. If you run the app against this `zero-cache`, you'll see that no data is returned from any query. To fix this, deploy your permissions:
```bash
npx zero-deploy-permissions --schema-path='./src/schema.ts' --output-file='/tmp/permissions.sql'
(cat /tmp/permissions.sql; echo "\q") | fly pg connect -a $PG_APP_NAME -d zstart
```
You will need to redo this step every time you change your app's permissions, likely as part of your
CI/CD pipeline.
### Use Remote `zero-cache`
```bash
VITE_PUBLIC_SERVER="https://${CACHE_APP_NAME}.fly.dev/" npm run dev:ui
```
Now restart the frontend to pick up the env change, and refresh the app. You can stop your local database and `zero-cache` as we're not using them anymore. Open the web inspector to verify the app is talking to the remote `zero-cache`!
You can deploy the frontend to any standard hosting service like Vercel or Netlify, or even to Fly.io!
### Deploy Frontend to Vercel
If you've followed the above guide and deployed `zero-cache` to Fly.io, you can simply run:
```sh
vercel deploy --prod \
-e ZERO_AUTH_SECRET="secretkey" \
-e VITE_PUBLIC_SERVER="https://${CACHE_APP_NAME}.fly.dev/"
```
to deploy your frontend to Vercel.
Explaining the arguments above:
- `ZERO_AUTH_SECRET` - The secret used to create and verify JWTs. This is the same secret that was used when deploying `zero-cache` to Fly.io.
- `VITE_PUBLIC_SERVER` - The URL the frontend calls to talk to the `zero-cache` server. This is the URL of the Fly.io app.
## Guide: Multi-Node on Raw AWS
### S3 Bucket
Create an S3 bucket. `zero-cache` uses S3 to backup its SQLite replica so that it survives task restarts.
### Fargate Services
Run `zero-cache` as two Fargate services (using the same [rocicorp/zero](https://hub.docker.com/r/rocicorp/zero) docker image):
#### replication-manager
- `zero-cache` [config](https://zero.rocicorp.dev/docs/zero-cache-config):
  - `ZERO_LITESTREAM_BACKUP_URL=s3://{bucketName}/{generation}`
  - `ZERO_NUM_SYNC_WORKERS=0`
- Task count: **1**
#### view-syncer
- `zero-cache` config:
  - `ZERO_LITESTREAM_BACKUP_URL=s3://{bucketName}/{generation}`
  - `ZERO_CHANGE_STREAMER_URI=http://{replication-manager}`
- Task count: **N**
- Load balancing to port **4848** with:
  - algorithm: `least_outstanding_requests`
  - health check path: `/keepalive`
  - health check interval: 5 seconds
  - stickiness: `lb_cookie`
  - stickiness duration: 3 minutes
### Notes
- Standard rolling restarts are fine for both services
- The `view-syncer` task count is static; update the service to change the count.
  - Support for dynamic resizing (i.e. Auto Scaling) is planned
- Set `ZERO_CVR_MAX_CONNS` and `ZERO_UPSTREAM_MAX_CONNS` appropriately so that the total connections from both running and updating `view-syncers` (e.g. DesiredCount \* MaximumPercent) do not exceed your database’s `max_connections` (see the worked example after this list).
- The `{generation}` component of the `s3://{bucketName}/{generation}` URL is an arbitrary path component that can be modified to reset the replica (e.g. a date, a number, etc.). Setting this to a new path is the multi-node equivalent of deleting the replica file to resync.
  - Note: `zero-cache` does not manage cleanup of old generations.
- The `replication-manager` serves requests on port **4849**. Routing from the `view-syncer` to the `http://{replication-manager}` can be achieved using the following mechanisms (in order of preference):
  - An internal load balancer
  - [Service Connect](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-connect.html)
  - [Service Discovery](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-discovery.html)
- Fargate ephemeral storage is used for the replica.
  - The default size is 20GB. This can be increased up to 200GB.
  - Allocate at least twice the size of the database to support the internal VACUUM operation.
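As a worked example for the connection budget (all numbers illustrative): with a `view-syncer` DesiredCount of 4 and a MaximumPercent of 200%, up to 8 tasks can run simultaneously during a deploy. At `ZERO_CVR_MAX_CONNS=30` and `ZERO_UPSTREAM_MAX_CONNS=30`, and with the CVR, Change, and Upstream DBs on the same Postgres cluster, that is 8 × (30 + 30) = 480 connections, plus those used by the `replication-manager` – so the database's `max_connections` must comfortably exceed 480.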
## Guide: $PLATFORM
Where should we deploy Zero next?? Let us know on [Discord](https://discord.rocicorp.dev)!
--- zero-cache-config.mdx ---
`zero-cache` is configured either via CLI flag or environment variable. There is no separate `zero.config` file.
You can also see all available flags by running `zero-cache --help`.
## Required Flags
### Auth
One of [Auth JWK](#auth-jwk), [Auth JWK URL](#auth-jwk-url), or [Auth Secret](#auth-secret) must be specified. See [Authentication](/docs/auth/) for more details.
### Replica File
File path to the SQLite replica that zero-cache maintains. This can be lost, but if it is, zero-cache will have to re-replicate next time it starts up.
flag: `--replica-file`
env: `ZERO_REPLICA_FILE`
required: `true`
### Upstream DB
The "upstream" authoritative postgres database. In the future we will support other types of upstream besides PG.
flag: `--upstream-db`
env: `ZERO_UPSTREAM_DB`
required: `true`
## Optional Flags
### Admin Password
A password used to administer zero-cache server, for example to access the `/statz` endpoint.
flag: `--admin-password`
env: `ZERO_ADMIN_PASSWORD`
required: `false`
### App ID
Unique identifier for the app.
Multiple zero-cache apps can run on a single upstream database, each of which is isolated from the others, with its own permissions, sharding (future feature), and change/cvr databases.
The metadata of an app is stored in an upstream schema with the same name, e.g. `zero`, and the metadata for each app shard, e.g. client and mutation ids, is stored in the `{app-id}_{#}` schema. (Currently there is only a single "0" shard, but this will change with sharding).
The CVR and Change data are managed in schemas named `{app-id}_{shard-num}/cvr` and `{app-id}_{shard-num}/cdc`, respectively, allowing multiple apps and shards to share the same database instance (e.g. a Postgres "cluster") for CVR and Change management.
Due to constraints on replication slot names, an App ID may only consist of lower-case letters, numbers, and the underscore character.
Note that this option is used by both `zero-cache` and `zero-deploy-permissions`.
flag: `--app-id`
env: `ZERO_APP_ID`
default: `zero`
### App Publications
Postgres PUBLICATIONs that define the tables and columns to replicate. Publication names may not begin with an underscore, as zero reserves that prefix for internal use.
If unspecified, zero-cache will create and use an internal publication that publishes all tables in the public schema, i.e.:
```
CREATE PUBLICATION _{app-id}_public_0 FOR TABLES IN SCHEMA public;
```
Note that once an app has begun syncing data, this list of publications cannot be changed, and zero-cache will refuse to start if a specified value differs from what was originally synced. To use a different set of publications, a new app should be created.
flag: `--app-publications`
env: `ZERO_APP_PUBLICATIONS`
default: `[]`
### Auth JWK
A public key in JWK format used to verify JWTs. Only one of jwk, jwksUrl and secret may be set.
flag: `--auth-jwk`
env: `ZERO_AUTH_JWK`
required: `false`
### Auth JWK URL
A URL that returns a JWK set used to verify JWTs. Only one of jwk, jwksUrl and secret may be set.
flag: `--auth-jwks-url`
env: `ZERO_AUTH_JWKS_URL`
required: `false`
### Auto Reset
Automatically wipe and resync the replica when replication is halted. This situation can occur for configurations in which the upstream database provider prohibits event trigger creation, preventing the zero-cache from being able to correctly replicate schema changes. For such configurations, an upstream schema change will instead result in halting replication with an error indicating that the replica needs to be reset. When auto-reset is enabled, zero-cache will respond to such situations by shutting down, and when restarted, resetting the replica and all synced clients. This is a heavy-weight operation and can result in user-visible slowness or downtime if compute resources are scarce.
flag: `--auto-reset`
env: `ZERO_AUTO_RESET`
default: `true`
### Auth Secret
A symmetric key used to verify JWTs. Only one of jwk, jwksUrl and secret may be set.
flag: `--auth-secret`
env: `ZERO_AUTH_SECRET`
required: `false`
### Change DB
The Postgres database used to store recent replication log entries, in order to sync multiple view-syncers without requiring multiple replication slots on the upstream database. If unspecified, the upstream-db will be used.
flag: `--change-db`
env: `ZERO_CHANGE_DB`
required: `false`
### Change Max Connections
The maximum number of connections to open to the change database. This is used by the change-streamer for catching up zero-cache replication subscriptions.
flag: `--change-max-conns`
env: `ZERO_CHANGE_MAX_CONNS`
default: `5`
### Change Streamer Port
The port on which the change-streamer runs. This is an internal protocol between the replication-manager and zero-cache, which runs in the same process in local development. If unspecified, defaults to --port + 1.
flag: `--change-streamer-port`
env: `ZERO_CHANGE_STREAMER_PORT`
required: `false`
### Change Streamer URI
When unset, the zero-cache runs its own replication-manager (i.e. change-streamer). In production, this should be set to the replication-manager URI, which runs a change-streamer on port 4849.
flag: `--change-streamer-uri`
env: `ZERO_CHANGE_STREAMER_URI`
required: `false`
### CVR DB
The Postgres database used to store CVRs. CVRs (client view records) keep track of the data synced to clients in order to determine the diff to send on reconnect. If unspecified, the upstream-db will be used.
flag: `--cvr-db`
env: `ZERO_CVR_DB`
required: `false`
### CVR Max Connections
The maximum number of connections to open to the CVR database. This is divided evenly amongst sync workers.
Note that this number must allow for at least one connection per sync worker, or zero-cache will fail to start. See num-sync-workers.
flag: `--cvr-max-conns`
env: `ZERO_CVR_MAX_CONNS`
default: `30`
### Initial Sync Row Batch Size
The number of rows each table copy worker fetches at a time during initial sync. This can be increased to speed up initial sync, or decreased to reduce the amount of heap memory used during initial sync (e.g. for tables with large rows).
flag: `--initial-sync-row-batch-size`
env: `ZERO_INITIAL_SYNC_ROW_BATCH_SIZE`
default: `10000`
### Initial Sync Table Copy Workers
The number of parallel workers used to copy tables during initial sync. Each worker copies a single table at a time, fetching rows in batches of `initial-sync-row-batch-size`.
flag: `--initial-sync-table-copy-workers`
env: `ZERO_INITIAL_SYNC_TABLE_COPY_WORKERS`
default: `5`
### Lazy Startup
Delay starting the majority of zero-cache until first request.
This is mainly intended to avoid connecting to Postgres replication stream until the first request is received, which can be useful i.e., for preview instances.
Currently only supported in single-node mode.
flag: `--lazy-startup`
env: `ZERO_LAZY_STARTUP`
default: `false`
### Litestream Executable
Path to the litestream executable. This option has no effect if litestream-backup-url is unspecified.
flag: `--litestream-executable`
env: `ZERO_LITESTREAM_EXECUTABLE`
required: `false`
### Litestream Config Path
Path to the litestream yaml config file. zero-cache will run this with its environment variables, which can be referenced in the file via `${ENV}` substitution, for example:
- `ZERO_REPLICA_FILE` for the db path
- `ZERO_LITESTREAM_BACKUP_LOCATION` for the db replica url
- `ZERO_LITESTREAM_LOG_LEVEL` for the log level
- `ZERO_LOG_FORMAT` for the log type
flag: `--litestream-config-path`
env: `ZERO_LITESTREAM_CONFIG_PATH`
default: `./src/services/litestream/config.yml`
### Litestream Log Level
flag: `--litestream-log-level`
env: `ZERO_LITESTREAM_LOG_LEVEL`
default: `warn`
values: `debug`, `info`, `warn`, `error`
### Litestream Backup URL
The location of the litestream backup, usually an s3:// URL. If set, the litestream-executable must also be specified.
flag: `--litestream-backup-url`
env: `ZERO_LITESTREAM_BACKUP_URL`
required: `false`
### Litestream Checkpoint Threshold MB
The size of the WAL file at which to perform a SQLite checkpoint to apply the writes in the WAL to the main database file. Each checkpoint creates a new WAL segment file that will be backed up by litestream. Smaller thresholds may improve read performance, at the expense of creating more files to download when restoring the replica from the backup.
flag: `--litestream-checkpoint-threshold-mb`
env: `ZERO_LITESTREAM_CHECKPOINT_THRESHOLD_MB`
default: `40`
### Litestream Incremental Backup Interval Minutes
The interval between incremental backups of the replica. Shorter intervals reduce the amount of change history that needs to be replayed when catching up a new view-syncer, at the expense of increasing the number of files needed to download for the initial litestream restore.
flag: `--litestream-incremental-backup-interval-minutes`
env: `ZERO_LITESTREAM_INCREMENTAL_BACKUP_INTERVAL_MINUTES`
default: `15`
### Litestream Snapshot Backup Interval Hours
The interval between snapshot backups of the replica. Snapshot backups make a full copy of the database to a new litestream generation. This improves restore time at the expense of bandwidth. Applications with a large database and low write rate can increase this interval to reduce network usage for backups (litestream defaults to 24 hours).
flag: `--litestream-snapshot-backup-interval-hours`
env: `ZERO_LITESTREAM_SNAPSHOT_BACKUP_INTERVAL_HOURS`
default: `12`
### Litestream Restore Parallelism
The number of WAL files to download in parallel when performing the initial restore of the replica from the backup.
flag: `--litestream-restore-parallelism`
env: `ZERO_LITESTREAM_RESTORE_PARALLELISM`
default: `48`
### Log Format
Use `text` for developer-friendly console logging and `json` for consumption by structured-logging services.
flag: `--log-format`
env: `ZERO_LOG_FORMAT`
default: `"text"`
values: `text`, `json`
### Log IVM Sampling
How often to collect IVM metrics. 1 out of N requests will be sampled where N is this value.
flag: `--log-ivm-sampling`
env: `ZERO_LOG_IVM_SAMPLING`
default: `5000`
### Log Level
Sets the logging level for the application.
flag: `--log-level`
env: `ZERO_LOG_LEVEL`
default: `"info"`
values: `debug`, `info`, `warn`, `error`
### Log Slow Hydrate Threshold
The number of milliseconds a query hydration must take to print a slow warning.
flag: `--log-slow-hydrate-threshold`
env: `ZERO_LOG_SLOW_HYDRATE_THRESHOLD`
default: `100`
### Log Slow Row Threshold
The number of ms a row must take to fetch from table-source before it is considered slow.
flag: `--log-slow-row-threshold`
env: `ZERO_LOG_SLOW_ROW_THRESHOLD`
default: `2`
### Log Trace Collector
The URL of the trace collector to which to send trace data. Traces are sent over http. Port defaults to 4318 for most collectors.
flag: `--log-trace-collector`
env: `ZERO_LOG_TRACE_COLLECTOR`
required: `false`
### Number of Sync Workers
The number of processes to use for view syncing. Leave this unset to use the maximum available parallelism. If set to 0, the server runs without sync workers, which is the configuration for running the replication-manager.
flag: `--num-sync-workers`
env: `ZERO_NUM_SYNC_WORKERS`
required: `false`
### Per User Mutation Limit Max
The maximum mutations per user within the specified windowMs.
flag: `--per-user-mutation-limit-max`
env: `ZERO_PER_USER_MUTATION_LIMIT_MAX`
required: `false`
### Per User Mutation Limit Window (ms)
The sliding window over which the perUserMutationLimitMax is enforced.
flag: `--per-user-mutation-limit-window-ms`
env: `ZERO_PER_USER_MUTATION_LIMIT_WINDOW_MS`
default: `60000`
### Port
The port for sync connections.
flag: `--port`
env: `ZERO_PORT`
default: `4848`
### Push URL
The URL of the API server to which zero-cache will push mutations. Required if you use [custom mutators](/docs/custom-mutators).
flag: `--push-url`
env: `ZERO_PUSH_URL`
required: `false`
### Query Hydration Stats
Track and log the number of rows considered by each query in the system. This is useful for debugging and performance tuning.
flag: `--query-hydration-stats`
env: `ZERO_QUERY_HYDRATION_STATS`
required: `false`
### Replica Vacuum Interval Hours
Performs a VACUUM at server startup if the specified number of hours has elapsed since the last VACUUM (or initial-sync). The VACUUM operation is heavyweight and requires double the size of the db in disk space. If unspecified, VACUUM operations are not performed.
flag: `--replica-vacuum-interval-hours`
env: `ZERO_REPLICA_VACUUM_INTERVAL_HOURS`
required: `false`
### Server Version
The version string outputted to logs when the server starts up.
flag: `--server-version`
env: `ZERO_SERVER_VERSION`
required: `false`
### Storage DB Temp Dir
Temporary directory for IVM operator storage. Leave unset to use `os.tmpdir()`.
flag: `--storage-db-tmp-dir`
env: `ZERO_STORAGE_DB_TMP_DIR`
required: `false`
### Target Client Row Count
A soft limit on the number of rows Zero will keep on the client. 20k is a good default value for most applications, and we do not recommend exceeding 100k. See [Client Capacity Management](/docs/reading-data#client-capacity-management) for more details.
flag: `--target-client-row-count`
env: `ZERO_TARGET_CLIENT_ROW_COUNT`
default: `20000`
### Task ID
Globally unique identifier for the zero-cache instance. Setting this to a platform specific task identifier can be useful for debugging. If unspecified, zero-cache will attempt to extract the TaskARN if run from within an AWS ECS container, and otherwise use a random string.
flag: `--task-id`
env: `ZERO_TASK_ID`
required: `false`
### Tenants JSON
JSON encoding of per-tenant configs for running the server in multi-tenant mode:
```json
{
/**
* Requests to the main application port are dispatched to the first tenant
* with a matching host and path. If both host and path are specified,
* both must match for the request to be dispatched to that tenant.
*
* Requests can also be sent directly to the ZERO_PORT specified
* in a tenant's env overrides. In this case, no host or path
* matching is necessary.
*/
tenants: {
id: string; // value of the "tid" context key in debug logs
host?: string; // case-insensitive full Host: header match
path?: string; // first path component, with or without leading slash
/**
* Options are inherited from the main application (e.g. args and ENV) by default,
* and are overridden by values in the tenant's env object.
*/
env: {
ZERO_REPLICA_DB_FILE: string
ZERO_UPSTREAM_DB: string
ZERO_CVR_DB: string
ZERO_CHANGE_DB: string
...
};
}[];
}
```
flag: `--tenants-json`
env: `ZERO_TENANTS_JSON`
required: `false`
### Upstream Max Connections
The maximum number of connections to open to the upstream database for committing mutations. This is divided evenly amongst sync workers. In addition to this number, zero-cache uses one connection for the replication stream.
Note that this number must allow for at least one connection per sync worker, or zero-cache will fail to start. See [Number of Sync Workers](#number-of-sync-workers).
flag: `--upstream-max-conns`
env: `ZERO_UPSTREAM_MAX_CONNS`
default: `20`
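To make the division concrete, here is a small worked sketch (the values are assumptions, not recommendations):
```ts
// Assumed configuration: --upstream-max-conns=20, --num-sync-workers=4.
const upstreamMaxConns = 20;
const numSyncWorkers = 4;

// Each sync worker receives an even share of the pool:
const connsPerWorker = Math.floor(upstreamMaxConns / numSyncWorkers); // 5

// zero-cache also uses one additional connection for the replication
// stream, so peak upstream connections are roughly upstreamMaxConns + 1.
```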
--- react.mdx ---
Zero has built-in support for React. Here’s what basic usage looks like:
```tsx
import {useQuery, useZero} from '@rocicorp/zero/react';
import {type Schema} from './schema.ts';
import {type Mutators} from './mutators.ts';
function IssueList() {
  const z = useZero<Schema, Mutators>();

  let issueQuery = z.query.issue
    .related('creator')
    .related('labels')
    .limit(100);

  const userID = selectedUserID();
  if (userID) {
    issueQuery = issueQuery.where('creatorID', '=', userID);
  }

  const [issues, issuesDetail] = useQuery(issueQuery);

  return (
    <>
      {issues.map(issue => (
        <div key={issue.id}>{issue.title}</div>
      ))}
    </>
  );
}
```
## ZeroProvider
The `useZero` hook must be used within a `ZeroProvider` component. The
`ZeroProvider` component is responsible for providing the context to your
components.
```tsx
import {Zero} from '@rocicorp/zero';
import {ZeroProvider} from '@rocicorp/zero/react';

export function Root() {
  const zero = new Zero(...);

  return (
    <ZeroProvider zero={zero}>
      {/* your app */}
    </ZeroProvider>
  );
}
```
## createUseZero
It is often inconvenient to provide the type parameters to `useZero` repeatedly.
To simplify this, we provide a function that creates a hook with the types you
want.
```tsx
import {createUseZero} from '@rocicorp/zero/react';
import type {Schema} from './schema.ts';
import type {Mutators} from './mutators.ts';
export const useZero = createUseZero<Schema, Mutators>();
```
You can then import this `useZero` hook in your components and use it without
having to specify the type parameters.
```tsx
import {useQuery} from "@rocicorp/zero/react";
import {useZero} from './hooks/use-zero.ts';
function IssueList() {
const z = useZero();
...
}
```
Complete quickstart here:
https://github.com/rocicorp/hello-zero
--- solidjs.mdx ---
Zero has built-in support for Solid. Here’s what basic usage looks like:
```tsx
import {createQuery} from '@rocicorp/zero/solid';
const issues = createQuery(() => {
let issueQuery = z.query.issue
.related('creator')
.related('labels')
.limit(100);
const userID = selectedUserID();
if (userID) {
issueQuery = issueQuery.where('creatorID', '=', userID);
}
return issueQuery;
});
```
Complete quickstart here:
https://github.com/rocicorp/hello-zero-solid
--- community.mdx ---
Integrations with various tools, built by the Zero dev community.
If you have made something that should be here, send us a [pull request](https://github.com/rocicorp/zero-docs/pulls).
## UI Frameworks
- [One](https://onestack.dev/) is a full-stack React (and React Native!) framework with built-in Zero support.
- [zero-svelte](https://github.com/stolinski/zero-svelte) and [zero-svelte-query](https://github.com/RobertoSnap/zero-svelte-query) are two different approaches to Zero bindings for Svelte.
- [zero-vue](https://github.com/danielroe/zero-vue) adds Zero bindings to Vue.
- [zero-astro](https://github.com/ferg-cod3s/zero-astro) adds Zero bindings to Astro.
## Database Tools
- [drizzle-zero](https://github.com/BriefHQ/drizzle-zero) generates Zero schemas from Drizzle.
- [prisma-generator-zero](https://github.com/passionfroot/prisma-generator-zero) generates Zero schemas from Prisma.
## Miscellaneous
- [undo](https://github.com/rocicorp/undo) is a simple undo/redo library that was originally built for Replicache, but works just as well with Zero.
--- debug/inspector.mdx ---
The Zero instance provides an API to gather information about the client's current state, such as:
- All active queries
- Query TTL
- Active clients
- Client database contents
This can help you figure out why you hit loading states, how many queries are active at a time, whether you have resource leaks from failing to clean up queries, or whether expected data is missing on the client.
## Creating an Inspector
Each `Zero` instance has an `inspect` method that will return the inspector. The `inspect` method is asynchronous because it performs lazy loading of inspect-only related code.
```ts
const z = new Zero({
/*your zero options*/
});
const inspector = await z.inspect();
```
Ensure that code splitting is enabled when bundling your app to prevent
loading inspect-related code by default. The `inspect` API is intended for
debugging purposes only and should not be used in production applications. It
is not efficient and communicates directly with `zero-cache` via RPC over a
WebSocket.
If you are using React, you can use
[`React.lazy`](https://react.dev/reference/react/lazy) to dynamically load
components that depend on the `inspect` API.
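For example, a minimal sketch, where `InspectorPanel` is a hypothetical debug-only component that calls `zero.inspect()` internally:
```tsx
import {lazy, Suspense} from 'react';

// The inspector-panel module is the only place that touches `zero.inspect()`,
// so bundlers can split it into a chunk that's only fetched when rendered.
const InspectorPanel = lazy(() => import('./inspector-panel'));

function DebugTools({enabled}: {enabled: boolean}) {
  if (!enabled) return null;
  return (
    <Suspense fallback={null}>
      <InspectorPanel />
    </Suspense>
  );
}
```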
Once you have an inspector you can inspect the current client and client group.
The client group represents a web browser profile. A client represents an
instance of `Zero` within that profile. If a user has Chrome open with 5
different tabs, each with one `Zero` instance, there will be 1 client group
and 5 clients.
For example, to see active queries for the current client:
```ts
console.table(await inspector.client.queries());
```
To inspect other clients within the group:
```ts
const allClients = await inspector.clients();
```
## Dumping Data
In addition to information about queries, you can see the contents of the client side database.
```ts
const inspector = await zero.inspect();
const client = inspector.client;
// All raw k/v data currently synced to client
console.log('client map:');
console.log(await client.map());
// k/v data extracted into tables
// This is the same info that is returned by z.query[tableName].run()
for (const tableName of Object.keys(schema.tables)) {
console.log(`table ${tableName}:`);
console.table(await client.rows(tableName));
}
```
--- debug/slow-queries.mdx ---
In the `zero-cache` logs, you may see statements indicating a query is slow:
```shell
{
"level": "DEBUG",
"worker": "syncer",
"component": "view-syncer",
"hydrationTimeMs": 1339,
"message": "Total rows considered: 146"
},
```
or:
```shell
hash=3rhuw19xt9vry transformationHash=1nv7ot74gxfl7
Slow query materialization 325.46865100000286
```
Or, you may just notice queries taking longer than expected in the UI.
Here are some tips to help debug such slow queries.
## Check `ttl`
If you are seeing unexpected UI flicker when moving between views, it is likely that the queries backing these views have the default `ttl` of `none`. Set the `ttl` to some longer value to [keep data cached across navigations](https://zero.rocicorp.dev/docs/reading-data#background-queries).
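For example, a sketch using the React bindings (the query itself is an assumption):
```tsx
// Keep this view's data syncing for an hour after the user navigates away,
// so returning to the view renders instantly instead of flickering.
const [issues] = useQuery(z.query.issue.limit(100), {ttl: '1h'});
```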
You may alternatively want to [preload some data](https://zero.rocicorp.dev/docs/reading-data#preloading) at app startup.
## Check Storage
`zero-cache` is effectively a database. It requires fast (low latency and high bandwidth) disk access to perform well. If you're running on network attached storage with high latency, or on AWS with low IOPS, then this is the most likely culprit.
The default deployment of Zero currently uses Fargate, which scales IOPS with vCPU. Increasing the vCPU will increase storage throughput and likely resolve the issue.
Fly.io provides physically attached SSDs, even for their smallest VMs. Deploying zero-cache there (or any other provider that offers physically attached SSDs) is another option.
## Locality
If you see log lines like:
```shell
flushed cvr ... (124ms)
```
this indicates that `zero-cache` is likely deployed too far away from your [CVR database](../deployment#architecture). If you did not configure a CVR database URL then this will be your product's Postgres DB. A slow CVR flush can slow down Zero, since it must complete the flush before sending query result(s) to clients.
Try moving `zero-cache` to be deployed as close as possible to the CVR database.
## Query Plan
If none of the above is the problem, the query itself is the most likely culprit. The `@rocicorp/zero` package ships with a query analyzer to help debug this.
The analyzer should be run in the directory that contains the `.env` file for `zero-cache` as it will use the `.env` file to find your replica.
Example:
```shell
npx analyze-query \
--schema=./shared/schema.ts \
--query='issue.related("comments")'
```
This will output the query plan and time to execute each phase of that plan.
Note that query performance can also be affected by read permissions. See [Debugging Permissions](./permissions) for information on how to analyze queries with read permissions applied.
## /statz
`zero-cache` makes some internal health statistics available via the `/statz` endpoint of `zero-cache`. In order to access this, you must configure an [admin password](/docs/zero-cache-config#admin-password).
--- debug/permissions.mdx ---
Given that permissions are defined in their own file and internally applied to queries, it might be hard to figure out if or why a permission check is failing.
## Read Permissions
You can use the `analyze-query` utility with the `--apply-permissions` flag to see the complete query Zero runs, including read permissions.
```bash
npx analyze-query \
  --schema='./shared/schema.ts' \
  --query='issue.related("comments")' \
  --apply-permissions \
  --auth-data='{"userId":"user-123"}'
```
If the result looks right, the problem may be that Zero is not receiving the `AuthData` that you think it is. You can retrieve a query hash from websocket or server logs, then ask Zero for the details on that specific query.
Run this command with the same environment you run `zero-cache` with. It will use your `upstream` or `cvr` configuration to look up the query hash in the cvr database.
```bash
npx analyze-query \
  --schema='./shared/schema.ts' \
  --hash='3rhuw19xt9vry' \
  --apply-permissions \
  --auth-data='{"userId":"user-123"}'
```
The printed query can be different than the source ZQL string, because it is rebuilt from the query AST. But it should be logically equivalent to the query you wrote.
## Write Permissions
Look for a `WARN` level log in the output from `zero-cache` like this:
```
Permission check failed for {"op":"update","tableName":"message",...}, action update, phase preMutation, authData: {...}, rowPolicies: [...], cellPolicies: []
```
Zero prints the row, auth data, and permission policies that were applied to any failed writes.
The ZQL query is printed in AST format. See [Query ASTs](./query-asts) to
convert it to a more readable format.
--- debug/replication.mdx ---
## Resetting
During development we all do strange things (unsafely changing schemas, removing files, etc.). If the replica ever gets wedged (stops replicating, acts strange) you can wipe it and start over.
- If you copied your setup from `hello-zero` or `hello-zero-solid`, you can also run `npm run dev:clean`
- Otherwise you can run `rm /tmp/my-zero-replica.db*` (see your `.env` file for the replica file location) to clear the contents of the replica.
It is always safe to wipe the replica. Wiping will have no impact on your upstream database. Downstream zero-clients will get re-synced when they connect.
## Inspecting
For data to be synced to the client it must first be replicated to `zero-cache`. You can check the contents of `zero-cache` via:
```bash
$ npx zero-sqlite3 /tmp/my-zero-replica.db
```
This will drop you into a `sqlite3` shell that you can use to explore the contents of the replica.
```sql
sqlite> .tables
_zero.changeLog emoji viewState
_zero.replicationConfig issue zero.permissions
_zero.replicationState issueLabel zero.schemaVersions
_zero.runtimeEvents label zero_0.clients
_zero.versionHistory user
comment userPref
sqlite> .mode qbox
sqlite> SELECT * FROM label;
┌─────────────────────────┬──────────────────────────┬────────────┐
│ id │ name │ _0_version │
├─────────────────────────┼──────────────────────────┼────────────┤
│ 'ic_g-DZTYDApZR_v7Cdcy' │ 'bug' │ '4ehreg' │
...
```
## Miscellaneous
If you see `FATAL: sorry, too many clients already` in logs, it’s because you have two zero-cache instances running against dev. One is probably in a background tab somewhere. In production, `zero-cache` can run horizontally scaled but on dev it doesn’t run in the config that allows that.
--- debug/query-asts.mdx ---
An AST (Abstract Syntax Tree) is a representation of a query that is used internally by Zero. It is not meant to be human readable, but it sometimes shows up in logs and other places.
If you need to read one of these, save the AST to a json file. Then run the following command:
```bash
cat ast.json | npx ast-to-zql
```
The returned ZQL query will use server names, rather than client names, to identify columns and tables.
If you provide the schema file as an option you will get mapped back to client names:
```bash
cat ast.json | npx ast-to-zql --schema schema.ts
```
This comes into play if, in your schema.ts, you use the `from` feature to have different names on the client than your backend DB.
The `ast-to-zql` process is a de-compilation of sorts. Given that, the ZQL
string you get back will not be identical to the one you wrote in your
application. Regardless, the queries will be semantically equivalent.
--- roadmap.mdx ---
## Alpha (EOY ‘24)
- ~~Schema migration~~
- ~~Write permissions~~
- ~~Solid support~~
- ~~Replica sqlite files not browsable with standard sqlite3 program~~
- ~~Relationship filters - currently you can put relationships in the ‘select’ part of the query, but not the ‘where’ part. Relationship filters are commonly needed, e.g., to find all issues with a particular label.~~
- ~~Multi-column primary keys~~
- ~~Read permissions~~
- ~~Docs for easily deploying Zero on your own AWS or Fly.io account~~
- ~~Up to 20MB client-side and 1GB server-side per-replica~~
## Beta (Q2 ‘25)
- ~~Custom mutators~~
- Cell-level read permissions (already exist for write)
- First-class support for React Native
- ~~Ability to wait for authoritative results~~
- Aggregations (count, sum, min, max, group-by, etc)
- Consistency.
- See: [Consistency](/docs/reading-data#consistency).
- This will also improve startup perf since apps won’t have to be so conservative in what they preload.
- ~~Cache size management: evict things from client-side cache to stay under size~~
- Reduce zero client bundle size to < 40KB
- Up to 20 MB client-side and 100 GB server-side per-replica
## GA
- Vector-based text search
- Extensive testing using randomized query generation and DST (deterministic simulation testing)
- External audit of design and impl
- Ability to lock queries down to only expected forms for security
- Additional databases besides Postgres
- SaaS
--- reporting-bugs.mdx ---
## zbugs
You can report bugs on [zbugs](https://bugs.rocicorp.dev/) (password: `zql`), our own bug tracker built from the ground up on Zero.
## Discord
Alternatively, just pinging us on Discord is great too.
--- release-notes/index.mdx ---
- [Zero 0.19: Many, many bugfixes and cleanups](/docs/release-notes/0.19)
- [Zero 0.18: Custom Mutators](/docs/release-notes/0.18)
- [Zero 0.17: Background Queries](/docs/release-notes/0.17)
- [Zero 0.16: Lambda-Based Permission Deployment](/docs/release-notes/0.16)
- [Zero 0.15: Live Permission Updates](/docs/release-notes/0.15)
- [Zero 0.14: Name Mapping and Multischema](/docs/release-notes/0.14)
- [Zero 0.13: Multinode and SST](/docs/release-notes/0.13)
- [Zero 0.12: Circular Relationships](/docs/release-notes/0.12)
- [Zero 0.11: Windows](/docs/release-notes/0.11)
- [Zero 0.10: Remove Top-Level Await](/docs/release-notes/0.10)
- [Zero 0.9: JWK Support](/docs/release-notes/0.9)
- [Zero 0.8: Schema Autobuild, Result Types, and Enums](/docs/release-notes/0.8)
- [Zero 0.7: Read Perms and Docker](/docs/release-notes/0.7)
- [Zero 0.6: Relationship Filters](/docs/release-notes/0.6)
- [Zero 0.5: JSON Columns](/docs/release-notes/0.5)
- [Zero 0.4: Compound Filters](/docs/release-notes/0.4)
- [Zero 0.3: Schema Migrations and Write Perms](/docs/release-notes/0.3)
- [Zero 0.2: Skip Mode and Computed PKs](/docs/release-notes/0.2)
- [Zero 0.1: First Release](/docs/release-notes/0.1)
--- open-source.mdx ---
Specifically, the Zero client and server are Apache-2 licensed. You can use, modify, host, and distribute them freely:
https://github.com/rocicorp/mono/blob/main/LICENSE
## Business Model
We plan to commercialize Zero in the future by offering a hosted `zero-cache` service for people who do not want to run it themselves. We expect to charge prices for this roughly comparable to today's database hosting services. We'll also offer white-glove service to help enterprises run `zero-cache` within their own infrastructure.
These plans may change as we develop Zero further. For example, we may also build closed-source companion software – similar to how Docker, Inc. charges for team access to Docker Desktop.
But we have no plans to ever change the licensing of the core product: We're building a general-purpose sync engine for the entire web, and we can only do that if the core remains completely open.
--- llms.mdx ---
Are you an LLM?
Do you like long walks through vector space and late-night tokenization?
Or maybe you're a friend of an LLM, just trying to make life a little easier for the contextually challenged?
Either way, you're in the right place! Stream on over to [llms.txt](/llms.txt) for the text-only version of these docs.
--- custom-mutators.mdx ---
_Custom Mutators_ are a new way to write data in Zero that is much more powerful than the original ["CRUD" mutator API](./writing-data).
Instead of having only the few built-in `insert`/`update`/`delete` write operations for each table, custom mutators allow you to _create your own write operations_ using arbitrary code. This makes it possible to do things that are impossible or awkward with other sync engines.
For example, you can create custom mutators that:
- Perform arbitrary server-side validation
- Enforce fine-grained permissions
- Send email notifications
- Query LLMs
- Use Yjs for collaborative editing
- … and much, _much_ more – custom mutators are just code, and they can do anything code can do!
Despite their increased power, custom mutators still participate fully in sync. They execute instantly on the local device, immediately updating all active queries. They are then synced in the background to the server and to other clients.
We're still refining the design of custom mutators. During this phase, the old
CRUD mutators will continue to work. But we do want to deprecate CRUD
mutators, and eventually remove them. So please try out custom mutators and
[let us know](https://discord.rocicorp.dev/) how they work for you, and what
improvements you need before the cutover.
## Understanding Custom Mutators
### Architecture
Custom mutators introduce a new _server_ component to the Zero architecture.

This server is implemented by you, the developer. It's typically just your existing backend, where you already put auth or other server-side functionality.
The server can be a serverless function, a microservice, or a full stateful server. The only real requirement is that it expose a special _push endpoint_ that `zero-cache` can call to process mutations. This endpoint implements the [push protocol](#custom-push-implementation) and contains your custom logic for each mutation.
Zero provides utilities in `@rocicorp/zero` that make it really easy to implement this endpoint in TypeScript. But you can also implement it yourself if you want. As long as your endpoint fulfills the push protocol, `zero-cache` doesn't care. You can even write it in a different programming language.
### What Even is a Mutator?
Zero's custom mutators are based on [_server reconciliation_](https://www.gabrielgambetta.com/client-side-prediction-server-reconciliation.html) – a technique for robust sync that has been used by the video game industry for decades.
Our previous sync engine, [Replicache](https://replicache.dev/), also used
server reconciliation. The ability to implement arbitrary mutators was one of
Replicache's most popular features. Custom mutators bring this same power to
Zero, but with a much better developer experience.
A custom mutator is just a function that runs within a database transaction, and which can read and write to the database. Here's an example of a very simple custom mutator written in TypeScript:
```ts
async function updateIssue(
tx: Transaction,
{id, title}: {id: string; title: string},
) {
// Validate title length.
if (title.length > 100) {
throw new Error(`Title is too long`);
}
await tx.mutate.issue.update({id, title});
}
```
Each custom mutator gets **two implementations**: one on the client and one on the server.
The client implementation must be written in TypeScript against the Zero `Transaction` interface, using [ZQL](#read-data-on-the-client) for reads and a [CRUD-style API](#write-data-on-the-client) for writes.
The server implementation runs on your server, in your push endpoint, against your database. In principle, it can be written in any language and use any data access library. For example you could have the following Go-based server implementation of the same mutator:
```go
func updateIssueOnServer(tx *sql.Tx, id string, title string) error {
// Validate title length.
if len(title) > 100 {
return errors.New("Title is too long")
}
_, err := tx.Exec("UPDATE issue SET title = $1 WHERE id = $2", title, id)
return err
}
```
In practice, however, most Zero apps use TypeScript on the server. For these users we provide a handy `ServerTransaction` that implements ZQL against Postgres, so that you can share code between client and server mutators naturally.
So on a TypeScript server, that server mutator can just be:
```ts
async function updateIssueOnServer(
  tx: ServerTransaction,
  {id, title}: {id: string; title: string},
) {
// Delegate to client mutator.
// The `ServerTransaction` here has a different implementation
// that runs the same ZQL queries against Postgres!
await updateIssue(tx, {id, title});
}
```
Even in TypeScript, you can do as little or as much code sharing as you like. In your server mutator, you can [use raw SQL](#dropping-down-to-raw-sql), any data access libraries you prefer, or add as much extra server-specific logic as you need.
Reusing ZQL on the server is a handy – and we expect frequently used – option, but not a requirement.
### Server Authority
You may be wondering what happens if the client and server mutator implementations don't match.
Zero is an example of a _server-authoritative_ sync engine. This means that the server mutator always takes precedence over the client mutator. The result from the client mutator is considered _speculative_ and is discarded as soon as the result from the server mutator is known. This is a very useful feature: it enables server-side validation, permissions, and other server-specific logic.
Imagine that you wanted to use an LLM to detect whether an issue update is spammy, rather than a simple length check. We can just add that to our server mutator:
```ts
async function updateIssueOnServer(
tx: ServerTransaction,
{id, title}: {id: string; title: string},
) {
const response = await llamaSession.prompt(
`Is this title update likely spam?\n\n${title}\n\nRespond "yes" or "no"`,
);
if (/yes/i.test(response)) {
throw new Error(`Title is likely spam`);
}
// delegate rest of implementation to client mutator
await updateIssue(tx, {id, title});
}
```
If the server detects that the mutation is spammy, the client will see the error message and the mutation will be rolled back. If the server mutator succeeds, the client mutator will be rolled back and the server result will be applied.
### Life of a Mutation
Now that we understand what client and server mutations are, let's walk through how they work together with Zero to sync changes from a source client to the server and then to other clients:
1. When you call a custom mutator on the client, Zero runs your client-side mutator immediately on the local device, updating all active queries instantly.
2. In the background, Zero then sends a _mutation_ (a record of the mutator having run with certain arguments) to your server's push endpoint.
3. Your push endpoint runs the [push protocol](#custom-push-implementation), executing the server-side mutator in a transaction against your database and recording the fact that the mutation ran. You can use our `PushProcessor` class to handle this for you, or you can implement it yourself.
4. The changes to the database are replicated to `zero-cache` as normal.
5. `zero-cache` calculates the updates to active queries and sends rows that have changed to each client. It also sends information about the mutations that have been applied to the database.
6. Clients receive row updates and apply them to their local cache. Any pending mutations which have been applied to the server have their local effects rolled back.
7. Client-side queries are updated and the user sees the changes.
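From the calling client's perspective, this lifecycle is observable through the promises a mutator call returns (a sketch; `.client` and `.server` are described in [Waiting for Mutator Result](#waiting-for-mutator-result)):
```ts
// Step 1: the client mutator has already run; active queries update instantly.
const result = zero.mutate.issue.update({id: 'issue-123', title: 'New title'});

// Steps 2 and 3 happen in the background; this resolves once the local
// write is fully committed on the client:
await result.client;

// This resolves once the server mutator has run and committed:
await result.server;
```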
## Using Custom Mutators
### Registering Client Mutators
By convention, the client mutators are defined with a function called `createMutators` in a file called `mutators.ts`:
```ts
// mutators.ts
import {CustomMutatorDefs} from '@rocicorp/zero';
import {schema} from './schema';
export function createMutators() {
return {
issue: {
update: async (tx, {id, title}: {id: string; title: string}) => {
// Validate title length.
if (title.length > 100) {
throw new Error(`Title is too long`);
}
await tx.mutate.issue.update({id, title});
},
},
} as const satisfies CustomMutatorDefs;
}
```
The `mutators.ts` convention allows mutator implementations to be easily [reused server-side](#setting-up-the-server). The `createMutators` function convention is used so that we can pass authentication information in to [implement permissions](#permissions).
You are free to make different code layout choices – the only real requirement is that you register your map of mutators in the `Zero` constructor:
```ts
// main.tsx
import {Zero} from '@rocicorp/zero';
import {schema} from './schema';
import {createMutators} from './mutators';
const zero = new Zero({
schema,
mutators: createMutators(),
});
```
### Write Data on the Client
The `Transaction` interface passed to client mutators exposes the same `mutate` API as the existing [CRUD-style mutators](./writing-data):
```ts
async function myMutator(tx: Transaction) {
// Insert a new issue
await tx.mutate.issue.insert({
id: 'issue-123',
title: 'New title',
description: 'New description',
});
// Upsert a new issue
await tx.mutate.issue.upsert({
id: 'issue-123',
title: 'New title',
description: 'New description',
});
// Update an issue
await tx.mutate.issue.update({
id: 'issue-123',
title: 'New title',
});
// Delete an issue
await tx.mutate.issue.delete({
id: 'issue-123',
});
}
```
See [the CRUD docs](./writing-data) for detailed semantics on these methods.
### Read Data on the Client
You can read data within a client mutator using [ZQL](./reading-data):
```ts
export function createMutators() {
return {
issue: {
update: async (tx, {id, title}: {id: string; title: string}) => {
// Read existing issue
const prev = await tx.query.issue.where('id', id).one();
// Validate title length. Legacy issues are exempt.
if (!prev?.isLegacy && title.length > 100) {
throw new Error(`Title is too long`);
}
await tx.mutate.issue.update({id, title});
},
},
} as const satisfies CustomMutatorDefs;
}
```
You have the full power of ZQL at your disposal, including relationships, filters, ordering, and limits.
Reads and writes within a mutator are transactional, meaning that the datastore is guaranteed to not change while your mutator is running. And if the mutator throws, the entire mutation is rolled back.
Outside of mutators, the `run()` method has a [`type` parameter](reading-data#running-queries-once) that can be used to wait for server results.
This parameter isn't supported within mutators, because waiting for server results makes no sense in an optimistic mutation – it defeats the purpose of running optimistically to begin with.
When a mutator runs on the client (`tx.location === "client"`), ZQL reads only return data already cached on the client. When mutators run on the server (`tx.location === "server"`), ZQL reads always return all data.
You can use `run()` within custom mutators, but the `type` argument does nothing. In the future, passing `type` in this situation will throw an error.
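For contrast, here's a minimal sketch of both contexts (the `open` column is an assumption; the exact shape of the `type` option is described in [reading-data](reading-data#running-queries-once)):
```ts
// Outside a mutator: you can wait for an authoritative, server-complete result.
const openIssues = await z.query.issue
  .where('open', true)
  .run({type: 'complete'});

// Inside a mutator: the same read resolves against data already on the client
// (tx.location === 'client') or full backend data (tx.location === 'server');
// any `type` argument is ignored.
```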
### Invoking Client Mutators
Once you have registered your client mutators, you can call them from your client-side application:
```ts
zero.mutate.issue.update({
id: 'issue-123',
title: 'New title',
});
```
The result of a call to a mutator is a `Promise`. You do not usually need to `await` this promise as Zero mutators run very fast, usually completing in a tiny fraction of one frame.
However, because mutators occasionally need to access browser storage, they are technically `async`. Reading a row that was written by a mutator immediately after it is written may not return the new data, because the mutator may not have completed writing to storage yet.
### Waiting for Mutator Result
We typically recommend that you "fire and forget" mutators.
Optimistic mutations make sense when the common case is that a mutation succeeds. If a mutation frequently fails, then showing the user an optimistic result doesn't make sense, because it will likely be wrong.
That said there are cases where it is useful to know when a write succeeded on either the client or server.
One example is if you need to read a row directly after writing it. Zero's local writes are very fast (almost always < 1 frame), but because Zero is backed by IndexedDB, writes are still *technically* asynchronous and reads directly after a write may not return the new data.
You can use the `.client` promise in this case to wait for a write to complete on the client side:
```ts
try {
const write = zero.mutate.issue.update({
id: 'issue-123',
title: 'New title',
});
// issue-123 not guaranteed to be present here. read1 may be undefined.
const read1 = await zero.query.issue.where('id', 'issue-123').one();
// Await client write – almost always less than 1 frame, and same
// macrotask, so no browser paint will occur here.
await write.client;
// issue-123 definitely can be read now.
const read2 = await zero.query.issue.where('id', 'issue-123').one();
} catch (e) {
console.error("Mutator failed on client", e);
}
```
You can also wait for the server write to succeed:
```ts
try {
await zero.mutate.issue.update({
id: 'issue-123',
title: 'New title',
}).server;
// issue-123 is written to server
} catch (e) {
console.error("Mutator failed on client or server", e);
}
```
If the client-side mutator fails, the `.server` promise is also rejected with the same error. You don't have to listen to both promises; the server promise covers both cases.
There is not yet a way to return data from mutators in the success case – the type of `.client` and `.server` is always `Promise<void>`. [Let us know](https://discord.rocicorp.dev/) if you need this.
### Setting Up the Server
You will need a server somewhere you can run an endpoint on. This is typically a serverless function on a platform like Vercel or AWS but can really be anything.
Set the push URL with the [`ZERO_PUSH_URL` env var or `--push-url`](./zero-cache-config#push-url).
If there is per-client configuration you need to send to the push endpoint, you can do that with `push.queryParams`:
```ts
const z = new Zero({
push: {
queryParams: {
workspaceID: "42",
},
},
});
```
The push endpoint receives a `PushRequest` as input describing one or more mutations to apply to the backend, and must return a `PushResponse` describing the results of those mutations.
If you are implementing your server in TypeScript, you can use the `PushProcessor` class to trivially implement this endpoint. Here’s an example in a [Hono](https://hono.dev/) app:
```ts
import {Hono} from 'hono';
import {handle} from 'hono/vercel';
import {PushProcessor, ZQLDatabase, PostgresJSConnection} from '@rocicorp/zero/pg';
import postgres from 'postgres';
import {schema} from '../shared/schema';
import {createMutators} from '../shared/mutators';
// PushProcessor is provided by Zero to encapsulate a standard
// implementation of the push protocol.
const processor = new PushProcessor(
new ZQLDatabase(
new PostgresJSConnection(
postgres(process.env.ZERO_UPSTREAM_DB! as string)
),
schema
)
);
export const app = new Hono().basePath('/api');
app.post('/push', async c => {
const result = await processor.process(
createMutators(),
c.req.raw,
);
return await c.json(result);
});
export default handle(app);
```
`PushProcessor` depends on an abstract `Database`. This allows it to implement the push algorithm against any database.
`@rocicorp/zero/pg` includes a `ZQLDatabase` implementation of this interface backed by Postgres. The implementation allows the same mutator functions to run on client and server, by providing an implementation of the ZQL APIs that custom mutators run on the client.
`ZQLDatabase` in turn relies on an abstract `DBConnection` that provides raw access to a Postgres database. This allows you to use any Postgres library you like, as long as you provide a `DBConnection` implementation for it. The `PostgresJSConnection` class implements `DBConnection` for the excellent [`postgres.js`](https://www.npmjs.com/package/postgres) library to connect to Postgres.
To reuse the client mutators exactly as-is on the server, just pass the result of the same `createMutators` function to `PushProcessor`.
### Server-Specific Code
To implement server-specific code, just run different mutators in your push endpoint!
An approach we like is to create a separate `server-mutators.ts` file that wraps the client mutators:
```ts
// server-mutators.ts
import { CustomMutatorDefs } from "@rocicorp/zero";
import { schema } from "./schema";
export function createMutators(clientMutators: CustomMutatorDefs) {
return {
// Reuse all client mutators except the ones in `issue`
...clientMutators,
issue: {
// Reuse all issue mutators except `update`
...clientMutators.issue,
update: async (tx, {id, title}: { id: string; title: string }) => {
// Call the shared mutator first
await clientMutators.issue.update(tx, {id, title});
// Record a history of this operation happening in an audit
// log table.
await tx.mutate.auditLog.insert({
// Assuming you have an audit log table with fields for
// `issueId`, `action`, and `timestamp`.
issueId: id,
action: 'update-title',
timestamp: new Date().toISOString(),
});
},
}
} as const satisfies CustomMutatorDefs;
}
```
For simple things, we also expose a `location` field on the transaction object that you can use to branch your code:
```ts
myMutator: (tx) => {
if (tx.location === 'client') {
// Client-side code
} else {
// Server-side code
}
},
```
### Permissions
Because custom mutators are just arbitrary TypeScript functions, there is no need for a special permissions system. Therefore, you won't use Zero's [write permissions](./permissions) when you use custom mutators.
When using custom mutators you will have no [`insert`](permissions#insert-permissions), [`update`](permissions#update-permissions), or [`delete`](permissions#delete-permissions) permissions. You will still have [`select`](permissions#select-permissions) permissions, however.
We hope to build [custom queries](https://bugs.rocicorp.dev/issue/3453) next – a read analog to custom mutators. If we succeed, Zero's permission system will go away completely 🤯.
In order to do permission checks, you'll need to know what user is making the request. You can pass this information to your mutators by adding an `AuthData` parameter to the `createMutators` function:
```ts
type AuthData = {
sub: string;
};
export function createMutators(authData: AuthData | undefined) {
return {
issue: {
launchMissiles: async (tx, args: {target: string}) => {
if (!authData) {
throw new Error('Users must be logged in to launch missiles');
}
const hasPermission = await tx.query.user
.where('id', authData.sub)
.whereExists('permissions', q => q.where('name', 'launch-missiles'))
.one();
if (!hasPermission) {
throw new Error('User does not have permission to launch missiles');
}
},
},
} as const satisfies CustomMutatorDefs;
}
```
The `AuthData` parameter can be any data required for authorization, but is typically just the decoded JWT:
```ts
// app.tsx
const zero = new Zero({
schema,
auth: encodedJWT,
mutators: createMutators(decodedJWT),
});
// hono-server.ts
const processor = new PushProcessor(
  new ZQLDatabase(
    new PostgresJSConnection(postgres(process.env.ZERO_UPSTREAM_DB as string)),
    schema,
  ),
);
const result = await processor.process(createMutators(decodedJWT), c.req.raw);
```
### Dropping Down to Raw SQL
On the server, you can use raw SQL in addition to, or instead of, ZQL. This is useful for complex queries, or for using Postgres features that Zero doesn't support yet:
```ts
async function markAllAsRead(tx: Transaction, {userId}: {userId: string}) {
await tx.dbTransaction.query(
`
UPDATE notification
SET read = true
WHERE user_id = $1
`,
[userId],
);
}
```
### Notifications and Async Work
It is bad practice to hold open database transactions while talking over the network, for example to send notifications. Instead, you should let the db transaction commit and do the work asynchronously.
There is no specific support for this in custom mutators, but since mutators are just code, it’s easy to do:
```ts
// server-mutators.ts
export function createMutators(
authData: AuthData,
asyncTasks: Array<() => Promise<void>>,
) {
return {
issue: {
update: async (tx, {id, title}: {id: string; title: string}) => {
await tx.mutate.issue.update({id, title});
asyncTasks.push(async () => {
await sendEmailToSubscribers(id);
});
},
},
} as const satisfies CustomMutatorDefs;
}
```
Then in your push handler:
```ts
app.post('/push', async c => {
const asyncTasks: Array<() => Promise<void>> = [];
const result = await processor.process(
  createMutators(authData, asyncTasks),
  c.req.raw,
);
await Promise.all(asyncTasks.map(task => task()));
return await c.json(result);
});
```
### Custom Database Connections
You can implement an adapter to a different Postgres library, or even a different database entirely.
To do so, implement a different [`DBConnection`](https://github.com/rocicorp/mono/blob/1a3741fbdad6dbdd56aa1f48cc2cc83938a61b16/packages/zql/src/mutate/custom.ts#L67) and pass it to the `ZQLDatabase` you construct `PushProcessor` with. For an example implementation, [see the `postgres` implementation](https://github.com/rocicorp/mono/blob/1a3741fbdad6dbdd56aa1f48cc2cc83938a61b16/packages/zero-pg/src/postgres-connection.ts#L4).
### Custom Push Implementation
You can manually implement the push protocol in any programming language.
This will be documented in the future, but you can refer to the [PushProcessor](https://github.com/rocicorp/mono/blob/1a3741fbdad6dbdd56aa1f48cc2cc83938a61b16/packages/zero-pg/src/web.ts#L33) source code for an example for now.
## Examples
- Zbugs uses [custom mutators](https://github.com/rocicorp/mono/blob/a76c9a61670cc09e1a9fe7ab795749f3eef25577/apps/zbugs/shared/mutators.ts) for all mutations, [write permissions](https://github.com/rocicorp/mono/blob/a76c9a61670cc09e1a9fe7ab795749f3eef25577/apps/zbugs/shared/mutators.ts#L61), and [notifications](https://github.com/rocicorp/mono/blob/a76c9a61670cc09e1a9fe7ab795749f3eef25577/apps/zbugs/server/server-mutators.ts#L35).
- `hello-zero-solid` uses custom mutators for all [mutations](TODO), and for [permissions](TODO).
--- writing-data.mdx ---
Zero generates basic CRUD mutators for every table you sync. Mutators are available at `zero.mutate.<table>`:
```tsx
const z = new Zero(...);
z.mutate.user.insert({
id: nanoid(),
username: 'abby',
language: 'en-us',
});
```
To build mutators with more complex logic or server-specific behavior, see the
new [Custom Mutators API](./custom-mutators).
## Insert
Create new records with `insert`:
```tsx
z.mutate.user.insert({
id: nanoid(),
username: 'sam',
language: 'js',
});
```
Optional fields can be set to `null` to explicitly store `null`. They can also be set to `undefined` to take the default value (which is often `null` but can also be some generated value server-side).
```tsx
// schema.ts
import {createTableSchema} from '@rocicorp/zero';
const userSchema = createTableSchema({
tableName: 'user',
columns: {
id: {type: 'string'},
name: {type: 'string'},
language: {type: 'string', optional: true},
},
primaryKey: ['id'],
relationships: {},
});
// app.tsx
// Sets language to `null` specifically
z.mutate.user.insert({
id: nanoid(),
username: 'sam',
language: null,
});
// Sets language to the default server-side value. Could be null, or some
// generated or constant default value too.
z.mutate.user.insert({
id: nanoid(),
username: 'sam',
});
// Same as above
z.mutate.user.insert({
id: nanoid(),
username: 'sam',
language: undefined,
});
```
## Upsert
Create new records or update existing ones with `upsert`:
```tsx
z.mutate.user.upsert({
id: samID,
username: 'sam',
language: 'ts',
});
```
`upsert` supports the same `null` / `undefined` semantics for optional fields that `insert` does (see above).
## Update
Update an existing record. Does nothing if the specified record (by PK) does not exist.
You can pass a partial, leaving fields out that you don’t want to change. For example here we leave the username the same:
```tsx
// Leaves username at its previous value.
z.mutate.user.update({
id: samID,
language: 'golang',
});
// Same as above
z.mutate.user.update({
id: samID,
username: undefined,
language: 'haskell',
});
// Reset language field to `null`
z.mutate.user.update({
id: samID,
language: null,
});
```
## Delete
Delete an existing record. Does nothing if the specified record does not exist.
```tsx
z.mutate.user.delete({
id: samID,
});
```
## Batch Mutate
You can do multiple CRUD mutates in a single _batch_. If any of the mutations fails, they all fail. They also all appear together atomically in a single transaction to other clients.
```tsx
z.mutateBatch(async tx => {
const samID = nanoid();
tx.user.insert({
id: samID,
username: 'sam',
});
const langID = nanoid();
tx.language.insert({
id: langID,
userID: samID,
name: 'js',
});
});
```
--- reading-data.mdx ---
ZQL is Zero’s query language.
Inspired by SQL, ZQL is expressed in TypeScript with heavy use of the builder pattern. If you have used [Drizzle](https://orm.drizzle.team/) or [Kysely](https://kysely.dev/), ZQL will feel familiar.
ZQL queries are composed of one or more _clauses_ that are chained together into a _query_.
Unlike queries in classic databases, the result of a ZQL query is a _view_ that updates automatically and efficiently as the underlying data changes. You can call a query’s `materialize()` method to get a view, but more typically you run queries via some framework-specific bindings. For example see `useQuery` for [React](react) or [SolidJS](solidjs).
This means you should not modify the data directly. Instead, clone the data and modify the clone.
ZQL caches values and returns them multiple times. If you modify a value returned from ZQL, you will modify it everywhere it is used. This can lead to subtle bugs.
JavaScript and TypeScript lack true immutable types so we use `readonly` to help enforce it. But it's easy to cast away the `readonly` accidentally.
In the future, we'll [`freeze`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/freeze) all returned data in `dev` mode to help prevent this.
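For example, to sort results, clone first rather than sorting in place (a sketch; the `title` column is an assumption):
```ts
const issues = await z.query.issue.run();

// Don't sort the array ZQL returned; it may be cached and reused elsewhere:
// issues.sort((a, b) => a.title.localeCompare(b.title));

// Instead, clone and modify the clone:
const sorted = [...issues].sort((a, b) => a.title.localeCompare(b.title));
```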
## Select
ZQL queries start by selecting a table. There is no way to select a subset of columns; ZQL queries always return the entire row (modulo column permissions).
```tsx
const z = new Zero(...);
// Returns a query that selects all rows and columns from the issue table.
z.query.issue;
```
This is a design tradeoff that allows Zero to better reuse the row locally for future queries. This also makes it easier to share types between different parts of the code.
## Ordering
You can sort query results by adding an `orderBy` clause:
```tsx
z.query.issue.orderBy('created', 'desc');
```
Multiple `orderBy` clauses can be present, in which case the data is sorted by those clauses in order:
```tsx
// Order by priority descending. For any rows with same priority,
// then order by created desc.
z.query.issue.orderBy('priority', 'desc').orderBy('created', 'desc');
```
All queries in ZQL have a default final order of their primary key. Assuming the `issue` table has a primary key on the `id` column, then:
```tsx
// Actually means: z.query.issue.orderBy('id', 'asc');
z.query.issue;
// Actually means: z.query.issue.orderBy('priority', 'desc').orderBy('id', 'asc');
z.query.issue.orderBy('priority', 'desc');
```
## Limit
You can limit the number of rows to return with `limit()`:
```tsx
z.query.issue.orderBy('created', 'desc').limit(100);
```
## Paging
You can start the results at or after a particular row with `start()`:
```tsx
let start: IssueRow | undefined;
while (true) {
let q = z.query.issue.orderBy('created', 'desc').limit(100);
if (start) {
q = q.start(start);
}
const batch = await q.run();
console.log('got batch', batch);
if (batch.length < 100) {
break;
}
start = batch[batch.length - 1];
}
```
By default `start()` is _exclusive_ - it returns rows starting **after** the supplied reference row. This is what you usually want for paging. If you want _inclusive_ results, you can do:
```tsx
z.query.issue.start(row, {inclusive: true});
```
## Uniqueness
If you want exactly zero or one results, use the `one()` clause. This causes ZQL to return `Row|undefined` rather than `Row[]`.
```tsx
const result = await z.query.issue.where('id', 42).one().run();
if (!result) {
console.error('not found');
}
```
`one()` overrides any `limit()` clause that is also present.
## Relationships
You can query related rows using _relationships_ that are defined in your [Zero schema](/docs/zero-schema).
```tsx
// Get all issues and their related comments
z.query.issue.related('comments');
```
Relationships are returned as hierarchical data. In the above example, each row will have a `comments` field which is itself an array of the corresponding comment rows.
You can fetch multiple relationships in a single query:
```tsx
z.query.issue.related('comments').related('reactions').related('assignees');
```
### Refining Relationships
By default all matching relationship rows are returned, but this can be refined. The `related` method accepts an optional second argument: a callback that receives the relationship query and returns a refined version of it.
```tsx
z.query.issue.related(
'comments',
// It is common to use the 'q' shorthand variable for this parameter,
// but it is a _comment_ query in particular here, exactly as if you
// had done z.query.comment.
q => q.orderBy('modified', 'desc').limit(100).start(lastSeenComment),
);
```
This _relationship query_ can have all the same clauses that top-level queries can have.
### Nested Relationships
You can nest relationships arbitrarily:
```tsx
// Get all issues, first 100 comments for each (ordered by modified,desc),
// and for each comment all of its reactions.
z.query.issue.related('comments', q =>
  q.orderBy('modified', 'desc').limit(100).related('reactions'),
);
```
## Where
You can filter a query with `where()`:
```tsx
z.query.issue.where('priority', '=', 'high');
```
The first parameter is always a column name from the table being queried. Intellisense will offer available options (sourced from your [Zero Schema](/docs/zero-schema)).
### Comparison Operators
Where supports the following comparison operators:
| Operator | Allowed Operand Types | Description |
| ---------------------------------------- | ----------------------------- | ------------------------------------------------------------------------ |
| `=` , `!=` | boolean, number, string | JS strict equal (===) semantics |
| `<` , `<=`, `>`, `>=` | number | JS number compare semantics |
| `LIKE`, `NOT LIKE`, `ILIKE`, `NOT ILIKE` | string | SQL-compatible `LIKE` / `ILIKE` |
| `IN` , `NOT IN` | boolean, number, string | RHS must be array. Returns true if rhs contains lhs by JS strict equals. |
| `IS` , `IS NOT` | boolean, number, string, null | Same as `=` but also works for `null` |
TypeScript will restrict you from using operators with types that don’t make sense – you can’t use `>` with `boolean` for example.
If you don’t see the comparison operator you need, let us know, many are easy
to add.
### Equals is the Default Comparison Operator
Because comparing by `=` is so common, you can leave it out and `where` defaults to `=`.
```tsx
z.query.issue.where('priority', 'high');
```
### Comparing to `null`
As in SQL, ZQL’s `null` is not equal to itself (`null ≠ null`).
This is required to make join semantics work: if you’re joining `employee.orgID` on `org.id` you do **not** want an employee in no organization to match an org that hasn’t yet been assigned an ID.
When you purposely want to compare to `null` ZQL supports `IS` and `IS NOT` operators that work just like in SQL:
```tsx
// Find employees not in any org.
z.query.employee.where('orgID', 'IS', null);
```
TypeScript will prevent you from comparing to `null` with other operators.
### Compound Filters
The argument to `where` can also be a callback that returns a complex expression:
```tsx
// Get all issues that have priority 'critical' or else have both
// priority 'medium' and not more than 100 votes.
z.query.issue.where(({cmp, and, or, not}) =>
or(
cmp('priority', 'critical'),
and(cmp('priority', 'medium'), not(cmp('numVotes', '>', 100))),
),
);
```
`cmp` is short for _compare_ and works the same as `where` at the top-level except that it can’t be chained and it only accepts comparison operators (no relationship filters – see below).
Note that chaining `where()` is also a one-level `and`:
```tsx
// Find issues with priority 3 or higher, owned by aa
z.query.issue.where('priority', '>=', 3).where('owner', 'aa');
```
### Relationship Filters
Your filter can also test properties of relationships. Currently the only supported test is existence:
```tsx
// Find all orgs that have at least one employee
z.query.organization.whereExists('employees');
```
The argument to `whereExists` is a relationship, so just like other relationships it can be refined with a query:
```tsx
// Find all orgs that have at least one cool employee
z.query.organization.whereExists('employees', q =>
q.where('location', 'Hawaii'),
);
```
As with querying relationships, relationship filters can be arbitrarily nested:
```tsx
// Get all issues that have comments that have reactions
z.query.issue.whereExists('comments', q => q.whereExists('reactions'));
```
The `exists` helper is also provided which can be used with `and`, `or`, `cmp`, and `not` to build compound filters that check relationship existence:
```tsx
// Find issues that have at least one comment or are high priority
z.query.issue.where(({cmp, or, exists}) =>
or(
cmp('priority', 'high'),
exists('comments'),
),
);
```
## Data Lifetime and Reuse
Zero reuses data synced from prior queries to answer new queries when possible. This is what enables instant UI transitions.
But what controls the lifetime of this client-side data? How can you know whether any particular query will return instant results? How can you know whether those results will be up to date or stale?
The answer is that the data on the client is simply the union of rows returned from queries which are currently syncing. Once a row is no longer returned by any syncing query, it is removed from the client. Thus, there is never any stale data in Zero.
So when you are thinking about whether a query is going to return results instantly, you should think about _what other queries are syncing_, not about what data is local. Data exists locally if and only if there is a query syncing that returns that data.
This is why we often say that despite the name `zero-cache`, Zero is not technically a cache. It's a *replica*.
A cache has a random set of rows with a random set of versions. There is no expectation that the cache contains any particular rows, or that the rows have matching versions. Rows are simply updated as they are fetched.
A replica by contrast is eagerly updated, whether or not any client has requested a row. A replica is always very close to up-to-date, and always self-consistent.
Zero is a _partial_ replica because it only replicates rows that are returned by syncing queries.
## Query Lifecycle
Queries can be either _active_ or _backgrounded_. An active query is one that is currently being used by the application. Backgrounded queries are not currently in use, but continue syncing in case they are needed again soon.
Active queries are created one of three ways:
1. The app calls `q.materialize()` to get a `View`.
2. The app uses a platform binding like React's `useQuery(q)`.
3. The app calls [`preload()`](#preloading) to sync larger queries without a view.
Active queries sync until they are _deactivated_. The way this happens depends on how the query was created:
1. For `materialize()` queries, the UI calls `destroy()` on the view.
2. For `useQuery()`, the UI unmounts the component (which calls `destroy()` under the covers).
3. For `preload()`, the UI calls `cleanup()` on the return value of `preload()`.
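Here's a sketch of those pairs (assuming a query `q`; exact return shapes may differ slightly):
```ts
const q = z.query.issue.limit(100);

// 1. materialize() pairs with destroy():
const view = q.materialize();
view.destroy();

// 2. useQuery(q) deactivates automatically when the component unmounts.

// 3. preload() pairs with cleanup() on its return value:
const {cleanup} = q.preload();
cleanup();
```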
### Background Queries
By default a deactivated query stops syncing immediately.
But it's often useful to keep queries syncing beyond deactivation in case the UI needs the same or a similar query in the near future. This is accomplished with the `ttl` parameter:
```ts
const [user] = useQuery(z.query.user.where('id', userId), {ttl: '1d'});
```
The `ttl` parameter specifies how long the app developer wishes the query to run in the background. The following formats are allowed (where `%d` is a positive integer):
| Format | Meaning |
| --------- | ------------------------------------------------------------------------------------ |
| `none` | No backgrounding. Query will immediately stop when deactivated. This is the default. |
| `%ds` | Number of seconds. |
| `%dm` | Number of minutes. |
| `%dh` | Number of hours. |
| `%dd` | Number of days. |
| `%dy` | Number of years. |
| `forever` | Query will never be stopped. |
If the UI re-requests a background query, it becomes an active query again. Since the query was syncing in the background, the very first synchronous result that the UI receives after reactivation will be up-to-date with the server (i.e., it will have `resultType` of `complete`).
Just like other types of queries, the data from background queries is available for use by new queries. A common pattern is to [preload](#preloading) a subset of the most commonly needed data with `{ttl: 'forever'}` and then do more specific queries from the UI with, e.g., `{ttl: '1d'}`. Most often the preloaded data will be able to answer user queries, but if not, the new query will be answered by the server and backgrounded for a day in case the user revisits it.
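Sketched out, that pattern might look like the following (assuming the `issue` table from earlier examples, an `id` column, and an `issueID` variable from the surrounding UI code):
```ts
// At app startup: preload the most commonly needed rows and keep them
// syncing for the lifetime of the client.
z.query.issue
  .orderBy('created', 'desc')
  .limit(1000)
  .preload({ttl: 'forever'});

// In the UI: a more specific query, kept syncing for a day after the
// component unmounts in case the user returns.
const [issue] = useQuery(z.query.issue.where('id', issueID), {ttl: '1d'});
```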
### Client Capacity Management
Zero has a default soft limit of 20,000 rows on the client-side, or about 20MB of data assuming 1KB rows.
This limit can be increased with the [`--target-client-row-count`](./zero-cache-config#target-client-row-count) flag, but we do not recommend setting it higher than 100,000.
Contrary to the design of other sync engines, we believe that storing tons of data client-side doesn't make sense. Here are some reasons why:
- Initial sync will be slow, slowing down initial app load.
- Because storage in browser tabs is unreliable, initial sync can occur surprisingly often.
- We want to answer queries _instantly_ as often as possible. This requires client-side data in memory on the main thread. If we have to page to disk, we may as well go to the network and reduce complexity.
- Even though Zero's queries are very efficient, they do still have some cost, especially hydration. Massive client-side storage would result in hydrating tons of queries that are unlikely to be used every time the app starts.
Most importantly, no matter how much data you store on the client, there will be cases where you have to fall back to the server:
- Some users might have huge amounts of data.
- Some users might have tiny amounts of available client storage.
- You will likely want the app to start fast and sync in the background.
Because you have to be able to fall back to the server, the question becomes _what is the **right** amount of data to store on the client?_, not _how can I store the absolute max possible data on the client?_
The goal with Zero is to answer 99% of queries on the client from memory. The remaining 1% of queries can fall back gracefully to the server. 20,000 rows was chosen somewhat arbitrarily as a number likely to achieve this for many applications.
There is no hard limit at 20,000 or 100,000. Nothing terrible happens if you go above. The things to keep in mind are:
1. All those queries will revalidate every time your app boots.
2. All data synced to the client is in memory in JS.
Here is how this limit is managed:
1. Active queries are never destroyed, even if the limit is exceeded. Developers are expected to keep active queries well under the limit.
2. The `ttl` value counts from the moment a query deactivates. Backgrounded queries are destroyed immediately when the `ttl` is reached, even if the limit hasn't been reached.
3. If the client exceeds its limit, Zero will destroy backgrounded queries, least-recently-used first, until the store is under the limit again.
### Thinking in Queries
Although IVM is a very efficient way to keep queries up to date relative to re-running them, it isn't free. You still need to think about how many queries you are creating, how long they are kept alive, and how expensive they are.
This is why Zero defaults to _not_ backgrounding queries and doesn't try to aggressively fill its client datastore to capacity. You should put some thought into what queries you want to run in the background, and for how long.
Zero currently provides a few basic tools to understand the cost of your queries:
- The client logs a warning for slow query materializations. Look for `Slow query materialization` in your logs. The default threshold is `5s` (including network) but this is configurable with the `slowMaterializeThreshold` parameter.
- The client logs the materialization time of all queries at the `debug` level. Look for `Materialized query` in your logs.
- The server logs a warning for slow query materializations. Look for `Slow query materialization` in your logs. The default threshold is `5s` but this is configurable with the `log-slow-materialize-threshold` configuration parameter.
We will be adding more tools over time.
## Completeness
Zero returns whatever data it has on the client immediately for a query, then falls back to the server for any missing data. Sometimes it's useful to know the difference between these two types of results. To do so, use the `result` from `useQuery`:
```tsx
const [issues, issuesResult] = useQuery(z.query.issue);
if (issuesResult.type === 'complete') {
  console.log('All data is present');
} else {
  console.log('Some data is missing');
}
```
The possible values of `result.type` are currently `complete` and `unknown`.
The `complete` value is currently only returned when Zero has received the server result. But in the future, Zero will be able to return this result type when it _knows_ that all possible data for this query is already available locally. Additionally, we plan to add a `prefix` result for when the data is known to be a prefix of the complete result. See [Consistency](#consistency) for more information.
## Preloading
Almost all Zero apps will want to preload some data in order to maximize the feel of instantaneous UI transitions.
In Zero, preloading is done via queries – the same queries you use in the UI and for auth.
However, because preload queries are usually much larger than a screenful of UI, Zero provides a special `preload()` helper to avoid the overhead of materializing the result into JS objects:
```tsx
// Preload the first 1k issues + their creator, assignee, labels, and
// the view state for the active user.
//
// There's no need to render this data, so we don't use `useQuery()`:
// this avoids the overhead of pulling all this data into JS objects.
z.query.issue
  .related('creator')
  .related('assignee')
  .related('labels')
  .related('viewState', q => q.where('userID', z.userID).one())
  .orderBy('created', 'desc')
  .limit(1000)
  .preload();
```
## Running Queries Once
Usually subscribing to a query is what you want in a reactive UI, but every so often you'll need to run a query just once. To do this, use the `run()` method:
```tsx
const results = await z.query.issue.where('foo', 'bar').run();
```
By default, `run()` only returns results that are currently available on the client. That is, it returns the data that would be given for [`result.type === 'unknown'`](#completeness).
If you want to wait for the server to return results, pass `{type: 'complete'}` to `run`:
```tsx
const results = await z.query.issue
  .where('foo', 'bar')
  .run({type: 'complete'});
```
As a convenience you can also directly await queries:
```ts
await z.query.issue.where('foo', 'bar');
```
This is the same as saying `run()` or `run({type: 'unknown'})`.
## Consistency
Zero always syncs a consistent partial replica of the backend database to the client. This avoids many common consistency issues that come up in classic web applications. But there are still some consistency issues to be aware of when using Zero.
For example, imagine that you have a bug database w/ 10k issues. You preload the first 1k issues sorted by created.
The user then does a query of issues assigned to themselves, sorted by created. Among the 1k issues that were preloaded imagine 100 are found that match the query. Since the data we preloaded is in the same order as this query, we are guaranteed that any local results found will be a _prefix_ of the server results.
The resulting UX is nice: the user sees initial results to the query instantly. If more results are found server-side, those results are guaranteed to sort below the local results. There's no shuffling of results when the server response comes in.
Now imagine that the user switches the sort to ‘sort by modified’. This new query will run locally, and will again find some local matches. But it is now unlikely that the local results found are a prefix of the server results. When the server result comes in, the user will probably see the results shuffle around.
To avoid this annoying effect, what you should do in this example is also preload the first 1k issues sorted by modified desc. In general for any query shape you intend to do, you should preload the first `n` results for that query shape with no filters, in each sort you intend to use.
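Concretely, that means one preload per sort order the UI offers. A sketch for the example above:
```ts
// Preload the first 1k issues in each sort the UI can display, so that
// local results are a prefix of server results for either sort.
z.query.issue.orderBy('created', 'desc').limit(1000).preload();
z.query.issue.orderBy('modified', 'desc').limit(1000).preload();
```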
Zero will not sync duplicate copies of rows that show up in multiple queries. Zero syncs the *union* of all active queries' results.
So you don't have to worry about syncing many sorts of the same query when it's likely the results will overlap heavily.
In the future, we will be implementing a consistency model that fixes these issues automatically. We will prevent Zero from returning local data when that data is not known to be a prefix of the server result. Once the consistency model is implemented, preloading can be thought of as purely a performance thing, and not required to avoid unsightly flickering.
--- permissions.mdx ---
Permissions are expressed using [ZQL](reading-data) and run automatically with every read and write.
## Define Permissions
Permissions are defined in [`schema.ts`](/docs/zero-schema) using the `definePermissions` function.
Here's an example of limiting deletes to only the creator of an issue:
```ts
import {
  definePermissions,
  type ExpressionBuilder,
  type PermissionsConfig,
} from '@rocicorp/zero';

// The decoded value of your JWT.
type AuthData = {
  // The logged-in user.
  sub: string;
};

export const permissions = definePermissions(schema, () => {
  const allowIfIssueCreator = (
    authData: AuthData,
    {cmp}: ExpressionBuilder,
  ) => cmp('creatorID', authData.sub);

  return {
    issue: {
      row: {
        delete: [allowIfIssueCreator],
      },
    },
  } satisfies PermissionsConfig;
});
```
`definePermissions` returns a _policy_ object for each table in the schema. Each policy defines a _ruleset_ for the _operations_ that are possible on a table: `select`, `insert`, `update`, and `delete`.
## Access is Denied by Default
If you don't specify any rules for an operation, it is denied by default. This is an important safety feature that helps ensure data isn't accidentally exposed.
To enable full access to an action (i.e., during development) use the `ANYONE_CAN` helper:
```ts
import {ANYONE_CAN} from '@rocicorp/zero';
const permissions = definePermissions(schema, () => {
  return {
    issue: {
      row: {
        select: ANYONE_CAN,
        // Other operations are denied by default.
      },
    },
    // Other tables are denied by default.
  } satisfies PermissionsConfig;
});
```
To do this for all actions, use `ANYONE_CAN_DO_ANYTHING`:
```ts
import {ANYONE_CAN_DO_ANYTHING} from '@rocicorp/zero';
const permissions = definePermissions(schema, () => {
  return {
    // All operations on issue are allowed to all users.
    issue: ANYONE_CAN_DO_ANYTHING,
    // Other tables are denied by default.
  } satisfies PermissionsConfig;
});
```
## Permission Evaluation
Zero permissions are "compiled" into a JSON-based format at build time. This JSON is stored in the `{ZERO_APP_ID}.permissions` table of your upstream database. Like other tables, it replicates live down to `zero-cache`. `zero-cache` then parses this JSON and applies the encoded rules to every read and write operation.
The compilation process is very simple-minded (read: dumb). Despite looking like normal TypeScript functions that receive an `AuthData` parameter, rule functions are not actually invoked at runtime. Instead, they are invoked with a "placeholder" `AuthData` at build time. We track which fields of this placeholder are accessed and construct a ZQL expression that accesses the right field of `AuthData` at runtime.
The end result is that you can't really use most features of JS in these rules. Specifically you cannot:
- Iterate over properties or array elements in the auth token
- Use any JS features beyond property access of `AuthData`
- Use any conditional or global state
Basically only property access is allowed. This is really confusing and we're working on a better solution.
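To make the constraint concrete, here is a sketch of a rule that compiles and one that doesn't (`visibility` is a hypothetical column):
```ts
// OK: plain property access on authData compiles to a ZQL expression.
const allowIfSelf = (authData: AuthData, {cmp}: ExpressionBuilder) =>
  cmp('creatorID', authData.sub);

// NOT OK: the ternary is evaluated once at build time against a
// placeholder AuthData, so the branch taken has nothing to do with
// the real user at runtime.
const broken = (authData: AuthData, {cmp}: ExpressionBuilder) =>
  authData.sub === 'admin'
    ? cmp('visibility', 'public')
    : cmp('creatorID', authData.sub);
```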
## Permission Deployment
During development, permissions are compiled and uploaded to your database completely automatically as part of the `zero-cache-dev` script.
For production, you need to call `npx zero-deploy-permissions` within your app to update the permissions in the production database whenever they change. You would typically do this as part of your normal schema migration or CI process. For example, the SST deployment script for [zbugs](/docs/samples#zbugs) looks like this:
```ts
new command.local.Command(
  'zero-deploy-permissions',
  {
    create: `npx zero-deploy-permissions -p ../../src/schema.ts`,
    // Run the Command on every deploy ...
    triggers: [Date.now()],
    environment: {
      ZERO_UPSTREAM_DB: commonEnv.ZERO_UPSTREAM_DB,
      // If the application has a non-default App ID ...
      ZERO_APP_ID: commonEnv.ZERO_APP_ID,
    },
  },
  // after the view-syncer is deployed.
  {dependsOn: viewSyncer},
);
```
See the [SST Deployment Guide](deployment#guide-multi-node-on-sstaws) for more details.
## Rules
Each operation on a policy has a _ruleset_ containing zero or more _rules_.
A rule is just a TypeScript function that receives the logged in user's `AuthData` and generates a ZQL [where expression](reading-data#compound-filters). At least one rule in a ruleset must return a row for the operation to be allowed.
## Select Permissions
You can limit the data a user can read by specifying a `select` ruleset.
Select permissions act like filters. If a user does not have permission to read a row, it will be filtered out of the result set. It will not generate an error.
For example, imagine a select permission that restricts reads to only issues created by the user:
```ts
definePermissions(schema, () => {
  const allowIfIssueCreator = (
    authData: AuthData,
    {cmp}: ExpressionBuilder,
  ) => cmp('creatorID', authData.sub);

  return {
    issue: {
      row: {
        select: [allowIfIssueCreator],
      },
    },
  } satisfies PermissionsConfig;
});
```
If the issue table has two rows, one created by the user and one by someone else, the user will only see the row they created in any queries.
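In other words, client code doesn't need to repeat the filter. A sketch:
```ts
// With the select rule above in place, this returns only issues the
// current user created; other rows are filtered out, not errored on.
const myIssues = await z.query.issue.run();
```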
## Insert Permissions
You can limit what rows can be inserted and by whom by specifying an `insert` ruleset.
Insert rules are evaluated after the entity is inserted. So if they query the database, they will see the inserted row present. If any rule in the insert ruleset returns a row, the insert is allowed.
Here's an example of an insert rule that disallows inserting users that have the role 'admin'.
```ts
definePermissions(schema, () => {
  const allowIfNonAdmin = (
    authData: AuthData,
    {cmp}: ExpressionBuilder,
  ) => cmp('role', '!=', 'admin');

  return {
    user: {
      row: {
        insert: [allowIfNonAdmin],
      },
    },
  } satisfies PermissionsConfig;
});
```
## Update Permissions
There are two types of update rulesets: `preMutation` and `postMutation`. Both rulesets must pass for an update to be allowed.
`preMutation` rules see the version of a row _before_ the mutation is applied. This is useful for things like checking whether a user owns an entity before editing it.
`postMutation` rules see the version of a row _after_ the mutation is applied. This is useful for things like ensuring a user can only mark themselves as the creator of an entity and not other users.
Like other rulesets, `preMutation` and `postMutation` default to `NOBODY_CAN`. This means that every table must define both these rulesets in order for any updates to be allowed.
For example, the following ruleset allows an issue's owner to edit, but **not** re-assign the issue. The `postMutation` rule enforces that the current user still owns the issue after the edit.
```ts
definePermissions(schema, () => {
  const allowIfIssueOwner = (
    authData: AuthData,
    {cmp}: ExpressionBuilder,
  ) => cmp('ownerID', authData.sub);

  return {
    issue: {
      row: {
        update: {
          preMutation: [allowIfIssueOwner],
          postMutation: [allowIfIssueOwner],
        },
      },
    },
  } satisfies PermissionsConfig;
});
```
This ruleset allows an issue's owner to edit and re-assign the issue:
```ts
definePermissions(schema, () => {
  const allowIfIssueOwner = (
    authData: AuthData,
    {cmp}: ExpressionBuilder,
  ) => cmp('ownerID', authData.sub);

  return {
    issue: {
      row: {
        update: {
          preMutation: [allowIfIssueOwner],
          postMutation: ANYONE_CAN,
        },
      },
    },
  } satisfies PermissionsConfig;
});
```
And this allows anyone to edit an issue, but only if they also assign it to themselves. Useful for enforcing _"patches welcome"_? 🙃
```ts
definePermissions(schema, () => {
  const allowIfIssueOwner = (
    authData: AuthData,
    {cmp}: ExpressionBuilder,
  ) => cmp('ownerID', authData.sub);

  return {
    issue: {
      row: {
        update: {
          preMutation: ANYONE_CAN,
          postMutation: [allowIfIssueOwner],
        },
      },
    },
  } satisfies PermissionsConfig;
});
```
## Delete Permissions
Delete permissions work in the same way as `insert` permissions except they run _before_ the delete is applied. So if a delete rule queries the database, it will see that the deleted row is present. If any rule in the ruleset returns a row, the delete is allowed.
## Debugging
See [Debugging Permissions](./debug/permissions).
## Examples
See [hello-zero](https://github.com/rocicorp/hello-zero/blob/main/src/schema.ts) for a simple example of write auth and [zbugs](https://github.com/rocicorp/mono/blob/main/apps/zbugs/shared/schema.ts#L217) for a much more involved one.
--- debug/permissions.mdx ---
Given that permissions are defined in their own file and internally applied to queries, it might be hard to figure out if or why a permission check is failing.
## Read Permissions
You can use the `analyze-query` utility with the `--apply-permissions` flag to see the complete query Zero runs, including read permissions.
```bash
npx analyze-query \
  --schema='./shared/schema.ts' \
  --query='issue.related("comments")' \
  --apply-permissions \
  --auth-data='{"userId":"user-123"}'
```
If the result looks right, the problem may be that Zero is not receiving the `AuthData` that you think it is. You can retrieve a query hash from websocket or server logs, then ask Zero for the details on that specific query.
Run this command with the same environment you run `zero-cache` with. It will use your `upstream` or `cvr` configuration to look up the query hash in the cvr database.
```bash
npx analyze-query \
  --schema='./shared/schema.ts' \
  --hash='3rhuw19xt9vry' \
  --apply-permissions \
  --auth-data='{"userId":"user-123"}'
```
The printed query can be different than the source ZQL string, because it is rebuilt from the query AST. But it should be logically equivalent to the query you wrote.
## Write Permissions
Look for a `WARN` level log in the output from `zero-cache` like this:
```
Permission check failed for {"op":"update","tableName":"message",...}, action update, phase preMutation, authData: {...}, rowPolicies: [...], cellPolicies: []
```
Zero prints the row, auth data, and permission policies that were applied to any failed writes.
The ZQL query is printed in AST format. See [Query ASTs](./query-asts) to
convert it to a more readable format.
--- zero-cache-config.mdx ---
`zero-cache` is configured either via CLI flags or environment variables. There is no separate `zero.config` file.
You can also see all available flags by running `zero-cache --help`.
## Required Flags
### Auth
One of [Auth JWK](#auth-jwk), [Auth JWK URL](#auth-jwk-url), or [Auth Secret](#auth-secret) must be specified. See [Authentication](/docs/auth/) for more details.
### Replica File
File path to the SQLite replica that zero-cache maintains. This can be lost, but if it is, zero-cache will have to re-replicate next time it starts up.
flag: `--replica-file`
env: `ZERO_REPLICA_FILE`
required: `true`
### Upstream DB
The "upstream" authoritative postgres database. In the future we will support other types of upstream besides PG.
flag: `--upstream-db`
env: `ZERO_UPSTREAM_DB`
required: `true`
## Optional Flags
### Admin Password
A password used to administer zero-cache server, for example to access the `/statz` endpoint.
flag: `--admin-password`
env: `ZERO_ADMIN_PASSWORD`
required: `false`
### App ID
Unique identifier for the app.
Multiple zero-cache apps can run on a single upstream database, each of which is isolated from the others, with its own permissions, sharding (future feature), and change/cvr databases.
The metadata of an app is stored in an upstream schema with the same name, e.g. `zero`, and the metadata for each app shard, e.g. client and mutation ids, is stored in the `{app-id}_{#}` schema. (Currently there is only a single "0" shard, but this will change with sharding).
The CVR and Change data are managed in schemas named `{app-id}_{shard-num}/cvr` and `{app-id}_{shard-num}/cdc`, respectively, allowing multiple apps and shards to share the same database instance (e.g. a Postgres "cluster") for CVR and Change management.
Due to constraints on replication slot names, an App ID may only consist of lower-case letters, numbers, and the underscore character.
Note that this option is used by both `zero-cache` and `zero-deploy-permissions`.
flag: `--app-id`
env: `ZERO_APP_ID`
default: `zero`
### App Publications
Postgres PUBLICATIONs that define the tables and columns to replicate. Publication names may not begin with an underscore, as zero reserves that prefix for internal use.
If unspecified, zero-cache will create and use an internal publication that publishes all tables in the public schema, i.e.:
```
CREATE PUBLICATION _{app-id}_public_0 FOR TABLES IN SCHEMA public;
```
Note that once an app has begun syncing data, this list of publications cannot be changed, and zero-cache will refuse to start if a specified value differs from what was originally synced. To use a different set of publications, a new app should be created.
flag: `--app-publications`
env: `ZERO_APP_PUBLICATIONS`
default: `[]`
### Auth JWK
A public key in JWK format used to verify JWTs. Only one of jwk, jwksUrl and secret may be set.
flag: `--auth-jwk`
env: `ZERO_AUTH_JWK`
required: `false`
### Auth JWK URL
A URL that returns a JWK set used to verify JWTs. Only one of jwk, jwksUrl and secret may be set.
flag: `--auth-jwks-url`
env: `ZERO_AUTH_JWKS_URL`
required: `false`
### Auto Reset
Automatically wipe and resync the replica when replication is halted. This situation can occur for configurations in which the upstream database provider prohibits event trigger creation, preventing the zero-cache from being able to correctly replicate schema changes. For such configurations, an upstream schema change will instead result in halting replication with an error indicating that the replica needs to be reset. When auto-reset is enabled, zero-cache will respond to such situations by shutting down, and when restarted, resetting the replica and all synced clients. This is a heavy-weight operation and can result in user-visible slowness or downtime if compute resources are scarce.
flag: `--auto-reset`
env: `ZERO_AUTO_RESET`
default: `true`
### Auth Secret
A symmetric key used to verify JWTs. Only one of jwk, jwksUrl and secret may be set.
flag: `--auth-secret`
env: `ZERO_AUTH_SECRET`
required: `false`
### Change DB
The Postgres database used to store recent replication log entries, in order to sync multiple view-syncers without requiring multiple replication slots on the upstream database. If unspecified, the upstream-db will be used.
flag: `--change-db`
env: `ZERO_CHANGE_DB`
required: `false`
### Change Max Connections
The maximum number of connections to open to the change database. This is used by the change-streamer for catching up zero-cache replication subscriptions.
flag: `--change-max-conns`
env: `ZERO_CHANGE_MAX_CONNS`
default: `5`
### Change Streamer Port
The port on which the change-streamer runs. This is an internal protocol between the replication-manager and zero-cache, which runs in the same process in local development. If unspecified, defaults to --port + 1.
flag: `--change-streamer-port`
env: `ZERO_CHANGE_STREAMER_PORT`
required: `false`
### Change Streamer URI
When unset, the zero-cache runs its own replication-manager (i.e. change-streamer). In production, this should be set to the replication-manager URI, which runs a change-streamer on port 4849.
flag: `--change-streamer-uri`
env: `ZERO_CHANGE_STREAMER_URI`
required: `false`
### CVR DB
The Postgres database used to store CVRs. CVRs (client view records) keep track of the data synced to clients in order to determine the diff to send on reconnect. If unspecified, the upstream-db will be used.
flag: `--cvr-db`
env: `ZERO_CVR_DB`
required: `false`
### CVR Max Connections
The maximum number of connections to open to the CVR database. This is divided evenly amongst sync workers.
Note that this number must allow for at least one connection per sync worker, or zero-cache will fail to start. See num-sync-workers.
flag: `--cvr-max-conns`
env: `ZERO_CVR_MAX_CONNS`
default: `30`
### Initial Sync Row Batch Size
The number of rows each table copy worker fetches at a time during initial sync. This can be increased to speed up initial sync, or decreased to reduce the amount of heap memory used during initial sync (e.g. for tables with large rows).
flag: `--initial-sync-row-batch-size`
env: `ZERO_INITIAL_SYNC_ROW_BATCH_SIZE`
default: `10000`
### Initial Sync Table Copy Workers
The number of parallel workers used to copy tables during initial sync. Each worker copies a single table at a time, fetching rows in batches of `initial-sync-row-batch-size`.
flag: `--initial-sync-table-copy-workers`
env: `ZERO_INITIAL_SYNC_TABLE_COPY_WORKERS`
default: `5`
### Lazy Startup
Delay starting the majority of zero-cache until first request.
This is mainly intended to avoid connecting to the Postgres replication stream until the first request is received, which can be useful, e.g., for preview instances.
Currently only supported in single-node mode.
flag: `--lazy-startup`
env: `ZERO_LAZY_STARTUP`
default: `false`
### Litestream Executable
Path to the litestream executable. This option has no effect if litestream-backup-url is unspecified.
flag: `--litestream-executable`
env: `ZERO_LITESTREAM_EXECUTABLE`
required: `false`
### Litestream Config Path
Path to the litestream yaml config file. zero-cache will run this with its environment variables, which can be referenced in the file via `${ENV}` substitution, for example:
- ZERO_REPLICA_FILE for the db Path
- ZERO_LITESTREAM_BACKUP_LOCATION for the db replica url
- ZERO_LITESTREAM_LOG_LEVEL for the log Level
- ZERO_LOG_FORMAT for the log type
flag: `--litestream-config-path`
env: `ZERO_LITESTREAM_CONFIG_PATH`
default: `./src/services/litestream/config.yml`
### Litestream Log Level
flag: `--litestream-log-level`
env: `ZERO_LITESTREAM_LOG_LEVEL`
default: `warn`
values: `debug`, `info`, `warn`, `error`
### Litestream Backup URL
The location of the litestream backup, usually an s3:// URL. If set, the litestream-executable must also be specified.
flag: `--litestream-backup-url`
env: `ZERO_LITESTREAM_BACKUP_URL`
required: `false`
### Litestream Checkpoint Threshold MB
The size of the WAL file at which to perform a SQLite checkpoint to apply the writes in the WAL to the main database file. Each checkpoint creates a new WAL segment file that will be backed up by litestream. Smaller thresholds may improve read performance, at the expense of creating more files to download when restoring the replica from the backup.
flag: `--litestream-checkpoint-threshold-mb`
env: `ZERO_LITESTREAM_CHECKPOINT_THRESHOLD_MB`
default: `40`
### Litestream Incremental Backup Interval Minutes
The interval between incremental backups of the replica. Shorter intervals reduce the amount of change history that needs to be replayed when catching up a new view-syncer, at the expense of increasing the number of files needed to download for the initial litestream restore.
flag: `--litestream-incremental-backup-interval-minutes`
env: `ZERO_LITESTREAM_INCREMENTAL_BACKUP_INTERVAL_MINUTES`
default: `15`
### Litestream Snapshot Backup Interval Hours
The interval between snapshot backups of the replica. Snapshot backups make a full copy of the database to a new litestream generation. This improves restore time at the expense of bandwidth. Applications with a large database and low write rate can increase this interval to reduce network usage for backups (litestream defaults to 24 hours).
flag: `--litestream-snapshot-backup-interval-hours`
env: `ZERO_LITESTREAM_SNAPSHOT_BACKUP_INTERVAL_HOURS`
default: `12`
### Litestream Restore Parallelism
The number of WAL files to download in parallel when performing the initial restore of the replica from the backup.
flag: `--litestream-restore-parallelism`
env: `ZERO_LITESTREAM_RESTORE_PARALLELISM`
default: `48`
### Log Format
Use text for developer-friendly console logging and json for consumption by structured-logging services.
flag: `--log-format`
env: `ZERO_LOG_FORMAT`
default: `"text"`
values: `text`, `json`
### Log IVM Sampling
How often to collect IVM metrics. 1 out of N requests will be sampled where N is this value.
flag: `--log-ivm-sampling`
env: `ZERO_LOG_IVM_SAMPLING`
default: `5000`
### Log Level
Sets the logging level for the application.
flag: `--log-level`
env: `ZERO_LOG_LEVEL`
default: `"info"`
values: `debug`, `info`, `warn`, `error`
### Log Slow Hydrate Threshold
The number of milliseconds a query hydration must take to print a slow warning.
flag: `--log-slow-hydrate-threshold`
env: `ZERO_LOG_SLOW_HYDRATE_THRESHOLD`
default: `100`
### Log Slow Row Threshold
The number of ms a row must take to fetch from table-source before it is considered slow.
flag: `--log-slow-row-threshold`
env: `ZERO_LOG_SLOW_ROW_THRESHOLD`
default: `2`
### Log Trace Collector
The URL of the trace collector to which to send trace data. Traces are sent over http. Port defaults to 4318 for most collectors.
flag: `--log-trace-collector`
env: `ZERO_LOG_TRACE_COLLECTOR`
required: `false`
### Number of Sync Workers
The number of processes to use for view syncing. Leave this unset to use the maximum available parallelism. If set to 0, the server runs without sync workers, which is the configuration for running the replication-manager.
flag: `--num-sync-workers`
env: `ZERO_NUM_SYNC_WORKERS`
required: `false`
### Per User Mutation Limit Max
The maximum mutations per user within the specified windowMs.
flag: `--per-user-mutation-limit-max`
env: `ZERO_PER_USER_MUTATION_LIMIT_MAX`
required: `false`
### Per User Mutation Limit Window (ms)
The sliding window over which the perUserMutationLimitMax is enforced.
flag: `--per-user-mutation-limit-window-ms`
env: `ZERO_PER_USER_MUTATION_LIMIT_WINDOW_MS`
default: `60000`
### Port
The port for sync connections.
flag: `--port`
env: `ZERO_PORT`
default: `4848`
### Push URL
The URL of the API server to which zero-cache will push mutations. Required if you use [custom mutators](/docs/custom-mutators).
flag: `--push-url`
env: `ZERO_PUSH_URL`
required: `false`
### Query Hydration Stats
Track and log the number of rows considered by each query in the system. This is useful for debugging and performance tuning.
flag: `--query-hydration-stats`
env: `ZERO_QUERY_HYDRATION_STATS`
required: `false`
### Replica Vacuum Interval Hours
Performs a VACUUM at server startup if the specified number of hours has elapsed since the last VACUUM (or initial-sync). The VACUUM operation is heavyweight and requires double the size of the db in disk space. If unspecified, VACUUM operations are not performed.
flag: `--replica-vacuum-interval-hours`
env: `ZERO_REPLICA_VACUUM_INTERVAL_HOURS`
required: `false`
### Server Version
The version string output to logs when the server starts up.
flag: `--server-version`
env: `ZERO_SERVER_VERSION`
required: `false`
### Storage DB Temp Dir
Temporary directory for IVM operator storage. Leave unset to use `os.tmpdir()`.
flag: `--storage-db-tmp-dir`
env: `ZERO_STORAGE_DB_TMP_DIR`
required: `false`
### Target Client Row Count
A soft limit on the number of rows Zero will keep on the client. 20k is a good default value for most applications, and we do not recommend exceeding 100k. See [Client Capacity Management](/docs/reading-data#client-capacity-management) for more details.
flag: `--target-client-row-count`
env: `ZERO_TARGET_CLIENT_ROW_COUNT`
default: `20000`
### Task ID
Globally unique identifier for the zero-cache instance. Setting this to a platform specific task identifier can be useful for debugging. If unspecified, zero-cache will attempt to extract the TaskARN if run from within an AWS ECS container, and otherwise use a random string.
flag: `--task-id`
env: `ZERO_TASK_ID`
required: `false`
### Tenants JSON
JSON encoding of per-tenant configs for running the server in multi-tenant mode:
```json
{
  /**
   * Requests to the main application port are dispatched to the first tenant
   * with a matching host and path. If both host and path are specified,
   * both must match for the request to be dispatched to that tenant.
   *
   * Requests can also be sent directly to the ZERO_PORT specified
   * in a tenant's env overrides. In this case, no host or path
   * matching is necessary.
   */
  tenants: {
    id: string;    // value of the "tid" context key in debug logs
    host?: string; // case-insensitive full Host: header match
    path?: string; // first path component, with or without leading slash

    /**
     * Options are inherited from the main application (e.g. args and ENV) by default,
     * and are overridden by values in the tenant's env object.
     */
    env: {
      ZERO_REPLICA_FILE: string
      ZERO_UPSTREAM_DB: string
      ZERO_CVR_DB: string
      ZERO_CHANGE_DB: string
      ...
    };
  }[];
}
```
flag: `--tenants-json`
env: `ZERO_TENANTS_JSON`
required: `false`
### Upstream Max Connections
The maximum number of connections to open to the upstream database for committing mutations. This is divided evenly amongst sync workers. In addition to this number, zero-cache uses one connection for the replication stream.
Note that this number must allow for at least one connection per sync worker, or zero-cache will fail to start. See num-sync-workers.
flag: `--upstream-max-conns`
env: `ZERO_UPSTREAM_MAX_CONNS`
default: `20`
--- release-notes/0.19.mdx ---
## Install
```bash
npm install @rocicorp/zero@0.19
```
## Upgrading
* If you use custom mutators, please see [hello-zero-solid](https://github.com/rocicorp/hello-zero-solid/pull/18/files) for how to update your push endpoint.
* If you use SolidJS, please switch to [`createQuery`](https://github.com/rocicorp/hello-zero-solid/pull/18/files).
* If you are `awaiting z.mutate.foo.bar()`, you should [switch to `await z.mutate.foo.bar().client`](/docs/custom-mutators#waiting-for-mutator-result) to be consistent with `.server`.
* If you were using a 0.19 canary, the `.server` property [returns error by rejection again](/docs/custom-mutators#waiting-for-mutator-result) (like 0.18 did). Sorry about the thrash here.
## Features
- Add a `type` param to `query.run()` so it can wait for server results ([doc](/docs/reading-data#running-queries-once), [bug](https://bugs.rocicorp.dev/issue/3243))
- `await z.mutate.foo.bar()` is now `await z.mutate.foo.bar().client` for consistency with `.server`, old API still works but deprecated ([doc](/docs/custom-mutators#waiting-for-mutator-result))
- Improve speed of litestream restore by about 7x
- Increase replication speed when using JSON by about 25%
- Add options to `analyze-query` to apply permissions and auth data ([doc](/docs/debug/permissions#read-permissions)).
- Add `--lazy-startup` option to `zero-cache` to delay connecting to upstream until first connection ([doc](/docs/zero-cache-config#lazy-startup))
- Add `/statz` endpoint for getting some health statistics from a running Zero instance ([doc](/docs/debug/slow-queries#statz))
## Fixes
- Support passing `Request` to `PushProcessor.process()` ([PR](https://github.com/rocicorp/mono/pull/4214))
- Fix layering in `PushProcessor` to better support custom db implementations (thanks Erik Munson!) ([PR](https://github.com/rocicorp/mono/pull/4251))
- Fix socket disconnects in GCP ([PR](https://github.com/rocicorp/mono/pull/4173))
- Quote Postgres enum types to preserve casing ([report](https://discord.com/channels/830183651022471199/1358217995188437074/1358218))
- `z2s`: Return `undefined` for empty result set when using `query.one()`
- `z2s`: Allow accessing tables in non-public schemas
- `z2s`: Allow `tx.foo.update({bar: undefined})` where `bar` is `optional` to match client behavior
- Fix broken replication when updating a key that is part of a unique (but non-PK) index
- `solid`: Rename `useQuery` to `createQuery` to fit Solid naming conventions (old name deprecated)
- Resync when publications are missing ([PR](https://github.com/rocicorp/mono/pull/4205))
- Fix missing `NOT LIKE` in `query.where()` ([PR](https://github.com/rocicorp/mono/pull/4217))
- Fix timezone shift when writing to `timestamp`/`timestamptz` and server is non-UTC timezone (thanks Tom Jenkinson!) ([PR](https://github.com/rocicorp/mono/pull/4216))
- Bound time spent in incremental updates to 1/2 hydration time
- Fix `ttl` being off by 1000 in some cases 😬 ([PR](https://github.com/rocicorp/mono/pull/4225))
- `z2s`: Relationships nested in a junction relationship were not working correctly ([PR](https://github.com/rocicorp/mono/pull/4221))
- Custom mutators: fix case where, due to multiple tabs, the client could receive multiple responses for the same mutation
- Fix deadlock that could happen when pushing on a closed websocket ([PR](https://github.com/rocicorp/mono/pull/4256))
- Fix incorrect shutdown under heavy CPU load (thanks Erik Munson!) ([PR](https://github.com/rocicorp/mono/pull/4252))
- Fix case where deletes were getting reverted (thanks for reproduction Marc MacLeod!) ([PR](https://github.com/rocicorp/mono/pull/4282))
- `z2s`: Fix incorrect handling of self-joins and `not exists`
- `not(exists())` is not supported on the client
- Re-auth on 401s returned by push endpoint
- Added `push.queryParams` constructor parameter to allow passing query params to the push endpoint ([doc](/docs/custom-mutators#setting-up-the-server))
## Breaking Changes
- The structure of setting up a `PushProcessor` has changed slightly. See [push endpoint setup](/docs/custom-mutators#setting-up-the-server) or the [upgrade guide](#upgrading).
- Not technically a breaking change from 0.18, but if you were using 0.19 canaries, the `.server` property returns error by rejection again (like 0.18 did) ([doc](/docs/custom-mutators#waiting-for-mutator-result)).
--- release-notes/0.18.mdx ---
## Install
```bash
npm install @rocicorp/zero@0.18
```
## Upgrading
To try out custom mutators, see the changes to [hello-zero-solid](https://github.com/rocicorp/hello-zero-solid/pull/17).
## Features
- Custom Mutators! Finally! Define arbitrary write operations in code ([doc](/docs/custom-mutators)).
- Added inspector API for debugging sync, queries, and client storage ([doc](/docs/debug/inspector)).
- Added `analyze-query` tool to debug query performance ([doc](/docs/debug/slow-queries#query-plan)).
- Added `transform-query` tool to debug permissions ([doc](/docs/debug/permissions#read-permissions)).
- Added `ast-to-zql` script to prettify Zero's internal AST format ([doc](/docs/debug/query-asts)).
## Fixes
- Added backpressure to `replication-manager` to protect against Postgres moving faster than we can push to clients ([PR](https://github.com/rocicorp/mono/pull/4089)).
- `@rocicorp/zero/advanced` has been deprecated. `AdvancedQuery` got folded into `Query` and `ZeroAdvancedOptions` got folded into `ZeroOptions` ([PR](https://github.com/rocicorp/mono/pull/4086)).
- Support `ALTER SCHEMA` DDL changes ([PR](https://github.com/rocicorp/mono/pull/4098))
- Allow `replication-manager` to continue running while a new one re-replicates. ([PR](https://github.com/rocicorp/mono/pull/4124)).
- Improve replication performance for some schema changes ([PR](https://github.com/rocicorp/mono/pull/4151)).
- Make the log level of `zero-deploy-permissions` configurable ([PR](https://github.com/rocicorp/mono/pull/4002))
- Bind `exists` to the expression builder ([PR](https://github.com/rocicorp/mono/pull/4010))
- Fix `single output already exists` error ([PR](https://github.com/rocicorp/mono/pull/4020))
- Fix `getBrowserGlobal('window')?.addEventListener not a function` in Expo (thanks `@andrewcoelho`!) ([PR](https://github.com/rocicorp/mono/pull/4037)).
- Fix Vue bindings ref counting bug. Bindings no longer need to pass `RefCountMap` ([PR](https://github.com/rocicorp/mono/pull/4013)).
- Fix CVR ownership takeover race conditions ([PR](https://github.com/rocicorp/mono/pull/4071)).
- Support `REPLICA IDENTITY FULL` in degraded-mode pg providers ([PR](https://github.com/rocicorp/mono/pull/4131)).
- Handle corrupt sqlite db by re-replicating ([PR](https://github.com/rocicorp/mono/pull/4133)).
- Don't send useless pokes to clients that are unchanged ([PR](https://github.com/rocicorp/mono/pull/4149)).
- Add `limit(1)` to queries using a relation that is marked `one()` ([PR](https://github.com/rocicorp/mono/pull/4154)).
- Export `UseQueryOptions`
## Breaking Changes
None.
--- release-notes/0.17.mdx ---
## Install
```bash
npm install @rocicorp/zero@0.17
```
## Upgrading
See the upgrade from [hello-zero](https://github.com/rocicorp/hello-zero/pull/31) or [hello-zero-solid](https://github.com/rocicorp/hello-zero-solid/pull/16) for an example.
## Features
- Queries now take an optional `ttl` argument. This argument _backgrounds_ queries for some time after the app stops using them. Background queries continue syncing so they are instantly ready if the UI re-requests them. The data from background queries is also available to be used by new queries where possible ([doc](/docs/reading-data#query-lifecycle)).
- Structural schema versioning. This is TypeScript, why are we versioning with numbers like cave-people?? We got rid of the `schemaVersion` concept entirely and now determine schema compatibility completely automatically, TS-style ([doc](/docs/zero-schema/#migrations)).
- Permissions now scoped to _"apps"_. You can now have different Zero "apps" talking to the same upstream database. Each app gets completely separate configuration and permissions. This should also enable previewing `zero-cache` (each preview would be its own app). Apps replace the existing "shard" concept ([doc](/docs/zero-cache-config#app-id)).
- Initial replication is over 5x faster, up to about 50MB/second or 15k rows/second in our tests.
- Added warnings for slow hydration in both client and server ([doc](/docs/reading-data#thinking-in-queries)).
- `auto-reset` is now enabled by default for databases that don't support event triggers ([doc](/docs/connecting-to-postgres#schema-changes)).
- Default `cvr` and `change` databases to `upstream`, so that you don't have to specify them in the common case where they are the same as upstream.
- This docs site now has search!
## Fixes
- Certain kinds of many:many joins were causing `node already exists` assertions
- Certain kinds of `or` queries were causing consistency issues
- Support `replica identity full` for PostgreSQL tables
- We now print a stack trace during close at `debug` level to enable debugging errors where Zero is accessed after close.
- We now print a warning when `IndexedDB` is missing rather than throwing. This makes it a little easier to use Zero in SSR setups.
- We now reset `zero-cache` implicitly in a few edge cases rather than halting replication.
- Fixed a deadlock in `change-streamer`.
## Breaking Changes
- `query.run()` now returns its result via promise. This is required for compatibility with upcoming custom mutators, but also will allow us to wait for server results in the future (though that (still 😢) doesn't exist yet).
--- release-notes/0.16.mdx ---
## Install
```bash
npm install @rocicorp/zero@0.16
```
## Upgrading
See the upgrade from [hello-zero](https://github.com/rocicorp/hello-zero/commit/156e829eef91ff8b92e189c800ec6eba7213c383) or [hello-zero-solid](https://github.com/rocicorp/hello-zero-solid/commit/4f8d2b055b32efa9434df6f514007dce5f1d2c0b) for an example.
## Features
- Documented how to use lambdas to deploy permissions in SST, rather than needing CI/CD to have access to Postgres. ([doc](/docs/deployment#initialize-sst) – search for "`permissionsDeployer`").
- Added simple debugging logs for read and write permissions ([doc](/docs/debug/permissions)).
## Fixes
- Improve performance of initial sync about 2x ([PR 1](https://github.com/rocicorp/mono/pull/3836), [PR 2](https://github.com/rocicorp/mono/pull/3835)).
- `IN` should allow `readonly` array arguments ([Report](https://discord.com/channels/830183651022471199/1288232858795769917/1340464538704937082), [PR](https://github.com/rocicorp/mono/pull/3819)).
- Export `ANYONE_CAN_DO_ANYTHING` ([Report](https://discord.com/channels/830183651022471199/1340088459544756276/1340674831347355711)).
- Fix false-positive in schema change detection ([Report](https://discord.com/channels/830183651022471199/1288232858795769917/1341135548944744448), [PR](https://github.com/rocicorp/mono/pull/3828)).
- Fix writes of numeric types ([Report](https://discord.com/channels/830183651022471199/1288232858795769917/1341076949749071955), [PR](https://github.com/rocicorp/mono/pull/3750))
- Fix bug where litestream was creating way too many files in s3 ([PR](https://github.com/rocicorp/mono/pull/3839))
- Fix memory leak in change-streamer noticeable under high write load ([PR](https://github.com/rocicorp/mono/pull/3859))
- Fix `query already registered` error ([PR](https://github.com/rocicorp/mono/pull/3840))
- Correctly handle optional booleans ([PR](https://github.com/rocicorp/mono/pull/3863))
- Ignore indexes with unpublished columns ([PR](https://github.com/rocicorp/mono/pull/3862))
## Breaking Changes
None.
--- release-notes/0.15.mdx ---
## Install
```bash
npm install @rocicorp/zero@0.15
```
## Upgrade Guide
This release changes the way that permissions are sent to the server. Before, permissions were sent to the server by setting the `ZERO_SCHEMA_JSON` or `ZERO_SCHEMA_FILE` environment variables, which include the permissions.
In 0.15, these variables go away and are replaced by a new command: `npx zero-deploy-permissions`. This command writes the permissions to a new table in the upstream database. This design allows live permission updates, without restarting the server. It also solves problems with max env var size that users were seeing.
This release also flips the default permission from `allow` to `deny` for all rules.
To upgrade your app:
1. See the changes to [hello-zero](https://github.com/rocicorp/hello-zero/pull/26) or [hello-zero-solid](https://github.com/rocicorp/hello-zero-solid/pull/14) for how to update your permissions.
2. Remove the `ZERO_SCHEMA_JSON` and `ZERO_SCHEMA_FILE` environment variables from your setup. They aren't used anymore.
3. Use [`npx zero-deploy-permissions`](/docs/permissions#permission-deployment) to deploy permissions when necessary. You can hook this up to your CI to automate it. See the [zbugs implementation](https://github.com/rocicorp/mono/blob/86ab73122a0532e4ec516badc1d8fb82b3465b49/prod/sst/sst.config.ts#L178) as an example.
## Features
- Live-updating permissions ([docs](/docs/permissions#permission-deployment)).
- Permissions now default to **deny** rather than **allow** ([docs](/docs/permissions#access-is-denied-by-default)).
## Fixes
- Multiple `whereExists` in same query not working ([PR](https://github.com/rocicorp/mono/pull/3746))
- Allow overlapped mutators ([bug](https://bugs.rocicorp.dev/issue/3529))
- "Immutable type too deep" error ([PR](https://github.com/rocicorp/mono/pull/3758))
- Log server version at startup ([PR](https://github.com/rocicorp/mono/pull/3737))
- Eliminate quadratic CVR writes ([PR](https://github.com/rocicorp/mono/pull/3736))
- Handle `numeric` in the replication stream ([PR](https://github.com/rocicorp/mono/pull/3750))
- Make the auto-reset required error more prominent ([PR](https://github.com/rocicorp/mono/pull/3794))
- Add `"type":"module"` recommendation when schema load fails ([PR](https://github.com/rocicorp/mono/pull/3797))
- Throw error if multiple auth options set ([PR](https://github.com/rocicorp/mono/pull/3807))
- Handle NULL characters in JSON columns ([PR](https://github.com/rocicorp/mono/pull/3810))
## Breaking Changes
- Making permissions deny by default breaks existing apps. To fix add `ANYONE_CAN` or other appropriate permissions for your tables. See [docs](/docs/permissions#access-is-denied-by-default).
- The `ZERO_SCHEMA_JSON` and `ZERO_SCHEMA_FILE` environment variables are no longer used. Remove them from your setup and use [`npx zero-deploy-permissions`](/docs/permissions#permission-deployment) instead.
--- release-notes/0.14.mdx ---
## Install
```bash
npm install @rocicorp/zero@0.14
```
## Features
- Use `from()` to map column or tables to a different name ([docs](../zero-schema#name-mapping)).
- Sync from multiple Postgres schemas ([docs](../zero-schema#multiple-schemas))
## Fixes
- `useQuery` not working when `server` unset ([bug](https://bugs.rocicorp.dev/issue/3497))
- Error: "single output already exists" in hello-zero-solid ([bug](https://bugs.rocicorp.dev/issue/3488))
- `Row` helper doesn't work with query having `one()` ([bug](https://bugs.rocicorp.dev/issue/3503))
- Partitioned Postgres tables not replicating
## Breaking Changes
None.
--- release-notes/0.13.mdx ---
## Install
```bash
npm install @rocicorp/zero@0.13
```
## Features
- Multinode deployment for horizontal scalability and zero-downtime deploys ([docs](/docs/deployment#architecture)).
- SST Deployment Guide ([docs](/docs/deployment#guide-multi-node-on-sstaws)).
- Plain AWS Deployment Guide ([docs](/docs/deployment#guide-multi-node-on-raw-aws)).
- Various exports for external libraries
- Remove build hash from docker version for consistency with npm ([discussion](https://discord.com/channels/830183651022471199/1325165395015110688/1333906735060226161))
## Fixes
- Move heartbeat monitoring to separate path, not port
- Type instantiation is excessively deep and possibly infinite ([bug](https://bugs.rocicorp.dev/issue/3477)).
- 20x improvement to `whereExists` performance ([discussion](https://github.com/rocicorp/mono/pull/3629#issuecomment-2621976119))
## Breaking Changes
- Removing the hash from the version is a breaking change if you had scripts relying on that.
- Moving the heartbeat monitor to a path is a breaking change for deployments that were using that.
--- release-notes/0.12.mdx ---
## Install
```bash
npm install @rocicorp/zero@0.12
```
## Features
- Schemas now support circular relationships ([docs](/docs/zero-schema#circular-relationships)).
- Added `one()` and `many()` schema helpers to default relationship type ([docs](/docs/zero-schema#table-schemas)).
- Support for syncing tables without a primary key as long as there is a unique index. This enables Prisma's [implicit many-to-many relations](https://www.prisma.io/docs/orm/prisma-schema/data-model/relations/many-to-many-relations#implicit-many-to-many-relations) ([docs](/docs/postgres-support#primary-keys)).
- Zero has been confirmed to work with Aurora and Google Cloud SQL ([docs](/docs/connecting-to-postgres))
- Client bundle size reduced from 55kb to 47kb (-15%).
## Fixes
- Windows: `zero-cache` was spawning empty terminals and leaving listeners connected on exit.
- Incorrect warning in `zero-cache` about enums not being supported.
- Failure to handle the primary key of Postgres tables changing.
- Incorrect results when `whereExists()` is before `where()` in query ([bug](https://bugs.rocicorp.dev/issue/3417)).
- Error: _The inferred type of '...' cannot be named without a reference to ..._.
- Error: _insufficient upstream connections_.
- Several causes of flicker in React.
- Incorrect values for `ResultType` when unloading and loading a query quickly ([bug](https://bugs.rocicorp.dev/issue/3456)).
- Error: _Postgres is missing the column '...' but that column was part of a row_.
- Pointless initial empty render in React when data is already available in memory.
- Error: _Expected string at ... Got array_ during auth.
- `where()` incorrectly allows comparing to `null` with the `=` operator ([bug](https://bugs.rocicorp.dev/issue/3426)).
- SolidJS: Only call `setState` once per transaction.
## Breaking Changes
- The schema definition syntax has changed to support circular relationships. See the changes to [`hello-zero`](https://github.com/rocicorp/hello-zero/commit/70cd15d5631436c058518d154acb3495b718970e) and [`hello-zero-solid`](https://github.com/rocicorp/hello-zero-solid/commit/c8932a7ff06cbdc02759b3b48592ed61055d4cd3) for upgrade examples.
--- release-notes/0.11.mdx ---
## Install
```bash
npm install @rocicorp/zero@0.11
```
## Features
- Windows should work a lot better now. Thank you very much to [aexylus](https://aexylus.com/) and [Sergio Leon](https://www.cbnsndwch.io/) for the testing and contributions here.
- Support nested property access in JWT auth tokens ([docs](/docs/permissions#rules)).
- Make initial sync configurable ([docs](/docs/zero-cache-config#initial-sync-table-copy-workers)).
- Add query result type to SolidJS ([docs](/docs/reading-data#completeness))
- Docker image now contains native amd64 and arm64 binaries.
- Add `storageKey` constructor parameter to enable multiple `Zero` instances for same `userID`.
## Fixes
Many, many fixes, including:
- Fix downstream replication of primitive values
- Fix replication of `TRUNCATE` messages
- Fix large storage use for idle pg instances
- Add runtime sanity checks for when a table is referenced but not synced
- Fix `zero-cache-dev` for multitenant
## Breaking Changes
- The addition of result types to SolidJS is a breaking API change on SolidJS only. See the changes to [`hello-zero-solid`](https://github.com/rocicorp/hello-zero-solid/commit/7c6c3a47479f037f8323b102013244881c74fe9e) for upgrade example.
--- release-notes/0.10.mdx ---
## Install
```bash
npm install @rocicorp/zero@0.10
```
## Features
- None.
## Fixes
- Remove top-level await from `zero-client`.
- Various logging improvements.
- Don't throw error when `WebSocket` unavailable on server.
- Support building on Windows (running on Windows still doesn't work)
## Breaking Changes
- None.
--- release-notes/0.9.mdx ---
## Install
```bash
npm install @rocicorp/zero@0.9
```
See the changes to [hello-zero](https://github.com/rocicorp/hello-zero/pull/8) or [hello-zero-solid](https://github.com/rocicorp/hello-zero-solid/pull/5) for example updates.
## Features
- **JWK Support**. For auth, you can now specify a JWK containing a public key, or a JWKS url to support autodiscovery of keys. ([docs](../auth/))
- **UUID column**. Zero now supports the `uuid` Postgres column type. ([docs](../postgres-support#column-types))
## Fixes
- **Readonly Values**. Type of values returned from Zero queries are marked `readonly`. The system always considered them readonly, but now the types reflect that. ([docs](../reading-data/))
## Breaking Changes
- The `zero-cache` config `ZERO_JWT_SECRET` has been renamed to `ZERO_AUTH_SECRET` for consistency with the new JWK-related keys. If you were using the old name, you'll need to update your `.env` file.
- All values returned by Zero are now `readonly`. You'll probably have to add this TS modifier in various places. If you find yourself casting away `readonly`, you should probably be cloning the value instead.
--- release-notes/0.8.mdx ---
## Install
```bash
npm install @rocicorp/zero@0.8
```
See the changes to [hello-zero](https://github.com/rocicorp/hello-zero/pull/8) or [hello-zero-solid](https://github.com/rocicorp/hello-zero-solid/pull/5) for example updates.
## Features
- **Schema Autobuild**. There's now a `zero-cache-dev` script that automatically rebuilds the schema and restarts `zero-cache` on changes to `schema.ts`. ([docs](../zero-schema#building-the-zero-schema))
- **Result Type.** You can now tell whether a query is complete or partial. ([docs](/docs/reading-data#completeness))
- **Enums**. Enums are now supported in Postgres schemas and on client. ([docs](../postgres-support#column-types))
- **Custom Types**. You can define custom JSON types in your schema. ([docs](../zero-schema#custom-json-types))
- **OTEL Tracing.** Initial tracing support. ([docs](/docs/zero-cache-config#log-trace-collector))
- **timestamptz.** Add support for the `timestamptz` Postgres column type. ([docs](../postgres-support#column-types))
- **SSLMode**. You can disable TLS when `zero-cache` connects to DB with `sslmode=disable`. ([docs](../connecting-to-postgres#ssl-mode))
- **Permission Helpers**. `ANYONE_CAN` and `NOBODY_CAN` helpers were added to make these cases more readable. ([docs](../permissions#permissions-denied-by-default))
- **Multitenant Support**. A single `zero-cache` can now front separate Postgres databases. This is useful for customers that have one "dev" database in production per-developer. ([docs](../zero-cache-config#tenants-json))
## Fixes
- **Crash with JSON Columns**. Fixed a crash when a JSON column was used in a Zero app with write permissions ([bug](https://bugs.rocicorp.dev/issue/3215))
- **Better Connection Error Reporting**. Some connection errors would cause `zero-cache` to exit silently. Now they are returned to client and logged.
## Breaking Changes
- `useQuery` in React now returns a 2-tuple of `[rows, result]` where `result` is an object with a `type` field.
- `postProposedMutation` in write permissions for `update` renamed to `postMutation` for consistency.
- `TableSchemaToRow` renamed to `Row` to not be so silly long.
--- release-notes/0.7.mdx ---
## Install
```bash
npm install @rocicorp/zero@0.7
```
## Features
- **Read permissions.** You can now control read access to data using ZQL ([docs](../permissions#select-permissions)).
- **Deployment.** We now have a single-node Docker container ([docs](../deployment/)). Future work will add multinode support.
- **Compound FKs.** Zero already supported compound _primary_ keys, but now it also supports compound _foreign_ keys ([docs](../zero-schema#relationships-and-compound-keys)).
- **Schema DX**:
  - Column types can use bare strings now if `optional` is not needed ([example](https://github.com/rocicorp/mono/commit/212379241e27e717f1237946f3384127d06661c3#diff-01e627d4886ffc106a9f60c5ea65f35b3868ad4de898cecf7ae60329b11c22e7R13)).
  - PK can be a single string in the common case where it’s non-compound ([example](https://github.com/rocicorp/mono/commit/212379241e27e717f1237946f3384127d06661c3#diff-01e627d4886ffc106a9f60c5ea65f35b3868ad4de898cecf7ae60329b11c22e7R19)).
## Breaking Changes
- Several changes to `schema.ts`. See [update](https://github.com/rocicorp/hello-zero/commits/main/) to `hello-zero` for overview. Details:
  - `defineAuthorization` was renamed to `definePermissions` to avoid confusion with _authentication_.
  - The way that many:many relationships are defined has changed to be more general and easy to remember. See example.
  - The signature of `definePermissions` and the related rule functions have changed:
    - Rules now return an _expression_ instead of a full query. This was required to make read permissions work, and we did it for write permissions for consistency (see example).
    - The `update` policy now has two child policies: `preMutation` and `postMutation`. The rules we used to have were `preMutation`. They run before a change and can be used to validate that a user has permission to change a row. The `postMutation` rules run after and can be used to limit the changes a user is allowed to make.
  - The `schema.ts` file should export an object having two fields: `schema` and `permissions`.
- The way that `schema.ts` is consumed has also changed. Rather than `zero-cache` directly reading the TypeScript source, we compile it to JSON and read that.
  - `ZERO_SCHEMA_FILE` should now point to a JSON file, not `.ts`. It defaults to `./zero-schema.json`, which we’ve found to be a useful default, so you’ll probably just remove this key from your `.env` entirely.
  - Use `npx zero-build-schema` to generate the JSON. You must currently do this manually each time you change the schema; we will automate it soon.
We compile the schema to JSON so that we can use it on the server without
needing a TS toolchain there. Also so that we can run a SaaS in the future
without needing to run user code.
## zbugs
- Comments [now have permalinks](https://bugs.rocicorp.dev/issue/3067#comment-qt7YPQxXsBMBqcOkkO1pY). Implementing permalinks in a synced SPA [is fun](https://github.com/rocicorp/mono/commit/384d0955a3998d68d293985b0de89c5302076ec5)!
- Private issues. Zbugs now supports private (to team only) issues. I wonder what’s in them … 👀.
## Docs
- [The docs have moved](https://zero.rocicorp.dev/). Please don’t use the Notion docs anymore; they won’t be updated.
--- release-notes/0.6.mdx ---
## Install
```bash
npm install @rocicorp/zero@0.6
```
## Upgrade Guide
This release is a bit harder to upgrade to than previous alphas. For a step-by-step guide, please refer to the commits that upgrade the React and Solid quickstart apps:
- [Upgrading hello-zero from Zero 0.5 to 0.6](https://github.com/rocicorp/hello-zero/compare/ee837552be8419fbcbe4c5887609c89a7b1c4e07...8a0d29149bac0ab10aa25de3ebdb25ab70bc0d96)
- [Upgrading hello-zero-solid from Zero 0.5 to 0.6](https://github.com/rocicorp/hello-zero-solid/compare/79405b2da06b059a184abec69fdc20f071c58c4d...f4fed1ab7555bdd3bc131536863d60e799de571a)
## Breaking Changes
- Totally new configuration system.
  - `zero.config.ts` is no more – config is now via env vars ([documentation](/docs/zero-cache-config)).
  - Permissions rules moved into schema ([documentation](/docs/auth#permissions)).
- Renamed CRUD mutators to be consistent with SQL naming ([bug](https://bugs.rocicorp.dev/issue/3144), [documentation](/docs/writing-data)).
  - `z.mutate.<table>.create -> insert`
  - `z.mutate.<table>.put -> upsert`
- Removed `select` from ZQL. It wasn’t doing anything ([documentation](/docs/reading-data)).
- Moved batch mutation to its own `mutateBatch` method. Before, the `mutate` field also doubled as a method. This made intellisense hard to understand, since `z.mutate` had all the tables as fields but also all the fields of a function.
## Features
- Relationship filters. Queries can now include `whereExists` ([bug](https://bugs.rocicorp.dev/issue/3039), [documentation](/docs/reading-data#relationship-filters)).
- Reworked syntax for compound `where` filters, including ergonomically building `or` expressions with dynamic number of clauses ([bug](https://bugs.rocicorp.dev/issue/3104), [documentation](/docs/reading-data#compound-filters)).
- Support using Postgres databases without superuser access for smaller apps ([documentation](/docs/connecting-to-postgres)).
- Support for running `Zero` client under Cloudflare Durable Objects ([documentation](/docs/samples#hello-zero-do)).
- Reworked support for `null` / `undefined` to properly support optional fields ([bug](https://bugs.rocicorp.dev/issue/3114), [documentation](/docs/zero-schema#optional-columns)).
- Added `IS` / `IS NOT` to ZQL to support checking for null ([bug](https://bugs.rocicorp.dev/issue/3028), [documentation](/docs/reading-data#comparing-to-null)).
- Improved intellisense for mutators.
- Added `--port` flag and `ZERO_PORT` environment variable ([bug](https://bugs.rocicorp.dev/issue/3031), [documentation](/docs/zero-cache-config)).
- Default zero-cache’s max connections more conservatively so that it fits within even common small Postgres configurations.
- `zero-cache` now accepts requests with any base path, not just `/api`. The `server` parameter to the `Zero` client constructor can now be a host (`https://myapp-myteam.zero.ms`) or a host with a single path component (`https://myapp-myteam.zero.ms/zero`). These two changes together allow hosting `zero-cache` on same domain with an app that already uses the `/api` prefix ([bug](https://bugs.rocicorp.dev/issue/3115)).
- Allow Postgres columns with default values, but don’t sync them ([documentation](/docs/postgres-support#column-defaults)).
- The `npx zero-sqlite` utility now accepts all the same flags and arguments that `sqlite3` does ([documentation](/docs/debugging/replication)).
## zbugs
- Added tooltip describing who submitted which emoji reactions
- Updated implementation of label, assignee, and owner filters to use relationship filters
- Updated text filter implementation to use `or` to search description and comments too
## Docs
- Added new [ZQL reference](/docs/reading-data)
- Added new [mutators reference](/docs/writing-data)
- Added new [config reference](/docs/zero-cache-config)
--- release-notes/0.5.mdx ---
## Install
```bash
npm install @rocicorp/zero@0.5
```
## Breaking changes
- `createTableSchema` and `createSchema` moved to `@rocicorp/zero/schema` subpackage. This is in preparation for moving authorization into the schema file.
- `SchemaToRow` helper type was renamed `TableSchemaToRow` and moved into `@rocicorp/zero/schema`.
Basically:
```diff
- import { createSchema, createTableSchema, SchemaToRow } from "@rocicorp/zero";
+ import { createSchema, createTableSchema, TableSchemaToRow } from "@rocicorp/zero/schema";
```
## Features
- Added support for JSON columns in Postgres ([documentation](/docs/postgres-support)).
- The Zero package now includes `zero-sqlite3`, which can be used to explore our sqlite files ([documentation](/docs/recipes)).
## Fixes
- We were not correctly replicating the `char(n)` type, despite documenting that we were.
## Docs
_nothing notable_
## zbugs
_nothing notable_
--- release-notes/0.4.mdx ---
## Install
```bash
npm install @rocicorp/zero@0.4
```
## Breaking changes
The `or` changes modified the client/server protocol. You’ll need to restart zero-cache and clear browser data after updating.
## Features
- Added `or`, `and`, and `not` to ZQL ([documentation](/docs/reading-data)).
- Added `query.run()` method ([documentation](/docs/reading-data#running-queries-once)).
## Fixes
- Use `batch()` method in zero-solid to improve performance when multiple updates happen in same frame. To take advantage of this you must use the `createZero` helper from `@rocicorp/zero/solid`, instead of instantiating Zero directly. See the solid [sample app](https://github.com/rocicorp/hello-zero-solid/blob/main/src/main.tsx#L16).
- Postgres tables whose names are reserved words in SQLite (but not in Postgres) caused a crash during replication.
- `LIKE` was not matching correctly in the case of multiline subjects.
- Upstream database and zero database can now be the same Postgres db (don’t need separate ports).
## Docs
_nothing notable_
## zbugs
- Use `or` to run text search over both titles and bodies
- prevent j/k in emoji
- preload emojis
--- release-notes/0.3.mdx ---
## Install
```bash
npm install @rocicorp/zero@0.3
```
## Breaking changes
- zero.config file is now TypeScript, not JSON. See: https://github.com/rocicorp/hello-zero/blob/07c08b1f86b526a96e281ee65af672f52a59bcee/zero.config.ts.
## Features
- **Schema Migrations:** Zero now has first-class support for schema migration ([documentation](/docs/zero-schema/#migrations)).
- **Write Permissions:** First-class write permissions based on ZQL ([documentation](/docs/auth)).
- **Date/Time related types:** Zero now natively supports the TIMESTAMP and DATE Postgres types ([sample app](https://github.com/rocicorp/hello-zero/blob/main/src/schema.ts), [documentation](/docs/postgres-support)).
- **SolidJS:** We now have first-class support for SolidJS ([documentation](/docs/solidjs)).
- **Intellisense for Schema Definition:** Introduce `createSchema` and `createTableSchema` helper functions to enable intellisense when defining schemas. See [Sample App](https://github.com/rocicorp/hello-zero/blob/main/src/schema.ts#L10).
- **`escapeLike()` :** Add helper to properly escape strings for use in `LIKE` filters. See [Sample App](https://github.com/rocicorp/hello-zero/blob/main/src/App.tsx#L37).
- **New QuickStart App:** Entirely rewrote the [setup/sample flow](/docs/quickstart) to (a) make it much faster to get started playing with Zero, and (b) demonstrate more features.
## Fixes
- The `@rocicorp/zero` package now downloads a prebuilt sqlite instead of compiling it locally. This significantly speeds up install.
- Support `rds.force_ssl=1` RDS configuration.
- Fixed bug where sibling subqueries could be lost on edit changes.
- Fixes to error handling to ensure zero-cache prints errors when crashing in multiprocess mode.
- If zero-cache hears from a client with an unknown CVR/cookie, zero-cache forces that client to reset itself and reload automatically. Useful during development when server-state is frequently getting cleared.
## Docs
- Started work to make [real docs](/docs/introduction). Not quite done yet.
## zbugs
https://bugs.rocicorp.dev/ (pw: zql)
- Improve startup perf: ~3s → ~1.5s Hawaii ↔ US East. More work to do here but good progress.
- Responsive design for mobile.
- “Short IDs”: Bugs now have a short numeric ID, not a random hash. See [Demo Video](https://discord.com/channels/830183651022471199/1288232858795769917/1298114323272568852).
- First-class label picker.
- Unread indicators.
- Finish j/k support for paging through issues. It’s now “search-aware”: it pages through issues in the order of the search results you clicked through to the detail page from.
- Text search (slash to activate — needs better discoverability)
- Emojis on issues and comments
- Sort controls on list view
- remove fps meter temporarily
- numerous other UI polish
--- release-notes/0.2.mdx ---
## Breaking changes
- None
## Features
- “Skip mode”: zero-cache now skips columns with unsupported datatypes. A warning is printed when this happens.
This makes it easy to use zero-cache with existing schemas that have columns
Zero can’t handle. You can [pair this with Postgres
triggers](/docs/postgres-support#column-types) to easily translate unsupported
types into something Zero can sync.
- Zero now supports compound primary keys. You no longer need to include an extraneous `id` column on the junction tables.
## Fixes
- Change the way Zero detects unsupported environments so that it works in One (and any other supported env). Before, Zero looked for WebSocket and indexedDB early on, but indexedDB won’t be present on RN, since SQLite will be used. Now Zero looks for indexedDB only at the point of use.
- Require Node v20 explicitly in package.json to prevent accidentally compiling better-sqlite3 with different Node version than running with.
- Ensure error messages early in startup get printed out before shutting down in multiprocess mode.
## Docs
- [Factored out the sample app](https://github.com/rocicorp/my-first-zapp) from the docs into its own GitHub repo so you can just download it and poke around if you prefer that.
## Source tree fixes
- Run zero-cache from source. You no longer have to build `zero` before running `zbugs`; it picks up the changes automatically.
## zbugs
- Numerous polish/styling fixes
- Change default to ‘open’ bugs
- Add ‘assignee’ field
--- release-notes/0.1.mdx ---
## Breaking changes
- The name of some config keys in `zero.config.json` changed:
- `upstreamUri` → `upstreamDBConnStr`
- `cvrDbUri` → `cvrDBConnStr`
- `changeDbUri` → `changeDBConnStr`
- `replicaDbFile` → `replicaDBFile`
- Changed default port of `zero-cache` to `4848`. So your app startup should look like `VITE_PUBLIC_SERVER="http://localhost:4848"`.
## Features
- Print a warning to js console when Zero constructor `server` param is `null` or `undefined`
- zero-cache should now correctly bind to both ipv4 and ipv6 loopback addresses. This should fix the issue where using `localhost` to connect to zero-cache on some systems did not work.
- Check for presence of `WebSocket` early in startup of `Zero`. Print a clear error to catch people accidentally running Zero under SSR.
- Fix annoying error in js console in React strict mode from constructing and closing Replicache in quick succession.
## Source tree fixes
These only apply if you were working in the Rocicorp monorepo.
- Fixed issue where zbugs didn’t rebuild when the zero dependency changed – zbugs generally builds normally again
- The zero binary has the right permissions bit so you don’t have to chmod u+x after build
- Remove overloaded name `snapshot` in use-query.tsx (thanks Scott 🙃)
--- custom-mutators.mdx ---
_Custom Mutators_ are a new way to write data in Zero that is much more powerful than the original ["CRUD" mutator API](./writing-data).
Instead of having only the few built-in `insert`/`update`/`delete` write operations for each table, custom mutators allow you to _create your own write operations_ using arbitrary code. This makes it possible to do things that are impossible or awkward with other sync engines.
For example, you can create custom mutators that:
- Perform arbitrary server-side validation
- Enforce fine-grained permissions
- Send email notifications
- Query LLMs
- Use Yjs for collaborative editing
- … and much, _much_ more – custom mutators are just code, and they can do anything code can do!
Despite their increased power, custom mutators still participate fully in sync. They execute instantly on the local device, immediately updating all active queries. They are then synced in the background to the server and to other clients.
We're still refining the design of custom mutators. During this phase, the old
CRUD mutators will continue to work. But we do want to deprecate CRUD
mutators, and eventually remove them. So please try out custom mutators and
[let us know](https://discord.rocicorp.dev/) how they work for you, and what
improvements you need before the cutover.
## Understanding Custom Mutators
### Architecture
Custom mutators introduce a new _server_ component to the Zero architecture.

This server is implemented by you, the developer. It's typically just your existing backend, where you already put auth or other server-side functionality.
The server can be a serverless function, a microservice, or a full stateful server. The only real requirement is that it expose a special _push endpoint_ that `zero-cache` can call to process mutations. This endpoint implements the [push protocol](#custom-push-implementation) and contains your custom logic for each mutation.
Zero provides utilities in `@rocicorp/zero` that make it really easy to implement this endpoint in TypeScript. But you can also implement it yourself if you want. As long as your endpoint fulfills the push protocol, `zero-cache` doesn't care. You can even write it in a different programming language.
### What Even is a Mutator?
Zero's custom mutators are based on [_server reconciliation_](https://www.gabrielgambetta.com/client-side-prediction-server-reconciliation.html) – a technique for robust sync that has been used by the video game industry for decades.
Our previous sync engine, [Replicache](https://replicache.dev/), also used
server reconciliation. The ability to implement arbitrary mutators was one of
Replicache's most popular features. Custom mutators bring this same power to
Zero, but with a much better developer experience.
A custom mutator is just a function that runs within a database transaction, and which can read and write to the database. Here's an example of a very simple custom mutator written in TypeScript:
```ts
async function updateIssue(
  tx: Transaction,
  {id, title}: {id: string; title: string},
) {
  // Validate title length.
  if (title.length > 100) {
    throw new Error(`Title is too long`);
  }

  await tx.mutate.issue.update({id, title});
}
```
Each custom mutator gets **two implementations**: one on the client and one on the server.
The client implementation must be written in TypeScript against the Zero `Transaction` interface, using [ZQL](#read-data-on-the-client) for reads and a [CRUD-style API](#write-data-on-the-client) for writes.
The server implementation runs on your server, in your push endpoint, against your database. In principle, it can be written in any language and use any data access library. For example you could have the following Go-based server implementation of the same mutator:
```go
func updateIssueOnServer(tx *sql.Tx, id string, title string) error {
	// Validate title length.
	if len(title) > 100 {
		return errors.New("Title is too long")
	}

	_, err := tx.Exec("UPDATE issue SET title = $1 WHERE id = $2", title, id)
	return err
}
}
```
In practice however, most Zero apps use TypeScript on the server. For these users we provide a handy `ServerTransaction` that implements ZQL against Postgres, so that you can share code between client and server mutators naturally.
So on a TypeScript server, that server mutator can just be:
```ts
async function updateIssueOnServer(
  tx: ServerTransaction,
  {id, title}: {id: string; title: string},
) {
  // Delegate to client mutator.
  // The `ServerTransaction` here has a different implementation
  // that runs the same ZQL queries against Postgres!
  await updateIssue(tx, {id, title});
}
```
Even in TypeScript, you can do as little or as much code sharing as you like. In your server mutator, you can [use raw SQL](#dropping-down-to-raw-sql), any data access libraries you prefer, or add as much extra server-specific logic as you need.
Reusing ZQL on the server is a handy – and we expect frequently used – option, but not a requirement.
### Server Authority
You may be wondering what happens if the client and server mutator implementations don't match.
Zero is an example of a _server-authoritative_ sync engine. This means that the server mutator always takes precedence over the client mutator. The result from the client mutator is considered _speculative_ and is discarded as soon as the result from the server mutator is known. This is a very useful feature: it enables server-side validation, permissions, and other server-specific logic.
Imagine that you want to use an LLM to detect whether an issue update is spammy, rather than a simple length check. You can just add that to the server mutator:
```ts
async function updateIssueOnServer(
  tx: ServerTransaction,
  {id, title}: {id: string; title: string},
) {
  const response = await llamaSession.prompt(
    `Is this title update likely spam?\n\n${title}\n\nResponse "yes" or "no"`,
  );
  if (/yes/i.test(response)) {
    throw new Error(`Title is likely spam`);
  }

  // delegate rest of implementation to client mutator
  await updateIssue(tx, {id, title});
}
```
If the server detects that the mutation is spammy, the client will see the error message and the mutation will be rolled back. If the server mutator succeeds, the client mutator will be rolled back and the server result will be applied.
### Life of a Mutation
Now that we understand what client and server mutations are, let's walk through how they work together with Zero to sync changes from a source client to the server and then to other clients:
1. When you call a custom mutator on the client, Zero runs your client-side mutator immediately on the local device, updating all active queries instantly.
2. In the background, Zero then sends a _mutation_ (a record of the mutator having run with certain arguments) to your server's push endpoint.
3. Your push endpoint runs the [push protocol](#custom-push-implementation), executing the server-side mutator in a transaction against your database and recording the fact that the mutation ran. You can use our `PushProcessor` class to handle this for you, or implement the protocol yourself.
4. The changes to the database are replicated to `zero-cache` as normal.
5. `zero-cache` calculates the updates to active queries and sends rows that have changed to each client. It also sends information about the mutations that have been applied to the database.
6. Clients receive row updates and apply them to their local cache. Any pending mutations which have been applied to the server have their local effects rolled back.
7. Client-side queries are updated and the user sees the changes.
## Using Custom Mutators
### Registering Client Mutators
By convention, the client mutators are defined with a function called `createMutators` in a file called `mutators.ts`:
```ts
// mutators.ts
import {CustomMutatorDefs} from '@rocicorp/zero';
import {schema} from './schema';

export function createMutators() {
  return {
    issue: {
      update: async (tx, {id, title}: {id: string; title: string}) => {
        // Validate title length.
        if (title.length > 100) {
          throw new Error(`Title is too long`);
        }

        await tx.mutate.issue.update({id, title});
      },
    },
  } as const satisfies CustomMutatorDefs<typeof schema>;
}
```
The `mutators.ts` convention allows mutator implementations to be easily [reused server-side](#setting-up-the-server). The `createMutators` function convention is used so that we can pass authentication information in to [implement permissions](#permissions).
You are free to make different code layout choices – the only real requirement is that you register your map of mutators in the `Zero` constructor:
```ts
// main.tsx
import {Zero} from '@rocicorp/zero';
import {schema} from './schema';
import {createMutators} from './mutators';

const zero = new Zero({
  schema,
  mutators: createMutators(),
});
```
### Write Data on the Client
The `Transaction` interface passed to client mutators exposes the same `mutate` API as the existing [CRUD-style mutators](./writing-data):
```ts
async function myMutator(tx: Transaction) {
  // Insert a new issue
  await tx.mutate.issue.insert({
    id: 'issue-123',
    title: 'New title',
    description: 'New description',
  });

  // Upsert a new issue
  await tx.mutate.issue.upsert({
    id: 'issue-123',
    title: 'New title',
    description: 'New description',
  });

  // Update an issue
  await tx.mutate.issue.update({
    id: 'issue-123',
    title: 'New title',
  });

  // Delete an issue
  await tx.mutate.issue.delete({
    id: 'issue-123',
  });
}
```
See [the CRUD docs](./writing-data) for detailed semantics on these methods.
### Read Data on the Client
You can read data within a client mutator using [ZQL](./reading-data):
```ts
export function createMutators() {
  return {
    issue: {
      update: async (tx, {id, title}: {id: string; title: string}) => {
        // Read existing issue
        const prev = await tx.query.issue.where('id', id).one();

        // Validate title length. Legacy issues are exempt.
        if (!prev?.isLegacy && title.length > 100) {
          throw new Error(`Title is too long`);
        }

        await tx.mutate.issue.update({id, title});
      },
    },
  } as const satisfies CustomMutatorDefs;
}
```
You have the full power of ZQL at your disposal, including relationships, filters, ordering, and limits.
Reads and writes within a mutator are transactional, meaning that the datastore is guaranteed to not change while your mutator is running. And if the mutator throws, the entire mutation is rolled back.
Outside of mutators, the `run()` method has a [`type` parameter](reading-data#running-queries-once) that can be used to wait for server results.
This parameter isn't supported within mutators, because waiting for server results makes no sense in an optimistic mutation – it defeats the purpose of running optimistically to begin with.
When a mutator runs on the client (`tx.location === "client"`), ZQL reads only return data already cached on the client. When mutators run on the server (`tx.location === "server"`), ZQL reads always return all data.
You can still call `run()` within custom mutators, but the `type` argument is ignored. In the future, passing `type` inside a mutator will throw an error.
### Invoking Client Mutators
Once you have registered your client mutators, you can call them from your client-side application:
```ts
zero.mutate.issue.update({
  id: 'issue-123',
  title: 'New title',
});
```
The result of a call to a mutator is a `Promise`. You do not usually need to `await` this promise as Zero mutators run very fast, usually completing in a tiny fraction of one frame.
However, because mutators occasionally need to access browser storage, they are technically `async`. Reading a row that was written by a mutator immediately after it is written may not return the new data, because the mutator may not have finished writing to storage yet.
### Waiting for Mutator Result
We typically recommend that you "fire and forget" mutators.
Optimistic mutations make sense when the common case is that a mutation succeeds. If a mutation frequently fails, then showing the user an optimistic result doesn't make sense, because it will likely be wrong.
That said, there are cases where it is useful to know when a write succeeded on either the client or the server.
One example is if you need to read a row directly after writing it. Zero's local writes are very fast (almost always < 1 frame), but because Zero is backed by IndexedDB, writes are still *technically* asynchronous and reads directly after a write may not return the new data.
You can use the `.client` promise in this case to wait for a write to complete on the client side:
```ts
try {
  const write = zero.mutate.issue.update({
    id: 'issue-123',
    title: 'New title',
  });

  // issue-123 not guaranteed to be present here. read1 may be undefined.
  const read1 = await zero.query.issue.where('id', 'issue-123').one();

  // Await client write – almost always less than 1 frame, and same
  // macrotask, so no browser paint will occur here.
  await write.client;

  // issue-123 definitely can be read now.
  const read2 = await zero.query.issue.where('id', 'issue-123').one();
} catch (e) {
  console.error("Mutator failed on client", e);
}
```
You can also wait for the server write to succeed:
```ts
try {
  await zero.mutate.issue.update({
    id: 'issue-123',
    title: 'New title',
  }).server;

  // issue-123 is written to server
} catch (e) {
  console.error("Mutator failed on client or server", e);
}
```
If the client-side mutator fails, the `.server` promise is also rejected with the same error. You don't have to listen to both promises; the `.server` promise covers both cases.
There is not yet a way to return data from mutators in the success case – the type of `.client` and `.server` is always `Promise<void>`. [Let us know](https://discord.rocicorp.dev/) if you need this.
### Setting Up the Server
You will need a server somewhere you can run an endpoint on. This is typically a serverless function on a platform like Vercel or AWS but can really be anything.
Set the push URL with the [`ZERO_PUSH_URL` env var or `--push-url`](./zero-cache-config#push-url).
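For example, a minimal sketch (the URL here is hypothetical, and we assume `zero-cache` is started via `npx`; point it at wherever your push endpoint is deployed):
```bash
# Via environment variable:
ZERO_PUSH_URL="https://myapp.example.com/api/push" npx zero-cache

# Or via flag:
npx zero-cache --push-url="https://myapp.example.com/api/push"
```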
If there is per-client configuration you need to send to the push endpoint, you can do that with `push.queryParams`:
```ts
const z = new Zero({
  push: {
    queryParams: {
      workspaceID: "42",
    },
  },
});
```
The push endpoint receives a `PushRequest` as input describing one or more mutations to apply to the backend, and must return a `PushResponse` describing the results of those mutations.
If you are implementing your server in TypeScript, you can use the `PushProcessor` class to trivially implement this endpoint. Here’s an example in a [Hono](https://hono.dev/) app:
```ts
import {Hono} from 'hono';
import {handle} from 'hono/vercel';
import {PushProcessor, ZQLDatabase, PostgresJSConnection} from '@rocicorp/zero/pg';
import postgres from 'postgres';
import {schema} from '../shared/schema';
import {createMutators} from '../shared/mutators';

// PushProcessor is provided by Zero to encapsulate a standard
// implementation of the push protocol.
const processor = new PushProcessor(
  new ZQLDatabase(
    new PostgresJSConnection(postgres(process.env.ZERO_UPSTREAM_DB as string)),
    schema,
  ),
);

export const app = new Hono().basePath('/api');

app.post('/push', async c => {
  const result = await processor.process(createMutators(), c.req.raw);
  return c.json(result);
});

export default handle(app);
```
`PushProcessor` depends on an abstract `Database`. This allows it to implement the push algorithm against any database.
`@rocicorp/zero/pg` includes a `ZQLDatabase` implementation of this interface backed by Postgres. This allows the same mutator functions to run on client and server, by providing server-side implementations of the ZQL APIs that custom mutators use on the client.
`ZQLDatabase` in turn relies on an abstract `DBConnection` that provides raw access to a Postgres database. This allows you to use any Postgres library you like, as long as you provide a `DBConnection` implementation for it. The `PostgresJSConnection` class implements `DBConnection` for the excellent [`postgres.js`](https://www.npmjs.com/package/postgres) library to connect to Postgres.
To reuse the client mutators exactly as-is on the server, just pass the result of the same `createMutators` function to `PushProcessor`.
### Server-Specific Code
To implement server-specific code, just run different mutators in your push endpoint!
An approach we like is to create a separate `server-mutators.ts` file that wraps the client mutators:
```ts
// server-mutators.ts
import { CustomMutatorDefs } from "@rocicorp/zero";
import { schema } from "./schema";

export function createMutators(
  clientMutators: CustomMutatorDefs<typeof schema>,
) {
  return {
    // Reuse all client mutators except the ones in `issue`
    ...clientMutators,

    issue: {
      // Reuse all issue mutators except `update`
      ...clientMutators.issue,

      update: async (tx, {id, title}: { id: string; title: string }) => {
        // Call the shared mutator first
        await clientMutators.issue.update(tx, {id, title});

        // Record a history of this operation happening in an audit
        // log table.
        await tx.mutate.auditLog.insert({
          // Assuming you have an audit log table with fields for
          // `issueId`, `action`, and `timestamp`.
          issueId: id,
          action: 'update-title',
          timestamp: new Date().toISOString(),
        });
      },
    },
  } as const satisfies CustomMutatorDefs<typeof schema>;
}
```
For simple things, we also expose a `location` field on the transaction object that you can use to branch your code:
```ts
myMutator: (tx) => {
  if (tx.location === 'client') {
    // Client-side code
  } else {
    // Server-side code
  }
},
```
### Permissions
Because custom mutators are just arbitrary TypeScript functions, there is no need for a special permissions system. Therefore, you won't use Zero's [write permissions](./permissions) when you use custom mutators.
When using custom mutators you will have no [`insert`](permissions#insert-permissions), [`update`](permissions#update-permissions), or [`delete`](permissions#delete-permissions) permissions. You will still have [`select`](permissions#select-permissions) permissions, however.
We hope to build [custom queries](https://bugs.rocicorp.dev/issue/3453) next – a read analog to custom mutators. If we succeed, Zero's permission system will go away completely 🤯.
In order to do permission checks, you'll need to know what user is making the request. You can pass this information to your mutators by adding an `AuthData` parameter to the `createMutators` function:
```ts
type AuthData = {
  sub: string;
};

export function createMutators(authData: AuthData | undefined) {
  return {
    issue: {
      launchMissiles: async (tx, args: {target: string}) => {
        if (!authData) {
          throw new Error('Users must be logged in to launch missiles');
        }

        const hasPermission = await tx.query.user
          .where('id', authData.sub)
          .whereExists('permissions', q => q.where('name', 'launch-missiles'))
          .one();
        if (!hasPermission) {
          throw new Error('User does not have permission to launch missiles');
        }
      },
    },
  } as const satisfies CustomMutatorDefs;
}
```
The `AuthData` parameter can be any data required for authorization, but is typically just the decoded JWT:
```ts
// app.tsx
const zero = new Zero({
  schema,
  auth: encodedJWT,
  mutators: createMutators(decodedJWT),
});

// hono-server.ts
const processor = new PushProcessor(
  new ZQLDatabase(
    new PostgresJSConnection(postgres(process.env.ZERO_UPSTREAM_DB as string)),
    schema,
  ),
);
processor.process(createMutators(decodedJWT), c.req.raw);
```
### Dropping Down to Raw SQL
On the server, you can use raw SQL in addition to, or instead of, ZQL. This is useful for complex queries, or for using Postgres features that Zero doesn't support yet:
```ts
async function markAllAsRead(tx: Transaction, {userId}: {userId: string}) {
  await tx.dbTransaction.query(
    `
    UPDATE notification
    SET read = true
    WHERE user_id = $1
    `,
    [userId],
  );
}
```
### Notifications and Async Work
It is bad practice to hold open database transactions while talking over the network, for example to send notifications. Instead, you should let the db transaction commit and do the work asynchronously.
There is no specific support for this in custom mutators, but since mutators are just code, it’s easy to do:
```ts
// server-mutators.ts
export function createMutators(
  authData: AuthData,
  asyncTasks: Array<() => Promise<void>>,
) {
  return {
    issue: {
      update: async (tx, {id, title}: {id: string; title: string}) => {
        await tx.mutate.issue.update({id, title});

        // Defer the network call until after the transaction commits.
        asyncTasks.push(async () => {
          await sendEmailToSubscribers(id);
        });
      },
    },
  } as const satisfies CustomMutatorDefs;
}
```
Then in your push handler:
```ts
app.post('/push', async c => {
  const asyncTasks: Array<() => Promise<void>> = [];
  const result = await processor.process(
    createMutators(authData, asyncTasks),
    c.req.raw,
  );

  // Run deferred work only after the mutation transaction has committed.
  await Promise.all(asyncTasks.map(task => task()));
  return c.json(result);
});
```
### Custom Database Connections
You can implement an adapter to a different Postgres library, or even a different database entirely.
To do so, pass `ZQLDatabase` a different [`DBConnection`](https://github.com/rocicorp/mono/blob/1a3741fbdad6dbdd56aa1f48cc2cc83938a61b16/packages/zql/src/mutate/custom.ts#L67) implementation. For an example, [see the `postgres` implementation](https://github.com/rocicorp/mono/blob/1a3741fbdad6dbdd56aa1f48cc2cc83938a61b16/packages/zero-pg/src/postgres-connection.ts#L4).
### Custom Push Implementation
You can manually implement the push protocol in any programming language.
This will be documented in the future, but you can refer to the [PushProcessor](https://github.com/rocicorp/mono/blob/1a3741fbdad6dbdd56aa1f48cc2cc83938a61b16/packages/zero-pg/src/web.ts#L33) source code for an example for now.
## Examples
- Zbugs uses [custom mutators](https://github.com/rocicorp/mono/blob/a76c9a61670cc09e1a9fe7ab795749f3eef25577/apps/zbugs/shared/mutators.ts) for all mutations, [write permissions](https://github.com/rocicorp/mono/blob/a76c9a61670cc09e1a9fe7ab795749f3eef25577/apps/zbugs/shared/mutators.ts#L61), and [notifications](https://github.com/rocicorp/mono/blob/a76c9a61670cc09e1a9fe7ab795749f3eef25577/apps/zbugs/server/server-mutators.ts#L35).
- `hello-zero-solid` uses custom mutators for all [mutations](TODO), and for [permissions](TODO).
--- writing-data.mdx ---
Zero generates basic CRUD mutators for every table you sync. Mutators are available at `zero.mutate.<tablename>`:
```tsx
const z = new Zero(...);
z.mutate.user.insert({
  id: nanoid(),
  username: 'abby',
  language: 'en-us',
});
```
To build mutators with more complex logic or server-specific behavior, see the
new [Custom Mutators API](./custom-mutators).
## Insert
Create new records with `insert`:
```tsx
z.mutate.user.insert({
  id: nanoid(),
  username: 'sam',
  language: 'js',
});
```
Optional fields can be set to `null` to explicitly store `null`. They can also be set to `undefined` (or omitted) to take the default value, which is often `null` but can also be some value generated server-side.
```tsx
// schema.ts
import {createTableSchema} from '@rocicorp/zero';

const userSchema = createTableSchema({
  tableName: 'user',
  columns: {
    id: {type: 'string'},
    username: {type: 'string'},
    language: {type: 'string', optional: true},
  },
  primaryKey: ['id'],
  relationships: {},
});

// app.tsx
// Sets language to `null` specifically
z.mutate.user.insert({
  id: nanoid(),
  username: 'sam',
  language: null,
});

// Sets language to the default server-side value. Could be null, or some
// generated or constant default value too.
z.mutate.user.insert({
  id: nanoid(),
  username: 'sam',
});

// Same as above
z.mutate.user.insert({
  id: nanoid(),
  username: 'sam',
  language: undefined,
});
```
## Upsert
Create new records or update existing ones with `upsert`:
```tsx
z.mutate.user.upsert({
  id: samID,
  username: 'sam',
  language: 'ts',
});
```
`upsert` supports the same `null` / `undefined` semantics for optional fields that `insert` does (see above).
## Update
Update an existing record. Does nothing if the specified record (by PK) does not exist.
You can pass a partial, leaving out fields that you don’t want to change. For example, here we leave the username the same:
```tsx
// Leaves the username field at its previous value.
z.mutate.user.update({
  id: samID,
  language: 'golang',
});

// Same as above
z.mutate.user.update({
  id: samID,
  username: undefined,
  language: 'haskell',
});

// Reset language field to `null`
z.mutate.user.update({
  id: samID,
  language: null,
});
```
## Delete
Delete an existing record. Does nothing if the specified record does not exist.
```tsx
z.mutate.user.delete({
  id: samID,
});
```
## Batch Mutate
You can do multiple CRUD mutates in a single _batch_. If any of the mutations fails, they all fail. They also appear together atomically in a single transaction to other clients.
```tsx
z.mutateBatch(async tx => {
  const samID = nanoid();
  tx.user.insert({
    id: samID,
    username: 'sam',
  });

  const langID = nanoid();
  tx.language.insert({
    id: langID,
    userID: samID,
    name: 'js',
  });
});
```
--- reading-data.mdx ---
ZQL is Zero’s query language.
Inspired by SQL, ZQL is expressed in TypeScript with heavy use of the builder pattern. If you have used [Drizzle](https://orm.drizzle.team/) or [Kysely](https://kysely.dev/), ZQL will feel familiar.
ZQL queries are composed of one or more _clauses_ that are chained together into a _query_.
Unlike queries in classic databases, the result of a ZQL query is a _view_ that updates automatically and efficiently as the underlying data changes. You can call a query’s `materialize()` method to get a view, but more typically you run queries via some framework-specific bindings. For example see `useQuery` for [React](react) or [SolidJS](solidjs).
You should not modify the data returned from queries directly. ZQL caches values and returns them in multiple places; if you modify a value returned from ZQL, you modify it everywhere it is used, which can lead to subtle bugs. Instead, clone the data and modify the clone.
JavaScript and TypeScript lack true immutable types so we use `readonly` to help enforce it. But it's easy to cast away the `readonly` accidentally.
In the future, we'll [`freeze`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/freeze) all returned data in `dev` mode to help prevent this.
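For example, a minimal sketch of the clone-before-modify pattern (assuming an `issue` table with a `title` column):
```tsx
const [issues] = useQuery(z.query.issue);

// Don't do this – it mutates ZQL's cached value everywhere it is used:
// issues[0].title = 'Edited title';

// Instead, clone the row and modify the clone:
const draft = {...issues[0], title: 'Edited title'};
```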
## Select
ZQL queries start by selecting a table. There is no way to select a subset of columns; ZQL queries always return the entire row (modulo column permissions).
```tsx
const z = new Zero(...);
// Returns a query that selects all rows and columns from the issue table.
z.query.issue;
```
This is a design tradeoff that allows Zero to better reuse the row locally for future queries. This also makes it easier to share types between different parts of the code.
## Ordering
You can sort query results by adding an `orderBy` clause:
```tsx
z.query.issue.orderBy('created', 'desc');
```
Multiple `orderBy` clauses can be present, in which case the data is sorted by those clauses in order:
```tsx
// Order by priority descending. For any rows with same priority,
// then order by created desc.
z.query.issue.orderBy('priority', 'desc').orderBy('created', 'desc');
```
All queries in ZQL have a default final order of their primary key. Assuming the `issue` table has a primary key on the `id` column, then:
```tsx
// Actually means: z.query.issue.orderBy('id', 'asc');
z.query.issue;
// Actually means: z.query.issue.orderBy('priority', 'desc').orderBy('id', 'asc');
z.query.issue.orderBy('priority', 'desc');
```
## Limit
You can limit the number of rows to return with `limit()`:
```tsx
z.query.issue.orderBy('created', 'desc').limit(100);
```
## Paging
You can start the results at or after a particular row with `start()`:
```tsx
let start: IssueRow | undefined;
while (true) {
  let q = z.query.issue.orderBy('created', 'desc').limit(100);
  if (start) {
    q = q.start(start);
  }
  const batch = await q.run();
  console.log('got batch', batch);

  if (batch.length < 100) {
    break;
  }
  start = batch[batch.length - 1];
}
```
By default `start()` is _exclusive_ - it returns rows starting **after** the supplied reference row. This is what you usually want for paging. If you want _inclusive_ results, you can do:
```tsx
z.query.issue.start(row, {inclusive: true});
```
## Uniqueness
If you want exactly zero or one results, use the `one()` clause. This causes ZQL to return `Row|undefined` rather than `Row[]`.
```tsx
const result = await z.query.issue.where('id', 42).one().run();
if (!result) {
  console.error('not found');
}
```
`one()` overrides any `limit()` clause that is also present.
## Relationships
You can query related rows using _relationships_ that are defined in your [Zero schema](/docs/zero-schema).
```tsx
// Get all issues and their related comments
z.query.issue.related('comments');
```
Relationships are returned as hierarchical data. In the above example, each row will have a `comments` field which is itself an array of the corresponding comment rows.
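For example, a minimal sketch of consuming this hierarchical result (assuming issues have a `title` column):
```tsx
const [issues] = useQuery(z.query.issue.related('comments'));
for (const issue of issues) {
  // Each issue row carries its related comments as an array.
  console.log(issue.title, issue.comments.length);
}
```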
You can fetch multiple relationships in a single query:
```tsx
z.query.issue.related('comments').related('reactions').related('assignees');
```
### Refining Relationships
By default all matching relationship rows are returned, but this can be refined. The `related` method accepts an optional second argument: a function that receives the relationship query and can refine it with further clauses.
```tsx
z.query.issue.related(
  'comments',
  // It is common to use the 'q' shorthand variable for this parameter,
  // but it is a _comment_ query in particular here, exactly as if you
  // had done z.query.comment.
  q => q.orderBy('modified', 'desc').limit(100).start(lastSeenComment),
);
```
This _relationship query_ can have all the same clauses that top-level queries can have.
### Nested Relationships
You can nest relationships arbitrarily:
```tsx
// Get all issues, first 100 comments for each (ordered by modified,desc),
// and for each comment all of its reactions.
z.query.issue.related('comments', q =>
  q.orderBy('modified', 'desc').limit(100).related('reactions'),
);
```
## Where
You can filter a query with `where()`:
```tsx
z.query.issue.where('priority', '=', 'high');
```
The first parameter is always a column name from the table being queried. Intellisense will offer available options (sourced from your [Zero Schema](/docs/zero-schema)).
### Comparison Operators
Where supports the following comparison operators:
| Operator | Allowed Operand Types | Description |
| ---------------------------------------- | ----------------------------- | ------------------------------------------------------------------------ |
| `=` , `!=` | boolean, number, string | JS strict equal (===) semantics |
| `<` , `<=`, `>`, `>=` | number | JS number compare semantics |
| `LIKE`, `NOT LIKE`, `ILIKE`, `NOT ILIKE` | string | SQL-compatible `LIKE` / `ILIKE` |
| `IN` , `NOT IN` | boolean, number, string | RHS must be array. Returns true if rhs contains lhs by JS strict equals. |
| `IS` , `IS NOT` | boolean, number, string, null | Same as `=` but also works for `null` |
TypeScript will restrict you from using operators with types that don’t make sense – you can’t use `>` with `boolean` for example.
If you don’t see the comparison operator you need, let us know, many are easy
to add.
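For instance, a couple of these operators in action (a minimal sketch; the columns are hypothetical):
```tsx
// Case-insensitive pattern match on title.
z.query.issue.where('title', 'ILIKE', '%crash%');

// Set membership: the right-hand side must be an array.
z.query.issue.where('priority', 'IN', ['high', 'critical']);
```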
### Equals is the Default Comparison Operator
Because comparing by `=` is so common, you can leave it out and `where` defaults to `=`.
```tsx
z.query.issue.where('priority', 'high');
```
### Comparing to `null`
As in SQL, ZQL’s `null` is not equal to itself (`null ≠ null`).
This is required to make join semantics work: if you’re joining `employee.orgID` on `org.id` you do **not** want an employee in no organization to match an org that hasn’t yet been assigned an ID.
When you purposely want to compare to `null` ZQL supports `IS` and `IS NOT` operators that work just like in SQL:
```tsx
// Find employees not in any org.
z.query.employee.where('orgID', 'IS', null);
```
TypeScript will prevent you from comparing to `null` with other operators.
### Compound Filters
The argument to `where` can also be a callback that returns a complex expression:
```tsx
// Get all issues that have priority 'critical' or else have both
// priority 'medium' and not more than 100 votes.
z.query.issue.where(({cmp, and, or, not}) =>
  or(
    cmp('priority', 'critical'),
    and(cmp('priority', 'medium'), not(cmp('numVotes', '>', 100))),
  ),
);
```
`cmp` is short for _compare_ and works the same as `where` at the top-level except that it can’t be chained and it only accepts comparison operators (no relationship filters – see below).
Note that chaining `where()` is also a one-level `and`:
```tsx
// Find issues with priority 3 or higher, owned by aa
z.query.issue.where('priority', '>=', 3).where('owner', 'aa');
```
### Relationship Filters
Your filter can also test properties of relationships. Currently the only supported test is existence:
```tsx
// Find all orgs that have at least one employee
z.query.organization.whereExists('employees');
```
The argument to `whereExists` is a relationship, so just like other relationships it can be refined with a query:
```tsx
// Find all orgs that have at least one cool employee
z.query.organization.whereExists('employees', q =>
q.where('location', 'Hawaii'),
);
```
As with querying relationships, relationship filters can be arbitrarily nested:
```tsx
// Get all issues that have comments that have reactions
z.query.issue.whereExists('comments', q => q.whereExists('reactions'));
```
The `exists` helper is also provided which can be used with `and`, `or`, `cmp`, and `not` to build compound filters that check relationship existence:
```tsx
// Find issues that have at least one comment or are high priority
z.query.issue.where(({cmp, or, exists}) =>
  or(
    cmp('priority', 'high'),
    exists('comments'),
  ),
);
```
## Data Lifetime and Reuse
Zero reuses data synced from prior queries to answer new queries when possible. This is what enables instant UI transitions.
But what controls the lifetime of this client-side data? How can you know whether any particular query will return instant results? How can you know whether those results will be up to date or stale?
The answer is that the data on the client is simply the union of rows returned from queries which are currently syncing. Once a row is no longer returned by any syncing query, it is removed from the client. Thus, there is never any stale data in Zero.
So when you are thinking about whether a query is going to return results instantly, you should think about _what other queries are syncing_, not about what data is local. Data exists locally if and only if there is a query syncing that returns that data.
This is why we often say that despite the name `zero-cache`, Zero is not technically a cache. It's a *replica*.
A cache has a random set of rows with a random set of versions. There is no expectation that the cache has any particular rows, or that the rows' versions match. Rows are simply updated as they are fetched.
A replica by contrast is eagerly updated, whether or not any client has requested a row. A replica is always very close to up-to-date, and always self-consistent.
Zero is a _partial_ replica because it only replicates rows that are returned by syncing queries.
## Query Lifecycle
Queries can be either _active_ or _backgrounded_. An active query is one that is currently being used by the application. Backgrounded queries are not currently in use, but continue syncing in case they are needed again soon.
Active queries are created one of three ways:
1. The app calls `q.materialize()` to get a `View`.
2. The app uses a platform binding like React's `useQuery(q)`.
3. The app calls [`preload()`](#preloading) to sync larger queries without a view.
Active queries sync until they are _deactivated_. The way this happens depends on how the query was created:
1. For `materialize()` queries, the UI calls `destroy()` on the view.
2. For `useQuery()`, the UI unmounts the component (which calls `destroy()` under the covers).
3. For `preload()`, the UI calls `cleanup()` on the return value of `preload()`.
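Here's a minimal sketch of the `materialize()` variant of this lifecycle (we assume the returned view exposes its current results on a `data` property):
```tsx
// Create an active query by materializing a view.
const view = z.query.issue.orderBy('created', 'desc').limit(10).materialize();

// ... read view.data / subscribe to changes while the UI needs it ...

// Deactivate the query when done.
view.destroy();
```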
### Background Queries
By default a deactivated query stops syncing immediately.
But it's often useful to keep queries syncing beyond deactivation in case the UI needs the same or a similar query in the near future. This is accomplished with the `ttl` parameter:
```ts
const [user] = useQuery(z.query.user.where('id', userId), {ttl: '1d'});
```
The `ttl` parameter specifies how long the app developer wishes the query to run in the background. The following formats are allowed (where `%d` is a positive integer):
| Format | Meaning |
| --------- | ------------------------------------------------------------------------------------ |
| `none` | No backgrounding. Query will immediately stop when deactivated. This is the default. |
| `%ds` | Number of seconds. |
| `%dm` | Number of minutes. |
| `%dh` | Number of hours. |
| `%dd` | Number of days. |
| `%dy` | Number of years. |
| `forever` | Query will never be stopped. |
If the UI re-requests a background query, it becomes an active query again. Since the query was syncing in the background, the very first synchronous result that the UI receives after reactivation will be up-to-date with the server (i.e., it will have `resultType` of `complete`).
Just like other types of queries, the data from background queries is available for use by new queries. A common pattern is to [preload](#preloading) a subset of the most commonly needed data with `{ttl: 'forever'}` and then do more specific queries from the UI with, e.g., `{ttl: '1d'}`. Most often the preloaded data will be able to answer user queries, but if not, the new query will be answered by the server and backgrounded for a day in case the user revisits it.
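For instance, a minimal sketch of that pattern (assuming `preload()` accepts the same `ttl` option as `useQuery`):
```tsx
// Preload the most commonly needed rows for the lifetime of the app.
z.query.issue.orderBy('created', 'desc').limit(1000).preload({ttl: 'forever'});

// More specific UI queries keep syncing for a day after deactivation.
const [issue] = useQuery(z.query.issue.where('id', issueID).one(), {ttl: '1d'});
```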
### Client Capacity Management
Zero has a default soft limit of 20,000 rows on the client-side, or about 20MB of data assuming 1KB rows.
This limit can be increased with the [`--target-client-row-count`](./zero-cache-config#target-client-row-count) flag, but we do not recommend setting it higher than 100,000.
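For example (a sketch, assuming `zero-cache` is started via `npx`):
```bash
# Raise the soft limit to 50,000 rows. Values above 100,000 are not recommended.
npx zero-cache --target-client-row-count=50000
```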
Contrary to the design of other sync engines, we believe that storing tons of data client-side doesn't make sense. Here are some reasons why:
- Initial sync will be slow, slowing down initial app load.
- Because storage in browser tabs is unreliable, initial sync can occur surprisingly often.
- We want to answer queries _instantly_ as often as possible. This requires client-side data in memory on the main thread. If we have to page to disk, we may as well go to the network and reduce complexity.
- Even though Zero's queries are very efficient, they do still have some cost, especially hydration. Massive client-side storage would result in hydrating tons of queries that are unlikely to be used every time the app starts.
Most importantly, no matter how much data you store on the client, there will be cases where you have to fall back to the server:
- Some users might have huge amounts of data.
- Some users might have tiny amounts of available client storage.
- You will likely want the app to start fast and sync in the background.
Because you have to be able to fall back to the server, the question becomes _what is the **right** amount of data to store on the client?_, not _how can I store the absolute maximum possible data on the client?_
The goal with Zero is to answer 99% of queries on the client from memory. The remaining 1% of queries can fall back gracefully to the server. 20,000 rows was chosen somewhat arbitrarily as a number likely to achieve this for many applications.
There is no hard limit at 20,000 or 100,000. Nothing terrible happens if you go above. The thing to keep in mind is that:
1. All those queries will revalidate every time your app boots.
2. All data synced to the client is in memory in JS.
Here is how this limit is managed:
1. Active queries are never destroyed, even if the limit is exceeded. Developers are expected to keep active queries well under the limit.
2. The `ttl` value counts from the moment a query deactivates. Backgrounded queries are destroyed when their `ttl` expires, even if the client is under its row limit.
3. If the client exceeds its limit, Zero will destroy backgrounded queries, least-recently-used first, until the store is under the limit again.
### Thinking in Queries
Although IVM is a very efficient way to keep queries up to date relative to re-running them, it isn't free. You still need to think about how many queries you are creating, how long they are kept alive, and how expensive they are.
This is why Zero defaults to _not_ backgrounding queries and doesn't try to aggressively fill its client datastore to capacity. You should put some thought into what queries you want to run in the background, and for how long.
Zero currently provides a few basic tools to understand the cost of your queries:
- The client logs a warning for slow query materializations. Look for `Slow query materialization` in your logs. The default threshold is `5s` (including network) but this is configurable with the `slowMaterializeThreshold` parameter.
- The client logs the materialization time of all queries at the `debug` level. Look for `Materialized query` in your logs.
- The server logs a warning for slow query materializations. Look for `Slow query materialization` in your logs. The default threshold is `5s` but this is configurable with the `log-slow-materialize-threshold` configuration parameter.
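As a sketch of lowering the client threshold (an assumption on our part: `slowMaterializeThreshold` is passed to the `Zero` constructor in milliseconds; check your version's client options for where it actually lives):

```ts
import {Zero} from '@rocicorp/zero';
import {schema} from './schema';

const z = new Zero({
  userID: 'anon',
  server: 'http://localhost:4848',
  schema,
  // Assumed option: warn when a query materialization exceeds 1s.
  slowMaterializeThreshold: 1_000,
});
```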
We will be adding more tools over time.
## Completeness
Zero returns whatever data it has on the client immediately for a query, then falls back to the server for any missing data. Sometimes it's useful to know the difference between these two types of results. To do so, use the `result` from `useQuery`:
```tsx
const [issues, issuesResult] = useQuery(z.query.issue);
if (issuesResult.type === 'complete') {
console.log('All data is present');
} else {
console.log('Some data is missing');
}
```
The possible values of `result.type` are currently `complete` and `unknown`.
The `complete` value is currently only returned when Zero has received the server result. But in the future, Zero will be able to return this result type when it _knows_ that all possible data for this query is already available locally. Additionally, we plan to add a `prefix` result for when the data is known to be a prefix of the complete result. See [Consistency](#consistency) for more information.
## Preloading
Almost all Zero apps will want to preload some data in order to maximize the feel of instantaneous UI transitions.
In Zero, preloading is done via queries – the same queries you use in the UI and for auth.
However, because preload queries are usually much larger than a screenful of UI, Zero provides a special `preload()` helper to avoid the overhead of materializing the result into JS objects:
```tsx
// Preload the first 1k issues + their creator, assignee, labels, and
// the view state for the active user.
//
// There's no need to render this data, so we don't use `useQuery()`:
// this avoids the overhead of pulling all this data into JS objects.
z.query.issue
.related('creator')
.related('assignee')
.related('labels')
.related('viewState', q => q.where('userID', z.userID).one())
.orderBy('created', 'desc')
.limit(1000)
.preload();
```
## Running Queries Once
Usually subscribing to a query is what you want in a reactive UI, but every so often you'll need to run a query just once. To do this, use the `run()` method:
```tsx
const results = await z.query.issue.where('foo', 'bar').run();
```
By default, `run()` only returns results that are currently available on the client. That is, it returns the data that would be given for [`result.type === 'unknown'`](#completeness).
If you want to wait for the server to return results, pass `{type: 'complete'}` to `run`:
```tsx
const results = await z.query.issue.where('foo', 'bar').run(
{type: 'complete'});
```
As a convenience you can also directly await queries:
```ts
await z.query.issue.where('foo', 'bar');
```
This is the same as saying `run()` or `run({type: 'unknown'})`.
## Consistency
Zero always syncs a consistent partial replica of the backend database to the client. This avoids many common consistency issues that come up in classic web applications. But there are still some consistency issues to be aware of when using Zero.
For example, imagine that you have a bug database with 10k issues. You preload the first 1k issues sorted by created.
The user then does a query of issues assigned to themselves, sorted by created. Imagine that 100 of the 1k preloaded issues match this query. Since the preloaded data is in the same order as this query, we are guaranteed that any local results found will be a _prefix_ of the server results.
The resulting UX is nice: the user sees initial results to the query instantly. If more results are found server-side, those results are guaranteed to sort below the local results. There's no shuffling of results when the server response comes in.
Now imagine that the user switches the sort to ‘sort by modified’. This new query will run locally, and will again find some local matches. But it is now unlikely that the local results found are a prefix of the server results. When the server result comes in, the user will probably see the results shuffle around.
To avoid this annoying effect, in this example you should also preload the first 1k issues sorted by modified desc. In general, for any query shape you intend to use, preload the first `n` results for that query shape with no filters, in each sort you intend to use.
Zero will not sync duplicate copies of rows that show up in multiple queries. Zero syncs the *union* of all active queries' results.
So you don't have to worry about syncing many sorts of the same query when it's likely the results will overlap heavily.
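Continuing the example, a sketch of preloading the same query shape in both sort orders:

```tsx
// Preload the first 1k issues in each sort order the UI offers.
// Rows that appear in both results are stored only once, since Zero
// syncs the union of all active queries' results.
for (const sort of ['created', 'modified'] as const) {
  z.query.issue.orderBy(sort, 'desc').limit(1000).preload();
}
```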
In the future, we will be implementing a consistency model that fixes these issues automatically. We will prevent Zero from returning local data when that data is not known to be a prefix of the server result. Once the consistency model is implemented, preloading can be thought of as purely a performance thing, and not required to avoid unsightly flickering.
--- permissions.mdx ---
Permissions are expressed using [ZQL](reading-data) and run automatically with every read and write.
## Define Permissions
Permissions are defined in [`schema.ts`](/docs/zero-schema) using the `definePermissions` function.
Here's an example of limiting deletes to only the creator of an issue:
```ts
// The decoded value of your JWT.
type AuthData = {
// The logged-in user.
sub: string;
};
export const permissions = definePermissions(schema, () => {
const allowIfIssueCreator = (
authData: AuthData,
{cmp}: ExpressionBuilder,
) => cmp('creatorID', authData.sub);
return {
issue: {
row: {
delete: [allowIfIssueCreator],
},
},
} satisfies PermissionsConfig;
});
```
`definePermissions` returns a _policy_ object for each table in the schema. Each policy defines a _ruleset_ for the _operations_ that are possible on a table: `select`, `insert`, `update`, and `delete`.
## Access is Denied by Default
If you don't specify any rules for an operation, it is denied by default. This is an important safety feature that helps ensure data isn't accidentally exposed.
To enable full access to an action (e.g., during development) use the `ANYONE_CAN` helper:
```ts
import {ANYONE_CAN} from '@rocicorp/zero';
const permissions = definePermissions(schema, () => {
return {
issue: {
row: {
select: ANYONE_CAN,
// Other operations are denied by default.
},
},
// Other tables are denied by default.
} satisfies PermissionsConfig;
});
```
To do this for all actions, use `ANYONE_CAN_DO_ANYTHING`:
```ts
import {ANYONE_CAN_DO_ANYTHING} from '@rocicorp/zero';
const permissions = definePermissions(schema, () => {
return {
// All operations on issue are allowed to all users.
issue: ANYONE_CAN_DO_ANYTHING,
// Other tables are denied by default.
} satisfies PermissionsConfig;
});
```
## Permission Evaluation
Zero permissions are "compiled" into a JSON-based format at build time. The compiled output is stored in the `{ZERO_APP_ID}.permissions` table of your upstream database. Like other tables, it replicates live down to `zero-cache`. `zero-cache` then parses the compiled permissions and applies the encoded rules to every read and write operation.
The compilation process is very simple-minded (read: dumb). Despite looking like normal TypeScript functions that receive an `AuthData` parameter, rule functions are not actually invoked at runtime. Instead, they are invoked with a "placeholder" `AuthData` at build time. We track which fields of this placeholder are accessed and construct a ZQL expression that accesses the right field of `AuthData` at runtime.
The end result is that you can't really use most features of JS in these rules. Specifically, you cannot:
- Iterate over properties or array elements in the auth token
- Use any JS features beyond property access of `AuthData`
- Use any conditional logic or global state
Basically only property access is allowed. This is really confusing and we're working on a better solution.
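To make this concrete, here is a sketch contrasting a rule that compiles with one that won't (the `role` claim is hypothetical):

```ts
// Works: plain property access on AuthData. The build-time placeholder
// records that `sub` was read and compiles it into a ZQL expression.
const allowIfSelf = (
  authData: AuthData,
  {cmp}: ExpressionBuilder,
) => cmp('creatorID', authData.sub);

// Does NOT work: the conditional runs once at build time against the
// placeholder AuthData, not at runtime against the real token.
//
// const allowIfAdmin = (
//   authData: AuthData,
//   {cmp}: ExpressionBuilder,
// ) =>
//   authData.role === 'admin'
//     ? cmp('creatorID', '!=', '') // hypothetical "allow everything"
//     : cmp('creatorID', authData.sub);
```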
## Permission Deployment
During development, permissions are compiled and uploaded to your database completely automatically as part of the `zero-cache-dev` script.
For production, you need to call `npx zero-deploy-permissions` within your app to update the permissions in the production database whenever they change. You would typically do this as part of your normal schema migration or CI process. For example, the SST deployment script for [zbugs](/docs/samples#zbugs) looks like this:
```ts
new command.local.Command(
'zero-deploy-permissions',
{
create: `npx zero-deploy-permissions -p ../../src/schema.ts`,
// Run the Command on every deploy ...
triggers: [Date.now()],
environment: {
ZERO_UPSTREAM_DB: commonEnv.ZERO_UPSTREAM_DB,
// If the application has a non-default App ID ...
ZERO_APP_ID: commonEnv.ZERO_APP_ID,
},
},
// after the view-syncer is deployed.
{dependsOn: viewSyncer},
);
```
See the [SST Deployment Guide](deployment#guide-multi-node-on-sstaws) for more details.
## Rules
Each operation on a policy has a _ruleset_ containing zero or more _rules_.
A rule is just a TypeScript function that receives the logged-in user's `AuthData` and generates a ZQL [where expression](reading-data#compound-filters). At least one rule in a ruleset must return a row for the operation to be allowed.
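Multiple rules in a ruleset therefore combine as an OR: the operation proceeds if any rule matches. For example, a sketch that lets either the creator or the assignee delete an issue (the `assigneeID` column is hypothetical):

```ts
definePermissions(schema, () => {
  const allowIfCreator = (
    authData: AuthData,
    {cmp}: ExpressionBuilder,
  ) => cmp('creatorID', authData.sub);
  const allowIfAssignee = (
    authData: AuthData,
    {cmp}: ExpressionBuilder,
  ) => cmp('assigneeID', authData.sub);

  return {
    issue: {
      row: {
        // Allowed if either rule returns the row.
        delete: [allowIfCreator, allowIfAssignee],
      },
    },
  } satisfies PermissionsConfig;
});
```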
## Select Permissions
You can limit the data a user can read by specifying a `select` ruleset.
Select permissions act like filters. If a user does not have permission to read a row, it will be filtered out of the result set. It will not generate an error.
For example, imagine a select permission that restricts reads to only issues created by the user:
```ts
definePermissions(schema, () => {
const allowIfIssueCreator = (
authData: AuthData,
{cmp}: ExpressionBuilder,
) => cmp('creatorID', authData.sub);
return {
issue: {
row: {
select: [allowIfIssueCreator],
},
},
} satisfies PermissionsConfig;
});
```
If the issue table has two rows, one created by the user and one by someone else, the user will only see the row they created in any queries.
## Insert Permissions
You can limit what rows can be inserted and by whom by specifying an `insert` ruleset.
Insert rules are evaluated after the entity is inserted. So if a rule queries the database, it will see the inserted row. If any rule in the insert ruleset returns a row, the insert is allowed.
Here's an example of an insert rule that disallows inserting users that have the role `admin`:
```ts
definePermissions(schema, () => {
const allowIfNonAdmin = (
authData: AuthData,
{cmp}: ExpressionBuilder,
) => cmp('role', '!=', 'admin');
return {
user: {
row: {
insert: [allowIfNonAdmin],
},
},
} satisfies PermissionsConfig;
});
```
## Update Permissions
There are two types of update rulesets: `preMutation` and `postMutation`. Both rulesets must pass for an update to be allowed.
`preMutation` rules see the version of a row _before_ the mutation is applied. This is useful for things like checking whether a user owns an entity before editing it.
`postMutation` rules see the version of a row _after_ the mutation is applied. This is useful for things like ensuring a user can only mark themselves as the creator of an entity and not other users.
Like other rulesets, `preMutation` and `postMutation` default to `NOBODY_CAN`. This means that every table must define both of these rulesets for any updates to be allowed.
For example, the following ruleset allows an issue's owner to edit, but **not** re-assign, the issue. The `postMutation` rule enforces that the current user still owns the issue after the edit:
```ts
definePermissions(schema, () => {
const allowIfIssueOwner = (
authData: AuthData,
{cmp}: ExpressionBuilder,
) => cmp('ownerID', authData.sub);
return {
issue: {
row: {
update: {
preMutation: [allowIfIssueOwner],
postMutation: [allowIfIssueOwner],
},
},
},
} satisfies PermissionsConfig;
});
```
This ruleset allows an issue's owner to edit and re-assign the issue:
```ts
definePermissions(schema, () => {
const allowIfIssueOwner = (
authData: AuthData,
{cmp}: ExpressionBuilder,
) => cmp('ownerID', authData.sub);
return {
issue: {
row: {
update: {
preMutation: [allowIfIssueOwner],
postMutation: ANYONE_CAN,
},
},
},
} satisfies PermissionsConfig;
});
```
And this allows anyone to edit an issue, but only if they also assign it to themselves. Useful for enforcing _"patches welcome"_? 🙃
```ts
definePermissions(schema, () => {
const allowIfIssueOwner = (
authData: AuthData,
{cmp}: ExpressionBuilder,
) => cmp('ownerID', authData.sub);
return {
issue: {
row: {
update: {
preMutation: ANYONE_CAN,
postMutation: [allowIfIssueOwner],
},
},
},
} satisfies PermissionsConfig;
});
```
## Delete Permissions
Delete permissions work the same way as `insert` permissions, except they run _before_ the delete is applied. So if a delete rule queries the database, it will still see the row that is about to be deleted. If any rule in the ruleset returns a row, the delete is allowed.
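For example, a sketch of a delete rule that queries the database, allowing an issue to be deleted only while it has no comments (assuming a `comments` relationship, and that the expression builder exposes `not` and `exists` alongside `cmp`):

```ts
definePermissions(schema, () => {
  // Because delete rules run before the delete, the issue row is still
  // visible here, along with its related comments.
  const allowIfNoComments = (
    _authData: AuthData,
    {not, exists}: ExpressionBuilder,
  ) => not(exists('comments'));

  return {
    issue: {
      row: {
        delete: [allowIfNoComments],
      },
    },
  } satisfies PermissionsConfig;
});
```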
## Debugging
See [Debugging Permissions](./debug/permissions).
## Examples
See [hello-zero](https://github.com/rocicorp/hello-zero/blob/main/src/schema.ts) for a simple example of write auth and [zbugs](https://github.com/rocicorp/mono/blob/main/apps/zbugs/shared/schema.ts#L217) for a much more involved one.
--- debug/permissions.mdx ---
Given that permissions are defined in their own file and applied to queries internally, it can be hard to figure out whether or why a permission check is failing.
## Read Permissions
You can use the `analyze-query` utility with the `--apply-permissions` flag to see the complete query Zero runs, including read permissions.
```bash
npx analyze-query \
  --schema='./shared/schema.ts' \
  --query='issue.related("comments")' \
  --apply-permissions \
  --auth-data='{"userId":"user-123"}'
```
If the result looks right, the problem may be that Zero is not receiving the `AuthData` that you think it is. You can retrieve a query hash from websocket or server logs, then ask Zero for the details on that specific query.
Run this command with the same environment you run `zero-cache` with. It will use your `upstream` or `cvr` configuration to look up the query hash in the cvr database.
```bash
npx analyze-query \
  --schema='./shared/schema.ts' \
  --hash='3rhuw19xt9vry' \
  --apply-permissions \
  --auth-data='{"userId":"user-123"}'
```
The printed query can differ from the source ZQL string because it is rebuilt from the query AST, but it should be logically equivalent to the query you wrote.
## Write Permissions
Look for a `WARN` level log in the output from `zero-cache` like this:
```
Permission check failed for {"op":"update","tableName":"message",...}, action update, phase preMutation, authData: {...}, rowPolicies: [...], cellPolicies: []
```
Zero prints the row, auth data, and permission policies that were applied to any failed writes.
The ZQL query is printed in AST format. See [Query ASTs](./query-asts) to
convert it to a more readable format.
--- zero-cache-config.mdx ---
`zero-cache` is configured either via CLI flag or environment variable. There is no separate `zero.config` file.
You can also see all available flags by running `zero-cache --help`.
## Required Flags
### Auth
One of [Auth JWK](#auth-jwk), [Auth JWK URL](#auth-jwk-url), or [Auth Secret](#auth-secret) must be specified. See [Authentication](/docs/auth/) for more details.
### Replica File
File path to the SQLite replica that zero-cache maintains. This can be lost, but if it is, zero-cache will have to re-replicate next time it starts up.
flag: `--replica-file`
env: `ZERO_REPLICA_FILE`
required: `true`
### Upstream DB
The "upstream" authoritative postgres database. In the future we will support other types of upstream besides PG.
flag: `--upstream-db`
env: `ZERO_UPSTREAM_DB`
required: `true`
## Optional Flags
### Admin Password
A password used to administer the zero-cache server, for example to access the `/statz` endpoint.
flag: `--admin-password`
env: `ZERO_ADMIN_PASSWORD`
required: `false`
### App ID
Unique identifier for the app.
Multiple zero-cache apps can run on a single upstream database, each of which is isolated from the others, with its own permissions, sharding (future feature), and change/cvr databases.
The metadata of an app is stored in an upstream schema with the same name, e.g. `zero`, and the metadata for each app shard, e.g. client and mutation ids, is stored in the `{app-id}_{#}` schema. (Currently there is only a single "0" shard, but this will change with sharding).
The CVR and Change data are managed in schemas named `{app-id}_{shard-num}/cvr` and `{app-id}_{shard-num}/cdc`, respectively, allowing multiple apps and shards to share the same database instance (e.g. a Postgres "cluster") for CVR and Change management.
Due to constraints on replication slot names, an App ID may only consist of lower-case letters, numbers, and the underscore character.
Note that this option is used by both `zero-cache` and `zero-deploy-permissions`.
flag: `--app-id`
env: `ZERO_APP_ID`
default: `zero`
### App Publications
Postgres PUBLICATIONs that define the tables and columns to replicate. Publication names may not begin with an underscore, as zero reserves that prefix for internal use.
If unspecified, zero-cache will create and use an internal publication that publishes all tables in the public schema, i.e.:
```
CREATE PUBLICATION _{app-id}_public_0 FOR TABLES IN SCHEMA public;
```
Note that once an app has begun syncing data, this list of publications cannot be changed, and zero-cache will refuse to start if a specified value differs from what was originally synced. To use a different set of publications, a new app should be created.
flag: `--app-publications`
env: `ZERO_APP_PUBLICATIONS`
default: `[]`
### Auth JWK
A public key in JWK format used to verify JWTs. Only one of jwk, jwksUrl and secret may be set.
flag: `--auth-jwk`
env: `ZERO_AUTH_JWK`
required: `false`
### Auth JWK URL
A URL that returns a JWK set used to verify JWTs. Only one of jwk, jwksUrl and secret may be set.
flag: `--auth-jwks-url`
env: `ZERO_AUTH_JWKS_URL`
required: `false`
### Auto Reset
Automatically wipe and resync the replica when replication is halted. This situation can occur for configurations in which the upstream database provider prohibits event trigger creation, preventing the zero-cache from being able to correctly replicate schema changes. For such configurations, an upstream schema change will instead result in halting replication with an error indicating that the replica needs to be reset. When auto-reset is enabled, zero-cache will respond to such situations by shutting down, and when restarted, resetting the replica and all synced clients. This is a heavy-weight operation and can result in user-visible slowness or downtime if compute resources are scarce.
flag: `--auto-reset`
env: `ZERO_AUTO_RESET`
default: `true`
### Auth Secret
A symmetric key used to verify JWTs. Only one of jwk, jwksUrl and secret may be set.
flag: `--auth-secret`
env: `ZERO_AUTH_SECRET`
required: `false`
### Change DB
The Postgres database used to store recent replication log entries, in order to sync multiple view-syncers without requiring multiple replication slots on the upstream database. If unspecified, the upstream-db will be used.
flag: `--change-db`
env: `ZERO_CHANGE_DB`
required: `false`
### Change Max Connections
The maximum number of connections to open to the change database. This is used by the change-streamer for catching up zero-cache replication subscriptions.
flag: `--change-max-conns`
env: `ZERO_CHANGE_MAX_CONNS`
default: `5`
### Change Streamer Port
The port on which the change-streamer runs. This is an internal protocol between the replication-manager and zero-cache, which runs in the same process in local development. If unspecified, defaults to --port + 1.
flag: `--change-streamer-port`
env: `ZERO_CHANGE_STREAMER_PORT`
required: `false`
### Change Streamer URI
When unset, the zero-cache runs its own replication-manager (i.e. change-streamer). In production, this should be set to the replication-manager URI, which runs a change-streamer on port 4849.
flag: `--change-streamer-uri`
env: `ZERO_CHANGE_STREAMER_URI`
required: `false`
### CVR DB
The Postgres database used to store CVRs. CVRs (client view records) keep track of the data synced to clients in order to determine the diff to send on reconnect. If unspecified, the upstream-db will be used.
flag: `--cvr-db`
env: `ZERO_CVR_DB`
required: `false`
### CVR Max Connections
The maximum number of connections to open to the CVR database. This is divided evenly amongst sync workers.
Note that this number must allow for at least one connection per sync worker, or zero-cache will fail to start. See num-sync-workers.
flag: `--cvr-max-conns`
env: `ZERO_CVR_MAX_CONNS`
default: `30`
### Initial Sync Row Batch Size
The number of rows each table copy worker fetches at a time during initial sync. This can be increased to speed up initial sync, or decreased to reduce the amount of heap memory used during initial sync (e.g. for tables with large rows).
flag: `--initial-sync-row-batch-size`
env: `ZERO_INITIAL_SYNC_ROW_BATCH_SIZE`
default: `10000`
### Initial Sync Table Copy Workers
The number of parallel workers used to copy tables during initial sync. Each worker copies a single table at a time, fetching rows in batches of `initial-sync-row-batch-size`.
flag: `--initial-sync-table-copy-workers`
env: `ZERO_INITIAL_SYNC_TABLE_COPY_WORKERS`
default: `5`
### Lazy Startup
Delay starting the majority of zero-cache until the first request.
This is mainly intended to avoid connecting to the Postgres replication stream until the first request is received, which can be useful, e.g., for preview instances.
Currently this is only supported in single-node mode.
flag: `--lazy-startup`
env: `ZERO_LAZY_STARTUP`
default: `false`
### Litestream Executable
Path to the litestream executable. This option has no effect if litestream-backup-url is unspecified.
flag: `--litestream-executable`
env: `ZERO_LITESTREAM_EXECUTABLE`
required: `false`
### Litestream Config Path
Path to the litestream yaml config file. zero-cache will run this with its environment variables, which can be referenced in the file via `${ENV}` substitution, for example:
- `ZERO_REPLICA_FILE` for the db path
- `ZERO_LITESTREAM_BACKUP_LOCATION` for the db replica url
- `ZERO_LITESTREAM_LOG_LEVEL` for the log level
- `ZERO_LOG_FORMAT` for the log type
flag: `--litestream-config-path`
env: `ZERO_LITESTREAM_CONFIG_PATH`
default: `./src/services/litestream/config.yml`
### Litestream Log Level
flag: `--litestream-log-level`
env: `ZERO_LITESTREAM_LOG_LEVEL`
default: `warn`
values: `debug`, `info`, `warn`, `error`
### Litestream Backup URL
The location of the litestream backup, usually an s3:// URL. If set, the litestream-executable must also be specified.
flag: `--litestream-backup-url`
env: `ZERO_LITESTREAM_BACKUP_URL`
required: `false`
### Litestream Checkpoint Threshold MB
The size of the WAL file at which to perform a SQLite checkpoint to apply the writes in the WAL to the main database file. Each checkpoint creates a new WAL segment file that will be backed up by litestream. Smaller thresholds may improve read performance, at the expense of creating more files to download when restoring the replica from the backup.
flag: `--litestream-checkpoint-threshold-mb`
env: `ZERO_LITESTREAM_CHECKPOINT_THRESHOLD_MB`
default: `40`
### Litestream Incremental Backup Interval Minutes
The interval between incremental backups of the replica. Shorter intervals reduce the amount of change history that needs to be replayed when catching up a new view-syncer, at the expense of increasing the number of files needed to download for the initial litestream restore.
flag: `--litestream-incremental-backup-interval-minutes`
env: `ZERO_LITESTREAM_INCREMENTAL_BACKUP_INTERVAL_MINUTES`
default: `15`
### Litestream Snapshot Backup Interval Hours
The interval between snapshot backups of the replica. Snapshot backups make a full copy of the database to a new litestream generation. This improves restore time at the expense of bandwidth. Applications with a large database and low write rate can increase this interval to reduce network usage for backups (litestream defaults to 24 hours).
flag: `--litestream-snapshot-backup-interval-hours`
env: `ZERO_LITESTREAM_SNAPSHOT_BACKUP_INTERVAL_HOURS`
default: `12`
### Litestream Restore Parallelism
The number of WAL files to download in parallel when performing the initial restore of the replica from the backup.
flag: `--litestream-restore-parallelism`
env: `ZERO_LITESTREAM_RESTORE_PARALLELISM`
default: `48`
### Log Format
Use `text` for developer-friendly console logging and `json` for consumption by structured-logging services.
flag: `--log-format`
env: `ZERO_LOG_FORMAT`
default: `"text"`
values: `text`, `json`
### Log IVM Sampling
How often to collect IVM metrics. 1 out of N requests will be sampled where N is this value.
flag: `--log-ivm-sampling`
env: `ZERO_LOG_IVM_SAMPLING`
default: `5000`
### Log Level
Sets the logging level for the application.
flag: `--log-level`
env: `ZERO_LOG_LEVEL`
default: `"info"`
values: `debug`, `info`, `warn`, `error`
### Log Slow Hydrate Threshold
The number of milliseconds a query hydration must take before a slow warning is printed.
flag: `--log-slow-hydrate-threshold`
env: `ZERO_LOG_SLOW_HYDRATE_THRESHOLD`
default: `100`
### Log Slow Row Threshold
The number of milliseconds a row must take to fetch from the table-source before it is considered slow.
flag: `--log-slow-row-threshold`
env: `ZERO_LOG_SLOW_ROW_THRESHOLD`
default: `2`
### Log Trace Collector
The URL of the trace collector to which to send trace data. Traces are sent over HTTP. The port defaults to 4318 for most collectors.
flag: `--log-trace-collector`
env: `ZERO_LOG_TRACE_COLLECTOR`
required: `false`
### Number of Sync Workers
The number of processes to use for view syncing. Leave this unset to use the maximum available parallelism. If set to 0, the server runs without sync workers, which is the configuration for running the replication-manager.
flag: `--num-sync-workers`
env: `ZERO_NUM_SYNC_WORKERS`
required: `false`
### Per User Mutation Limit Max
The maximum number of mutations per user within the specified sliding window.
flag: `--per-user-mutation-limit-max`
env: `ZERO_PER_USER_MUTATION_LIMIT_MAX`
required: `false`
### Per User Mutation Limit Window (ms)
The sliding window over which the perUserMutationLimitMax is enforced.
flag: `--per-user-mutation-limit-window-ms`
env: `ZERO_PER_USER_MUTATION_LIMIT_WINDOW_MS`
default: `60000`
### Port
The port for sync connections.
flag: `--port`
env: `ZERO_PORT`
default: `4848`
### Push URL
The URL of the API server to which zero-cache will push mutations. Required if you use [custom mutators](/docs/custom-mutators).
flag: `--push-url`
env: `ZERO_PUSH_URL`
required: `false`
### Query Hydration Stats
Track and log the number of rows considered by each query in the system. This is useful for debugging and performance tuning.
flag: `--query-hydration-stats`
env: `ZERO_QUERY_HYDRATION_STATS`
required: `false`
### Replica Vacuum Interval Hours
Performs a VACUUM at server startup if the specified number of hours has elapsed since the last VACUUM (or initial-sync). The VACUUM operation is heavyweight and requires double the size of the db in disk space. If unspecified, VACUUM operations are not performed.
flag: `--replica-vacuum-interval-hours`
env: `ZERO_REPLICA_VACUUM_INTERVAL_HOURS`
required: `false`
### Server Version
The version string output to logs when the server starts up.
flag: `--server-version`
env: `ZERO_SERVER_VERSION`
required: `false`
### Storage DB Temp Dir
Temporary directory for IVM operator storage. Leave unset to use `os.tmpdir()`.
flag: `--storage-db-tmp-dir`
env: `ZERO_STORAGE_DB_TMP_DIR`
required: `false`
### Target Client Row Count
A soft limit on the number of rows Zero will keep on the client. 20k is a good default value for most applications, and we do not recommend exceeding 100k. See [Client Capacity Management](/docs/reading-data#client-capacity-management) for more details.
flag: `--target-client-row-count`
env: `ZERO_TARGET_CLIENT_ROW_COUNT`
default: `20000`
### Task ID
Globally unique identifier for the zero-cache instance. Setting this to a platform specific task identifier can be useful for debugging. If unspecified, zero-cache will attempt to extract the TaskARN if run from within an AWS ECS container, and otherwise use a random string.
flag: `--task-id`
env: `ZERO_TASK_ID`
required: `false`
### Tenants JSON
JSON encoding of per-tenant configs for running the server in multi-tenant mode:
```ts
{
  /**
   * Requests to the main application port are dispatched to the first tenant
   * with a matching host and path. If both host and path are specified,
   * both must match for the request to be dispatched to that tenant.
   *
   * Requests can also be sent directly to the ZERO_PORT specified
   * in a tenant's env overrides. In this case, no host or path
   * matching is necessary.
   */
  tenants: {
    id: string; // value of the "tid" context key in debug logs
    host?: string; // case-insensitive full Host: header match
    path?: string; // first path component, with or without leading slash

    /**
     * Options are inherited from the main application (e.g. args and ENV) by
     * default, and are overridden by values in the tenant's env object.
     */
    env: {
      ZERO_REPLICA_FILE: string;
      ZERO_UPSTREAM_DB: string;
      ZERO_CVR_DB: string;
      ZERO_CHANGE_DB: string;
      ...
    };
  }[];
}
```
flag: `--tenants-json`
env: `ZERO_TENANTS_JSON`
required: `false`
### Upstream Max Connections
The maximum number of connections to open to the upstream database for committing mutations. This is divided evenly amongst sync workers. In addition to this number, zero-cache uses one connection for the replication stream.
Note that this number must allow for at least one connection per sync worker, or zero-cache will fail to start. See num-sync-workers.
flag: `--upstream-max-conns`
env: `ZERO_UPSTREAM_MAX_CONNS`
default: `20`