#Database Migration Cookbook

Version: 0.36.0 | Updated: 2026-03-20 | Applies to: ranvier-runtime 0.36+ | Category: Cookbook


#Overview

Ranvier does not embed a migration framework -- it delegates database schema management to ecosystem tools. This cookbook covers practical integration patterns: sqlx and refinery migrations, Docker Compose startup ordering, and CI/CD pipeline configuration.


#1. Choosing a Migration Tool

| Tool | Approach | Pros | Cons |
| --- | --- | --- | --- |
| sqlx-cli | SQL files + CLI | Compile-time query validation, async native | Requires sqlx-cli installed |
| refinery | Rust embedded | Runs from application binary, no external tooling | No compile-time query checks |
| diesel_migrations | Rust embedded | Mature, CLI available | Sync-only, diesel dependency |

Recommendation: Use sqlx-cli for projects that already use sqlx (Ranvier's primary DB crate). Use refinery when you want self-contained migrations without external tooling.


#2. sqlx Migration Setup

#Directory Structure

my-app/
β”œβ”€β”€ Cargo.toml
β”œβ”€β”€ migrations/
β”‚   β”œβ”€β”€ 20260101_000001_create_users.sql
β”‚   β”œβ”€β”€ 20260101_000002_create_orders.sql
β”‚   └── 20260115_000001_add_order_status.sql
└── src/
    └── main.rs

#Migration File

-- migrations/20260101_000001_create_users.sql

CREATE TABLE IF NOT EXISTS users (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    email TEXT NOT NULL UNIQUE,
    name TEXT NOT NULL,
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

-- Note: the UNIQUE constraint on email already creates an index,
-- so a separate CREATE INDEX on email is unnecessary.

#Running Migrations

# Install sqlx-cli
cargo install sqlx-cli --no-default-features --features postgres

# Run pending migrations
DATABASE_URL=postgres://user:pass@localhost:5432/mydb sqlx migrate run

# Check migration status
sqlx migrate info

#Embedding in Application Startup

use sqlx::PgPool;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let pool = PgPool::connect(&std::env::var("DATABASE_URL")?).await?;

    // Run migrations before starting the HTTP server
    sqlx::migrate!("./migrations")
        .run(&pool)
        .await?;

    tracing::info!("migrations complete");

    Ranvier::http::<AppResources>()
        .bind("0.0.0.0:8080")
        .get("/api/users", list_users)
        .run(AppResources { db: pool })
        .await?;

    Ok(())
}

#3. refinery Migration Setup

#Cargo.toml

[dependencies]
refinery = { version = "0.8", features = ["tokio-postgres"] }
tokio-postgres = "0.7"

#Migration Files

migrations/
β”œβ”€β”€ V1__create_users.sql
β”œβ”€β”€ V2__create_orders.sql
└── V3__add_audit_tables.sql

#Embedding in Application

use refinery::config::Config;

mod embedded {
    use refinery::embed_migrations;
    embed_migrations!("./migrations");
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let mut config = Config::from_env_var("DATABASE_URL")?;
    embedded::migrations::runner().run_async(&mut config).await?;
    tracing::info!("refinery migrations complete");

    // Start Ranvier server...
    Ok(())
}

#4. Ranvier Persistence Tables

If you use PostgresPersistenceStore for saga checkpointing, include its schema in your migration set:

-- migrations/20260101_000003_ranvier_persistence.sql

CREATE TABLE IF NOT EXISTS ranvier_checkpoints (
    pipeline_id TEXT NOT NULL,
    execution_id UUID NOT NULL,
    node_index INT NOT NULL,
    state_json JSONB NOT NULL,
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    PRIMARY KEY (execution_id, node_index)
);

CREATE INDEX idx_checkpoints_pipeline ON ranvier_checkpoints(pipeline_id);

If you use PostgresAuditSink, include:

-- migrations/20260101_000004_ranvier_audit.sql

CREATE TABLE IF NOT EXISTS ranvier_audit_events (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    pipeline_id TEXT NOT NULL,
    execution_id UUID NOT NULL,
    node_label TEXT NOT NULL,
    event_type TEXT NOT NULL,
    payload JSONB,
    hash TEXT NOT NULL,
    prev_hash TEXT,
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

CREATE INDEX idx_audit_pipeline ON ranvier_audit_events(pipeline_id, execution_id);
CREATE INDEX idx_audit_created ON ranvier_audit_events(created_at);

#5. Docker Compose Startup Ordering

Ensure the database is ready before the application starts:

# docker/compose.yml
services:
  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: appuser
      POSTGRES_PASSWORD: secret
    ports:
      - "5432:5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U appuser -d myapp"]
      interval: 5s
      timeout: 3s
      retries: 10
    volumes:
      - pgdata:/var/lib/postgresql/data

  app:
    build:
      context: .
      dockerfile: Containerfile
    depends_on:
      postgres:
        condition: service_healthy
    environment:
      DATABASE_URL: postgres://appuser:secret@postgres:5432/myapp
      RUST_LOG: info

volumes:
  pgdata:

Key: Use depends_on with condition: service_healthy so the app container waits for PostgreSQL to be ready, not just started.

#Application-Level Retry

Even with healthchecks, add a connection retry loop in your application:

use sqlx::PgPool;

async fn connect_with_retry(url: &str, max_attempts: u32) -> PgPool {
    for attempt in 1..=max_attempts {
        match PgPool::connect(url).await {
            Ok(pool) => return pool,
            Err(e) => {
                tracing::warn!("DB connection attempt {}/{}: {}", attempt, max_attempts, e);
                tokio::time::sleep(std::time::Duration::from_secs(2)).await;
            }
        }
    }
    panic!("Failed to connect to database after {} attempts", max_attempts);
}

#6. CI/CD Pipeline

#GitHub Actions Example

name: Test with Migrations
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:16-alpine
        env:
          POSTGRES_DB: testdb
          POSTGRES_USER: testuser
          POSTGRES_PASSWORD: testpass
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5

    env:
      DATABASE_URL: postgres://testuser:testpass@localhost:5432/testdb

    steps:
      - uses: actions/checkout@v4
      - uses: dtolnay/rust-toolchain@stable

      - name: Install sqlx-cli
        run: cargo install sqlx-cli --no-default-features --features postgres

      - name: Run migrations
        run: sqlx migrate run

      - name: Run tests
        run: cargo test --workspace

#7. Migration Best Practices

#Naming Convention

Use timestamp-prefixed names for unambiguous ordering:

YYYYMMDD_NNNNNN_description.sql
20260320_000001_create_users.sql
20260320_000002_create_orders.sql

#Idempotent Migrations

Use IF NOT EXISTS / IF EXISTS guards:

CREATE TABLE IF NOT EXISTS users ( ... );
CREATE INDEX IF NOT EXISTS idx_users_email ON users(email);
ALTER TABLE orders ADD COLUMN IF NOT EXISTS status TEXT DEFAULT 'pending';

#Separate Application and Ranvier Schemas

Keep Ranvier infrastructure tables (checkpoints, audit) in a dedicated schema or with a ranvier_ prefix to avoid namespace collisions:

CREATE SCHEMA IF NOT EXISTS ranvier;
CREATE TABLE ranvier.checkpoints ( ... );
CREATE TABLE ranvier.audit_events ( ... );

#Never Modify Existing Migrations

Once a migration has been committed, treat it as immutable. Always create new migrations for schema changes, even if they modify tables from earlier migrations.


#See Also

  • Persistence Ops Runbook -- checkpoint and recovery
  • Deployment Guide -- Docker and Kubernetes deployment
  • Saga Compensation Cookbook -- saga with persistence