Surf Environments

A Surf Environment is a reproducible development sandbox for an AI agent.

It describes the machine an agent works on, the services that run alongside it, and the environment variables that connect them.

The core idea is simple: one service graph, a small set of primitives, and explicit readiness rules.

In Surf, a database container, a dev server, a background worker, and a hosted cloud resource can all participate in the same dependency graph, and they are all orchestrated the same way.

This gives you one place to describe the environment your agent wakes up inside, without splitting setup across Docker Compose, shell scripts, CI steps, and platform-specific configuration.

Surf uses a few primitives for different execution contexts: Cmd.run(...) for workspace processes, Container.from(...) and Container.fromFile(...) for isolated containers, Pulumi.resource(...) for hosted cloud resources, and fromCompose(...) for imported Docker Compose services.

The goal is to make them work together predictably.


Execution model

Before the API details, it helps to be precise about what Surf is orchestrating.

The four service kinds

Surf supports four kinds of services:

| Kind | Runs where | Typical use |
| --- | --- | --- |
| Cmd.run(...) | In the workspace shell | Dev servers, workers, CLIs, notebooks |
| Container.from(...) | In an isolated container | Databases, caches, browsers, local infra |
| Container.fromFile(...) | In an isolated container built from your repo | Custom APIs, internal services, ML backends |
| Pulumi.resource(...) / adapters | In the platform provisioning layer | Hosted databases, queues, buckets, cloud infra |
| fromCompose(...) | In Docker Compose, imported into Surf | Existing services during migration |

These all participate in the same dependency graph, but they do not all expose the same service interfaces.

Machine and services

Each environment has two layers: machine, the workspace the agent runs on, and services, the graph of processes and infrastructure around it. The machine layer looks like this:

machine: {
  specs: {
    cpu: 4,
    memory: "8Gi",
    disk: "40Gi",
    gpu: false,
  },
  runtimes: ["node@20", "python@3.12"],
  setup: [
    "pnpm install",
    "pnpm prisma generate",
  ],
}

Rules:

  1. runtimes are installed via mise before setup runs.
  2. setup commands run once, in order, before any service starts.
  3. workdir defaults to /workspace and applies to the agent and to Cmd.run(...) services.

Shared orchestration model

Every service in Surf follows the same high-level orchestration rules:

  1. It may wait for other services via dependsOn.
  2. It may run service-specific setup hooks before startup.
  3. It starts or provisions.
  4. It passes its readiness check, if one applies.
  5. It may run post-start hooks.
  6. Only then is it considered ready for downstream services.

This is what Surf unifies: ordering, readiness, and typed references.
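The ordering rules above can be sketched with a minimal two-service graph, using the API covered later in this reference (the commands are illustrative):

```typescript
import { Environment, Container, Cmd, services } from "surf";

// "db" must pass its health check and finish its .after() hook
// before "dev" begins its own lifecycle.
export default new Environment("ordering-sketch", {
  machine: { runtimes: ["node@20"], setup: ["pnpm install"] },
  services: services({
    db: Container.from("postgres:16-alpine", { managed: true })
      .after("pnpm prisma migrate deploy"), // runs post-health, pre-ready
    dev: Cmd.run("pnpm run dev", {
      port: 3000,        // readiness: port 3000 must accept connections
      dependsOn: ["db"], // waits for db to be fully ready, not just started
    }),
  }),
});
```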

What svc means

Each service gets a typed entry in svc, such as svc.db or svc.dev.

A svc entry is a typed handle to a service’s outputs. Some outputs, such as host and port, are common; others, such as a managed database’s url and password, are service-specific.

Every service has a typed handle, and Surf makes those handles available when they are safe to use.

Agent-readable environment

Surf environments are not only declarative. Once running, the environment should also be exposed back to the agent through a structured, machine-readable interface.

That read surface should include the set of services, their current readiness state, and the resolved outputs Surf knows about for each.

This is how agents and platform tools should understand the active environment at runtime. The config file defines the environment; the read surface exposes the realized environment.

Readiness is explicit

A service is not considered ready just because it exists in config.

Readiness depends on the service kind: managed containers must pass built-in health checks, services with a declared port must accept connections, Pulumi resources must finish provisioning, and any .after() hooks must exit 0.

Downstream services only begin once their dependencies are ready.

Workspace vs container boundary

Cmd.run(...) services run in the workspace context. They can read the checked-out repo, use installed runtimes, and operate like normal development processes.

Container.*(...) services run in isolated container filesystems. They do not implicitly share the workspace filesystem. If they need configuration, credentials, or connection details, those must be passed explicitly through Surf’s environment model or container build context.

This distinction is important. Surf gives both kinds of services a place in the same graph, while keeping the boundary between them explicit.
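Because containers cannot see the workspace filesystem, connection details must cross the boundary explicitly. A sketch, where the api service and env names are illustrative:

```typescript
import { Environment, Container, services, cred } from "surf";

export default new Environment("boundary-sketch", {
  machine: { runtimes: ["node@20"], setup: ["pnpm install"] },
  services: services({
    db: Container.from("postgres:16-alpine", { managed: true }),
    // The api container cannot read the repo or the workspace env;
    // everything it needs is injected through its env callback.
    api: Container.fromFile("./services/api/Dockerfile", {
      port: 8080,
      dependsOn: ["db"],
      env: ({ svc }) => ({
        DATABASE_URL: svc.db.url,       // crosses the boundary explicitly
        API_SECRET: cred("API_SECRET"), // illustrative credential name
      }),
    }),
  }),
});
```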

Environment variables are injected, not written to disk

Surf resolves environment variables at startup time and injects them into the process or container being started.

That means secrets never need to be written into the working tree, and values are resolved fresh on every boot.

Surf does not rely on writing .env files into the repo as a primary mechanism.

A note on migration features

fromCompose(...) exists to make migration easier.

Imported services participate in the graph, but they generally have weaker typing and fewer guarantees than native managed services. A Compose import is a bridge, not the ideal end state.

Quick start

import { Environment, Container, Cmd, cred, services } from "surf";

export default new Environment("my-agent", {
  machine: {
    runtimes: ["node@20"],
    setup: ["pnpm install"],
  },

  env: () => ({
    OPENAI_API_KEY: cred("OPENAI_KEY"),
    NODE_ENV: "development",
  }),

  services: services({
    db: Container.from("postgres:16-alpine", { managed: true }),

    dev: Cmd.run("pnpm run dev", {
      port: 3000,
      dependsOn: ["db"],
      env: ({ svc }) => ({
        DATABASE_URL: svc.db.url,
      }),
    }).before("pnpm prisma migrate deploy"),
  }),
});

That’s it. The platform clones your repo, provisions the machine, installs Node 20 via mise, runs machine.setup, starts Postgres, runs migrations, starts your dev server, injects env vars into every process, and hands control to the agent.

Full API reference

Environment

The top-level construct. One per surf.config.ts.

new Environment(name: string, config: EnvironmentConfig)
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| machine | MachineConfig | | Workspace compute, toolchains, and one-time setup. |
| services | ServiceMap | {} | All services, containers, and commands. Use the services() helper for type safety. |
| env | () => Record<string, string> or Record<string, string> | | Shared env vars injected into every service. Use this for static values and broadly shared credentials. |
| timeout | string | "10m" | Maximum time for all services to reach “ready” state. If the DAG hasn’t fully resolved in this time, boot fails. Format: "30s", "5m", "1h". |
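Putting the top-level fields together, a sketch with illustrative values:

```typescript
import { Environment, Cmd, services } from "surf";

export default new Environment("top-level-sketch", {
  machine: { runtimes: ["node@20"], setup: ["pnpm install"] },
  env: { NODE_ENV: "development" }, // the static record form also works
  timeout: "15m",                   // boot fails if the DAG isn't ready in 15m
  services: services({
    dev: Cmd.run("pnpm run dev", { port: 3000 }),
  }),
});
```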

machine

The machine defines the workspace substrate that the environment runs on.

machine: {
  specs: {
    cpu: 4,
    memory: "8Gi",
    disk: "40Gi",
    gpu: false,
  },
  runtimes: ["node@20", "python@3.12"],
  setup: [
    "pnpm install",
    "pnpm prisma generate",
  ],
  workdir: "/workspace",
}
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| specs | { cpu?: number, memory?: string, disk?: string, gpu?: boolean } | provider default | Requested compute for the workspace. |
| runtimes | string[] | [] | Toolchains to install via mise. |
| setup | string[] | [] | One-time workspace preparation commands that run before the service DAG. |
| workdir | string | /workspace | Working directory for the agent and Cmd.run(...) services. |

Container

A service that runs as an isolated OCI container with its own filesystem.

Container.from(image, opts?)

Pull an image from a registry and run it.

Container.from("postgres:16-alpine", { managed: true })
Container.from("getmeili/meilisearch:v1.6", { port: 7700 })
| Option | Type | Default | Description |
| --- | --- | --- | --- |
| managed | boolean | false | If true, the platform recognizes this image and provides health checks, typed outputs (.url, .host, .port, .password, .ws, etc.), and richer UI status. |
| port | number | | Exposed port. Required for unmanaged containers. Managed containers infer this from the image. |
| env | Record<string, string> or (ctx: { svc }) => Record<string, string> | {} | Per-service env vars. Merged on top of shared env. When using a callback, svc contains only services listed in dependsOn. |
| dependsOn | K[] | [] | Sibling service keys this container waits for before starting. |
| resources | { cpu?: string, memory?: string } | | Resource limits, e.g. { cpu: "2", memory: "4Gi" }. |

Managed images. Some recognized images expose richer typed outputs and built-in health checks.

For unrecognized images, you get svc.X.host and svc.X.port. You build connection strings manually.

Examples include:

| Image pattern | Typed outputs |
| --- | --- |
| postgres:* | .url, .host, .port, .password, .database, .username |
| redis:*, valkey:* | .url, .host, .port |
| mysql:*, mariadb:* | .url, .host, .port, .password, .database, .username |
| mongo:* | .url, .host, .port |
| browserless/*, mcr.microsoft.com/playwright:* | .ws, .host, .port |
| minio/* | .url, .host, .port, .accessKey, .secretKey |

This list will grow over time.

Container.fromFile(dockerfile, opts?)

Build a container from a local Dockerfile and run it.

Container.fromFile("./services/api/Dockerfile", {
  context: "./services/api",
  port: 8080,
})
| Option | Type | Default | Description |
| --- | --- | --- | --- |
| context | string | parent directory of the Dockerfile | Docker build context directory. |
| port | number | | Exposed port. |
| target | string | | Multi-stage build target. |
| env | Record<string, string> or (ctx: { svc }) => Record<string, string> | {} | Per-service env vars. Merged on top of shared env. |
| dependsOn | K[] | [] | Sibling service keys to wait for. |
| args | Record<string, string> | {} | Build arguments (--build-arg). |
| resources | { cpu?: string, memory?: string } | | Resource limits, when supported by the platform. |

fromFile containers are always unmanaged. When a port is declared, Surf exposes svc.X.host, svc.X.port, and a synthesized svc.X.url of the form http://{host}:{port}.
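Because a port was declared, a downstream service can consume the synthesized url directly. A sketch with illustrative service names:

```typescript
import { Container, Cmd, services } from "surf";

const svcMap = services({
  api: Container.fromFile("./services/api/Dockerfile", {
    context: "./services/api",
    port: 8080,
  }),
  dev: Cmd.run("pnpm run dev", {
    dependsOn: ["api"],
    env: ({ svc }) => ({
      API_URL: svc.api.url, // synthesized http://{host}:{port}
    }),
  }),
});
```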

Lifecycle hooks on containers

Container.from("postgres:16-alpine", { managed: true })
  .before("echo 'about to start postgres'")
  .after(
    "pnpm prisma migrate deploy",
    "pnpm prisma db seed",
  )
| Method | Signature | Description |
| --- | --- | --- |
| .before(...cmds) | (...cmds: string[]) => this | Commands to run before the container starts. Runs in the workspace shell, with access to the repo. Use for generating config files the container needs. |
| .after(...cmds) | (...cmds: string[]) => this | Commands to run after the container is healthy. The service is not “ready” in the DAG until all .after() hooks exit 0. Use for migrations, seeding, index creation. |

Hooks run in order (first .after() arg runs first). Each must exit 0 or boot fails.

Cmd.run(...)

A command that runs in the environment’s shell, sharing the workspace filesystem.

Cmd.run(command, opts?)

Start a long-lived process. Supervised — restarted on crash, killed on environment teardown.

Cmd.run("pnpm run dev", { port: 3000, dependsOn: ["db"] })
Cmd.run("cd docs && bun run dev", { port: 3001 })
Cmd.run("pnpm run worker:dev")
| Option | Type | Default | Description |
| --- | --- | --- | --- |
| port | number | | If set, the platform sets up port forwarding and waits for this port to accept connections before marking the service as “ready”. In the UI, services with ports get clickable URLs. |
| dependsOn | K[] | [] | Sibling service keys to wait for. |
| env | Record<string, string> or (ctx: { svc }) => Record<string, string> | {} | Per-service env vars. Merged on top of shared env. |
| cwd | string | machine.workdir | Override the working directory for this specific command. |
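The cwd option is useful in monorepos, where a process must run from a subdirectory. A sketch with an illustrative path:

```typescript
import { Cmd } from "surf";

// Equivalent in effect to Cmd.run("cd docs && bun run dev", ...) but keeps
// the command clean; cwd overrides machine.workdir for this service only.
const docsDev = Cmd.run("bun run dev", {
  port: 3001,
  cwd: "./docs", // illustrative subdirectory
});
```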

Lifecycle hooks on commands

Cmd.run("pnpm run dev", { port: 3000 })
  .before("pnpm install", "pnpm run codegen")
  .after("curl -s http://localhost:3000/health")
| Method | Signature | Description |
| --- | --- | --- |
| .before(...cmds) | (...cmds: string[]) => this | Commands to run before the process starts. Use for installing dependencies, building, code generation. |
| .after(...cmds) | (...cmds: string[]) => this | Commands to run after the process is up (port responding, if specified). Use for post-startup verification or registration. |

Pulumi.resource(...)

Cloud infrastructure managed by the platform’s built-in Pulumi engine. Resources are created during boot and (depending on lifecycle) destroyed on teardown.

Pulumi.resource(fn)

Raw Pulumi escape hatch. The callback returns a typed object that becomes the service’s interface.

Pulumi.resource(() => {
  const bucket = new aws.s3.Bucket("artifacts", {
    forceDestroy: true,
  });
  return {
    url: bucket.bucket.apply(b => `s3://${b}`),
    region: "us-east-1",
  };
})

The return type of fn becomes the type of svc.X.

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| dependsOn | K[] | [] | Sibling service keys to wait for. |

Provider credentials must already be available to the platform.

fromCompose(...)

Import services from an existing Docker Compose file. Spreads into the services map.

services: services({
  ...fromCompose<"rabbitmq" | "nginx">("./docker-compose.services.yml"),
  dev: Cmd.run("pnpm run dev", { dependsOn: ["rabbitmq"] }),
})

fromCompose<T>(path, opts?)

| Param | Type | Description |
| --- | --- | --- |
| T | string union | Service names from the Compose file. |
| path | string | Path to the Compose file, relative to repo root. |
| opts.only | string[] | Import only selected Compose services. |
| opts.env | Record<string, Record<string, string>> | Override env vars for specific Compose services. |

Compose imports are best treated as migration helpers with fewer guarantees than native Surf services.
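The only and env options can narrow and adjust an import. A sketch; the service and variable names are illustrative:

```typescript
import { fromCompose, services, Cmd } from "surf";

const svcMap = services({
  // Import just two of the Compose services and override one env var.
  ...fromCompose<"postgres" | "redis">("./docker-compose.yml", {
    only: ["postgres", "redis"],
    env: {
      postgres: { POSTGRES_PASSWORD: "dev-only-password" },
    },
  }),
  dev: Cmd.run("pnpm run dev", { dependsOn: ["postgres", "redis"] }),
});
```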

Outputs and service references

Surf exposes each service through svc, a typed collection of service references available inside per-service callbacks and other dependency-aware configuration.

dev: Cmd.run("pnpm run dev", {
  dependsOn: ["db", "cache"],
  env: ({ svc }) => ({
    DATABASE_URL: svc.db.url,
    REDIS_URL: svc.cache.url,
  }),
})

Each svc.<name> entry represents the outputs Surf knows about for that service.

Common outputs

Many service types expose common network-oriented fields:

| Field | Meaning |
| --- | --- |
| host | Hostname or address other services can use to reach the service |
| port | Primary port, when the service exposes one |

These are useful for building connection strings manually when no higher-level field is available.

Service-specific outputs

Some services expose richer outputs in addition to host and port.

Examples: a managed Postgres container exposes .url, .password, .database, and .username; a Pulumi resource exposes whatever its callback returns.

The exact shape of svc.<name> depends on the service type.

When outputs are available

Surf makes service outputs available according to the service lifecycle.

There are two broad categories of outputs: values known before a service starts, such as host and port assignments, and values that only exist once it is running or provisioned, such as generated passwords or provisioned resource URLs.

A callback may only read outputs that are available by the time that callback runs.

In practice, an env callback receives only the services listed in dependsOn, and those services are guaranteed to be ready by the time the callback runs.

This keeps configuration aligned with readiness and avoids references to services that have not finished starting.

Examples

Managed container with rich outputs

services: services({
  db: Container.from("postgres:16-alpine", { managed: true }),
  dev: Cmd.run("pnpm run dev", {
    dependsOn: ["db"],
    env: ({ svc }) => ({
      DATABASE_URL: svc.db.url,
    }),
  }),
}),

Unmanaged container with basic outputs

services: services({
  meili: Container.from("getmeili/meilisearch:v1.6", { port: 7700 }),
  dev: Cmd.run("pnpm run dev", {
    dependsOn: ["meili"],
    env: ({ svc }) => ({
      MEILI_URL: `http://${svc.meili.host}:${svc.meili.port}`,
    }),
  }),
}),

Pulumi resource with custom outputs

services: services({
  bucket: Pulumi.resource(() => {
    const b = new aws.s3.Bucket("artifacts");
    return {
      url: b.bucket.apply(name => `s3://${name}`),
      region: "us-east-1",
    };
  }),
  dev: Cmd.run("pnpm run dev", {
    dependsOn: ["bucket"],
    env: ({ svc }) => ({
      ARTIFACTS_URL: svc.bucket.url,
    }),
  }),
}),

Env resolution and secret scoping

Surf resolves environment variables at startup time and injects them into the service being started.

The env model has two layers:

Per-service env is merged on top of shared env for that service.

Shared env

The top-level env value defines static settings that are broadly useful across the environment.

env: () => ({
  NODE_ENV: "development",
  OPENAI_API_KEY: cred("OPENAI_KEY"),
})

Shared env is a convenient place for values used by multiple application processes when those values do not depend on other services.

Typical examples: NODE_ENV, feature flags, and credentials used by most application processes.

Per-service env

A service can define its own env as either a static record or a callback.

authApi: Container.fromFile("./services/auth/Dockerfile", {
  port: 8081,
  dependsOn: ["db"],
  env: ({ svc }) => ({
    AUTH_DB_URL: svc.db.url,
    AUTH_JWT_SECRET: cred("JWT_SECRET"),
  }),
}),

Per-service env is the right place for values that are specific to one service, especially when they should not be exposed more broadly.

Typical examples: connection strings built from svc outputs, and secrets that only one service should receive, such as a JWT signing key.

Merge order

For any given service, Surf resolves env in this order:

  1. shared env
  2. per-service env
  3. inject the merged result into that service

If the same key appears in both places, the per-service value wins.
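The merge itself behaves like an object spread, shared first and per-service second. A plain TypeScript sketch with illustrative keys, no Surf API involved:

```typescript
// Later spreads win, so a per-service key overrides the shared one.
const shared = { NODE_ENV: "development", LOG_LEVEL: "info" };
const perService = { LOG_LEVEL: "debug" };

const resolved = { ...shared, ...perService };
// resolved.NODE_ENV === "development", resolved.LOG_LEVEL === "debug"
```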

env: {
  NODE_ENV: "development",
},

analytics: Container.fromFile("./services/analytics/Dockerfile", {
  dependsOn: ["analyticsDb"],
  env: ({ svc }) => ({
    DATABASE_URL: svc.analyticsDb.url,
  }),
}),

In this example, analytics receives its own database URL in addition to the shared env.

Scope and availability

Env resolution follows the dependency graph: a service’s env callback runs only after its dependsOn services are ready, and it can read only those services’ outputs.

This keeps env resolution aligned with service readiness.

Credentials

cred("NAME") references a credential stored in the platform vault.

env: () => ({
  OPENAI_API_KEY: cred("OPENAI_KEY"),
}),

Credentials are resolved by the platform at boot time. If a required credential is missing, boot fails with a clear error.

Secret scoping

Shared env is best for static values and broadly shared credentials.

Per-service env is best for secrets and narrowly scoped configuration.

Recommended pattern: keep static, broadly useful values in the shared env, and move each secret into the per-service env of the service that actually uses it.

Example:

env: {
  NODE_ENV: "development",
},

services: services({
  db: Container.from("postgres:16-alpine", { managed: true }),

  dev: Cmd.run("pnpm run dev", {
    dependsOn: ["db"],
    env: ({ svc }) => ({
      DATABASE_URL: svc.db.url,
      OPENAI_API_KEY: cred("OPENAI_KEY"),
    }),
  }),

  syncWorker: Cmd.run("pnpm run worker:sync", {
    dependsOn: ["db"],
    env: ({ svc }) => ({
      DATABASE_URL: svc.db.url,
      GITHUB_TOKEN: cred("GITHUB_TOKEN"),
    }),
  }),
}),

This keeps secrets closer to the services that use them.

What receives env vars

| Service type | How env is received |
| --- | --- |
| Cmd.run | Shell environment for the command and its child processes |
| Container.from / Container.fromFile | Container environment variables at startup |
| .before() / .after() hooks | Inherit the merged env of their parent service |
| Pulumi.resource | Not injected into a runtime process unless the Pulumi provider uses them during provisioning |
| fromCompose services | Passed according to Surf’s compose integration rules |

Surf injects env into processes and containers directly rather than writing .env files into the repository.

cred

function cred(name: string): string

References a credential stored in the platform vault by name. Resolved at boot time. If the credential does not exist, boot fails.

Error: Environment "my-agent" requires credential "OPENAI_KEY"
which is not set in the platform vault.

Set it at: https://app.surf.dev/settings/credentials

Credentials are managed entirely in the platform UI. The config file only references them by name.

services helper

function services<const T extends Record<string, ServiceBuilder<string>>>(
  defs: { [K in keyof T]: ServiceBuilder<Extract<keyof T, string>> }
): ResolvedServices<T>

Wraps the services object to enable type-safe dependsOn. TypeScript infers all keys from the object, forms a union, and constrains every dependsOn field to that union.

services({
  db: Container.from("postgres:16-alpine", { managed: true }),
  cache: Container.from("redis:7-alpine", { managed: true }),
  dev: Cmd.run("pnpm run dev", {
    dependsOn: ["db", "cache"],  // ✓ autocomplete works
    //          ["db", "oops"]   // ✗ Type error: "oops" is not assignable
  }),
})

You don’t have to use the helper. If you pass a plain object instead, dependsOn accepts string[]: you lose autocomplete, but everything still works. Boot-time validation catches invalid references either way.

Service context

Services can also carry stable agent-facing context through a chainable .context(...) builder.

frontend: Cmd.run("pnpm run dev", { port: 3000 })
  .context(
    docs("./docs/frontend-testing.md"),
    login({
      username: cred("E2E_USER"),
      password: cred("E2E_PASS"),
    }),
  )

authApi: Container.fromFile("./services/auth/Dockerfile", { port: 8081 })
  .context(
    openapi("/openapi.json"),
    auth({
      headers: {
        Authorization: `Bearer ${cred("INTERNAL_API_TOKEN")}`,
      },
    }),
    docs("./services/auth/README.md"),
  )

.context(...) is persistent service metadata. It does not affect boot, readiness, env resolution, or dependency resolution.

Supported descriptors include docs(...), openapi(...), login(...), and auth(...), as shown above.

Rules: context is metadata only. It is attached to the service for the life of the environment and surfaced to the agent through the environment’s read surface, never used for orchestration.

Hosted services

Hosted databases, caches, queues, and other cloud resources can also appear in services.

Surf supports this in two ways: through the raw Pulumi.resource(...) escape hatch, and through higher-level adapters for hosted providers.

Both approaches expose typed outputs through svc.<name> and participate in the same dependency graph as local services.

services: services({
  db: SomeHostedPostgres(),
  cache: SomeHostedRedis(),
  dev: Cmd.run("pnpm run dev", {
    dependsOn: ["db", "cache"],
    env: ({ svc }) => ({
      DATABASE_URL: svc.db.url,
      REDIS_URL: svc.cache.url,
    }),
  }),
}),

The core environment model does not require you to know how a hosted service is named or provisioned internally. It only needs the service to produce typed outputs that downstream services can consume.

Service lifecycle

Every service in the services map moves through the same broad phases:

  1. wait for declared dependencies
  2. resolve the env available to this service
  3. run any .before() hooks
  4. start or provision the service
  5. check readiness
  6. run any .after() hooks
  7. mark the service as ready for downstream services

The details depend on the service kind, but the readiness contract is the same: downstream services do not begin until upstream services are ready.

Container lifecycle

  1. Wait for dependsOn — all referenced services must already be ready.
  2. Resolve env — build the merged env for this container from shared env plus per-service env.
  3. Run .before() hooks — sequentially, in the workspace context, using the env resolved for this service.
  4. Start container — pull or build the image, inject env, and start the container.
  5. Check readiness — managed containers use engine-specific health checks; other containers use the configured port or other platform-observable readiness signals.
  6. Run .after() hooks — sequentially, using the same env as the parent service.
  7. Ready — downstream services may now start.

Cmd.run(...) lifecycle

  1. Wait for dependsOn — all referenced services must already be ready.
  2. Resolve env — build the merged env for this command from shared env plus per-service env.
  3. Run .before() hooks — sequentially, in the workspace context, using the env resolved for this service.
  4. Start process — run the command in the workspace with the resolved env.
  5. Check readiness — if port is specified, wait for that port to respond; otherwise the process is considered started once the command is running.
  6. Run .after() hooks — sequentially, using the same env as the parent service.
  7. Ready — downstream services may now start.

Pulumi.resource(...) lifecycle

  1. Wait for dependsOn — all referenced services must already be ready.
  2. Resolve inputs — gather any values this resource needs from credentials and dependency outputs.
  3. Run .before() hooks — sequentially, if configured.
  4. Provision — run the Pulumi program.
  5. Collect outputs — make the resource outputs available through svc.<name>.
  6. Run .after() hooks — sequentially, if configured.
  7. Ready — downstream services may now start.

fromCompose(...) service lifecycle

Imported Compose services participate in the Surf graph, but with weaker typing and fewer guarantees than native service definitions.

In general, Surf can:

  1. wait for declared dependencies
  2. apply configured env overrides
  3. start the imported Compose service
  4. observe readiness using the signals available for that imported service
  5. mark it ready once those checks pass

Compose imports are best treated as migration bridges rather than the strongest execution model Surf provides.

Dependency resolution

Surf builds a directed acyclic graph from service definitions and their dependsOn relationships.

The graph is defined at the service level. Lifecycle hooks such as .before() and .after() affect when a service becomes ready, but they are not separate graph nodes.

The dependency algorithm is:

  1. Collect service keys — read all entries in the services map.
  2. Collect dependency edges — read each service’s dependsOn array.
  3. Validate references — ensure every referenced service exists.
  4. Detect cycles — fail if any dependency chain loops back on itself.
  5. Schedule execution — start services as soon as all of their dependencies are ready.
  6. Propagate readiness — once a service completes its lifecycle and becomes ready, downstream services may proceed.

This gives Surf two useful properties: maximal parallelism, because services start as soon as their dependencies are ready, and fail-fast validation, because cycles and missing references are caught before anything starts.

Cycle detection

A cycle means there is no valid startup order.

Example:

api -> worker -> db-migrator -> api

Surf fails fast with a clear error rather than attempting partial startup.
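A config that would trip this check might look like the following sketch (the commands are illustrative):

```typescript
import { Cmd, services } from "surf";

// api -> worker -> dbMigrator -> api: no valid startup order exists,
// so validation fails before any service starts.
const broken = services({
  api: Cmd.run("pnpm run api", { dependsOn: ["worker"] }),
  worker: Cmd.run("pnpm run worker", { dependsOn: ["dbMigrator"] }),
  dbMigrator: Cmd.run("pnpm run migrate", { dependsOn: ["api"] }),
});
```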

Missing reference detection

If a service declares a dependency that does not exist, Surf fails during validation before starting any services.

Example:

Service "dev" depends on "database", but no service named "database" exists.
Available services: db, cache, browser.

Readiness vs dependency

A dependency edge does not just mean “start earlier.” It means “be ready before the dependent service begins.”

That distinction matters because readiness may include health checks passing, a declared port accepting connections, .after() hooks such as migrations completing, and cloud provisioning finishing.

A service with dependsOn: ["db"] waits for the database to be ready, not merely created.

Boot sequence

The full boot sequence for an environment:

  1. Clone repo — the platform clones the connected repository into machine.workdir.
  2. Prepare the machine — Surf provisions the workspace with the requested compute and default runtime support.
  3. Install toolchains — Surf installs the declared runtimes via mise and makes them available to workspace processes.
  4. Run machine.setup — one-time workspace preparation completes before any service starts.
  5. Resolve credentials — required cred() references are checked against the platform vault.
  6. Build the dependency graph — Surf validates service names, dependsOn edges, and cycles.
  7. Start services through the DAG — each service runs through its lifecycle: wait for dependencies, resolve env, run .before(), start or provision, check readiness, run .after(), mark ready.
  8. Enter ready state — once all required services are ready, the environment is considered ready.
  9. Agent enters — the agent starts in machine.workdir with the environment fully prepared.

Boot fails if any required credential is missing, any hook exits non-zero, any readiness check times out, or the dependency graph cannot be resolved.

Examples

Minimal — static site agent

import { Environment, Cmd, cred, services } from "surf";

export default new Environment("docs-agent", {
  machine: {
    runtimes: ["node@20"],
    setup: ["pnpm install"],
  },

  services: services({
    dev: Cmd.run("pnpm run dev", { port: 3000 }),
  }),

  env: () => ({
    NODE_ENV: "development",
    OPENAI_API_KEY: cred("OPENAI_KEY"),
  }),
});

Standard — Node.js fullstack

import { Environment, Container, Cmd, cred, services } from "surf";

export default new Environment("fullstack-agent", {
  machine: {
    runtimes: ["node@20"],
    setup: ["pnpm install"],
  },

  env: {
    NODE_ENV: "development",
  },

  services: services({
    db: Container.from("postgres:16-alpine", { managed: true }),

    cache: Container.from("redis:7-alpine", { managed: true }),

    dev: Cmd.run("pnpm run dev", {
      port: 3000,
      dependsOn: ["db", "cache"],
      env: ({ svc }) => ({
        DATABASE_URL: svc.db.url,
        REDIS_URL: svc.cache.url,
      }),
    }).before(
      "pnpm prisma migrate deploy",
      "pnpm prisma db seed",
    ),

    worker: Cmd.run("pnpm run worker:dev", {
      dependsOn: ["db", "cache"],
      env: ({ svc }) => ({
        DATABASE_URL: svc.db.url,
        REDIS_URL: svc.cache.url,
      }),
    }),
  }),
});

Migrating from Docker Compose

import { Environment, Cmd, services, fromCompose } from "surf";

export default new Environment("legacy-app", {
  machine: {
    runtimes: ["node@18"],
    setup: ["pnpm install"],
  },

  env: {
    NODE_ENV: "development",
  },

  services: services({
    ...fromCompose<"postgres" | "redis" | "rabbitmq" | "nginx">(
      "./docker-compose.yml"
    ),

    dev: Cmd.run("pnpm run dev", {
      port: 3000,
      dependsOn: ["postgres", "redis", "rabbitmq"],
      env: ({ svc }) => ({
        DATABASE_URL: `postgresql://user:pass@${svc.postgres.host}:${svc.postgres.port}/mydb`,
        REDIS_URL: `redis://${svc.redis.host}:${svc.redis.port}`,
        AMQP_URL: `amqp://${svc.rabbitmq.host}:${svc.rabbitmq.port}`,
      }),
    }),
  }),
});

Service discovery

Surf provides service discovery in two forms: typed svc references inside configuration, and DNS hostnames on the environment’s internal network.

svc references

svc.<name> is the primary discovery mechanism inside Surf configuration.

Use svc when you need typed outputs: connection URLs, generated credentials, or any value that is only known after a service starts or provisions.

Example:

worker: Cmd.run("pnpm run worker", {
  dependsOn: ["db", "rabbitmq"],
  env: ({ svc }) => ({
    DATABASE_URL: svc.db.url,
    AMQP_URL: `amqp://${svc.rabbitmq.host}:${svc.rabbitmq.port}`,
  }),
})

DNS hostnames

Networked services may also be reachable by service key as a hostname within the environment’s internal network.

Examples:

| Service key | Hostname |
| --- | --- |
| db | db |
| cache | cache |
| authApi | authApi |

This is useful for service-to-service communication when hostname-plus-port is sufficient.
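When hostname-plus-port is enough, a service can reference a peer’s key as a hostname instead of threading svc outputs through a callback. A sketch with illustrative names:

```typescript
import { Container, Cmd, services } from "surf";

const svcMap = services({
  db: Container.from("postgres:16-alpine", { managed: true }),
  // Inside the environment's network, the service key "db" resolves as a hostname.
  worker: Cmd.run("pnpm run worker", {
    dependsOn: ["db"],
    env: {
      DB_HOST: "db",   // service key used as a DNS name
      DB_PORT: "5432", // well-known Postgres port
    },
  }),
});
```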

When to prefer each approach

Use svc in configuration and DNS hostnames at runtime when hostname-plus-port is enough.

Compose imports and discovery

Services imported through fromCompose(...) can participate in internal discovery, but they expose fewer guarantees than native Surf services.

Error handling

The platform provides clear errors at every stage of boot:

| Error | When | Message |
| --- | --- | --- |
| Missing credential | Credential resolution | Environment "X" requires credential "Y" which is not set. |
| Cycle detected | DAG validation | Circular dependency: a → b → c → a |
| Missing dependency | DAG validation | Service "dev" depends on "database", but it doesn't exist. Did you mean "db"? |
| Hook failure | Before/after hooks | Service "db" after hook failed (exit 1): "pnpm prisma migrate deploy". Logs: ... |
| Health timeout | Container health | Service "db" failed health check after 60s. Container logs: ... |
| Port timeout | Cmd port check | Service "dev" port 3000 not responding after 30s. Process logs: ... |
| Compose mismatch | fromCompose boot | Compose file has no service "rabbitmq". Available: postgres, redis, nginx. |
| Pulumi failure | Cloud provisioning | Pulumi stack "X" failed: ... (full Pulumi error) |
| Boot timeout | Global | Environment "X" did not reach ready state within 15m. Stuck services: migrate (waiting for db). |
| Env resolution failure | Env callback | Shared env callback failed: svc.db.url is not available. Is "db" in services? |