# @ayronforge/envil — Full Documentation > Type-safe environment variables powered by Effect Schema. ============================================================ ## Getting Started — Introduction ============================================================ `@ayronforge/envil` is a type-safe environment variable library built on [Effect Schema](https://effect.website/docs/schema/introduction). It validates your env vars at startup, gives you full TypeScript inference, and prevents client-side access to server secrets. ## Why envil? Environment variables are stringly-typed by default. A missing `DATABASE_URL` or a malformed `PORT` shouldn't crash your app deep in a request handler — it should fail **immediately** at startup with a clear error message. **envil** gives you: - **Runtime validation** — every env var is parsed through Effect Schema at startup - **Full type inference** — your `env` object is fully typed from your schema definitions, no manual annotations needed - **Client/server separation** — define `server`, `client`, and `shared` buckets; accessing a server variable on the client throws at runtime - **Framework presets** — pre-configured prefix rules for Next.js, Vite, and Expo - **Cloud secret resolution** — pull env vars from AWS Secrets Manager, GCP, Azure Key Vault, or 1Password - **Composable configs** — split env definitions across modules and merge them with `extends` ## Quick example ```ts import { createEnv, port, requiredString, boolean } from "@ayronforge/envil" import { Schema } from "effect" export const env = createEnv({ server: { DATABASE_URL: requiredString, PORT: port, DEBUG: boolean, }, client: { NEXT_PUBLIC_API_URL: requiredString, }, shared: { NODE_ENV: Schema.Literal("development", "production", "test"), }, }) // Fully typed: env.DATABASE_URL is string, env.PORT is number console.log(env.DATABASE_URL) ``` If any variable is missing or invalid, you get a clear error at startup: ``` EnvValidationError: Invalid environment variables: 
DATABASE_URL: Expected a string with a length of at least 1, but got undefined PORT: Expected Port (1-65535), but got "not-a-number" ``` - **Quickstart**: Install envil and create your first type-safe env config in minutes. - **Core Concepts**: Learn about validation, client/server separation, and the proxy model. - **Built-in Schemas**: Explore all the ready-to-use schemas for strings, numbers, URLs, and more. - **Resolvers**: Pull secrets from AWS, GCP, Azure, or 1Password at startup. - **Safe Parsing**: Validate env vars without throwing — get typed result objects instead. ## Acknowledgements This project is heavily inspired by [T3 Env](https://env.t3.gg) by [T3 OSS](https://github.com/t3-oss/t3-env). Thanks to the T3 team for their work and contributions to open source. ============================================================ ## Getting Started — Quickstart ============================================================ ## Prerequisites - **Node.js 18+** (or Bun / Deno) - **ESM only** — this package does not ship CommonJS builds - [**effect**](https://effect.website) **^3.19.11** as a peer dependency ## Installation ```bash npm install @ayronforge/envil effect ``` ## Step 1: Create your env config Create a file called `env.ts` (or `env.mjs`) at the root of your project: ```ts // env.ts import { createEnv, requiredString, port } from "@ayronforge/envil" import { Schema } from "effect" export const env = createEnv({ server: { DATABASE_URL: requiredString, API_SECRET: Schema.Redacted(Schema.String), PORT: port, }, client: { NEXT_PUBLIC_APP_URL: requiredString, }, shared: { NODE_ENV: Schema.Literal("development", "production", "test"), }, }) ``` ## Step 2: Use your env Import the `env` object anywhere in your app. 
It's fully typed: ```ts import { env } from "./env" // TypeScript knows: // env.DATABASE_URL → string // env.API_SECRET → Redacted<string> // env.PORT → number // env.NODE_ENV → "development" | "production" | "test" console.log(`Server running on port ${env.PORT}`) ``` ## Step 3: Set your environment variables Create a `.env` file or set variables in your hosting provider: ```bash DATABASE_URL=postgresql://user:pass@localhost:5432/mydb API_SECRET=super-secret-key PORT=3000 NEXT_PUBLIC_APP_URL=http://localhost:3000 NODE_ENV=development ``` ## What happens on invalid env? If any variable is missing or fails validation, `createEnv` throws an `EnvValidationError` immediately: ``` EnvValidationError: Invalid environment variables: DATABASE_URL: Expected a string with a length of at least 1, but got undefined PORT: Expected Port (1-65535), but got "abc" ``` This crashes your app at startup — before any request is handled — so you catch issues immediately. > **Framework presets** > Using Next.js, Vite, or Expo? Check out [Framework Presets](/envil/docs/framework-presets) for pre-configured prefix rules so you don't have to configure `NEXT_PUBLIC_` or `VITE_` prefixes manually. ## Next steps - Learn about [Core Concepts](/envil/docs/core-concepts) like client/server separation and the proxy model - Explore the [Built-in Schemas](/envil/docs/schemas) for URLs, numbers, booleans, and more - Set up [Cloud Secret Resolution](/envil/docs/resolvers) for production ============================================================ ## Getting Started — Core Concepts ============================================================ ## Validation flow When you call `createEnv()`, the following happens: 1. **Environment source** — reads from `process.env` (or your `runtimeEnv` override) 2. **Empty string handling** — if `emptyStringAsUndefined` is enabled, empty strings become `undefined` 3. **Prefix resolution** — prepends the configured prefix to each key when reading the env 4.
**Schema parsing** — each variable is decoded through its Effect Schema 5. **Error collection** — all validation errors are collected (not fail-fast) and thrown together 6. **Proxy creation** — the validated result is wrapped in a Proxy that enforces client/server access rules If any variable fails validation, `createEnv` throws an `EnvValidationError` containing all failures at once, so you can fix them all in one pass. ## Client/server separation The `server`, `client`, and `shared` buckets control where variables can be accessed: | Bucket | Validated on server? | Validated on client? | Accessible on client? | | -------- | -------------------- | -------------------- | ------------------------------- | | `server` | Yes | No | No — throws `ClientAccessError` | | `client` | Yes | Yes | Yes | | `shared` | Yes | Yes | Yes | **How it works:** The returned object is a `Proxy`. When running on the client (detected via `typeof window !== "undefined"`), accessing a key defined only in `server` throws a `ClientAccessError`. ```ts const env = createEnv({ server: { DATABASE_URL: requiredString, // Only accessible on server }, client: { NEXT_PUBLIC_API_URL: requiredString, // Accessible everywhere }, }) // On the client: env.NEXT_PUBLIC_API_URL // ✅ works env.DATABASE_URL // ❌ throws ClientAccessError ``` > Server-only variables are **not validated** on the client to avoid requiring their values to be bundled. They are simply blocked from access. ### Overriding server detection By default, `isServer` is `typeof window === "undefined"`. You can override this: ```ts const env = createEnv({ isServer: process.env.NEXT_RUNTIME !== "edge", // ... }) ``` ## Prefix handling Prefixes map your schema keys to actual environment variable names. 
There are two formats: ### String prefix Applies the same prefix to all buckets: ```ts createEnv({ prefix: "MYAPP_", server: { DB_URL: requiredString }, // reads MYAPP_DB_URL from env }) ``` ### Prefix map Different prefixes per bucket: ```ts createEnv({ prefix: { client: "NEXT_PUBLIC_", server: "", // no prefix }, server: { DB_URL: requiredString }, // reads DB_URL client: { APP_URL: requiredString }, // reads NEXT_PUBLIC_APP_URL }) ``` This is what [framework presets](/envil/docs/framework-presets) configure for you. ## Redacted values When you use `Schema.Redacted(Schema.String)` (or the `redacted` helper), the value is wrapped in Effect's `Redacted` type during validation. This means the value is safe to log, serialize, and spread — secrets never leak accidentally. ```ts import { redacted } from "@ayronforge/envil" import { Redacted, Schema } from "effect" const env = createEnv({ server: { API_SECRET: redacted(Schema.String), }, }) // env.API_SECRET is Redacted<string> — safe to log, serialize, spread console.log(env.API_SECRET) // JSON.stringify(env) // {"API_SECRET":"<redacted>"} // Explicitly unwrap when you need the plain value const secret: string = Redacted.value(env.API_SECRET) ``` ## Empty string handling Some hosting providers set environment variables to empty strings instead of leaving them undefined. Enable `emptyStringAsUndefined` to treat them as missing: ```ts createEnv({ emptyStringAsUndefined: true, server: { OPTIONAL_VAR: optional(Schema.String), }, }) ``` ## Runtime env override By default, `createEnv` reads from `process.env`. You can provide a custom source: ```ts createEnv({ runtimeEnv: { DATABASE_URL: "postgresql://...", PORT: "3000", }, server: { DATABASE_URL: requiredString, PORT: port, }, }) ``` This is useful for testing or for frameworks like Vite where `import.meta.env` is the source.
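To make the mechanics above concrete, here is a plain-TypeScript sketch of three steps of the validation flow — empty-string normalization, prefix resolution, and the client-blocking Proxy. This is an illustration only, not envil's actual internals; the helper names `readVar` and `withClientGuard` are invented for the sketch:

```typescript
// Illustration only — not envil's real implementation.

// Empty-string handling + prefix resolution: look up `prefix + key`
// and optionally treat "" as missing.
function readVar(
  raw: Record<string, string | undefined>,
  key: string,
  prefix: string,
  emptyStringAsUndefined: boolean,
): string | undefined {
  const value = raw[prefix + key]
  return emptyStringAsUndefined && value === "" ? undefined : value
}

// Proxy creation: block server-only keys when not running on the server.
function withClientGuard<T extends Record<string, unknown>>(
  values: T,
  serverKeys: ReadonlySet<string>,
  isServer: boolean,
): T {
  return new Proxy(values, {
    get(target, prop) {
      if (!isServer && typeof prop === "string" && serverKeys.has(prop)) {
        throw new Error(`ClientAccessError: "${prop}" is server-only`)
      }
      return target[prop as keyof T]
    },
  })
}

const raw = { MYAPP_DB_URL: "postgres://localhost/app", MYAPP_DEBUG: "" }
readVar(raw, "DB_URL", "MYAPP_", true) // "postgres://localhost/app"
readVar(raw, "DEBUG", "MYAPP_", true)  // undefined — empty string treated as missing

const env = withClientGuard(
  { DB_URL: "postgres://localhost/app", APP_URL: "https://example.com" },
  new Set(["DB_URL"]),
  false, // pretend we are on the client
)
env.APP_URL // ok
// env.DB_URL would throw ClientAccessError
```

The real library layers schema decoding and error collection on top of this, but the prefix lookup and Proxy guard shown here are the essence of steps 2, 3, and 6.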
## Testing Combine `runtimeEnv` with `isServer` to write deterministic tests without touching real environment variables: ```ts import { createEnv, requiredString, postgresUrl, port, withDefault } from "@ayronforge/envil" import { expect, test } from "vitest" test("env parses correctly", () => { const env = createEnv({ server: { DATABASE_URL: postgresUrl, PORT: withDefault(port, 3000), }, runtimeEnv: { DATABASE_URL: "postgresql://user:pass@localhost:5432/testdb", }, isServer: true, // Force server mode so server vars are validated }) expect(env.DATABASE_URL).toBe("postgresql://user:pass@localhost:5432/testdb") expect(env.PORT).toBe(3000) }) ``` > Setting `isServer: true` ensures server-only variables are validated and accessible, even if your test runner runs in an environment where `typeof window !== "undefined"` (e.g. jsdom). ## Validation error callback You can hook into validation errors before the exception is thrown: ```ts createEnv({ onValidationError: (errors) => { // Log to your error tracking service Sentry.captureMessage("Env validation failed", { extra: { errors } }) }, server: { DATABASE_URL: requiredString, }, }) ``` > The `onValidationError` callback fires **before** the `EnvValidationError` is thrown. The error is still thrown after the callback runs. ============================================================ ## CLI — Envil CLI ============================================================ The `envil` CLI provides a two-way workflow between `.env.example` files and type-safe `env.ts` definitions: - **`envil add env`** reads a `.env.example`, infers schemas from values, and writes an `env.ts` file. - **`envil add example`** imports your `env.ts`, introspects the schemas, and regenerates `.env.example`. No manifest or metadata comments are needed in the generated code. The schemas themselves carry all the information required for round-tripping. ## `envil add env` Generates an `env.ts` file from a `.env.example`. 
```bash envil add env [options] ``` **Options:** | Flag | Description | Default | | ------------------------- | --------------------------------------------------------------------- | -------------------------------------------- | | `--input <path>` | Input `.env.example` path | `.env.example` | | `--output <path>` | Output `env.ts` path | `src/env.ts` if `src/` exists, else `env.ts` | | `--framework <name>` | Prefix preset: `nextjs`, `vite`, `expo`, `nuxt`, `sveltekit`, `astro` | | | `--client-prefix <prefix>` | Client prefix override | | | `--server-prefix <prefix>` | Server prefix override | | | `--shared-prefix <prefix>` | Shared prefix override | | | `--force` | Overwrite output file if it exists | | ### How inference works Every assigned value in `.env.example` is used for two things: 1. **Type inference** — the value determines which schema to use (e.g. `3000` on a `PORT` key becomes `port`, `true` becomes `boolean`, a URL becomes `url`). 2. **Default value** — the value is wrapped in `withDefault(schema, value)` in the generated code, so the variable is optional at runtime.
For example, this `.env.example`: ```bash # @server PORT=3000 DATABASE_URL=postgres://user:pass@localhost:5432/app # @client NEXT_PUBLIC_API_URL=https://example.com ``` Generates: ```ts import { createEnv, postgresUrl, port, url, withDefault } from "@ayronforge/envil"; export const envDefinition = { prefix: { server: "", client: "", shared: "", }, server: { DATABASE_URL: withDefault(postgresUrl, "postgres://user:pass@localhost:5432/app"), PORT: withDefault(port, 3000), }, client: { NEXT_PUBLIC_API_URL: withDefault(url, "https://example.com"), }, shared: { }, } as const; export const env = createEnv(envDefinition); ``` ### Inference rules | Value pattern | Inferred schema | | ----------------------------------- | ----------------------- | | `true`, `false`, `1`, `0` | `boolean` | | Integer on a `*PORT*` key (1-65535) | `port` | | Integer | `integer` | | Decimal number | `number` | | `http://` or `https://` URL | `url` | | `postgres://` or `postgresql://` | `postgresUrl` | | `redis://` or `rediss://` | `redisUrl` | | `mongodb://` or `mongodb+srv://` | `mongoUrl` | | `mysql://` or `mysqls://` | `mysqlUrl` | | Comma-separated values | `commaSeparated` | | Comma-separated numbers | `commaSeparatedNumbers` | | Comma-separated URLs | `commaSeparatedUrls` | | JSON object or array | `json` | | Anything else | `requiredString` | You can override inference using [directives](/envil/docs/directives). ## `envil add example` Regenerates a `.env.example` from an existing `env.ts` file using schema introspection. It dynamically imports your `env.ts`, reads the `envDefinition` export, and walks the Effect Schema AST to extract type information, defaults, and wrapper metadata. 
```bash envil add example [options] ``` **Options:** | Flag | Description | Default | | ----------------- | ---------------------------------- | ------------------------------------ | | `--input <path>` | Input `env.ts` path | `env.ts`, then `src/env.ts` fallback | | `--output <path>` | Output `.env.example` path | `.env.example` | | `--force` | Overwrite output file if it exists | | > Your input file must be importable by the current runtime and export an `envDefinition` object with `prefix`, `server`, `client`, and `shared` fields. ============================================================ ## CLI — Directives ============================================================ Directives are special comments in `.env.example` files that control how the CLI interprets variables. They are entirely optional — [inference](/envil/docs/envil-cli#inference-rules) handles most cases automatically. Use directives when you need to override the inferred behavior. ## Section directives Section directives assign all following variables to a bucket until the next section appears. You can optionally include a prefix after the section name: ```bash # @server SERVER_ SERVER_DATABASE_URL=postgres://user:pass@localhost:5432/app SERVER_PORT=3000 # @client NEXT_PUBLIC_ NEXT_PUBLIC_API_URL=https://example.com # @shared NODE_ENV=development ``` The three sections correspond to the `server`, `client`, and `shared` fields in your `envDefinition`. Variables without a section default to `server`. When a prefix is provided (e.g. `# @server SERVER_`), the CLI strips the prefix from each key to produce the schema key. The generated `envDefinition` stores the prefix config: ```ts export const envDefinition = { prefix: { server: "SERVER_", client: "NEXT_PUBLIC_", shared: "", }, server: { DATABASE_URL: withDefault(postgresUrl, "postgres://user:pass@localhost:5432/app"), PORT: withDefault(port, 3000), }, client: { API_URL: withDefault(url, "https://example.com"), }, // ...
} ``` When `envil add example` regenerates the `.env.example`, it reads the prefix from the `envDefinition` and emits the combined section+prefix form automatically. > CLI flags (`--client-prefix`, `--server-prefix`, `--shared-prefix`, `--framework`) take priority over section-level prefix directives. ## Per-variable directives Per-variable directives override inference for individual variables. Place them on the line above the assignment, or inline after the value: ```bash # Above the variable # @type integer TIMEOUT=30 # Inline VERBOSE=true # @optional @redacted ``` Multiple directives can be combined on a single line. ### `@type` Overrides the inferred schema kind: ```bash # @type number RATE=3.14 # @type requiredString CODE=12345 ``` Accepted values: `requiredString`, `boolean`, `integer`, `number`, `port`, `url`, `postgresUrl`, `redisUrl`, `mongoUrl`, `mysqlUrl`, `commaSeparated`, `commaSeparatedNumbers`, `commaSeparatedUrls`, `json`. Aliases are also accepted: `string` for `requiredString`, `bool` for `boolean`, `int` for `integer`. #### `@type enum` A special form of `@type` that generates a `stringEnum` schema. List the allowed values as a comma-separated list: ```bash # @type enum dev,staging,prod NODE_ENV=dev ``` This generates `NODE_ENV: withDefault(stringEnum(["dev", "staging", "prod"]), "dev")`. ### `@optional` Marks the variable as optional. The generated code wraps it with `optional(schema)`, making it accept `undefined`: ```bash # @optional DEBUG_HOST=localhost ``` Produces `DEBUG_HOST: withDefault(optional(requiredString), "localhost")` — the variable has a default but is explicitly typed as optional. Pass `false` to disable: `# @optional false`. ### `@redacted` Marks the variable as sensitive. The generated code wraps it with `redacted(schema)`, producing a `Redacted` value that won't leak in logs or serialization: ```bash # @redacted API_SECRET=my-secret ``` Pass `false` to disable: `# @redacted false`. 
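As a mental model for how the `@type`, `@optional`, and `@redacted` directives above compose, here is a hypothetical parser for an inline directive comment. The `parseInlineDirectives` helper and its result shape are invented for illustration — this is not the CLI's actual code:

```typescript
// Hypothetical sketch — not the envil CLI's real parser.
interface InlineDirectives {
  type?: string
  enumValues?: string[]
  optional?: boolean
  redacted?: boolean
}

function parseInlineDirectives(comment: string): InlineDirectives {
  const out: InlineDirectives = {}
  const tokens = comment.trim().split(/\s+/)
  for (let i = 0; i < tokens.length; i++) {
    switch (tokens[i]) {
      case "@type":
        out.type = tokens[++i] // the schema kind follows the directive
        // "@type enum dev,staging,prod" carries its values in the next token
        if (out.type === "enum") out.enumValues = tokens[++i]?.split(",")
        break
      case "@optional":
        // bare "@optional" enables; "@optional false" disables
        out.optional = tokens[i + 1] !== "false"
        break
      case "@redacted":
        out.redacted = tokens[i + 1] !== "false"
        break
    }
  }
  return out
}

parseInlineDirectives("@optional @redacted")
// → { optional: true, redacted: true }
parseInlineDirectives("@type enum dev,staging,prod")
// → { type: "enum", enumValues: ["dev", "staging", "prod"] }
```

Note how multiple directives combine on one line, matching the `VERBOSE=true # @optional @redacted` example above.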
### `@bucket` Overrides the bucket for a single variable, regardless of the active section: ```bash # @server PORT=3000 # This variable goes to shared even though we're in the server section # @bucket shared NODE_ENV=development ``` ### `@no-default` By default, variables with an assigned value use that value as the default. Use `@no-default` to opt out, making the variable required at runtime: ```bash # @no-default PORT=3000 ``` This generates `PORT: port` with no `withDefault` wrapper. The assigned value `3000` is still used for type inference, but the variable is required at runtime. ## Bucket resolution order When determining which bucket a variable belongs to, the CLI checks in this order: 1. Inline `@bucket` directive on the variable 2. Active section (`# @server`, `# @client`, `# @shared`) 3. Prefix inference (e.g. a key starting with `NEXT_PUBLIC_` maps to `client` when that prefix is configured) 4. Falls back to `server` ## Full example ```bash # @server SERVER_ SERVER_PORT=3000 SERVER_DATABASE_URL=postgres://user:pass@localhost:5432/app # @redacted SERVER_API_SECRET=change-me # @type integer # @no-default SERVER_MAX_RETRIES=3 # @type enum dev,staging,prod SERVER_NODE_ENV=dev # @client NEXT_PUBLIC_ NEXT_PUBLIC_API_URL=https://api.example.com # @shared # @optional DEBUG=false ``` ============================================================ ## Schemas — Built-in Schemas ============================================================ envil ships with a set of ready-to-use schemas for common environment variable patterns. All schemas are exported from the main package. ```ts import { requiredString, port, boolean, url } from "@ayronforge/envil" ``` ## String schemas | Name | Type | Description | | --- | --- | --- | | `requiredString` | `string` | Non-empty string. Fails if the value is undefined or empty. 
| ```ts import { requiredString, optional } from "@ayronforge/envil" import { Schema } from "effect" createEnv({ server: { API_KEY: requiredString, // must be present and non-empty OPTIONAL_FLAG: optional(Schema.String), // can be undefined }, }) ``` ## Number schemas | Name | Type | Description | | --- | --- | --- | | `number` | `number` | Any number parsed from string via NumberFromString. | | `positiveNumber` | `number` | Positive number parsed from string. Must be > 0. | | `integer` | `number` | Integer parsed from string. No decimals allowed. | | `nonNegativeNumber` | `number` | Non-negative number parsed from string. Must be >= 0. | | `port` | `number` | Port number (1–65535), parsed from string. | ```ts import { number, port, positiveNumber, integer } from "@ayronforge/envil" createEnv({ server: { PORT: port, // 1-65535 MAX_RETRIES: integer, // whole number RATE_LIMIT: positiveNumber, // > 0 THRESHOLD: number, // any number (e.g. "3.14" → 3.14) }, }) ``` ## Boolean schema | Name | Type | Description | | --- | --- | --- | | `boolean` | `boolean` | Parses 'true', 'false', '1', '0' (case-insensitive) into a boolean. | ```ts import { boolean } from "@ayronforge/envil" createEnv({ server: { DEBUG: boolean, // "true" → true, "0" → false VERBOSE: boolean, }, }) ``` ## URL schemas | Name | Type | Description | | --- | --- | --- | | `url` | `string` | Valid HTTP or HTTPS URL. | | `postgresUrl` | `string` | PostgreSQL connection URL (postgres:// or postgresql://). | | `redisUrl` | `string` | Redis connection URL (redis:// or rediss://). | | `mongoUrl` | `string` | MongoDB connection URL (mongodb:// or mongodb+srv://). | | `mysqlUrl` | `string` | MySQL connection URL (mysql:// or mysqls://). | | `commaSeparatedUrls` | `string[]` | Comma-separated list of valid HTTP/HTTPS URLs. 
| ```ts import { url, postgresUrl, redisUrl } from "@ayronforge/envil" createEnv({ server: { DATABASE_URL: postgresUrl, // postgresql://user:pass@host:5432/db REDIS_URL: redisUrl, // redis://localhost:6379/0 WEBHOOK_URL: url, // https://example.com/webhook }, }) ``` ## Comma-separated schemas | Name | Type | Description | | --- | --- | --- | | `commaSeparated` | `string[]` | Splits a comma-separated string into a trimmed string array. | | `commaSeparatedNumbers` | `number[]` | Splits a comma-separated string into a number array. Throws if any item isn't a valid number. | ```ts import { commaSeparated, commaSeparatedNumbers } from "@ayronforge/envil" createEnv({ server: { ALLOWED_ORIGINS: commaSeparated, // "a.com, b.com" → ["a.com", "b.com"] RETRY_DELAYS: commaSeparatedNumbers, // "100,200,500" → [100, 200, 500] }, }) ``` ## Parameterized schemas ### `stringEnum` Creates a schema that accepts only the specified literal values: ```ts import { stringEnum } from "@ayronforge/envil" createEnv({ shared: { LOG_LEVEL: stringEnum(["debug", "info", "warn", "error"]), // Type: "debug" | "info" | "warn" | "error" }, }) ``` ### `json` Parses a JSON string and validates the result against an inner schema: ```ts import { json } from "@ayronforge/envil" import { Schema } from "effect" createEnv({ server: { // JSON_CONFIG='{"retries":3,"timeout":5000}' JSON_CONFIG: json(Schema.Struct({ retries: Schema.Number, timeout: Schema.Number, })), }, }) ``` ## Using Effect Schema directly Any Effect Schema works as a validator. 
The built-in schemas are just convenience wrappers: ```ts import { Schema } from "effect" createEnv({ server: { PORT: Schema.NumberFromString, NODE_ENV: Schema.Literal("development", "production", "test"), API_KEY: Schema.String.pipe(Schema.minLength(32)), CUSTOM: Schema.transform( Schema.String, Schema.Number, { decode: (s) => parseInt(s, 16), encode: (n) => n.toString(16) }, ), }, }) ``` > If you need a schema that doesn't exist as a built-in, just use Effect Schema directly. envil accepts any `Schema.Schema`. ============================================================ ## Schemas — Schema Helpers ============================================================ envil provides four helper functions for common patterns when defining environment variable schemas. ```ts import { withDefault, optional, redacted, json } from "@ayronforge/envil" ``` ## withDefault Adds a default value to a schema, making the variable optional. If the env var is missing, the default value is used instead. `withDefault` supports both data-first and pipe-style usage: ### Data-first style ```ts import { withDefault, port } from "@ayronforge/envil" createEnv({ server: { PORT: withDefault(port, 3000), // If PORT is not set, defaults to 3000 }, }) ``` ### Pipe style ```ts import { withDefault, port } from "@ayronforge/envil" createEnv({ server: { PORT: port.pipe(withDefault(3000)), }, }) ``` Both styles produce the same result. Use whichever reads better in your codebase. > `withDefault` wraps the schema in `Schema.UndefinedOr(...)` and applies a transform that substitutes `undefined` with the default value. The resulting type reflects the schema's output type — e.g., `withDefault(port, 3000)` produces `number`, not `number | undefined`. ## optional Makes any schema accept `undefined`, allowing the env var to be missing without causing a validation error. This is a shorthand for `Schema.UndefinedOr(schema)`. 
```ts import { optional } from "@ayronforge/envil" import { Schema } from "effect" createEnv({ server: { // All of these are optional — missing vars become undefined DEBUG_HOST: optional(Schema.String), TRACE_SAMPLE_RATE: optional(Schema.Number), }, }) ``` > Unlike `withDefault`, `optional` does not substitute a fallback — the resulting type includes `undefined`. Use `withDefault` when you need a guaranteed value. ## redacted Wraps a schema with Effect's `Redacted` type. This prevents accidental logging, serialization, or spreading of sensitive values. ```ts import { redacted } from "@ayronforge/envil" import { Redacted, Schema } from "effect" const env = createEnv({ server: { API_SECRET: redacted(Schema.String), // Equivalent to: Schema.Redacted(Schema.String) }, }) // env.API_SECRET is Redacted<string> console.log(env.API_SECRET) // JSON.stringify(env) // {"API_SECRET":"<redacted>"} // Explicitly unwrap when you need the plain value const secret: string = Redacted.value(env.API_SECRET) ``` The `redacted` helper is a shorthand for `Schema.Redacted(schema)`. Values wrapped in `Redacted` appear as `<redacted>` in logs and serialized output, and remain wrapped when spread or iterated. ## json Parses a JSON string env var and validates the parsed result against an inner schema: ```ts import { json } from "@ayronforge/envil" import { Schema } from "effect" createEnv({ server: { // FEATURE_FLAGS='{"darkMode":true,"newUI":false}' FEATURE_FLAGS: json(Schema.Struct({ darkMode: Schema.Boolean, newUI: Schema.Boolean, })), }, }) ``` The `json` helper is equivalent to `Schema.parseJson(schema)`. It first parses the string as JSON, then validates the result against the provided schema.
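The two-phase behavior of `json` — parse the raw string as JSON first, then validate the parsed value — can be sketched without Effect. The `parseJsonEnv` helper below is invented for illustration; envil delegates both phases to Effect Schema:

```typescript
// Illustration of parse-then-validate, the idea behind json(schema).
function parseJsonEnv<T>(raw: string, validate: (value: unknown) => T): T {
  let parsed: unknown
  try {
    parsed = JSON.parse(raw)
  } catch {
    // Phase 1 failure: the string is not JSON at all
    throw new Error(`Expected valid JSON, but got '${raw}'`)
  }
  // Phase 2: the inner schema checks the parsed structure
  return validate(parsed)
}

const flags = parseJsonEnv('{"darkMode":true}', (v) => {
  const obj = v as { darkMode?: unknown }
  if (typeof v !== "object" || v === null || typeof obj.darkMode !== "boolean") {
    throw new Error("Expected { darkMode: boolean }")
  }
  return v as { darkMode: boolean }
})
flags.darkMode // true
```

A value that parses as JSON but fails the inner check (e.g. `'{"darkMode":"yes"}'`) is rejected in phase 2, which is the behavior described in the next section.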
### Validation behavior If the env var is not valid JSON, or if the parsed JSON doesn't match the inner schema, validation fails: ``` FEATURE_FLAGS: Expected valid JSON, but got '{ invalid json' ``` ## Composing helpers Helpers can be composed together: ```ts import { withDefault, optional, redacted, json } from "@ayronforge/envil" import { Schema } from "effect" createEnv({ server: { // Optional JSON config with a default APP_CONFIG: withDefault( json(Schema.Struct({ debug: Schema.Boolean })), { debug: false }, ), // Optional with a default LOG_LEVEL: optional(Schema.String).pipe(withDefault("info")), // Redacted string with a default API_KEY: redacted(Schema.String).pipe(withDefault("dev-key")), }, }) ``` ============================================================ ## Configuration — Framework Presets ============================================================ Framework presets provide pre-configured `prefix` settings for popular frameworks. They handle the client-side prefix convention so you don't have to configure it manually. 
```ts import { nextjs, vite, expo, nuxt, sveltekit, astro } from "@ayronforge/envil/presets" ``` ## Available presets

| Preset | Client prefix |
| ----------- | -------------- |
| `nextjs` | `NEXT_PUBLIC_` |
| `vite` | `VITE_` |
| `expo` | `EXPO_PUBLIC_` |
| `nuxt` | `NUXT_PUBLIC_` |
| `sveltekit` | `PUBLIC_` |
| `astro` | `PUBLIC_` |

## Usage Spread the preset into your `createEnv` call: ### Next.js ```ts import { createEnv, requiredString } from "@ayronforge/envil" import { nextjs } from "@ayronforge/envil/presets" export const env = createEnv({ ...nextjs, server: { DATABASE_URL: requiredString, }, client: { API_URL: requiredString, // Reads NEXT_PUBLIC_API_URL from env (prefix auto-applied) }, }) ``` ### Vite ```ts import { createEnv, requiredString } from "@ayronforge/envil" import { vite } from "@ayronforge/envil/presets" export const env = createEnv({ ...vite, server: { SECRET_KEY: requiredString, }, client: { APP_URL: requiredString, // Reads VITE_APP_URL from env }, runtimeEnv: import.meta.env, // Vite uses import.meta.env }) ``` > **Vite runtime env** > Vite doesn't expose environment variables on `process.env` by default. Pass `runtimeEnv: import.meta.env` to read from Vite's env source.
### Expo ```ts import { createEnv, requiredString } from "@ayronforge/envil" import { expo } from "@ayronforge/envil/presets" export const env = createEnv({ ...expo, client: { API_URL: requiredString, // Reads EXPO_PUBLIC_API_URL from env }, }) ``` ### Nuxt ```ts import { createEnv, requiredString } from "@ayronforge/envil" import { nuxt } from "@ayronforge/envil/presets" export const env = createEnv({ ...nuxt, client: { API_URL: requiredString, // Reads NUXT_PUBLIC_API_URL from env }, }) ``` ### SvelteKit ```ts import { createEnv, requiredString } from "@ayronforge/envil" import { sveltekit } from "@ayronforge/envil/presets" export const env = createEnv({ ...sveltekit, client: { API_URL: requiredString, // Reads PUBLIC_API_URL from env }, }) ``` ### Astro ```ts import { createEnv, requiredString } from "@ayronforge/envil" import { astro } from "@ayronforge/envil/presets" export const env = createEnv({ ...astro, client: { API_URL: requiredString, // Reads PUBLIC_API_URL from env }, runtimeEnv: import.meta.env, // Astro exposes env via import.meta.env }) ``` ## How presets work Presets are plain objects with a `prefix` property. For example, the Next.js preset is: ```ts export const nextjs = { prefix: { client: "NEXT_PUBLIC_" } } as const ``` When you spread `...nextjs` into `createEnv`, it sets the `prefix` option to `{ client: "NEXT_PUBLIC_" }`. 
This means: - **Server variables** have no prefix (read as-is) - **Client variables** are prefixed with `NEXT_PUBLIC_` - **Shared variables** have no prefix ## Creating custom presets You can create your own preset for any prefix convention: ```ts const myPreset = { prefix: { client: "PUBLIC_", server: "SERVER_", shared: "SHARED_", }, } as const createEnv({ ...myPreset, server: { DB_URL: requiredString }, // reads SERVER_DB_URL client: { API_URL: requiredString }, // reads PUBLIC_API_URL shared: { NODE_ENV: requiredString }, // reads SHARED_NODE_ENV }) ``` ============================================================ ## Configuration — Environment Composition ============================================================ As your application grows, you may want to split environment variable definitions across modules — a database config, an auth config, a feature flags config, etc. The `extends` option lets you compose these into a single typed env object. ## Why compose? Instead of one massive `createEnv` call, you can: - Define env vars close to the code that uses them - Reuse common configs across packages in a monorepo - Keep each module's env requirements self-contained ## Using extends The `extends` option takes an array of existing env objects and merges them into the new one: ```ts // db.env.ts import { createEnv, requiredString, port } from "@ayronforge/envil" export const dbEnv = createEnv({ server: { DATABASE_URL: requiredString, DB_PORT: port, }, }) // auth.env.ts import { createEnv, requiredString } from "@ayronforge/envil" export const authEnv = createEnv({ server: { JWT_SECRET: requiredString, SESSION_TTL: requiredString, }, }) // env.ts — the combined env import { createEnv, requiredString } from "@ayronforge/envil" import { dbEnv } from "./db.env" import { authEnv } from "./auth.env" export const env = createEnv({ extends: [dbEnv, authEnv], server: { APP_NAME: requiredString, }, }) // env has: DATABASE_URL, DB_PORT, JWT_SECRET, SESSION_TTL, APP_NAME 
``` ## Merge semantics When multiple configs define the same key, the **last definition wins**: 1. Extended envs are merged in array order (first to last) 2. Keys from the current `createEnv` call override extended keys ```ts const base = createEnv({ server: { PORT: port }, // PORT = 3000 }) const app = createEnv({ extends: [base], server: { PORT: port }, // This PORT overrides the one from base }) ``` > Each env in `extends` is validated independently when it's created. The `extends` mechanism only merges the final validated values — it does not re-validate them. ## Full multi-module example ```ts // packages/shared/env.ts import { createEnv } from "@ayronforge/envil" import { Schema } from "effect" export const sharedEnv = createEnv({ shared: { NODE_ENV: Schema.Literal("development", "production", "test"), }, }) // packages/api/env.ts import { createEnv, requiredString, port } from "@ayronforge/envil" import { sharedEnv } from "@shared/env" export const apiEnv = createEnv({ extends: [sharedEnv], server: { DATABASE_URL: requiredString, PORT: port, }, }) // packages/web/env.ts import { createEnv, requiredString } from "@ayronforge/envil" import { nextjs } from "@ayronforge/envil/presets" import { sharedEnv } from "@shared/env" export const webEnv = createEnv({ ...nextjs, extends: [sharedEnv], client: { API_URL: requiredString, }, }) ``` Each package gets a fully typed env with only the variables it needs, while sharing common definitions through `extends`. ============================================================ ## Configuration — Error Handling ============================================================ envil uses three distinct error types to signal different failure modes. All error classes are exported from the main package. ```ts import { EnvValidationError, ClientAccessError } from "@ayronforge/envil" ``` ## EnvValidationError Thrown when one or more environment variables fail schema validation. 
| Name | Type | Description | | --- | --- | --- | | `errors` | `ReadonlyArray` | Array of human-readable validation error messages. | | `message` | `string` | Formatted error message with all failures. | ### Example ```ts import { createEnv, requiredString, port, EnvValidationError } from "@ayronforge/envil" try { const env = createEnv({ server: { DATABASE_URL: requiredString, PORT: port, }, }) } catch (e) { if (e instanceof EnvValidationError) { console.error("Validation failed:") for (const error of e.errors) { console.error(` - ${error}`) } // Output: // - DATABASE_URL: Expected a string with a length of at least 1, but got undefined // - PORT: Expected Port (1-65535), but got "abc" } } ``` > `EnvValidationError` collects **all** validation failures, not just the first one. This lets you fix all issues in a single pass rather than playing whack-a-mole. ### onValidationError callback You can hook into validation errors before the exception is thrown: ```ts createEnv({ onValidationError: (errors) => { // errors is string[] — same as EnvValidationError.errors logger.error("Environment validation failed", { errors }) }, server: { DATABASE_URL: requiredString, }, }) ``` The callback fires **before** the error is thrown. The `EnvValidationError` is still thrown after the callback completes. ## ClientAccessError Thrown when client-side code attempts to access a server-only environment variable. | Name | Type | Description | | --- | --- | --- | | `variableName` | `string` | The name of the server variable that was accessed. | | `message` | `string` | Descriptive error message. 
| ### Example ```ts import { createEnv, requiredString, ClientAccessError } from "@ayronforge/envil" const env = createEnv({ server: { DATABASE_URL: requiredString, }, client: { NEXT_PUBLIC_API_URL: requiredString, }, isServer: false, // simulate client-side }) try { env.DATABASE_URL // throws on client } catch (e) { if (e instanceof ClientAccessError) { console.error(e.variableName) // "DATABASE_URL" // "Attempted to access server-side env var "DATABASE_URL" on client" } } ``` ## ResolverError Thrown when a [resolver](/envil/docs/resolvers) fails to initialize or fetch secrets. This is an Effect tagged error, used in the Effect error channel. | Name | Type | Description | | --- | --- | --- | | `resolver` | `string` | Name of the resolver that failed (e.g., "aws", "gcp"). | | `message` | `string` | Human-readable error message. | | `cause` | `unknown` | The underlying error, if any. | `ResolverError` is a `Data.TaggedError` from Effect, meaning you can use it with Effect's error handling: ```ts import { Effect } from "effect" import { createEnv, requiredString } from "@ayronforge/envil" import { fromAwsSecrets, ResolverError } from "@ayronforge/envil/aws" const envEffect = createEnv({ server: { DATABASE_URL: requiredString, }, resolvers: [ fromAwsSecrets({ secrets: { DATABASE_URL: "prod/db-url" }, }), ], }) // Handle resolver errors specifically const program = envEffect.pipe( Effect.catchTag("ResolverError", (err) => { console.error(`Resolver "${err.resolver}" failed: ${err.message}`) return Effect.fail(err) }), ) ``` > When using resolvers, `createEnv` returns an `Effect.Effect` instead of a plain object. You must run it with `Effect.runPromise` or `Effect.runSync` (or use it within an Effect pipeline). ## Safe alternative If you prefer handling errors without `try`/`catch` or Effect error channels, use [`safeCreateEnv`](/envil/docs/safe-parsing). 
It captures errors in a discriminated result object instead of raising exceptions: ```ts import { safeCreateEnv, requiredString } from "@ayronforge/envil" const result = safeCreateEnv({ server: { DATABASE_URL: requiredString }, isServer: true, }) if (!result.success) { // result.error is EnvValidationError console.error(result.error.errors) } ``` > With resolvers, `safeCreateEnv` returns an Effect that **never fails** — both `ResolverError` and `EnvValidationError` are captured in the result object. See [Safe Parsing](/envil/docs/safe-parsing) for details. ============================================================ ## Configuration — Safe Parsing ============================================================ `safeCreateEnv` is a safe alternative to `createEnv`. Rather than crashing on validation failure, it returns a discriminated result object you can inspect. ```ts import { safeCreateEnv } from "@ayronforge/envil" ``` ## Without resolvers When no resolvers are configured, `safeCreateEnv` returns a result object synchronously — just like `createEnv` returns a plain object. ```ts import { safeCreateEnv, requiredString, port, EnvValidationError } from "@ayronforge/envil" const result = safeCreateEnv({ server: { DATABASE_URL: requiredString, PORT: port, }, isServer: true, }) if (result.success) { // result.data is fully typed — same as createEnv's return console.log(result.data.DATABASE_URL) console.log(result.data.PORT) } else { // result.error is EnvValidationError console.error("Validation failed:", result.error.errors) } ``` ### Return type ```ts type Result = | { success: true; data: EnvResult<...> } | { success: false; error: EnvValidationError } ``` ## With resolvers When resolvers are present, `safeCreateEnv` returns an `Effect` that **never fails**. Both resolver errors and validation errors are captured in the result object. 
```ts import { Effect, Redacted } from "effect" import { safeCreateEnv, requiredString } from "@ayronforge/envil" import { fromAwsSecrets } from "@ayronforge/envil/aws" const program = safeCreateEnv({ server: { DATABASE_URL: requiredString, API_KEY: requiredString, }, resolvers: [ fromAwsSecrets({ secrets: { API_KEY: "prod/api-key" }, }), ], isServer: true, }) // The Effect never fails — errors are in the result const result = await Effect.runPromise(program) if (result.success) { console.log(result.data.DATABASE_URL) // API_KEY is auto-redacted from resolver console.log(Redacted.value(result.data.API_KEY)) } else { // result.error is ResolverError | EnvValidationError console.error(result.error.message) } ``` ### Return type ```ts type Result = Effect.Effect< | { success: true; data: EnvResult<...> } | { success: false; error: ResolverError | EnvValidationError }, never // Effect error channel is never — it cannot fail > ``` > Because the Effect error channel is `never`, you don't need `Effect.catchTag` or `Effect.catchAll` — the result object handles all error cases for you. ## autoRedactResolver `safeCreateEnv` supports the same `autoRedactResolver` option as `createEnv`. When resolvers are present, resolver-provided values are auto-wrapped in `Redacted` by default. 
Pass `autoRedactResolver: false` to disable this: ```ts const result = await Effect.runPromise( safeCreateEnv({ server: { SECRET: requiredString, PLAIN: requiredString, }, resolvers: [myResolver], autoRedactResolver: false, isServer: true, }), ) if (result.success) { // SECRET and PLAIN are plain strings, not Redacted result.data.SECRET // string } ``` ## Exported types All result types are exported from the main package: ```ts import type { SafeCreateEnvResult, SafeCreateEnvSuccess, SafeCreateEnvFailure, } from "@ayronforge/envil" ``` ## When to use safeCreateEnv Use `safeCreateEnv` when you want to handle validation failures gracefully rather than crashing at startup: - **Graceful degradation** — fall back to defaults or partial configs - **Custom error reporting** — format and send errors to your logging pipeline - **Conditional startup** — decide at runtime whether a missing variable is fatal - **Testing** — assert on specific validation failures without try/catch For most applications, `createEnv` with its fail-fast behavior is the right choice. Use `safeCreateEnv` when you need control over the error path. > `safeCreateEnv` accepts the exact same options as `createEnv` — the only difference is how errors are surfaced. ============================================================ ## Resolvers — Overview ============================================================ Resolvers let you pull environment variable values from cloud secret managers instead of (or in addition to) `process.env`. This is useful in production environments where secrets are stored in services like AWS Secrets Manager, GCP Secret Manager, Azure Key Vault, or 1Password. 
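The overall idea — fetch secrets, merge them over the base environment, then validate — can be sketched in a few lines of plain TypeScript. This is only an illustration of the documented behavior, not envil's internals; `fetchFromVault`, `resolveEnv`, and the values are made up:

```ts
// Conceptual sketch of the resolution flow — NOT envil's actual implementation.
type PartialEnv = Record<string, string | undefined>

// Stand-in for a cloud SDK call; the name and values are hypothetical
const fetchFromVault = async (): Promise<PartialEnv> => ({ API_KEY: "s3cret" })

async function resolveEnv(
  base: PartialEnv,
  resolvers: Array<() => Promise<PartialEnv>>,
): Promise<PartialEnv> {
  // 1. all resolvers run concurrently
  const results = await Promise.all(resolvers.map((r) => r()))
  // 2. process.env (or runtimeEnv) is the base; later resolvers win on conflicts
  return results.reduce((acc, r) => ({ ...acc, ...r }), { ...base })
}

// 3. the merged record is what then goes through schema validation
const merged = await resolveEnv({ DB_HOST: "localhost" }, [fetchFromVault])
// merged: { DB_HOST: "localhost", API_KEY: "s3cret" }
```

The sections below describe how envil implements this flow for real secret stores.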
## How resolvers work When you pass a `resolvers` array to `createEnv`, the return type changes from a plain object to an `Effect.Effect`: ```ts import { createEnv, requiredString } from "@ayronforge/envil" import { fromAwsSecrets } from "@ayronforge/envil/aws" import { Effect } from "effect" // With resolvers → returns Effect const envEffect = createEnv({ server: { DATABASE_URL: requiredString, API_KEY: requiredString, }, resolvers: [ fromAwsSecrets({ secrets: { DATABASE_URL: "prod/database-url", API_KEY: "prod/api-key", }, }), ], }) // Run the Effect to get the env object const env = await Effect.runPromise(envEffect) ``` ### Resolution flow 1. All resolvers run **concurrently** (unbounded concurrency) 2. Results are merged: `process.env` is the base, resolver results override 3. The merged env is passed through schema validation 4. Resolver initialization/configuration failures surface as `ResolverError` in the Effect error channel 5. Per-secret fetch misses are returned as `undefined` and then handled by schema validation ### Merge behavior Resolver results are merged left-to-right on top of `process.env` (or `runtimeEnv`): ```ts // Final env = { ...process.env, ...resolver1Results, ...resolver2Results } resolvers: [resolver1, resolver2] ``` Later resolvers override earlier ones for the same key. ## Auto-redaction By default, all values provided by resolvers are automatically wrapped in Effect's `Redacted` type. This means secrets fetched from cloud providers are safe from accidental leaks through logging, serialization, or spreading. 
```ts const env = await Effect.runPromise( createEnv({ server: { DB_HOST: requiredString, DB_PASS: requiredString, }, resolvers: [ fromAwsSecrets({ secrets: { DB_PASS: "prod/db-password" }, }), ], runtimeEnv: { DB_HOST: "localhost" }, }), ) // DB_PASS from resolver → Redacted (type-safe) console.log(env.DB_PASS) // JSON.stringify(env) // {"DB_HOST":"localhost","DB_PASS":"<redacted>"} Redacted.value(env.DB_PASS) // "actual-password" // DB_HOST from runtimeEnv → string (not redacted) console.log(env.DB_HOST) // "localhost" ``` Types are inferred automatically: resolver-provided keys are typed as `Redacted<T>`, while non-resolver keys remain `T`. ### Disabling auto-redaction Set `autoRedactResolver: false` to disable automatic wrapping: ```ts const env = await Effect.runPromise( createEnv({ server: { DB_PASS: requiredString }, resolvers: [fromAwsSecrets({ secrets: { DB_PASS: "prod/db-pass" } })], autoRedactResolver: false, }), ) env.DB_PASS // string (not Redacted) ``` > Even with `autoRedactResolver: false`, you can still use `redacted()` in your schema to explicitly mark individual keys as `Redacted`. ## Resolver failure modes Built-in resolvers default to **lenient** per-secret fetching: if an individual fetch fails, the resolver returns `undefined` for that key and schema validation decides whether that is acceptable. Set `strict: true` on a resolver to fail immediately with `ResolverError` on fetch/parsing errors: ```ts fromAwsSecrets({ secrets: { DB_PASS: "prod/db-pass" }, strict: true, }) ``` ## Available resolvers | Name | Import path | Description | | --- | --- | --- | | `fromAwsSecrets` | `@ayronforge/envil/aws` | AWS Secrets Manager. Peer dep: @aws-sdk/client-secrets-manager | | `fromGcpSecrets` | `@ayronforge/envil/gcp` | GCP Secret Manager. Peer dep: @google-cloud/secret-manager | | `fromAzureKeyVault` | `@ayronforge/envil/azure` | Azure Key Vault. Peer deps: @azure/keyvault-secrets, @azure/identity | | `fromOnePassword` | `@ayronforge/envil/1password` | 1Password.
Peer dep: @1password/sdk | | `fromRemoteSecrets` | `@ayronforge/envil` | Custom / remote secrets. No peer dep — bring your own client. | > Each built-in resolver dynamically imports its cloud SDK only when needed. You must install the peer dependency for the resolver you use. `fromRemoteSecrets` requires no peer dependencies — you provide the client directly. - **AWS Secrets Manager**: Batch-fetch secrets with JSON key extraction and automatic SDK initialization. - **GCP Secret Manager**: Access secrets with full resource name support and automatic Uint8Array decoding. - **Azure Key Vault**: Fetch secrets with DefaultAzureCredential and managed identity support. - **1Password**: Resolve secrets from 1Password vaults using the SDK's batch resolution. - **Custom / Remote Secrets**: Generic resolver for custom integrations, testing, and unsupported providers. ============================================================ ## Resolvers — AWS Secrets Manager ============================================================ ## Installation Install the AWS SDK peer dependency: ```bash npm install @aws-sdk/client-secrets-manager ``` ## Basic usage ```ts import { createEnv, requiredString } from "@ayronforge/envil" import { fromAwsSecrets } from "@ayronforge/envil/aws" import { Effect } from "effect" const envEffect = createEnv({ server: { DATABASE_URL: requiredString, API_KEY: requiredString, }, resolvers: [ fromAwsSecrets({ secrets: { DATABASE_URL: "prod/database-url", API_KEY: "prod/api-key", }, region: "us-east-1", }), ], }) const env = await Effect.runPromise(envEffect) ``` ## Options | Name | Type | Description | | --- | --- | --- | | `secrets` | `Record` | Map of env var names to AWS secret IDs. Supports #jsonKey syntax for JSON extraction. | | `region` | `string` | AWS region for the Secrets Manager client. | | `strict` | `boolean` | When true, secret fetch/parsing errors fail with ResolverError instead of returning undefined. 
| ## JSON key extraction If a secret stores a JSON object, you can extract a specific field using the `#key` syntax: ```ts fromAwsSecrets({ secrets: { // Secret "prod/database" contains: {"url": "postgres://...", "pool": "10"} DATABASE_URL: "prod/database#url", DB_POOL_SIZE: "prod/database#pool", }, }) ``` This fetches the `prod/database` secret once and extracts the `url` and `pool` fields separately. ============================================================ ## Resolvers — GCP Secret Manager ============================================================ ## Installation Install the GCP SDK peer dependency: ```bash npm install @google-cloud/secret-manager ``` ## Basic usage ```ts import { createEnv, requiredString } from "@ayronforge/envil" import { fromGcpSecrets } from "@ayronforge/envil/gcp" import { Effect } from "effect" const envEffect = createEnv({ server: { DATABASE_URL: requiredString, API_KEY: requiredString, }, resolvers: [ fromGcpSecrets({ secrets: { DATABASE_URL: "database-url", API_KEY: "api-key", }, projectId: "my-gcp-project", }), ], }) const env = await Effect.runPromise(envEffect) ``` ## Options | Name | Type | Description | | --- | --- | --- | | `secrets` | `Record` | Map of env var names to GCP secret names (or full resource paths). | | `projectId` | `string` | GCP project ID. Required when using short secret names. | | `version` | `string` | Secret version to access. | | `strict` | `boolean` | When true, secret fetch errors fail with ResolverError instead of returning undefined. 
| ## Full resource names You can use either short secret names or full GCP resource paths: ```ts fromGcpSecrets({ secrets: { // Short name — requires projectId DATABASE_URL: "database-url", // Full resource path — projectId not needed API_KEY: "projects/my-project/secrets/api-key/versions/latest", }, projectId: "my-gcp-project", }) ``` Short names are expanded to: `projects/{projectId}/secrets/{name}/versions/{version}` ## Custom version Access a specific version of a secret instead of `latest`: ```ts fromGcpSecrets({ secrets: { API_KEY: "api-key" }, projectId: "my-project", version: "3", // Access version 3 specifically }) ``` ============================================================ ## Resolvers — Azure Key Vault ============================================================ ## Installation Install the Azure SDK peer dependencies: ```bash npm install @azure/keyvault-secrets @azure/identity ``` ## Basic usage ```ts import { createEnv, requiredString } from "@ayronforge/envil" import { fromAzureKeyVault } from "@ayronforge/envil/azure" import { Effect } from "effect" const envEffect = createEnv({ server: { DATABASE_URL: requiredString, API_KEY: requiredString, }, resolvers: [ fromAzureKeyVault({ secrets: { DATABASE_URL: "database-url", API_KEY: "api-key", }, vaultUrl: "https://my-vault.vault.azure.net", }), ], }) const env = await Effect.runPromise(envEffect) ``` ## Options | Name | Type | Description | | --- | --- | --- | | `secrets` | `Record` | Map of env var names to Azure Key Vault secret names. | | `vaultUrl` | `string` | Azure Key Vault URL. | | `credential` | `unknown` | Azure credential. Defaults to DefaultAzureCredential. | | `strict` | `boolean` | When true, secret fetch errors fail with ResolverError instead of returning undefined. 
| ## Default credentials By default, the resolver uses `DefaultAzureCredential` from `@azure/identity`, which supports: - Managed identity (Azure VMs, App Service, Functions) - Azure CLI credentials (local development) - Environment variables (`AZURE_CLIENT_ID`, `AZURE_CLIENT_SECRET`, `AZURE_TENANT_ID`) You can provide a custom credential: ```ts import { ClientSecretCredential } from "@azure/identity" fromAzureKeyVault({ secrets: { API_KEY: "api-key" }, vaultUrl: "https://my-vault.vault.azure.net", credential: new ClientSecretCredential(tenantId, clientId, clientSecret), }) ``` ============================================================ ## Resolvers — 1Password ============================================================ ## Installation Install the 1Password SDK peer dependency: ```bash npm install @1password/sdk ``` ## Basic usage ```ts import { createEnv, requiredString } from "@ayronforge/envil" import { fromOnePassword } from "@ayronforge/envil/1password" import { Effect } from "effect" const envEffect = createEnv({ server: { DATABASE_URL: requiredString, API_KEY: requiredString, }, resolvers: [ fromOnePassword({ secrets: { DATABASE_URL: "op://vault/database/url", API_KEY: "op://vault/api/credential", }, }), ], }) const env = await Effect.runPromise(envEffect) ``` ## Options | Name | Type | Description | | --- | --- | --- | | `secrets` | `Record` | Map of env var names to 1Password secret references (op:// URIs). | | `serviceAccountToken` | `string` | 1Password service account token. Falls back to OP_SERVICE_ACCOUNT_TOKEN env var. | | `strict` | `boolean` | When true, batch resolution errors fail with ResolverError instead of returning undefined. | > Either `serviceAccountToken` or the `OP_SERVICE_ACCOUNT_TOKEN` environment variable must be available. If neither is provided, the resolver fails with a `ResolverError`. 
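The token lookup described above amounts to a simple fallback chain. A sketch of that logic — `resolveToken` is a hypothetical helper, not an envil export, and the real resolver reports the missing-token case as a `ResolverError` in the Effect error channel rather than a thrown `Error`:

```ts
// Illustrative fallback chain for the 1Password service account token.
function resolveToken(serviceAccountToken?: string): string {
  // An explicit option wins; otherwise fall back to the well-known env var
  const token = serviceAccountToken ?? process.env.OP_SERVICE_ACCOUNT_TOKEN
  if (token === undefined) {
    // envil surfaces this case as a ResolverError instead
    throw new Error("1Password resolver: no service account token available")
  }
  return token
}

resolveToken("ops_example_token") // → "ops_example_token"
```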
## Secret references 1Password secret references use the `op://` URI format: ``` op://vault-name/item-name/field-name ``` For example: - `op://Production/Database/password` — the `password` field from the `Database` item in the `Production` vault - `op://Shared/API Keys/credential` — the `credential` field from the `API Keys` item ## Service account token The resolver authenticates using a 1Password service account token. You can provide it in two ways: 1. **Directly in options:** ```ts fromOnePassword({ secrets: { ... }, serviceAccountToken: "ops_...", }) ``` 2. **Via environment variable:** ```bash export OP_SERVICE_ACCOUNT_TOKEN="ops_..." ``` ============================================================ ## Resolvers — Custom / Remote ============================================================ ## Overview `fromRemoteSecrets` is the generic escape hatch for custom integrations, testing, and secret providers not covered by the built-in resolvers. You provide your own `SecretClient` implementation. ```ts import { createEnv, requiredString, fromRemoteSecrets } from "@ayronforge/envil" import type { SecretClient } from "@ayronforge/envil" import { Effect } from "effect" const client: SecretClient = { getSecret: async (id) => { // Fetch from your custom provider const response = await fetch(`https://secrets.example.com/v1/${id}`) if (!response.ok) return undefined return response.text() }, } const envEffect = createEnv({ server: { DATABASE_URL: requiredString, API_KEY: requiredString, }, resolvers: [ fromRemoteSecrets({ secrets: { DATABASE_URL: "prod/database-url", API_KEY: "prod/api-key", }, client, }), ], }) const env = await Effect.runPromise(envEffect) ``` No peer dependencies required — `fromRemoteSecrets` is exported directly from `@ayronforge/envil`. ## Options | Name | Type | Description | | --- | --- | --- | | `secrets` | `Record` | Map of env var names to secret identifiers passed to the client. 
| | `client` | `SecretClient` | Your custom client implementing the SecretClient interface. | | `strict` | `boolean` | When true, client fetch errors fail with ResolverError instead of returning undefined. | ## SecretClient interface ```ts interface SecretClient { getSecret: (secretId: string) => Promise<string | undefined> getSecrets?: (secretIds: string[]) => Promise<Map<string, string | undefined>> } ``` - `getSecret` — required. Fetches a single secret by ID. - `getSecrets` — optional. When provided and there are multiple secrets to resolve, the resolver uses this for batch fetching instead of making concurrent `getSecret` calls. ## Batch optimization If your `SecretClient` implements `getSecrets`, the resolver automatically uses it when resolving more than one secret. For a single secret, `getSecret` is always used. ```ts const batchClient: SecretClient = { getSecret: async (id) => { const res = await fetch(`https://secrets.example.com/v1/${id}`) return res.ok ? res.text() : undefined }, getSecrets: async (ids) => { const res = await fetch("https://secrets.example.com/v1/batch", { method: "POST", body: JSON.stringify({ ids }), }) const data = await res.json() return new Map(Object.entries(data)) }, } ``` ============================================================ ## Reference — API Reference ============================================================ ## createEnv The main function for creating a type-safe environment object. ### Without resolvers ```ts function createEnv( opts: EnvOptions<...> ): EnvResult<...> ``` Returns a fully typed, readonly environment object immediately. ### With resolvers ```ts function createEnv( opts: EnvOptions<...> & { resolvers: readonly Effect.Effect<...>[] } ): Effect.Effect<EnvResult<...>, ResolverError | EnvValidationError> ``` Returns an `Effect` that resolves secrets, validates all variables, and produces the env object. ### Options | Name | Type | Description | | --- | --- | --- | | `server` | `Record` | Server-only environment variable schemas. Not validated on client.
| | `client` | `Record` | Client-safe environment variable schemas. Validated everywhere. | | `shared` | `Record` | Shared environment variable schemas. Validated and accessible everywhere. | | `extends` | `readonly AnyEnv[]` | Array of existing env objects to merge into this one. | | `prefix` | `string | PrefixMap` | Prefix applied to env var names when reading from the environment. | | `runtimeEnv` | `Record` | Override process.env with a custom environment source. | | `isServer` | `boolean` | Override server detection. Defaults to typeof window === 'undefined'. | | `emptyStringAsUndefined` | `boolean` | Treat empty string values as undefined. | | `onValidationError` | `(errors: string[]) => void` | Callback fired before EnvValidationError is thrown. | | `resolvers` | `readonly Effect.Effect<...>[]` | Array of resolver Effects. When present, createEnv returns an Effect. | | `autoRedactResolver` | `boolean` | Default true. Auto-wraps resolver-provided values in Redacted. Types reflect this automatically. | ### PrefixMap ```ts interface PrefixMap { server?: string client?: string shared?: string } ``` --- ## safeCreateEnv A safe alternative to `createEnv` that returns discriminated result objects instead of raising exceptions. ### Without resolvers ```ts function safeCreateEnv( opts: EnvOptions<...> ): SafeCreateEnvResult<EnvResult<...>, EnvValidationError> ``` Returns `{ success: true, data }` or `{ success: false, error }` synchronously. ### With resolvers ```ts function safeCreateEnv( opts: EnvOptions<...> & { resolvers: readonly Effect.Effect<...>[] } ): Effect.Effect< SafeCreateEnvResult<EnvResult<...>, ResolverError | EnvValidationError>, never > ``` Returns an `Effect` that **never fails** — both resolver and validation errors are captured in the result. ### Result types | Name | Type | Description | | --- | --- | --- | | `SafeCreateEnvSuccess` | `{ success: true; data: T }` | Success branch of the result. | | `SafeCreateEnvFailure` | `{ success: false; error: E }` | Failure branch of the result.
| | `SafeCreateEnvResult` | `SafeCreateEnvSuccess | SafeCreateEnvFailure` | Discriminated union of success and failure. | --- ## Built-in schemas ### String schemas | Name | Type | Description | | --- | --- | --- | | `requiredString` | `Schema` | Non-empty string (minLength: 1). | ### Number schemas | Name | Type | Description | | --- | --- | --- | | `number` | `Schema` | Any number parsed from string via NumberFromString. | | `positiveNumber` | `Schema` | Positive number parsed from string (> 0). | | `integer` | `Schema` | Integer parsed from string. | | `nonNegativeNumber` | `Schema` | Non-negative number parsed from string (>= 0). | | `port` | `Schema` | Port number 1–65535, parsed from string. | ### Boolean schema | Name | Type | Description | | --- | --- | --- | | `boolean` | `Schema` | Parses 'true', 'false', '1', '0' into boolean. | ### URL schemas | Name | Type | Description | | --- | --- | --- | | `url` | `Schema` | Valid HTTP or HTTPS URL. | | `postgresUrl` | `Schema` | PostgreSQL connection URL. | | `redisUrl` | `Schema` | Redis connection URL. | | `mongoUrl` | `Schema` | MongoDB connection URL. | | `mysqlUrl` | `Schema` | MySQL connection URL. | | `commaSeparatedUrls` | `Schema` | Comma-separated HTTP/HTTPS URLs. | ### Collection schemas | Name | Type | Description | | --- | --- | --- | | `commaSeparated` | `Schema` | Comma-separated string to trimmed string array. | | `commaSeparatedNumbers` | `Schema` | Comma-separated string to number array. | ### Parameterized schemas | Name | Type | Description | | --- | --- | --- | | `stringEnum(values)` | `(values: string[]) => Schema` | Creates a literal union schema from the provided values. | | `json(schema)` | `(schema: Schema) => Schema` | Parses JSON string and validates against the inner schema. | --- ## Helpers | Name | Type | Description | | --- | --- | --- | | `withDefault(schema, value)` | `(schema: S, default: T) => Schema` | Makes a schema optional with a default value. Supports data-first and pipe style. | | `optional(schema)` | `(schema: S) => Schema` | Makes a schema accept undefined.
Shorthand for Schema.UndefinedOr(schema). | | `redacted(schema)` | `(schema: S) => Schema<Redacted<...>>` | Wraps schema output in Effect Redacted. Use Redacted.value() to unwrap. | --- ## Error classes ### EnvValidationError ```ts class EnvValidationError extends Error { readonly _tag: "EnvValidationError" readonly errors: ReadonlyArray<string> } ``` Thrown when environment variable validation fails. Contains all validation error messages. ### ClientAccessError ```ts class ClientAccessError extends Error { readonly _tag: "ClientAccessError" readonly variableName: string } ``` Thrown when client code accesses a server-only variable. ### ResolverError ```ts class ResolverError extends Data.TaggedError("ResolverError")<{ readonly resolver: string readonly message: string readonly cause?: unknown }> ``` Effect tagged error for resolver failures. --- ## Presets Exported from `@ayronforge/envil/presets`. | Name | Type | Description | | --- | --- | --- | | `nextjs` | `{ prefix: { client: "NEXT_PUBLIC_" } }` | Next.js client prefix preset. | | `vite` | `{ prefix: { client: "VITE_" } }` | Vite client prefix preset. | | `expo` | `{ prefix: { client: "EXPO_PUBLIC_" } }` | Expo client prefix preset. | | `nuxt` | `{ prefix: { client: "NUXT_PUBLIC_" } }` | Nuxt client prefix preset. | | `sveltekit` | `{ prefix: { client: "PUBLIC_" } }` | SvelteKit client prefix preset. | | `astro` | `{ prefix: { client: "PUBLIC_" } }` | Astro client prefix preset. | --- ## Resolvers ### fromAwsSecrets ```ts import { fromAwsSecrets } from "@ayronforge/envil/aws" ``` | Name | Type | Description | | --- | --- | --- | | `secrets` | `Record` | Env var names → secret IDs. Supports #jsonKey syntax. | | `region` | `string` | AWS region. | | `strict` | `boolean` | When true, secret fetch/parsing errors fail with ResolverError instead of returning undefined.
| ### fromGcpSecrets ```ts import { fromGcpSecrets } from "@ayronforge/envil/gcp" ``` | Name | Type | Description | | --- | --- | --- | | `secrets` | `Record` | Env var names → secret names or full resource paths. | | `projectId` | `string` | GCP project ID. | | `version` | `string` | Secret version. | | `strict` | `boolean` | When true, secret fetch errors fail with ResolverError instead of returning undefined. | ### fromAzureKeyVault ```ts import { fromAzureKeyVault } from "@ayronforge/envil/azure" ``` | Name | Type | Description | | --- | --- | --- | | `secrets` | `Record` | Env var names → Key Vault secret names. | | `vaultUrl` | `string` | Azure Key Vault URL. | | `credential` | `unknown` | Azure credential. Defaults to DefaultAzureCredential. | | `strict` | `boolean` | When true, secret fetch errors fail with ResolverError instead of returning undefined. | ### fromOnePassword ```ts import { fromOnePassword } from "@ayronforge/envil/1password" ``` | Name | Type | Description | | --- | --- | --- | | `secrets` | `Record` | Env var names → 1Password secret references (op:// URIs). | | `serviceAccountToken` | `string` | 1Password service account token. | | `strict` | `boolean` | When true, batch resolution errors fail with ResolverError instead of returning undefined. | ### fromRemoteSecrets ```ts import { fromRemoteSecrets } from "@ayronforge/envil" ``` | Name | Type | Description | | --- | --- | --- | | `secrets` | `Record` | Env var names → secret identifiers passed to the client. | | `client` | `SecretClient` | Custom client implementing the SecretClient interface. | | `strict` | `boolean` | When true, client fetch errors fail with ResolverError instead of returning undefined. | --- ## SecretClient The client interface for `fromRemoteSecrets` and custom integrations. 
```ts
import type { SecretClient } from "@ayronforge/envil"

interface SecretClient {
  getSecret: (secretId: string) => Promise<string | undefined>
  getSecrets?: (secretIds: string[]) => Promise<Record<string, string | undefined>>
}
```

- `getSecret` — required. Fetches a single secret by ID.
- `getSecrets` — optional. When provided and there are multiple secrets, enables batch fetching.
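As a sketch, here is a minimal in-memory client satisfying this interface. The interface is restated locally so the snippet is self-contained; in real code you would import it from `@ayronforge/envil`, and the store contents are made up for illustration.

```typescript
// The SecretClient shape, restated locally so this sketch is self-contained.
interface SecretClient {
  getSecret: (secretId: string) => Promise<string | undefined>
  getSecrets?: (secretIds: string[]) => Promise<Record<string, string | undefined>>
}

// A toy backing store; a real client would call a vault or secrets API here.
const store: Record<string, string> = {
  "db/password": "s3cr3t",
  "api/key": "abc123",
}

const memoryClient: SecretClient = {
  // Required: fetch one secret, resolving to undefined when it is missing.
  getSecret: async (id) => store[id],
  // Optional: batch fetch, used when several secrets resolve at once.
  getSecrets: async (ids) =>
    Object.fromEntries(ids.map((id) => [id, store[id]] as const)),
}
```

Such a client could then be passed as the `client` option of `fromRemoteSecrets`, with `secrets` mapping env var names to whatever IDs the client understands.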