# Yieldless
Yieldless is a TypeScript library with zero dependencies. It gives you tuple-based errors, structured concurrency, and resource management — built on `Promise`, `AbortController`, `AsyncLocalStorage`, and `Symbol.asyncDispose`.
## Quick install [#quick-install]
```bash
pnpm add yieldless
```
## What it looks like [#what-it-looks-like]
```ts
import { safeTry } from "yieldless/error";
import { runTaskGroup } from "yieldless/task";

const [repoError, repo] = await safeTry(loadRepository(repoId));
if (repoError) {
  return [repoError, null] as const;
}

return await runTaskGroup(async (group) => {
  const refs = group.spawn((signal) => loadRefs(repo.path, signal));
  const status = group.spawn((signal) => loadStatus(repo.path, signal));
  return {
    repo,
    refs: await refs,
    status: await status,
  };
});
```
## What Yieldless covers [#what-yieldless-covers]
Yieldless is intentionally modular. You can adopt one boundary at a time:
* `error` and `result` for tuple creation, folding, and composition
* `task`, `all`, `iterable`, `queue`, `pubsub`, and `limiter` for async coordination and load control
* `retry`, `schedule`, `signal`, `timer`, and `breaker` for resilient boundaries
* `cache`, `batcher`, and `singleflight` for repeated, keyed, and duplicate work
* `fetch`, `node`, `event`, and `ipc` for common platform boundaries
* `schema`, `env`, and `router` for application edges
* `resource`, `di`, `context`, and `test` for lifecycle, ergonomics, and deterministic tests
Start with [Beginner Tutorial](https://binbandit.github.io/yieldless/docs/guides/beginner-tutorial/) if you are new to Yieldless, [Simple Recipes](https://binbandit.github.io/yieldless/docs/recipes/simple-recipes/) if you want a small copy-pasteable pattern, or [Module Selection](https://binbandit.github.io/yieldless/docs/guides/module-selection/) if you are deciding which piece to reach for first.
# Quickstart
The fastest way to understand Yieldless is to wire the core pieces together:
1. `yieldless/error` to keep failures in tuple form
2. `yieldless/result` when tuple branches need light composition
3. `yieldless/task`, `yieldless/all`, `yieldless/iterable`, `yieldless/queue`, `yieldless/pubsub`, and `yieldless/limiter` to fan work out, hand it off, and control pressure
4. `yieldless/retry`, `yieldless/schedule`, `yieldless/signal`, `yieldless/timer`, and `yieldless/breaker` for resilient async boundaries
5. `yieldless/cache`, `yieldless/batcher`, and `yieldless/singleflight` for repeated, keyed, and duplicate work
6. Boundary adapters like `yieldless/fetch`, `yieldless/schema`, `yieldless/router`, `yieldless/ipc`, `yieldless/node`, and `yieldless/env`
## Install [#install]
```bash
pnpm add yieldless
```
TypeScript 5.5+ is the target baseline. The package is compiled with `isolatedDeclarations` enabled.
## Step 1: stop throwing for routine failures [#step-1-stop-throwing-for-routine-failures]
`safeTry()` and `safeTrySync()` let you treat common failures as data instead of exception control flow.
```ts
import { safeTry } from "yieldless/error";

const [repoError, repo] = await safeTry(loadRepository(repoId));
if (repoError) {
  return [repoError, null] as const;
}
```
That pattern is the center of the library. Every module is designed to fit back into the same `[error, value]` shape.
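If the tuple shape is new, it may help to see a minimal sketch of what a `safeTry()`-style helper does under the hood (an illustration only, not the library's actual implementation):

```ts
// Sketch only: wrap a promise so failures come back as data, not throws.
async function safeTrySketch<T>(
  promise: Promise<T>,
): Promise<readonly [Error, null] | readonly [null, T]> {
  try {
    return [null, await promise] as const;
  } catch (cause) {
    // Normalize non-Error throws so the error slot is always an Error.
    return [cause instanceof Error ? cause : new Error(String(cause)), null] as const;
  }
}
```

Either the error slot or the value slot is `null`, never both, which is what makes the `if (error)` branch a complete check.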
When a tuple flow grows past one branch, use `yieldless/result` to keep it readable.
```ts
import { fromNullable, mapOk } from "yieldless/result";

const userResult = fromNullable(
  repo.owner,
  () => new Error("Repository owner is missing"),
);
return mapOk(userResult, (owner) => ({
  id: repo.id,
  ownerName: owner.name,
}));
```
Keep this modest. If one `if (error)` branch is clearer, use the branch.
## Step 2: group related work under one cancellation signal [#step-2-group-related-work-under-one-cancellation-signal]
`runTaskGroup()` gives you a shared abort signal and sibling failure propagation.
```ts
import { runTaskGroup } from "yieldless/task";

const requestController = new AbortController();
const summary = await runTaskGroup(async (group) => {
  const refs = group.spawn((signal) => readRefs(repo.path, signal));
  const status = group.spawn((signal) => readStatus(repo.path, signal));
  return {
    refs: await refs,
    status: await status,
  };
}, {
  signal: requestController.signal,
});
```
If `readRefs()` throws, the group aborts the shared signal, waits for `readStatus()` to settle, and then rethrows the original failure.
If the request or UI flow already has an `AbortSignal`, pass it in so the whole group inherits upstream cancellation.
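The structure underneath can be sketched with native primitives (a simplified model; the real `runTaskGroup()` handles more cases, and the names here are illustrative):

```ts
// Sketch only: one shared AbortController per group; a child failure aborts
// the siblings, waits for them to settle, then rethrows the original error.
async function taskGroupSketch<T>(
  body: (spawn: <R>(task: (signal: AbortSignal) => Promise<R>) => Promise<R>) => Promise<T>,
  parent?: AbortSignal,
): Promise<T> {
  const controller = new AbortController();
  parent?.addEventListener("abort", () => controller.abort(parent.reason), { once: true });
  const children: Promise<unknown>[] = [];
  const spawn = <R>(task: (signal: AbortSignal) => Promise<R>): Promise<R> => {
    const child = task(controller.signal);
    children.push(child.catch(() => {})); // track settlement without unhandled rejections
    return child;
  };
  try {
    return await body(spawn);
  } catch (error) {
    controller.abort(error);            // cancel the siblings
    await Promise.allSettled(children); // wait for every child to settle
    throw error;                        // rethrow the original failure
  }
}
```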
When the work is a list rather than a few named tasks, use `mapLimit()` to keep pressure on the outside world under control.
```ts
import { mapLimit } from "yieldless/all";

const [thumbnailError, thumbnails] = await mapLimit(
  images,
  (image, _index, signal) => renderThumbnail(image, signal),
  { concurrency: 4, signal: requestController.signal },
);
if (thumbnailError) {
  return [thumbnailError, null] as const;
}
```
The result order matches the input order, and the first tuple error aborts the remaining in-flight work.
For async streams or generators, reach for `yieldless/iterable` instead of materializing everything first.
```ts
import { mapAsyncLimit } from "yieldless/iterable";

const [indexError, indexed] = await mapAsyncLimit(
  readRepositories(workspacePath),
  (repository, _index, signal) => indexRepository(repository, signal),
  { concurrency: 3, signal: requestController.signal },
);
```
## Step 3: add retries where the outside world is unreliable [#step-3-add-retries-where-the-outside-world-is-unreliable]
Retries should stay close to transport boundaries: HTTP, queues, database connections, and subprocesses that can legitimately fail for transient reasons.
```ts
import { safeTry } from "yieldless/error";
import { safeRetry } from "yieldless/retry";

const [fetchError, payload] = await safeRetry(
  async (_attempt, signal) => safeTry(fetchRepository(repoId, signal)),
  {
    maxAttempts: 4,
    baseDelayMs: 150,
  },
);
if (fetchError) {
  return [fetchError, null] as const;
}
```
The retry loop respects `AbortSignal`, so if a parent task group is canceled, the pending backoff timer is canceled too.
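An abort-aware delay is the mechanism that makes this work. A minimal sketch (illustrative; the library's internal timer handling may differ):

```ts
// Sketch only: a setTimeout-based delay that rejects as soon as the signal
// aborts, so a canceled parent never waits out a pending backoff timer.
function abortableSleep(ms: number, signal?: AbortSignal): Promise<void> {
  return new Promise((resolve, reject) => {
    if (signal?.aborted) {
      reject(new Error("Aborted before the delay started"));
      return;
    }
    const timer = setTimeout(() => {
      signal?.removeEventListener("abort", onAbort);
      resolve();
    }, ms);
    function onAbort() {
      clearTimeout(timer); // free the timer immediately
      reject(new Error("Aborted during the delay"));
    }
    signal?.addEventListener("abort", onAbort, { once: true });
  });
}
```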
## Step 4: keep the edges boring [#step-4-keep-the-edges-boring]
When a boundary needs a firm time budget, derive one signal and pass it through the whole call chain.
```ts
import { safeTry } from "yieldless/error";
import { withTimeout } from "yieldless/signal";

const [fetchError, payload] = await safeTry(
  withTimeout(
    (signal) => fetchRepository(repoId, signal),
    { timeoutMs: 5_000 },
  ),
);
```
If you are calling HTTP JSON APIs, `yieldless/fetch` combines the same ideas:
```ts
import { fetchJsonSafe } from "yieldless/fetch";

const [error, user] = await fetchJsonSafe(`/api/users/${userId}`, {
  timeoutMs: 5_000,
  signal: requestController.signal,
});
```
If you need to wait for eventual consistency, use `poll()` instead of hand-writing timers.
```ts
import { poll } from "yieldless/timer";

class JobNotReadyError extends Error {
  constructor() {
    super("Job is not ready yet.");
    this.name = "JobNotReadyError";
  }
}

const [jobError, job] = await poll(
  async (_attempt, signal) => {
    const [fetchError, current] = await fetchJsonSafe(jobUrl, { signal });
    if (fetchError) {
      return [fetchError, null] as const;
    }
    return current.status === "ready"
      ? ([null, current] as const)
      : ([new JobNotReadyError(), null] as const);
  },
  {
    intervalMs: 1_000,
    timeoutMs: 30_000,
    signal: requestController.signal,
    shouldContinue: (error) => error instanceof JobNotReadyError,
  },
);
```
The package ships helpers for common backend boundaries and production pressure:
* `yieldless/env` validates startup configuration
* `yieldless/fetch` wraps HTTP status, JSON parsing, and deadlines
* `yieldless/schema` keeps validators in tuple form
* `yieldless/router` turns tuple handlers into HTTP responses
* `yieldless/ipc` preserves tuple results across Electron IPC
* `yieldless/node` wraps filesystem and child-process work
* `yieldless/event` waits for one event with cleanup and abort support
* `yieldless/schedule` builds reusable retry and repeat policies
* `yieldless/limiter` protects shared capacity with semaphores and rate limits
* `yieldless/queue` and `yieldless/pubsub` coordinate in-process workers and subscribers
* `yieldless/cache` keeps successful read-through loads fresh for a TTL
* `yieldless/batcher` collapses nearby keyed reads into ordered batches
* `yieldless/breaker` backs off dependencies that are already failing
* `yieldless/singleflight` collapses duplicate in-flight calls into a single execution
* `yieldless/test` makes promises, clocks, and abort signals deterministic in tests
## A full request flow [#a-full-request-flow]
```ts
import { safeTry } from "yieldless/error";
import { safeRetry } from "yieldless/retry";
import { parseSafe } from "yieldless/schema";
import { NotFoundError, honoHandler } from "yieldless/router";

const getUser = honoHandler(async (c) => {
  const [inputError, input] = parseSafe(userParamsSchema, c.req.param());
  if (inputError) {
    return [inputError, null];
  }
  const [userError, user] = await safeRetry(
    async (_attempt, signal) => safeTry(loadUser(input.id, signal)),
    { maxAttempts: 3 },
  );
  if (userError) {
    return [userError, null];
  }
  if (user === null) {
    return [new NotFoundError("User not found"), null];
  }
  return [null, user];
});
```
## Where to go next [#where-to-go-next]
* Read [Beginner Tutorial](https://binbandit.github.io/yieldless/docs/guides/beginner-tutorial/) if the tuple style is still new and you want one feature built slowly.
* Read [Simple Recipes](https://binbandit.github.io/yieldless/docs/recipes/simple-recipes/) when you want small, single-feature snippets.
* Read [Read IDs and Fetch Records](https://binbandit.github.io/yieldless/docs/recipes/read-ids-fetch-records/) for a concrete `forEach()` plus `fetchJsonSafe()` workflow.
* Read [Examples](https://binbandit.github.io/yieldless/docs/guides/examples/) when you want copy-pasteable small patterns and a few larger compositions.
* Read [Module Selection](https://binbandit.github.io/yieldless/docs/guides/module-selection/) when you are deciding which helper fits a specific problem.
* Read [Design Rules](https://binbandit.github.io/yieldless/docs/guides/design-rules/) before you spread the tuple style across a large codebase.
* Read [Do and Don't](https://binbandit.github.io/yieldless/docs/guides/do-and-dont/) if you are moving a team onto the library and want the conventions to stay sharp.
* Use the reference section when you already know what module you need.
## Common mistakes [#common-mistakes]
### Good: keep tuple boundaries explicit [#good-keep-tuple-boundaries-explicit]
```ts
const [error, user] = await fetchJsonSafe(url, { signal });
if (error) return [error, null] as const;
return [null, toUserView(user)] as const;
```
### Avoid: carrying tuples through every layer [#avoid-carrying-tuples-through-every-layer]
```ts
const result = await fetchJsonSafe(url);
return renderProfile(result);
```
Most UI and domain code wants ordinary values or ordinary state. Convert at the boundary.
### Good: pass cancellation into real I/O [#good-pass-cancellation-into-real-io]
```ts
await runCommandSafe("git", ["fetch"], {
  cwd: repoPath,
  signal,
});
```
### Avoid: expecting Yieldless to stop work that ignores signals [#avoid-expecting-yieldless-to-stop-work-that-ignores-signals]
```ts
await runTaskGroup(async (group) => {
  group.spawn(() => expensiveLoopThatNeverChecksAbort());
});
```
# Examples
This page is a grab bag of practical examples. Each snippet is intentionally plain TypeScript: no custom runtime, no program builder, no framework-specific wrapper.
Use the small examples when you need one pattern. Use the larger examples when you want to see how several modules compose at an application boundary.
## Small examples [#small-examples]
### Convert one unreliable boundary into a tuple [#convert-one-unreliable-boundary-into-a-tuple]
```ts
import { safeTry } from "yieldless/error";

export async function readJson(response: Response) {
  const [error, body] = await safeTry(response.json() as Promise<unknown>);
  if (error) {
    return [error, null] as const;
  }
  return [null, body] as const;
}
```
Good tuple code usually starts at an edge: HTTP, files, subprocesses, validation, IPC, or user-provided data.
### Fetch JSON with a real deadline [#fetch-json-with-a-real-deadline]
```ts
import { fetchJsonSafe } from "yieldless/fetch";

const [error, profile] = await fetchJsonSafe(
  `https://api.example.com/profiles/${profileId}`,
  {
    timeoutMs: 5_000,
    signal,
  },
);
if (error) {
  return [error, null] as const;
}
```
The timeout and parent `signal` are linked, so either one can stop the request.
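One way to express that linkage with native primitives (a sketch; the module's internals may differ) is `AbortSignal.any()`, available in Node 20+ and modern browsers:

```ts
// Combine a parent signal with a deadline: whichever aborts first wins.
function linkedDeadline(parent: AbortSignal, timeoutMs: number): AbortSignal {
  return AbortSignal.any([parent, AbortSignal.timeout(timeoutMs)]);
}
```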
### Retry only the noisy dependency [#retry-only-the-noisy-dependency]
```ts
import { HttpStatusError, fetchJsonSafe } from "yieldless/fetch";
import { safeRetry } from "yieldless/retry";

const [error, invoice] = await safeRetry(
  (_attempt, attemptSignal) =>
    fetchJsonSafe(`/api/invoices/${invoiceId}`, {
      timeoutMs: 3_000,
      signal: attemptSignal,
    }),
  {
    maxAttempts: 3,
    baseDelayMs: 150,
    signal,
    shouldRetry: (error) =>
      !(error instanceof HttpStatusError) || error.status >= 500,
  },
);
```
Avoid retrying the whole route handler. Validation, authorization, and side effects should not run again just because a remote service had a brief wobble.
### Reuse a schedule policy [#reuse-a-schedule-policy]
```ts
import {
  composeSchedules,
  exponentialBackoff,
  maxAttempts,
  runScheduled,
} from "yieldless/schedule";

const transientRemotePolicy = composeSchedules(
  maxAttempts(5),
  exponentialBackoff({
    baseDelayMs: 100,
    maxDelayMs: 2_000,
  }),
);

const result = await runScheduled(
  (_attempt, attemptSignal) => refreshSearchIndex(indexId, attemptSignal),
  transientRemotePolicy,
  { signal },
);
```
Schedules are useful when several call sites should share timing rules without sharing business logic.
### Put a ceiling on a shared resource [#put-a-ceiling-on-a-shared-resource]
```ts
import { createSemaphore, withPermit } from "yieldless/limiter";

const databaseConnections = createSemaphore(10);
const [error, user] = await withPermit(
  databaseConnections,
  (scopedSignal) => loadUserFromDatabase(userId, scopedSignal),
  { signal },
);
```
Use semaphores for "only N at once." Use rate limiters for "only N per time window."
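The "only N at once" idea can be illustrated with a minimal counting semaphore (a sketch only; `createSemaphore` in `yieldless/limiter` is abort-aware and differs in detail):

```ts
// Sketch only: acquire() resolves immediately while permits remain, otherwise
// it waits until release() hands a permit straight to the next waiter.
class SemaphoreSketch {
  private available: number;
  private waiters: Array<() => void> = [];
  constructor(permits: number) {
    this.available = permits;
  }
  async acquire(): Promise<void> {
    if (this.available > 0) {
      this.available -= 1;
      return;
    }
    await new Promise<void>((resolve) => this.waiters.push(resolve));
  }
  release(): void {
    const next = this.waiters.shift();
    if (next) next();           // hand the permit directly to a waiter
    else this.available += 1;
  }
}
```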
### Respect an API quota [#respect-an-api-quota]
```ts
import { fetchJsonSafe } from "yieldless/fetch";
import { createRateLimiter } from "yieldless/limiter";

const paymentQuota = createRateLimiter({
  limit: 30,
  intervalMs: 60_000,
});

const [quotaError] = await paymentQuota.takeSafe({ signal });
if (quotaError) {
  return [quotaError, null] as const;
}
return await fetchJsonSafe(invoiceUrl, { signal });
```
Keeping the wait explicit makes quota pressure visible in code review.
### Hand work from producers to workers [#hand-work-from-producers-to-workers]
```ts
import { createQueue } from "yieldless/queue";

const queue = createQueue({ capacity: 100 });

async function worker(signal: AbortSignal) {
  while (!signal.aborted) {
    const [takeError, job] = await queue.take({ signal });
    if (takeError) {
      return;
    }
    await processJob(job, signal);
  }
}

const [offerError] = await queue.offer({ id: "job-1" }, { signal });
```
Bound the queue when producers can outpace workers. Capacity is part of the user experience.
### Broadcast local progress [#broadcast-local-progress]
```ts
import { createPubSub } from "yieldless/pubsub";

const progress = createPubSub({ replay: 1 });
const subscription = progress.subscribe();

progress.publish({
  type: "started",
  jobId,
});

for await (const event of subscription) {
  renderProgress(event);
}
```
Use pub/sub for in-process progress, notifications, and diagnostics. Use durable infrastructure when events must survive restarts.
### Cache stable reads [#cache-stable-reads]
```ts
import { createCache } from "yieldless/cache";
import { fetchJsonSafe } from "yieldless/fetch";

const projectCache = createCache({
  maxSize: 500,
  ttlMs: 60_000,
  load: (projectId, signal) =>
    fetchJsonSafe(`/api/projects/${projectId}`, { signal }),
});

const [error, project] = await projectCache.get(projectId, { signal });
```
Cache reads, not commands. Failed loads are returned but not stored.
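The "failures are returned but not stored" behavior can be sketched as a read-through map (an illustration; the real `createCache` adds LRU eviction and shared in-flight loads):

```ts
// Sketch only: successes are stored until the TTL expires; a failed load is
// surfaced to the caller but never stored, so the next get() retries.
type Result<T> = readonly [Error, null] | readonly [null, T];

function readThroughSketch<T>(
  ttlMs: number,
  load: (key: string) => Promise<Result<T>>,
  now: () => number = Date.now,
) {
  const entries = new Map<string, { value: T; expiresAt: number }>();
  return async (key: string): Promise<Result<T>> => {
    const hit = entries.get(key);
    if (hit && hit.expiresAt > now()) {
      return [null, hit.value] as const; // fresh hit
    }
    const [error, value] = await load(key);
    if (error !== null) {
      return [error, null] as const; // surfaced, not cached
    }
    entries.set(key, { value: value as T, expiresAt: now() + ttlMs });
    return [null, value as T] as const;
  };
}
```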
### Batch nearby keyed reads [#batch-nearby-keyed-reads]
```ts
import { createBatcher } from "yieldless/batcher";
import { fetchJsonSafe } from "yieldless/fetch";

const owners = createBatcher({
  waitMs: 2,
  maxBatchSize: 50,
  loadMany: async (ownerIds, signal) => {
    const [error, values] = await fetchJsonSafe(
      `/api/owners?ids=${encodeURIComponent(ownerIds.join(","))}`,
      { signal },
    );
    if (error) {
      return [error, null] as const;
    }
    const byId = new Map(values.map((owner) => [owner.id, owner]));
    return [
      null,
      ownerIds.map((id) => byId.get(id) ?? { id, name: "Unknown owner" }),
    ] as const;
  },
});

const [error, owner] = await owners.load(ownerId, { signal });
```
Batchers return one value per requested key, in the same order. If your backend returns unordered results, map them back explicitly.
### Fail fast when a dependency is already down [#fail-fast-when-a-dependency-is-already-down]
```ts
import { CircuitOpenError, createCircuitBreaker } from "yieldless/breaker";
import { fetchJsonSafe } from "yieldless/fetch";

const loadFlags = createCircuitBreaker(
  (signal, accountId: string) =>
    fetchJsonSafe(`/api/accounts/${accountId}/flags`, {
      timeoutMs: 2_000,
      signal,
    }),
  {
    failureThreshold: 3,
    cooldownMs: 15_000,
  },
);

const [error, flags] = await loadFlags(accountId);
if (error instanceof CircuitOpenError) {
  return [null, defaultFlags] as const;
}
```
Circuit breakers are for external dependencies and expensive boundaries, not validation branches.
### Make async tests deterministic [#make-async-tests-deterministic]
```ts
import {
  createManualClock,
  createTestSignal,
  flushMicrotasks,
} from "yieldless/test";

const clock = createManualClock();
const testSignal = createTestSignal();

let settled = false;
void clock.sleep(1_000, { signal: testSignal.signal }).then(() => {
  settled = true;
});

clock.tick(1_000);
await flushMicrotasks();
expect(settled).toBe(true);
```
Manual clocks work best when the production code accepts a clock or sleep dependency. They do not patch global timers.
## Larger examples [#larger-examples]
### A user card loader [#a-user-card-loader]
This example validates input, fetches two independent resources under one cancellation signal, and returns an ordinary view model.
```ts
import { all } from "yieldless/all";
import { fetchJsonSafe } from "yieldless/fetch";
import { parseSafe } from "yieldless/schema";

export async function loadUserCard(input: unknown, signal: AbortSignal) {
  const [inputError, params] = parseSafe(userCardParamsSchema, input);
  if (inputError) {
    return [inputError, null] as const;
  }
  const [loadError, loaded] = await all(
    [
      (taskSignal) =>
        fetchJsonSafe(`/api/users/${params.userId}`, {
          timeoutMs: 3_000,
          signal: taskSignal,
        }),
      (taskSignal) =>
        fetchJsonSafe(`/api/users/${params.userId}/activity`, {
          timeoutMs: 3_000,
          signal: taskSignal,
        }),
    ],
    { signal },
  );
  if (loadError) {
    return [loadError, null] as const;
  }
  const [user, activity] = loaded;
  return [
    null,
    {
      id: user.id,
      name: user.name,
      recentActivityCount: activity.length,
    },
  ] as const;
}
```
The two fetches are allowed to run together, but a failure in either one aborts the other.
### A cached GraphQL-style resolver helper [#a-cached-graphql-style-resolver-helper]
This example combines `cache` and `batcher`: repeated IDs are cached across resolver calls, while nearby misses are batched together.
```ts
import { createBatcher } from "yieldless/batcher";
import { createCache } from "yieldless/cache";

const loadUsersById = createBatcher({
  waitMs: 1,
  maxBatchSize: 100,
  loadMany: (ids, signal) => userRepository.findManyById(ids, signal),
});

const users = createCache({
  ttlMs: 30_000,
  maxSize: 1_000,
  load: (id, signal) => loadUsersById.load(id, { signal }),
});

export async function resolveAuthor(post: Post, signal: AbortSignal) {
  const [error, author] = await users.get(post.authorId, { signal });
  if (error) {
    return [error, null] as const;
  }
  return [null, author] as const;
}
```
The batcher removes N+1 pressure from the current tick. The cache removes repeated reads across later calls.
### A product route with cached reads and explicit fresh reload [#a-product-route-with-cached-reads-and-explicit-fresh-reload]
```ts
import { createCache } from "yieldless/cache";
import { fetchJsonSafe } from "yieldless/fetch";
import { honoHandler } from "yieldless/router";

const products = createCache({
  ttlMs: 30_000,
  maxSize: 1_000,
  load: (productId, signal) =>
    fetchJsonSafe(`/api/products/${productId}`, {
      timeoutMs: 4_000,
      signal,
    }),
});

export const getProduct = honoHandler(async (c) => {
  const productId = c.req.param("productId");
  const forceRefresh = c.req.query("refresh") === "true";
  const result = forceRefresh
    ? await products.refresh(productId, { signal: c.req.raw.signal })
    : await products.get(productId, { signal: c.req.raw.signal });
  return result;
});
```
The caller can ask for freshness without bypassing the tuple flow or duplicating the loader.
### A webhook intake route that returns quickly [#a-webhook-intake-route-that-returns-quickly]
```ts
import { safeTry } from "yieldless/error";
import { createQueue } from "yieldless/queue";
import { BadRequestError, honoHandler } from "yieldless/router";
import { parseSafe } from "yieldless/schema";

const webhookJobs = createQueue({ capacity: 5_000 });

export const postPaymentWebhook = honoHandler(
  async (c) => {
    const [bodyError, body] = await safeTry(c.req.json());
    if (bodyError) {
      return [new BadRequestError("Invalid webhook JSON"), null] as const;
    }
    const [eventError, event] = parseSafe(paymentWebhookSchema, body);
    if (eventError) {
      return [eventError, null] as const;
    }
    const [queueError] = await webhookJobs.offer(event, {
      signal: c.req.raw.signal,
    });
    if (queueError) {
      return [queueError, null] as const;
    }
    return [null, { accepted: true }] as const;
  },
  { successStatus: 202 },
);
```
The webhook handler validates and enqueues. The slow reconciliation work can happen in a worker with retries, logging, and its own cancellation boundary.
## Avoid examples [#avoid-examples]
### Avoid hiding every policy in one wrapper [#avoid-hiding-every-policy-in-one-wrapper]
```ts
const user = await runtime.run(
  retry(cache(batch(limit(fetchUser)))),
  userId,
);
```
This makes the code look compact, but now readers must learn your runtime before they can understand a request.
Prefer visible composition at the boundary:
```ts
const [quotaError] = await apiQuota.takeSafe({ signal });
if (quotaError) return [quotaError, null] as const;
const [userError, user] = await users.get(userId, { signal });
if (userError) return [userError, null] as const;
```
### Avoid using queues as silent memory [#avoid-using-queues-as-silent-memory]
```ts
const queue = createQueue();
for (const upload of incomingUploads) {
  await queue.offer(upload);
}
```
If the input is user-sized or service-sized, make the pressure visible:
```ts
const queue = createQueue({ capacity: 200 });
```
Good examples should make failure, cancellation, and pressure obvious. If a helper hides those three things, it is probably drifting away from Yieldless' mission.
# Module Selection
Yieldless is easiest to adopt when you choose one small module at the boundary where the pain appears. You do not need to move an application into a framework runtime. Keep ordinary TypeScript, add tuple and cancellation helpers where they remove noise, and stop there.
## The capability map [#the-capability-map]
| Work you are doing | Start with | Why |
| -------------------------------------------- | ------------------------ | ------------------------------------------------------------------ |
| Convert a promise or sync throw into a tuple | `yieldless/error` | Establishes the `[error, value]` shape. |
| Transform or chain tuples | `yieldless/result` | Removes repetitive branch plumbing without a DSL. |
| Run related promise work together | `yieldless/task` | Shared `AbortSignal` and sibling failure cleanup. |
| Run tuple tasks in parallel | `yieldless/all` | Tuple-native `all()`, `race()`, and fixed-list `mapLimit()`. |
| Process sync or async streams of values | `yieldless/iterable` | `collect()`, sequential `forEach()`, and bounded iterable mapping. |
| Connect producers and consumers | `yieldless/queue` | Bounded async queues with abortable offer/take backpressure. |
| Broadcast in-process events | `yieldless/pubsub` | Async-iterable subscriptions with optional replay. |
| Protect shared capacity or API quota | `yieldless/limiter` | Semaphores and rate limiters for explicit backpressure. |
| Cache expensive async reads | `yieldless/cache` | TTL/LRU caching with shared in-flight loads. |
| Batch nearby keyed reads | `yieldless/batcher` | DataLoader-style coalescing without another dependency. |
| Guard flaky dependencies | `yieldless/breaker` | Circuit breaking for repeated tuple failures. |
| Retry transient tuple failures | `yieldless/retry` | Exponential backoff with jitter and abort-aware waits. |
| Reuse retry or polling timing rules | `yieldless/schedule` | Composable delay and stop policies without a scheduler. |
| Apply a time budget | `yieldless/signal` | Disposable derived deadline signals. |
| Sleep or poll | `yieldless/timer` | Abort-aware timer utilities without a scheduler. |
| Fetch JSON or inspect HTTP status | `yieldless/fetch` | Native fetch with tuple errors, timeouts, and JSON parsing. |
| Read process configuration | `yieldless/env` | Required/optional env helpers and schema-backed parsing. |
| Validate unknown input | `yieldless/schema` | Adapter for existing schema libraries. |
| Build HTTP JSON handlers | `yieldless/router` | Convert tuple handlers into responses. |
| Cross an Electron IPC boundary | `yieldless/ipc` | Tuple serialization and optional renderer-driven cancellation. |
| Use Node files or subprocesses | `yieldless/node` | Tuple filesystem helpers and subprocess output capture. |
| Wait for one event | `yieldless/event` | Abortable `EventTarget` / `EventEmitter` waits. |
| Deduplicate in-flight work | `yieldless/singleflight` | Prevent duplicate calls from stampeding the same operation. |
| Scope cleanup | `yieldless/resource` | Native `await using` for acquire/release pairs. |
| Bind stable dependencies | `yieldless/di` | Reader-like dependency binding with plain functions. |
| Carry request metadata | `yieldless/context` | `AsyncLocalStorage` for request-scoped values and spans. |
| Test async helpers | `yieldless/test` | Deferred promises, manual clocks, and controlled abort signals. |
## Good adoption path [#good-adoption-path]
Start at the boundary where failures are already expected.
```ts
import { fetchJsonSafe } from "yieldless/fetch";

const [error, user] = await fetchJsonSafe(url, {
  timeoutMs: 5_000,
  signal,
});
if (error) {
  return [error, null] as const;
}
return [null, user] as const;
```
Then add composition only when the code asks for it.
```ts
import { fromNullable, mapOk } from "yieldless/result";

return mapOk(
  fromNullable(user, () => new Error("User not found")),
  (value) => ({ id: value.id, name: value.name }),
);
```
## Avoid turning Yieldless into a runtime [#avoid-turning-yieldless-into-a-runtime]
This is possible, but it is not the point:
```ts
// Avoid: a home-grown mini runtime with hidden policy everywhere.
const program = pipe(
  readConfig(),
  retryEverywhere(),
  injectGlobals(),
  runWithHiddenContext(),
);
```
Prefer direct code with explicit control flow.
```ts
const [configError, config] = parseEnvSafe(envSchema);
if (configError) return [configError, null] as const;

const [userError, user] = await safeRetry(
  (_attempt, signal) => fetchUser(config.apiUrl, signal),
  { maxAttempts: 3, signal },
);
```
## Choosing between similar modules [#choosing-between-similar-modules]
Use `yieldless/all` when you already have a finite list of tuple tasks.
```ts
await all([
  (signal) => readPrimary(signal),
  (signal) => readReplica(signal),
]);
```
Use `yieldless/task` when the fan-out is imperative or the children return ordinary promise values.
```ts
await runTaskGroup(async (group) => {
const refs = group.spawn((signal) => loadRefs(signal));
const status = group.spawn((signal) => loadStatus(signal));
return {
refs: await refs,
status: await status,
};
});
```
Use `yieldless/iterable` when the input is a stream or async generator.
```ts
await mapAsyncLimit(readRows(filePath), processRow, {
  concurrency: 4,
  signal,
});
```
Use `yieldless/singleflight` when duplicate callers ask for the same work at the same time. It is not a cache.
```ts
const loadRepo = singleFlight(
  (signal, repoId: string) => readRepository(repoId, signal),
);
```
## A complete boundary [#a-complete-boundary]
```ts
import { fetchJsonSafe } from "yieldless/fetch";
import { safeRetry } from "yieldless/retry";
import { parseSafe } from "yieldless/schema";

export async function loadUserView(input: unknown, signal: AbortSignal) {
  const [inputError, params] = parseSafe(userParamsSchema, input);
  if (inputError) return [inputError, null] as const;

  const [userError, user] = await safeRetry(
    (_attempt, attemptSignal) =>
      fetchJsonSafe(`/api/users/${params.id}`, {
        timeoutMs: 5_000,
        signal: attemptSignal,
      }),
    {
      maxAttempts: 3,
      signal,
    },
  );
  if (userError) return [userError, null] as const;

  return [
    null,
    {
      id: user.id,
      label: user.name,
    },
  ] as const;
}
```
# Design Rules
Yieldless is small on purpose. The design rules are what keep it small.
## 1. Prefer native language features over framework runtimes [#1-prefer-native-language-features-over-framework-runtimes]
If JavaScript or Node already has a solid primitive for the job, Yieldless uses it.
* `Promise` and `async/await` for sequencing
* `AbortController` and `AbortSignal` for cancellation
* `AsyncLocalStorage` for async context in Node
* `Symbol.asyncDispose` and `await using` for resource cleanup
This rule keeps the library easy to explain to engineers who did not build the original system.
## 2. Keep failures visible [#2-keep-failures-visible]
Thrown exceptions are still useful for process-level failures and framework boundaries, but routine operational failures should stay explicit.
```ts
const [error, value] = await safeTry(readConfig());
if (error) {
  return [error, null] as const;
}
```
That shape is intentionally repetitive. It makes failure handling obvious during code review.
## 3. Cancellation is cooperative [#3-cancellation-is-cooperative]
`runTaskGroup()`, `all()`, `race()`, and `safeRetry()` all respect `AbortSignal`, but they cannot cancel code that ignores the signal.
```ts
group.spawn((signal) => runCommand("git", ["fetch"], { signal }));
```
## 4. Keep adapters thin [#4-keep-adapters-thin]
The schema, router, IPC, and Node modules are adapters. They are not attempts to replace the libraries they sit beside.
## 5. Avoid global magic [#5-avoid-global-magic]
Use `inject()` for stable dependencies. Use `createContext()` for request-scoped data.
## 6. Pick the smallest module that solves the problem [#6-pick-the-smallest-module-that-solves-the-problem]
| Problem | Start with |
| ------------------------------------- | --------------------------------------- |
| You want cleaner async errors | `yieldless/error` |
| You want to transform tuple values | `yieldless/result` |
| You need sibling cancellation | `yieldless/task` |
| You need bounded batch work | `yieldless/all` or `yieldless/iterable` |
| You need abort-aware retries | `yieldless/retry` |
| You need a firm deadline | `yieldless/signal` |
| You need polling or sleep | `yieldless/timer` |
| You need HTTP JSON with deadlines | `yieldless/fetch` |
| You need startup config | `yieldless/env` |
| You want typed route handlers | `yieldless/router` |
| You are building an Electron boundary | `yieldless/ipc` |
| You need in-flight deduplication | `yieldless/singleflight` |
## 7. Re-throw only at the boundary that needs it [#7-re-throw-only-at-the-boundary-that-needs-it]
`unwrap()` exists for places that genuinely expect thrown exceptions.
```ts
import { safeTry, unwrap } from "yieldless/error";

await transaction(async () => {
  const result = await safeTry(writeModel());
  return unwrap(result);
});
```
8\. Prefer adapters over replacements [#8-prefer-adapters-over-replacements]
Yieldless should sit beside the tools teams already like.
* Use your schema library, adapt it with `yieldless/schema`.
* Use platform `fetch`, wrap it with `yieldless/fetch`.
* Use Hono-style handlers, adapt tuple results with `yieldless/router`.
* Use Electron IPC, preserve tuple results with `yieldless/ipc`.
9\. Keep pressure explicit [#9-keep-pressure-explicit]
When work can fan out, make the pressure control visible in code.
```ts
await mapLimit(repositories, refreshRepository, {
  concurrency: 4,
  signal,
});
```
That one option is often the difference between a helpful tool and a machine that feels frozen.
# Do and Don't
This page is the short list of habits that keep tuple-based code clean under real production pressure.
Error handling [#error-handling]
✓ Keep the tuple close to the boundary. [#-keep-the-tuple-close-to-the-boundary]
Wrap I/O once, then pass domain values through the rest of the function.
✗ Wrap every line in safeTry(). [#-wrap-every-line-in-safetry]
If a function is already synchronous and pure, let it stay ordinary code.
```ts
const [readError, configText] = await readFileSafe("config.json");
if (readError) return [readError, null] as const;
const [parseError, config] = safeTrySync(() => JSON.parse(configText));
if (parseError) return [parseError, null] as const;
```
✓ Use result helpers when they remove branch noise. [#-use-result-helpers-when-they-remove-branch-noise]
```ts
return mapOk(
  fromNullable(user, () => new NotFoundError("User not found")),
  toUserView,
);
```
✗ Build a pipeline because it feels more abstract. [#-build-a-pipeline-because-it-feels-more-abstract]
```ts
return andThen(andThen(andThen(result, a), b), c);
```
If the branch reads better, use the branch.
Cancellation [#cancellation]
✓ Pass the signal all the way into the I/O API. [#-pass-the-signal-all-the-way-into-the-io-api]
Cancellation only matters when the transport or subprocess actually sees the signal.
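The rule is easiest to see with an abort-aware sleep. The helper below is illustrative, not a Yieldless export, but it shows the same principle as forwarding `signal` into `fetch()` or a subprocess: the primitive doing the waiting must receive the signal.

```typescript
// Illustrative abort-aware sleep; not a Yieldless export. If the signal
// never reaches the primitive doing the waiting, cancellation is a no-op.
const sleep = (ms: number, signal: AbortSignal): Promise<void> =>
  new Promise((resolve, reject) => {
    if (signal.aborted) {
      reject(signal.reason);
      return;
    }
    const onAbort = () => {
      clearTimeout(timer);
      reject(signal.reason);
    };
    const timer = setTimeout(() => {
      signal.removeEventListener("abort", onAbort);
      resolve();
    }, ms);
    signal.addEventListener("abort", onAbort, { once: true });
  });

const controller = new AbortController();
const pending = sleep(10_000, controller.signal).then(
  () => "finished",
  () => "cancelled",
);
controller.abort(new Error("deadline"));
const outcome = await pending; // "cancelled": the sleep saw the signal
```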
✗ Assume a task group can kill arbitrary CPU work. [#-assume-a-task-group-can-kill-arbitrary-cpu-work]
`AbortSignal` is cooperative. If your code ignores it, the work keeps running.
Retries [#retries]
✓ Retry transport failures and transient infrastructure noise. [#-retry-transport-failures-and-transient-infrastructure-noise]
Network timeouts, brief lock contentions, and flaky subprocess startup are reasonable candidates.
✗ Retry validation failures or deterministic domain errors. [#-retry-validation-failures-or-deterministic-domain-errors]
If the request shape is bad on attempt one, it will still be bad on attempt three.
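That split usually lives in a `shouldRetry` predicate. The error classes below are hypothetical placeholders for your own transport and domain error types.

```typescript
// Hypothetical error classes standing in for real transport/domain errors.
class NetworkTimeoutError extends Error {}
class ValidationError extends Error {}

// The kind of predicate you would pass as `shouldRetry`: retry transport
// noise, never deterministic domain failures.
const shouldRetry = (error: unknown): boolean =>
  error instanceof NetworkTimeoutError;

const retryTimeout = shouldRetry(new NetworkTimeoutError("socket timeout")); // true
const retryValidation = shouldRetry(new ValidationError("bad shape")); // false
```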
Timeouts and timers [#timeouts-and-timers]
✓ Use one deadline signal for the whole boundary. [#-use-one-deadline-signal-for-the-whole-boundary]
```ts
const [error, response] = await fetchJsonSafe(url, {
  timeoutMs: 5_000,
  signal,
});
```
✗ Scatter unrelated setTimeout calls through business logic. [#-scatter-unrelated-settimeout-calls-through-business-logic]
Timers without abort handling tend to outlive the work they were created for.
Parallel work [#parallel-work]
✓ Put a ceiling on list-shaped work. [#-put-a-ceiling-on-list-shaped-work]
```ts
await mapAsyncLimit(readRows(file), processRow, {
  concurrency: 4,
  signal,
});
```
✗ Start one promise for every item in a large collection. [#-start-one-promise-for-every-item-in-a-large-collection]
```ts
await Promise.all(rows.map(processRow));
```
Small lists are fine. Unknown or user-sized lists need pressure control.
Dependency injection [#dependency-injection]
✓ Bind stable dependencies at the edge. [#-bind-stable-dependencies-at-the-edge]
Loggers, repositories, mailers, and feature flags are good candidates for `inject()`.
✗ Build a hidden service locator around it. [#-build-a-hidden-service-locator-around-it]
If the dependencies are not obvious from the function signature, the code gets harder to review.
Configuration [#configuration]
✓ Parse environment once. [#-parse-environment-once]
```ts
const [error, config] = parseEnvSafe(configSchema);
```
✗ Read `process.env` from deep application code. [#-read-processenv-from-deep-application-code]
It makes tests harder and hides what the function actually needs.
Context [#context]
✓ Use async context for request-scoped metadata. [#-use-async-context-for-request-scoped-metadata]
Trace spans, request IDs, user sessions, or a transaction handle fit well.
✗ Use async context as your application container. [#-use-async-context-as-your-application-container]
Configuration and stable dependencies should still be explicit.
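On Node, request-scoped metadata of this kind rides on `AsyncLocalStorage`, which is what an async-context helper wraps. A minimal sketch of the idea (not the Yieldless API):

```typescript
// Request-scoped metadata via Node's AsyncLocalStorage; a sketch of what
// an async-context helper wraps, not the actual yieldless API.
import { AsyncLocalStorage } from "node:async_hooks";

const traceId = new AsyncLocalStorage<string>();

// Deep code can read the current trace without threading a parameter.
const annotate = (message: string) =>
  `[${traceId.getStore() ?? "no-trace"}] ${message}`;

const line = traceId.run("trace-7", () => annotate("loading user"));
// line === "[trace-7] loading user"
```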
Boundaries [#boundaries]
A good rule of thumb: tuples are for work you expect to fail sometimes, thrown exceptions are for code that truly cannot continue. If a boundary requires exceptions, convert at that one spot with `unwrap()` and move on.
IPC and UI state [#ipc-and-ui-state]
✓ Keep tuples at transport and service boundaries. [#-keep-tuples-at-transport-and-service-boundaries]
```ts
const result = await window.gitBridge.withSignal.status(signal, repoPath);
```
✓ Convert tuples into view state before rendering. [#-convert-tuples-into-view-state-before-rendering]
```ts
return match(result, {
  ok: (data) => ({ kind: "ready" as const, data }),
  err: (error) => ({ kind: "error" as const, message: error.message }),
});
```
✗ Pass raw tuple results through every component prop. [#-pass-raw-tuple-results-through-every-component-prop]
React, Vue, and Svelte components are usually clearer with domain-specific screen state.
# Beginner Tutorial
This tutorial starts with ordinary TypeScript and adds Yieldless one piece at a time.
The goal is small on purpose: read IDs from a text file, turn each line into a number, fetch one JSON record for each ID, and return the accumulated records.
The one shape to learn first [#the-one-shape-to-learn-first]
Yieldless helpers return a tuple:
```ts
type SafeResult<T, E = Error> =
  | readonly [error: E, value: null]
  | readonly [error: null, value: T];
```
That means most Yieldless code follows this rhythm:
```ts
const [error, value] = await somethingSafe();
if (error) {
  return [error, null] as const;
}
return [null, value] as const;
```
The branch is not boilerplate to hide. It is where you decide what should happen when that boundary fails.
Install [#install]
```bash
pnpm add yieldless
```
Import from the small module you need:
```ts
import { fetchJsonSafe } from "yieldless/fetch";
import { forEach } from "yieldless/iterable";
import { readFileSafe } from "yieldless/node";
```
Step 1: read a file without throwing [#step-1-read-a-file-without-throwing]
`readFileSafe()` wraps Node's file read in tuple form.
```ts
import { readFileSafe } from "yieldless/node";

const [readError, contents] = await readFileSafe("ids.txt");
if (readError) {
  return [readError, null] as const;
}
```
If the file does not exist, the error is returned. Your function does not throw unless you choose to throw.
Step 2: parse lines into IDs [#step-2-parse-lines-into-ids]
Keep parsing as plain TypeScript. Return the same tuple shape when the file contains bad input.
```ts
import type { SafeResult } from "yieldless/error";

class InvalidIdFileError extends Error {
  readonly line: number;

  constructor(line: number, value: string) {
    super(`Expected a positive integer on line ${String(line)}, got "${value}".`);
    this.name = "InvalidIdFileError";
    this.line = line;
  }
}

function parseIds(contents: string): SafeResult<number[], InvalidIdFileError> {
  const ids: number[] = [];
  for (const [index, line] of contents.split(/\r?\n/).entries()) {
    const value = line.trim();
    if (value === "" || value.startsWith("#")) {
      continue;
    }
    const id = Number(value);
    if (!Number.isInteger(id) || id <= 0) {
      return [new InvalidIdFileError(index + 1, value), null] as const;
    }
    ids.push(id);
  }
  return [null, ids] as const;
}
```
This accepts files like:
```txt
101
102
# comments are fine
103
```
Step 3: fetch one record [#step-3-fetch-one-record]
`fetchJsonSafe()` wraps `fetch()`, non-2xx statuses, timeouts, and JSON parsing in one tuple.
```ts
import { fetchJsonSafe } from "yieldless/fetch";

interface User {
  readonly id: number;
  readonly name: string;
}

async function fetchUserById(
  apiBaseUrl: string,
  id: number,
  signal: AbortSignal,
) {
  const url = new URL(`/users/${String(id)}`, apiBaseUrl);
  return await fetchJsonSafe(url, {
    headers: { accept: "application/json" },
    timeoutMs: 5_000,
    signal,
  });
}
```
The `signal` matters. It lets a caller cancel the whole workflow, and it lets Yieldless stop in-progress work when a helper decides the operation should end.
Step 4: use forEach to accumulate results [#step-4-use-foreach-to-accumulate-results]
`forEach()` runs one item at a time. That is useful when order matters, when the remote API should not be hit in parallel, or when you want the first failure to stop the rest of the file.
```ts
import { forEach } from "yieldless/iterable";

const users: User[] = [];
const [loadError] = await forEach(
  ids,
  async (id, _index, signal) => {
    const [fetchError, user] = await fetchUserById(apiBaseUrl, id, signal);
    if (fetchError) {
      return [fetchError, null] as const;
    }
    users.push(user);
    return [null, undefined] as const;
  },
  { signal: parentSignal },
);
if (loadError) {
  return [loadError, null] as const;
}
return [null, users] as const;
```
The worker returns a tuple too. If it returns `[error, null]`, `forEach()` stops and gives you that error.
The complete function [#the-complete-function]
```ts
import type { SafeResult } from "yieldless/error";
import { fetchJsonSafe } from "yieldless/fetch";
import { forEach } from "yieldless/iterable";
import { readFileSafe } from "yieldless/node";

interface User {
  readonly id: number;
  readonly name: string;
}

interface LoadUsersOptions {
  readonly apiBaseUrl: string;
  readonly signal?: AbortSignal;
}

class InvalidIdFileError extends Error {
  readonly line: number;

  constructor(line: number, value: string) {
    super(`Expected a positive integer on line ${String(line)}, got "${value}".`);
    this.name = "InvalidIdFileError";
    this.line = line;
  }
}

function parseIds(contents: string): SafeResult<number[], InvalidIdFileError> {
  const ids: number[] = [];
  for (const [index, line] of contents.split(/\r?\n/).entries()) {
    const value = line.trim();
    if (value === "" || value.startsWith("#")) {
      continue;
    }
    const id = Number(value);
    if (!Number.isInteger(id) || id <= 0) {
      return [new InvalidIdFileError(index + 1, value), null] as const;
    }
    ids.push(id);
  }
  return [null, ids] as const;
}

async function fetchUserById(
  apiBaseUrl: string,
  id: number,
  signal: AbortSignal,
) {
  const url = new URL(`/users/${String(id)}`, apiBaseUrl);
  return await fetchJsonSafe(url, {
    headers: { accept: "application/json" },
    timeoutMs: 5_000,
    signal,
  });
}

export async function loadUsersFromIdFile(
  filePath: string,
  options: LoadUsersOptions,
): Promise<SafeResult<User[]>> {
  const [readError, contents] = await readFileSafe(filePath);
  if (readError) {
    return [readError, null] as const;
  }

  const [parseError, ids] = parseIds(contents);
  if (parseError) {
    return [parseError, null] as const;
  }

  const users: User[] = [];
  const [loadError] = await forEach(
    ids,
    async (id, _index, signal) => {
      const [fetchError, user] = await fetchUserById(
        options.apiBaseUrl,
        id,
        signal,
      );
      if (fetchError) {
        return [fetchError, null] as const;
      }
      users.push(user);
      return [null, undefined] as const;
    },
    { signal: options.signal },
  );
  if (loadError) {
    return [loadError, null] as const;
  }
  return [null, users] as const;
}
```
Call it from your app boundary:
```ts
const controller = new AbortController();
const [error, users] = await loadUsersFromIdFile("ids.txt", {
  apiBaseUrl: "https://api.example.com",
  signal: controller.signal,
});

if (error) {
  console.error(error);
} else {
  console.log(users);
}
```
When to change the shape [#when-to-change-the-shape]
Use `forEach()` when you want sequential side effects and a manually accumulated result.
Use `mapAsyncLimit()` when every item returns a value and you want Yieldless to collect the array for you:
```ts
import { mapAsyncLimit } from "yieldless/iterable";

const [error, users] = await mapAsyncLimit(
  ids,
  (id, _index, signal) => fetchUserById(apiBaseUrl, id, signal),
  { concurrency: 4, signal },
);
```
Use `safeRetry()` inside the worker when an individual fetch can fail for transient reasons.
Use `createRateLimiter()` before the fetch when the API has a quota.
Use `createCache()` when repeated IDs should reuse successful previous loads.
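As a sketch of that last point, here is the caching shape in miniature: a local stand-in that remembers successful loads by ID, not the `createCache()` API itself.

```typescript
// Local stand-in illustrating "reuse successful previous loads"; not the
// createCache() API. Only successes are stored, so failures can be retried.
interface User {
  readonly id: number;
  readonly name: string;
}

type SafeResult<T, E = Error> = readonly [E, null] | readonly [null, T];

const cache = new Map<number, User>();
let fetchCount = 0;

const fetchUserCached = async (id: number): Promise<SafeResult<User>> => {
  const hit = cache.get(id);
  if (hit) {
    return [null, hit] as const;
  }
  fetchCount += 1; // stands in for the real network call
  const user: User = { id, name: `user-${id}` };
  cache.set(id, user);
  return [null, user] as const;
};

const [, first] = await fetchUserCached(101);
const [, second] = await fetchUserCached(101); // cache hit, no second fetch
// fetchCount === 1 and first === second
```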
What to read next [#what-to-read-next]
* [Simple Recipes](https://binbandit.github.io/yieldless/docs/recipes/simple-recipes/) for small copy-pasteable patterns.
* [Read IDs and Fetch Records](https://binbandit.github.io/yieldless/docs/recipes/read-ids-fetch-records/) for the same workflow as a recipe.
* [Module Selection](https://binbandit.github.io/yieldless/docs/guides/module-selection/) when you know the problem but not the helper.
* [yieldless/iterable](https://binbandit.github.io/yieldless/docs/reference/iterable/) and [yieldless/fetch](https://binbandit.github.io/yieldless/docs/reference/fetch/) for API details.
# yieldless/error
`yieldless/error` is the smallest useful piece of the library. It gives you a single tuple shape and a few helpers for converting thrown code into that shape.
Exports [#exports]
* `type SafeResult<T, E = Error> = [E, null] | [null, T]`
* `ok(value): SafeResult<T>`
* `err(error): SafeResult<T, E>`
* `safeTry(promise): Promise<SafeResult<T>>`
* `safeTrySync(fn): SafeResult<T>`
* `match(result, { ok, err }): Return`
* `unwrap(result): T`
Typical use [#typical-use]
```ts
import { err, match, ok, safeTry, safeTrySync, unwrap } from "yieldless/error";

const [readError, body] = await safeTry(readFile("package.json", "utf8"));
if (readError) {
  return err(readError);
}

const parsed = safeTrySync(() => JSON.parse(body));
const value = unwrap(parsed);

const state = match(ok(value), {
  ok: (data) => ({ kind: "ready", data }),
  err: (error) => ({ kind: "error", message: String(error) }),
});
```
When to use it [#when-to-use-it]
* Wrapping filesystem, HTTP, database, or subprocess calls
* Converting parse and validation failures into explicit branches
* Leaving framework boundaries as tuples until the last possible moment, then folding them with `match()`
Rules of thumb [#rules-of-thumb]
* Prefer `safeTry()` at the boundary, not around every individual expression.
* Use `ok()` and `err()` when returning tuples so the intent reads clearly in application code.
* Keep the tuple local. Once you have the success value, use the value or fold it with `match()`.
* Use `unwrap()` only where a thrown exception is genuinely required.
Good [#good]
Wrap the operation that can fail, check the error slot once, then continue with normal values.
```ts
const [readError, text] = await safeTry(readConfigFile());
if (readError) {
  return err(readError);
}

const [parseError, config] = safeTrySync(() => JSON.parse(text));
if (parseError) {
  return err(parseError);
}

return ok(config);
```
Use `match()` when crossing into UI state or framework output.
```ts
const view = match(result, {
  ok: (user) => ({ status: "ready" as const, user }),
  err: (error) => ({ status: "failed" as const, message: String(error) }),
});
```
Avoid [#avoid]
Do not ignore the error slot just because a type checker lets you.
```ts
const result = await safeTry(readConfigFile());
const config = JSON.parse(result[1] as string);
```
Do not use `unwrap()` as a replacement for tuple handling in service code.
```ts
const user = unwrap(await safeTry(loadUser(id)));
return ok(user);
```
Caveat [#caveat]
`SafeResult` uses null sentinels. If your success value is literally `null`, the runtime tuple is still correct, but the type system cannot fully discriminate that case.
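A small local illustration of the caveat, using the tuple shape directly rather than the library: the runtime branch still works even when the success value is `null`.

```typescript
// When T includes null, a success looks like [null, null]. The type-level
// discrimination is weaker, but checking the error slot still branches
// correctly at runtime.
type SafeResult<T, E = Error> = readonly [E, null] | readonly [null, T];

const result: SafeResult<string | null> = [null, null] as const; // success whose value is null
const [error, value] = result;
const branch = error ? "failed" : "ok"; // "ok"
```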
# yieldless/result
`yieldless/result` keeps tuple handling pleasant once a flow has more than one branch. It does not add a runtime, a pipe DSL, or a new result object. Every helper accepts or returns the same `[error, value]` tuple from `yieldless/error`.
Exports [#exports]
* `isOk(result): result is [null, T]`
* `isErr(result): result is [E, null]`
* `fromNullable(value, createError): SafeResult<NonNullable<T>, E>`
* `mapOk(result, mapper): SafeResult<U, E>`
* `mapOkAsync(result, mapper): Promise<SafeResult<U, E>>`
* `mapErr(result, mapper): SafeResult<T, F>`
* `mapErrAsync(result, mapper): Promise<SafeResult<T, F>>`
* `andThen(result, mapper): SafeResult<U, E | F>`
* `andThenAsync(result, mapper): Promise<SafeResult<U, E | F>>`
* `tapOk(result, effect): SafeResult<T, E>`
* `tapOkAsync(result, effect): Promise<SafeResult<T, E>>`
* `tapErr(result, effect): SafeResult<T, E>`
* `tapErrAsync(result, effect): Promise<SafeResult<T, E>>`
* `toPromise(result): Promise<T>`
Example [#example]
```ts
import { err, ok, safeTry } from "yieldless/error";
import { andThenAsync, fromNullable, mapOk, tapErr } from "yieldless/result";

const [error, view] = await andThenAsync(
  await safeTry(loadUser(userId)),
  async (user) => {
    const existingUser = fromNullable(
      user,
      () => new Error("User not found"),
    );
    return mapOk(existingUser, (value) => ({
      id: value.id,
      name: value.name,
    }));
  },
);

return tapErr(
  error ? err(error) : ok(view),
  (cause) => logger.warn(cause),
);
```
Behavior notes [#behavior-notes]
* Mapping helpers only touch the branch named in the function.
* Async helpers are explicit so synchronous tuple code can stay synchronous.
* `fromNullable()` treats both `null` and `undefined` as missing.
* `toPromise()` is intended for framework boundaries that require promise rejection.
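The folding behavior behind that last note is small enough to sketch locally; this is an illustration of the idea, not the actual export.

```typescript
// A local sketch of the toPromise() idea: fold a tuple into a promise only
// at a boundary that demands rejection.
type SafeResult<T, E = Error> = readonly [E, null] | readonly [null, T];

const toPromiseSketch = <T, E>(result: SafeResult<T, E>): Promise<T> => {
  const [error, value] = result;
  return error ? Promise.reject(error) : Promise.resolve(value as T);
};

const resolved = await toPromiseSketch([null, 42] as const); // 42
const message = await toPromiseSketch([new Error("boom"), null] as const)
  .catch((cause: Error) => cause.message); // "boom"
```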
When to use it [#when-to-use-it]
Use these helpers when a tuple flow is becoming noisy but still belongs in ordinary TypeScript. If the code is clearer with one `if (error) return [error, null]`, keep the branch.
Good [#good]
Use `fromNullable()` to turn a common domain miss into an explicit tuple.
```ts
const userResult = fromNullable(
  await repository.findUser(id),
  () => new NotFoundError("User not found"),
);
```
Use `andThen()` when the second step depends on the first success value.
```ts
const result = andThen(userResult, (user) =>
  user.active ? ok(user) : err(new ForbiddenError("User is inactive")),
);
```
Use `tapErr()` for logging or telemetry that should not change the result.
```ts
return tapErr(result, (error) => {
  logger.warn({ error }, "load user failed");
});
```
Avoid [#avoid]
Do not turn a simple branch into a clever pipeline.
```ts
const result = andThen(
  andThen(await loadUser(id), validateUser),
  renderUser,
);
```
If the explicit branch is clearer, keep it.
```ts
const [error, user] = await loadUser(id);
if (error) return [error, null] as const;
return renderUser(user);
```
Do not use `toPromise()` in the middle of tuple-native application code. It is for framework boundaries that require rejection.
# yieldless/task
`yieldless/task` gives you a small structured-concurrency primitive for normal async functions.
Exports [#exports]
* `type TaskFactory<T> = (signal: AbortSignal) => PromiseLike<T> | T`
* `interface TaskGroup { readonly signal: AbortSignal; spawn<T>(task): Promise<T> }`
* `runTaskGroup<T>(operation, options?): Promise<T>`
What runTaskGroup() guarantees [#what-runtaskgroup-guarantees]
* All spawned tasks share one AbortSignal
* The group can inherit cancellation from an upstream AbortSignal
* The first task failure aborts the group immediately
* The group waits for every child task to settle before returning
* The original failure is rethrown after cleanup
Example [#example]
```ts
import { runTaskGroup } from "yieldless/task";

const controller = new AbortController();
const repository = await runTaskGroup(async (group) => {
  const refs = group.spawn((signal) => loadRefs(signal));
  const branches = group.spawn((signal) => loadBranches(signal));
  return {
    refs: await refs,
    branches: await branches,
  };
}, {
  signal: controller.signal,
});
```
What it does not guarantee [#what-it-does-not-guarantee]
Task cancellation is cooperative. Your spawned function must check or forward the signal for cancellation to take effect.
```ts
group.spawn((signal) => runCommand("git", ["fetch"], { signal }));
```
Good fits [#good-fits]
* Parallel repository reads that should rise and fall together
* Request-scoped fan-out in HTTP handlers
* Background jobs that launch multiple abortable I/O operations
Good [#good]
Use the group for work with one lifecycle.
```ts
const summary = await runTaskGroup(async (group) => {
  const status = group.spawn((signal) => readStatus(repoPath, signal));
  const branches = group.spawn((signal) => readBranches(repoPath, signal));
  return {
    status: await status,
    branches: await branches,
  };
}, {
  signal: request.signal,
});
```
Let failures throw inside `runTaskGroup()` when the body is promise-native. If you need tuple-native fan-out, use `yieldless/all`.
Avoid [#avoid]
Do not spawn children after the group body has returned.
```ts
let groupRef: TaskGroup | undefined;

await runTaskGroup((group) => {
  groupRef = group;
  return "done";
});

groupRef?.spawn(loadLater);
```
Do not swallow child failures unless you intentionally convert them to successful values. The group uses thrown failures to abort siblings.
# yieldless/resource
`yieldless/resource` turns a pair of acquire/release functions into an object that participates in native `await using` cleanup.
Exports [#exports]
* `type ResourceAcquire<T> = () => PromiseLike<T> | T`
* `type ResourceRelease<T> = (resource: T) => PromiseLike<void> | void`
* `interface AsyncResource<T> extends AsyncDisposable { readonly value: T }`
* `acquireResource(acquire, release): Promise<AsyncResource<T>>`
Example [#example]
```ts
import { acquireResource } from "yieldless/resource";

{
  await using db = await acquireResource(connect, disconnect);
  await db.value.query("select 1");
}
```
Where it fits [#where-it-fits]
* Database or queue connections scoped to a request or job
* Temporary filesystem handles
* External clients that need explicit teardown
Important detail [#important-detail]
The resource wrapper exposes the underlying value as `.value`. That keeps the disposable handle explicit and avoids pretending that current `await using` syntax can destructure directly into a tuple.
Good [#good]
Use `acquireResource()` when cleanup must happen even if the body throws.
```ts
{
  await using temp = await acquireResource(
    () => createTempDirectory(),
    (directory) => rmSafe(directory, { recursive: true, force: true }),
  );
  await writeBuildArtifacts(temp.value);
}
```
Keep the resource lifetime as small as the work that needs it.
Avoid [#avoid]
Do not acquire a resource and hope every return path remembers cleanup.
```ts
const client = await connect();
const result = await runJob(client);
await disconnect(client);
return result;
```
Use `await using` when the TypeScript target and runtime support it. Otherwise keep `try` / `finally` explicit.
# yieldless/di
`yieldless/di` is intentionally small. It binds stable dependencies at the application edge and returns the executable version of the function.
Exports [#exports]
* `type Injectable<Deps, Args extends unknown[], Return> = (deps: Deps, ...args: Args) => Return`
* `inject(core, deps): (...args) => Return`
Example [#example]
```ts
import { inject } from "yieldless/di";

const createHandler = (
  deps: {
    logger: { info(message: string): void };
    audit: { write(message: string): Promise<void> };
  },
  repoId: string,
) => {
  deps.logger.info(`Loading ${repoId}`);
  return deps.audit.write(`repo:${repoId}`);
};

const handler = inject(createHandler, {
  logger: console,
  audit,
});
```
Why this stays readable [#why-this-stays-readable]
* All required dependencies are still visible in the function signature
* There is no hidden container lookup
* TypeScript enforces that the injected object satisfies `Deps` before the returned function can be called
Use it for [#use-it-for]
* Route handlers configured with repositories, loggers, and feature flags
* CLI commands configured with a filesystem or process adapter
* Background jobs configured with queues or telemetry sinks
Good [#good]
Put stable dependencies in the first argument and domain inputs after that.
```ts
type Deps = {
  readonly logger: { info(message: string): void };
  readonly users: { find(id: string): Promise<User> };
};

const loadUser = async (deps: Deps, id: string) => {
  deps.logger.info(`loading ${id}`);
  return await deps.users.find(id);
};

export const handler = inject(loadUser, {
  logger,
  users,
});
```
Avoid [#avoid]
Do not hide dependencies behind a global lookup.
```ts
const loadUser = async (id: string) => {
  const users = container.get("users");
  return await users.find(id);
};
```
The point of `inject()` is that the dependency list remains visible and type-checked.
# yieldless/env
`yieldless/env` keeps configuration loading explicit without inventing a config framework. It helps with the parts that are easy to get wrong: missing variables, empty variables, selecting only the keys a service cares about, and handing the result to the same schema adapters used elsewhere.
Exports [#exports]
* `readEnv(source, key, options): SafeResult<string, EnvVarError>`
* `readOptionalEnv(source, key, options): SafeResult<string | undefined, EnvVarError>`
* `pickEnv(source, keys): PickedEnv`
* `parseEnvSafe(schema, source = process.env): SafeResult<T>`
* `parseEnvAsyncSafe(schema, source = process.env): Promise<SafeResult<T>>`
* `class EnvVarError extends Error`
Required values [#required-values]
```ts
import { readEnv } from "yieldless/env";

const [error, databaseUrl] = readEnv(process.env, "DATABASE_URL");
if (error) {
  return [error, null] as const;
}
```
Empty strings are treated as errors by default. Pass `allowEmpty: true` when an empty string is meaningful.
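The empty-string rule, sketched as a local stand-in: the real `readEnv()` returns `EnvVarError` values, while plain `Error` is used here for brevity.

```typescript
// Local sketch of the readEnv() empty-string rule; not the real helper,
// which reports EnvVarError with ERR_ENV_MISSING / ERR_ENV_EMPTY codes.
type SafeResult<T, E = Error> = readonly [E, null] | readonly [null, T];

const readEnvSketch = (
  source: Record<string, string | undefined>,
  key: string,
  options?: { allowEmpty?: boolean },
): SafeResult<string> => {
  const value = source[key];
  if (value === undefined) return [new Error(`${key} is missing`), null];
  if (value === "" && !options?.allowEmpty) {
    return [new Error(`${key} is empty`), null];
  }
  return [null, value];
};

const [emptyError] = readEnvSketch({ FLAG: "" }, "FLAG"); // error by default
const [, allowed] = readEnvSketch({ FLAG: "" }, "FLAG", { allowEmpty: true }); // ""
```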
Schema-backed config [#schema-backed-config]
```ts
import { parseEnvSafe, pickEnv } from "yieldless/env";
const [error, env] = parseEnvSafe(
envSchema,
pickEnv(process.env, ["DATABASE_URL", "PORT", "NODE_ENV"] as const),
);
```
The schema can be anything supported by `yieldless/schema`: `safeParse()`, `parse()`, `safeParseAsync()`, or `parseAsync()`.
Behavior notes [#behavior-notes]
* `parseEnvSafe()` defaults to `process.env` when it exists.
* `pickEnv()` is useful for tests and for avoiding accidental coupling to unrelated variables.
* `EnvVarError.code` is either `ERR_ENV_MISSING` or `ERR_ENV_EMPTY`.
* `readOptionalEnv()` returns `[null, undefined]` for missing values but still rejects empty strings unless `allowEmpty` is enabled.
Good [#good]
Validate configuration once at startup or at the outer service boundary.
```ts
const [error, config] = parseEnvSafe(
  configSchema,
  pickEnv(process.env, ["DATABASE_URL", "PORT", "NODE_ENV"] as const),
);
if (error) {
  console.error(error);
  process.exitCode = 1;
}
```
Use `readEnv()` for one-off scripts where a full schema would add noise.
```ts
const [tokenError, token] = readEnv(process.env, "GITHUB_TOKEN");
if (tokenError) return [tokenError, null] as const;
```
Avoid [#avoid]
Do not read `process.env` deep inside business logic.
```ts
export async function sendEmail(user: User) {
  const apiKey = process.env.MAILER_API_KEY;
  return mailer.send(apiKey, user.email);
}
```
Parse once, then pass config explicitly or inject it at the edge.
# yieldless/retry
`yieldless/retry` wraps tuple-returning operations with exponential backoff and abort-aware sleep.
Exports [#exports]
* `safeRetry(operation, options): Promise<SafeResult<T, E>>`
Options [#options]
| Option | Description |
| ----------------------------- | ----------------------------------------------------- |
| `maxAttempts` | Total number of attempts including the first |
| `baseDelayMs` | Initial delay before the first retry |
| `maxDelayMs` | Upper bound on the computed delay |
| `factor` | Multiplier applied to the delay after each attempt |
| `jitter` | Jitter strategy applied to the delay |
| `signal` | AbortSignal that stops the retry loop immediately |
| `shouldRetry(error, attempt)` | Predicate that decides whether to retry a given error |
| `onRetry(state)` | Callback invoked before each retry attempt |
Example [#example]
```ts
import { safeTry } from "yieldless/error";
import { safeRetry } from "yieldless/retry";

const [error, response] = await safeRetry(
  async (_attempt, signal) => safeTry(fetchWithSignal(signal)),
  {
    maxAttempts: 5,
    baseDelayMs: 100,
    shouldRetry: (error) => error.name !== "ValidationError",
  },
);
```
Operational rules [#operational-rules]
* Attempt counts start at 1
* `maxAttempts` includes the first attempt
* The retry loop stops immediately when the parent signal is aborted
* Jitter defaults to `"full"` to avoid herd behavior
Good retry targets [#good-retry-targets]
* HTTP calls to other services
* Transient database connection failures
* Temporary subprocess startup issues
Bad retry targets [#bad-retry-targets]
* Validation failures
* Permission errors
* Business-rule violations that are deterministic
Good [#good]
Retry only the noisy boundary and keep validation outside the loop.
```ts
const [inputError, input] = parseSafe(schema, rawInput);
if (inputError) return [inputError, null] as const;

const [error, response] = await safeRetry(
  (_attempt, signal) => fetchJsonSafe(urlFor(input), { signal }),
  {
    maxAttempts: 4,
    baseDelayMs: 100,
    signal,
    shouldRetry: (error) =>
      !(error instanceof HttpStatusError) || error.status >= 500,
  },
);
```
Use `onRetry()` for logging, metrics, or tests.
```ts
await safeRetry(operation, {
  onRetry: ({ attempt, delayMs, error }) => {
    logger.warn({ attempt, delayMs, error }, "retrying request");
  },
});
```
Avoid [#avoid]
Do not retry an entire request handler when only one transport call is flaky.
```ts
await safeRetry(
  () => handleWholeRequest(context),
  { maxAttempts: 3 },
);
```
Do not retry deterministic failures.
```ts
await safeRetry(
  () => parseUserInput(input),
  { maxAttempts: 5 },
);
```
# yieldless/schedule
`yieldless/schedule` gives timing policy a name without introducing a runtime. A schedule is just a function from the latest attempt state to a decision: continue or stop, and how long to wait before trying again.
Use it when retry or polling policy is shared across call sites, when tests need to inspect timing decisions, or when a loop needs both a delay policy and a stopping rule.
Exports [#exports]
* `type SchedulePolicy = (state: ScheduleState) => ScheduleDecision`
* `type ScheduleState = { attempt, elapsedMs, error, signal }`
* `type ScheduleDecision = { continue: boolean, delayMs: number }`
* `fixedDelay(delayMs): SchedulePolicy`
* `exponentialBackoff({ baseDelayMs, factor, maxDelayMs, jitter }): SchedulePolicy`
* `maxAttempts(attempts): SchedulePolicy`
* `maxElapsedTime(maxElapsedMs): SchedulePolicy`
* `composeSchedules(...policies): SchedulePolicy`
* `getScheduleDecision(policy, state): ScheduleDecision`
* `waitForSchedule(policy, state): Promise<SafeResult<void>>`
* `runScheduled(operation, policy, options): Promise<SafeResult<T, E>>`
* `continueNow(): SchedulePolicy`
* `stopSchedule(): SchedulePolicy`
Example [#example]
```ts
import {
  composeSchedules,
  exponentialBackoff,
  maxAttempts,
  runScheduled,
} from "yieldless/schedule";

const schedule = composeSchedules(
  exponentialBackoff({
    baseDelayMs: 100,
    jitter: "full",
    maxDelayMs: 2_000,
  }),
  maxAttempts(5),
);

const [error, user] = await runScheduled(
  (attempt, signal) => loadUserFromApi(userId, { attempt, signal }),
  schedule,
  { signal },
);
```
Behavior notes [#behavior-notes]
* `composeSchedules()` stops when any child policy stops.
* When multiple policies continue, the largest `delayMs` wins.
* `maxAttempts(3)` allows attempts `1`, `2`, and `3`, then stops before attempt `4`.
* `waitForSchedule()` returns the latest `error` when the policy stops.
* `runScheduled()` normalizes thrown operation failures into tuple errors.
* All waiting is abort-aware through `yieldless/timer`.
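The combination rule in the first two notes can be sketched directly; this is a local model of the decision logic, not the `composeSchedules()` implementation.

```typescript
// Local model of the composition rule described above: any stop wins,
// otherwise the largest delay wins. Not the real composeSchedules().
type Decision = { continue: boolean; delayMs: number };

const combine = (...decisions: Decision[]): Decision => {
  if (decisions.some((d) => !d.continue)) {
    return { continue: false, delayMs: 0 };
  }
  return {
    continue: true,
    delayMs: Math.max(...decisions.map((d) => d.delayMs)),
  };
};

const both = combine(
  { continue: true, delayMs: 100 }, // backoff says wait 100ms
  { continue: true, delayMs: 250 }, // rate policy says wait 250ms
);
// both.delayMs === 250

const stopped = combine(
  { continue: true, delayMs: 100 },
  { continue: false, delayMs: 0 }, // maxAttempts exhausted
);
// stopped.continue === false
```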
Good [#good]
Define retry policy once and reuse it.
```ts
const apiSchedule = composeSchedules(
  exponentialBackoff({ baseDelayMs: 150, maxDelayMs: 5_000 }),
  maxAttempts(4),
);
```
Inspect a policy without sleeping.
```ts
const decision = getScheduleDecision(apiSchedule, {
  attempt: 2,
  elapsedMs: 500,
  signal,
});
```
Avoid [#avoid]
Do not hide business decisions inside a global schedule.
```ts
const schedule = composeSchedules(exponentialBackoff(), maxAttempts(10));
```
Prefer naming the boundary-specific reason.
```ts
const githubApiSchedule = composeSchedules(
  exponentialBackoff({ baseDelayMs: 250 }),
  maxAttempts(3),
);
```
# yieldless/signal
`yieldless/signal` adds a small missing piece around native cancellation: derived deadline signals that clean up after themselves.
Exports [#exports]
* `createTimeoutSignal(timeoutMs, options?): TimeoutSignal`
* `withTimeout(operation, options): Promise<Value>`
* `class TimeoutError extends Error`
Options [#options]
| Option | Description |
| ----------- | --------------------------------------------------- |
| `timeoutMs` | Maximum time before the derived signal aborts |
| `signal` | Optional parent signal to inherit cancellation from |
| `reason` | Optional custom timeout reason |
Typical use [#typical-use]
```ts
import { safeTry } from "yieldless/error";
import { withTimeout } from "yieldless/signal";
const [error, response] = await safeTry(
withTimeout(
(signal) => fetch("https://example.com/api/repos", { signal }),
{
timeoutMs: 5_000,
},
),
);
```
Long-lived scope [#long-lived-scope]
Use `createTimeoutSignal()` when you need to pass one deadline signal across several calls.
```ts
import { createTimeoutSignal } from "yieldless/signal";
const deadline = createTimeoutSignal(10_000, {
signal: request.signal,
});
try {
const [error, result] = await runCommandSafe(
"git",
["fetch", "--all"],
{ signal: deadline.signal },
);
} finally {
deadline[Symbol.dispose]();
}
```
Operational rules [#operational-rules]
* The parent signal wins if it aborts first.
* Timeout failures use `TimeoutError` by default.
* Disposing the derived signal clears its timer and parent listener.
* `timeoutMs` must be zero or greater.
Good fits [#good-fits]
* HTTP requests that should not hang forever
* Git commands or subprocesses with a firm deadline
* Any abort-aware operation where you want one shared time budget
Good [#good]
Use `withTimeout()` for one operation.
```ts
const [error, response] = await safeTry(
withTimeout(
(signal) => fetch(url, { signal }),
{ timeoutMs: 5_000, signal: request.signal },
),
);
```
Use `createTimeoutSignal()` for a scope of related operations.
```ts
const deadline = createTimeoutSignal(10_000, { signal });
try {
await readProfile(deadline.signal);
await readPermissions(deadline.signal);
} finally {
deadline[Symbol.dispose]();
}
```
Avoid [#avoid]
Do not create ad hoc timers that outlive the request.
```ts
const timer = setTimeout(() => controller.abort(), 5_000);
await doWork(controller.signal);
```
Use `withTimeout()` or dispose the derived signal explicitly so timers and parent listeners are cleaned up.
# yieldless/timer
`yieldless/timer` covers the small timing jobs that show up around network and UI work: wait for a little while, make that wait cancelable, or poll a tuple-returning operation until it is ready.
It is deliberately not a scheduler. Timers stay native, cancellation stays on `AbortSignal`, and polling returns the same tuple shape as the rest of Yieldless.
Exports [#exports]
* `sleep(delayMs, options): Promise<void>`
* `sleepSafe(delayMs, options): Promise<SafeResult<void>>`
* `poll(operation, options): Promise<SafeResult<Value>>`
* `type PollOperation<Value> = (attempt, signal) => SafeResult<Value> | PromiseLike<SafeResult<Value>>`
* `type PollOptions = { intervalMs, maxAttempts?, timeoutMs?, signal?, shouldContinue? }`
Sleep [#sleep]
```ts
import { sleep } from "yieldless/timer";
await sleep(250, { signal });
```
If the signal aborts, `sleep()` rejects with the abort reason. Use `sleepSafe()` when the wait itself belongs in tuple form.
```ts
import { sleepSafe } from "yieldless/timer";
const [error] = await sleepSafe(250, { signal });
```
Poll [#poll]
```ts
import { poll } from "yieldless/timer";
const [error, job] = await poll(
async (_attempt, signal) => getJob(jobId, signal),
{
intervalMs: 1_000,
timeoutMs: 30_000,
signal,
},
);
```
`poll()` stops when the operation returns `[null, value]`, when `maxAttempts` is reached, when `shouldContinue()` returns `false`, or when cancellation/timeout aborts the derived signal.
Behavior notes [#behavior-notes]
* `sleep(0)` resolves immediately and is useful as an async yield point.
* Negative or non-finite delays throw `RangeError`.
* Poll attempts start at `1`.
* `timeoutMs` uses `yieldless/signal` internally, so timeout errors are `TimeoutError`.
* The operation receives the same signal that controls the interval wait.
Good [#good]
Use `poll()` for eventual consistency and job status checks.
```ts
const [error, job] = await poll(
async (_attempt, signal) => {
const [error, current] = await fetchJsonSafe(jobUrl, { signal });
if (error) return [error, null];
return current.state === "ready"
? [null, current] as const
: [new Error("Job is not ready"), null] as const;
},
{
intervalMs: 1_000,
timeoutMs: 30_000,
signal,
},
);
```
Use `sleepSafe()` when a delay is part of tuple-native control flow.
Avoid [#avoid]
Do not write uncancelable timer promises.
```ts
await new Promise((resolve) => {
setTimeout(resolve, 1_000);
});
```
If a user can navigate away or a request can abort, pass the signal to `sleep()` or `poll()`.
# yieldless/event
`yieldless/event` is for the small boundary where callback-style events meet async application code. It waits for one event, cleans up listeners, and can return the result in tuple form.
It supports browser-style `EventTarget` objects and Node-style `EventEmitter` objects without adding an event abstraction of its own.
Exports [#exports]
* `onceEvent(source, eventName, options): Promise<Value>`
* `onceEventSafe(source, eventName, options): Promise<SafeResult<Value>>`
* `type OnceEventOptions = { signal?, rejectOn? }`
* `type EventSourceLike = EventTargetLike | EventEmitterLike`
EventTarget [#eventtarget]
```ts
import { onceEventSafe } from "yieldless/event";
const [error, event] = await onceEventSafe(button, "click", { signal });
```
EventEmitter [#eventemitter]
```ts
import { onceEvent } from "yieldless/event";
const socket = await onceEvent(server, "connection", { signal });
```
For EventEmitter-like sources, `error` events reject the wait by default. Disable or customize that behavior with `rejectOn`.
```ts
const value = await onceEvent(emitter, "error", {
rejectOn: false,
});
```
Behavior notes [#behavior-notes]
* Listener cleanup happens on success, rejection, and abort.
* EventEmitter payloads resolve to the single argument when one argument is emitted, or the full argument array when multiple values are emitted.
* EventTarget event names must be strings.
* `onceEventSafe()` wraps success and failure in the normal Yieldless tuple shape.
Good [#good]
Use `onceEventSafe()` when the event wait is one step in a tuple flow.
```ts
const [error, event] = await onceEventSafe(socket, "open", {
signal,
});
if (error) return [error, null] as const;
```
Use `rejectOn` to make domain-specific failure events reject the wait.
```ts
await onceEvent(stream, "ready", {
rejectOn: "close",
signal,
});
```
Avoid [#avoid]
Do not create one-off promises without listener cleanup.
```ts
await new Promise((resolve) => {
emitter.once("ready", resolve);
});
```
That pattern usually forgets abort handling and error-event cleanup.
# yieldless/fetch
`yieldless/fetch` keeps HTTP calls on the platform `fetch()` API while handling the production chores that otherwise show up in every service: tuple errors, deadlines, non-2xx responses, JSON parsing, and abort forwarding.
Exports [#exports]
* `fetchSafe(input, options): Promise<SafeResult<Response>>`
* `fetchJsonSafe(input, options): Promise<SafeResult<Value>>`
* `readJsonSafe(response): Promise<SafeResult<Value>>`
* `class HttpStatusError extends Error`
* `class JsonParseError extends Error`
* `class FetchUnavailableError extends Error`
* `type FetchSafeOptions = RequestInit & { timeoutMs?, isOkStatus?, fetch? }`
Example [#example]
```ts
import { fetchJsonSafe } from "yieldless/fetch";
const [error, user] = await fetchJsonSafe<{ id: string; name: string }>(
`https://api.example.com/users/${userId}`,
{
headers: {
accept: "application/json",
},
timeoutMs: 5_000,
signal,
},
);
if (error) {
return [error, null] as const;
}
return [null, user] as const;
```
Fetching one record by ID [#fetching-one-record-by-id]
Build the URL in normal TypeScript, then pass the active `AbortSignal` into `fetchJsonSafe()`.
```ts
import { fetchJsonSafe } from "yieldless/fetch";
async function fetchUserById(
apiBaseUrl: string,
userId: number,
signal: AbortSignal,
) {
const url = new URL(`/users/${String(userId)}`, apiBaseUrl);
return await fetchJsonSafe(url, {
headers: { accept: "application/json" },
timeoutMs: 5_000,
signal,
});
}
```
This shape works well inside `forEach()`, `mapAsyncLimit()`, `safeRetry()`, and cache loaders because all of them pass a signal to the work they run.
Status handling [#status-handling]
`fetchSafe()` treats `response.ok` as success by default. Override `isOkStatus` when an API uses a status like `304` or `409` as part of a normal workflow.
```ts
const [error, response] = await fetchSafe(url, {
isOkStatus: (response) => response.ok || response.status === 304,
});
```
Behavior notes [#behavior-notes]
* Request options are ordinary `RequestInit` options.
* `timeoutMs` derives an `AbortSignal` and cleans up the timer when the request settles.
* Non-ok responses return `HttpStatusError` with the original `response` attached.
* `fetchJsonSafe()` only parses JSON after `fetchSafe()` succeeds.
* `readJsonSafe()` wraps parser failures in `JsonParseError` instead of throwing.
Good [#good]
Keep HTTP concerns at the HTTP boundary.
```ts
const [error, user] = await fetchJsonSafe(url, {
headers: { accept: "application/json" },
timeoutMs: 5_000,
signal,
});
if (error instanceof HttpStatusError && error.status === 404) {
return [new NotFoundError("User not found"), null] as const;
}
```
Inject a custom `fetch` in tests or runtimes that do not expose global fetch.
```ts
await fetchJsonSafe(url, {
fetch: testFetch,
});
```
Avoid [#avoid]
Do not parse JSON before checking status.
```ts
const response = await fetch(url);
const body = await response.json();
if (!response.ok) {
return [new Error(body.message), null] as const;
}
```
Use `fetchSafe()` when you need the raw response and `fetchJsonSafe()` when success means "valid JSON body".
# yieldless/context
`yieldless/context` wraps Node's `AsyncLocalStorage` without turning it into a global application container.
Exports [#exports]
* `createContext<T>(): YieldlessContext<T>`
* `createTraceContext(): YieldlessContext`
* `withSpan(tracer, context, name, fn): Promise<Value>`
YieldlessContext<T> [#yieldlesscontextt]
| Method | Description |
| ------------------ | ---------------------------------------------------------- |
| `run(value, fn)` | Execute a function with the given context value |
| `get()` | Return the current value or undefined |
| `expect(message?)` | Return the current value or throw with an optional message |
| `bind(fn)` | Capture the current context and bind it to a function |
Example [#example]
```ts
import { createContext, withSpan } from "yieldless/context";
const requestContext = createContext<{ requestId: string }>();
await requestContext.run({ requestId: crypto.randomUUID() }, async () => {
console.log(requestContext.expect().requestId);
});
```
Tracing shape [#tracing-shape]
`withSpan()` expects a tracer with `startActiveSpan()` and a span with `end()`. That closely matches the OpenTelemetry-style API without taking a hard runtime dependency on it.
Use it for [#use-it-for]
* Request IDs
* Trace spans
* User session metadata
* Transaction handles
Do not use it for [#do-not-use-it-for]
* Static application dependencies
* Feature flags that are known at startup
* Anything that would be clearer as a regular function argument
Good [#good]
Use context for metadata that naturally follows asynchronous work.
```ts
const requestContext = createContext<{ requestId: string }>();
await requestContext.run({ requestId }, async () => {
logger.info(requestContext.expect().requestId);
await handleRequest();
});
```
Use `bind()` when a callback will run later but should keep the current store.
```ts
const onComplete = requestContext.bind(() => {
logger.info(requestContext.expect().requestId);
});
```
Avoid [#avoid]
Do not use async context as a dependency container.
```ts
const appContext = createContext<{ database: Database; logger: Logger }>();
export async function loadUser(id: string) {
return await appContext.expect().database.findUser(id);
}
```
Stable dependencies are clearer as function parameters or `yieldless/di` inputs.
# yieldless/all
`yieldless/all` gives you helpers for tuple-returning parallel work. `all(tasks)` waits for every task or aborts siblings on the first error. `race(tasks)` resolves with the first settled result and aborts the rest. `mapLimit(items, mapper, options)` processes a collection with bounded concurrency.
Exports [#exports]
* `type SafeTask<Value> = (signal: AbortSignal) => PromiseLike<SafeResult<Value>> | SafeResult<Value>`
* `type MapLimitMapper<Item, Value> = (item: Item, index: number, signal: AbortSignal) => PromiseLike<SafeResult<Value>> | SafeResult<Value>`
* `all(tasks, options): Promise<SafeResult<Values, ParallelError>>`
* `mapLimit(items, mapper, { concurrency, signal }): Promise<SafeResult<Value[]>>`
* `race(tasks, options): Promise<SafeResult<Value>>`
Example [#example]
```ts
import { all } from "yieldless/all";
import { safeTry } from "yieldless/error";
const result = await all([
(signal) => safeTry(readPrimary(signal)),
(signal) => safeTry(readReplica(signal)),
]);
```
For large batches, use `mapLimit()` to avoid starting every item at once:
```ts
import { mapLimit } from "yieldless/all";
import { safeTry } from "yieldless/error";
const [error, avatars] = await mapLimit(
users,
(user, _index, signal) =>
safeTry(fetchAvatar(user.avatarUrl, { signal })),
{ concurrency: 4 },
);
```
Behavior notes [#behavior-notes]
* `all([])` succeeds with an empty array.
* `mapLimit([], mapper, options)` succeeds with an empty array.
* `mapLimit()` preserves input order and throws a `RangeError` when `concurrency` is less than `1` or not an integer.
* `race([])` throws a `RangeError`.
* If any task or mapped item returns `[error, null]`, siblings are aborted before the final tuple is returned.
* `race()` aborts losing tasks immediately, then waits for them to settle before it returns.
* Thrown task and mapper failures are normalized into tuple failures internally.
When to prefer runTaskGroup() instead [#when-to-prefer-runtaskgroup-instead]
Use `all()`, `race()`, and `mapLimit()` when the work is already tuple-native. Use `runTaskGroup()` when you want imperative fan-out and regular promise values.
Good [#good]
Use `all()` for a small, fixed set of independent tuple tasks.
```ts
const [error, [profile, permissions]] = await all([
(signal) => loadProfile(userId, signal),
(signal) => loadPermissions(userId, signal),
]);
```
Use `mapLimit()` when a list could be large or expensive.
```ts
const [error, summaries] = await mapLimit(
repositories,
(repo, _index, signal) => readSummary(repo.path, signal),
{ concurrency: 4, signal },
);
```
Use `race()` when the first success or first failure should settle the operation.
```ts
const result = await race([
(signal) => readPrimary(signal),
(signal) => readReplica(signal),
]);
```
Avoid [#avoid]
Do not pass work that ignores the signal and expect cancellation to be immediate.
```ts
await all([
async () => safeTry(expensiveCpuLoop()),
async () => safeTry(readRemoteData()),
]);
```
Do not use `all()` for thousands of items. Use `mapLimit()` or `yieldless/iterable` so you can control pressure.
# yieldless/iterable
`yieldless/iterable` handles streams of ordinary JavaScript values without introducing a stream runtime. It works with both `Iterable` and `AsyncIterable`, captures thrown iterator failures as tuples, and forwards `AbortSignal` through workers.
Exports [#exports]
* `collect(iterable, options): Promise<SafeResult<Value[]>>`
* `forEach(iterable, worker, options): Promise<SafeResult<void>>`
* `mapAsyncLimit(iterable, mapper, options): Promise<SafeResult<Value[]>>`
* `type AnyIterable<Value> = Iterable<Value> | AsyncIterable<Value>`
* `type IterableWorker<Item> = (item, index, signal) => SafeResult<void> | PromiseLike<SafeResult<void>>`
* `type IterableMapper<Item, Value> = (item, index, signal) => SafeResult<Value> | PromiseLike<SafeResult<Value>>`
Collect [#collect]
```ts
import { collect } from "yieldless/iterable";
const [error, lines] = await collect(readLines(filePath), { signal });
```
Sequential work [#sequential-work]
```ts
import { forEach } from "yieldless/iterable";
const [error] = await forEach(
readRows(filePath),
async (row, _index, signal) => writeRow(row, signal),
{ signal },
);
```
Accumulating results with forEach [#accumulating-results-with-foreach]
`forEach()` is useful when each item performs side effects and you want to decide exactly what gets accumulated.
```ts
import { fetchJsonSafe } from "yieldless/fetch";
import { forEach } from "yieldless/iterable";
const users: User[] = [];
const [error] = await forEach(
ids,
async (id, _index, signal) => {
const [fetchError, user] = await fetchJsonSafe(
`https://api.example.com/users/${String(id)}`,
{
timeoutMs: 5_000,
signal,
},
);
if (fetchError) {
return [fetchError, null] as const;
}
users.push(user);
return [null, undefined] as const;
},
{ signal },
);
if (error) {
return [error, null] as const;
}
return [null, users] as const;
```
Use [Read IDs and Fetch Records](https://binbandit.github.io/yieldless/docs/recipes/read-ids-fetch-records/) for the full version that reads and parses IDs from a file first.
Bounded mapping [#bounded-mapping]
```ts
import { mapAsyncLimit } from "yieldless/iterable";
const [error, thumbnails] = await mapAsyncLimit(
readImages(source),
(image, _index, signal) => renderThumbnail(image, signal),
{
concurrency: 4,
signal,
},
);
```
`mapAsyncLimit()` preserves input order in the returned array while keeping only the configured number of mappers active.
Behavior notes [#behavior-notes]
* Iterator throws are captured as tuple errors.
* Worker and mapper throws are captured as tuple errors.
* The first tuple error aborts in-flight bounded mapping work.
* `forEach()` is sequential by design; use `mapAsyncLimit()` for parallelism.
* `concurrency` must be a positive integer.
Good [#good]
Use `forEach()` when order and backpressure matter more than throughput.
```ts
const [error] = await forEach(
readLogLines(filePath),
async (line, index, signal) => writeLine(index, line, signal),
{ signal },
);
```
Use `mapAsyncLimit()` for many independent operations.
```ts
const [error, results] = await mapAsyncLimit(
readRepositories(workspace),
(repo, _index, signal) => inspectRepository(repo, signal),
{ concurrency: 4, signal },
);
```
Avoid [#avoid]
Do not materialize a huge async iterable just so you can use array helpers.
```ts
const rows = [];
for await (const row of readRows(file)) {
rows.push(row);
}
await Promise.all(rows.map(processRow));
```
Use `forEach()` or `mapAsyncLimit()` so the iterable can stream and cancellation can stop the work early.
# yieldless/queue
`yieldless/queue` connects producers and consumers without a framework runtime. It is useful when work arrives faster than it can be processed, or when a worker loop should consume values as they appear.
The queue supports bounded capacity, producer backpressure, abortable waits, explicit close, draining, and `for await` consumption.
Exports [#exports]
* `createQueue<Value>({ capacity }): AsyncQueue<Value>`
* `class QueueClosedError extends Error`
* `type AsyncQueue<Value> = { offer, take, close, drain, clear, size, capacity, closed, pendingOffers, pendingTakes } & AsyncIterable<Value>`
* `type QueueOperationOptions = { signal?: AbortSignal }`
Example [#example]
```ts
import { createQueue } from "yieldless/queue";
const queue = createQueue({ capacity: 100 });
async function worker(signal: AbortSignal) {
for await (const jobId of queue) {
if (signal.aborted) return;
await processJob(jobId, signal);
}
}
await queue.offer("job-1", { signal });
```
Behavior notes [#behavior-notes]
* `capacity` defaults to `Infinity`.
* A full bounded queue makes `offer()` wait until a consumer takes a value.
* A waiting `take()` receives the next offered value immediately.
* `offer()` and `take()` return tuple results instead of throwing on abort or close.
* `close(reason)` resolves pending and future operations with `[reason, null]`.
* Async iteration ends when the queue closes.
* `drain()` removes buffered values and then gives pending offers a chance to enter the queue.
Good [#good]
Use a bounded queue to make backpressure visible.
```ts
const thumbnails = createQueue({ capacity: 50 });
await thumbnails.offer(job, { signal });
```
Consume with ordinary async iteration.
```ts
for await (const job of thumbnails) {
await renderThumbnail(job, signal);
}
```
Avoid [#avoid]
Do not use an unbounded queue for input you do not control.
```ts
const queue = createQueue();
```
Prefer a capacity that matches the worker pool or memory budget.
```ts
const queue = createQueue({ capacity: 200 });
```
# yieldless/pubsub
`yieldless/pubsub` broadcasts values to many local subscribers. It is intentionally in-process and small: publish values, subscribe with `for await`, close subscriptions when they are no longer needed, and optionally replay the latest values to late subscribers.
Use it for CLI progress, Electron main-process status streams, local job updates, or tests that need observable fanout.
Exports [#exports]
* `createPubSub<Value>({ replay, subscriberCapacity }): PubSub<Value>`
* `type PubSub<Value> = { publish, subscribe, close, subscriberCount, closed }`
* `type PubSubSubscription<Value> = AsyncIterable<Value> & { next, close }`
Example [#example]
```ts
import { createPubSub } from "yieldless/pubsub";
const progress = createPubSub<{ id: string; percent: number }>({
replay: 1,
});
const subscription = progress.subscribe();
void (async () => {
for await (const update of subscription) {
renderProgress(update);
}
})();
progress.publish({ id: "index", percent: 25 });
```
Behavior notes [#behavior-notes]
* `publish(value)` returns the number of active subscribers.
* `replay` defaults to `0`.
* `subscribe()` returns an async iterable subscription.
* `subscription.close()` removes that subscriber and closes its internal queue.
* `close(reason)` closes all subscribers and prevents future publishes.
* Each subscriber has its own queue. `subscriberCapacity` can bound per-subscriber buffering.
Good [#good]
Use pubsub for local fanout, not remote messaging.
```ts
const buildEvents = createPubSub({ replay: 1 });
```
Close subscriptions owned by short-lived UI or request scopes.
```ts
const subscription = buildEvents.subscribe();
try {
for await (const event of subscription) {
send(event);
}
} finally {
subscription.close();
}
```
Avoid [#avoid]
Do not treat it as a durable queue.
```ts
const orders = createPubSub();
```
Use `yieldless/queue` when every item must be processed by a worker. Use `yieldless/pubsub` when every current listener should hear the same update.
# yieldless/limiter
`yieldless/limiter` keeps pressure explicit. It gives you a tiny semaphore for local concurrency and a simple rate limiter for APIs that need paced calls. Both are built on promises and `AbortSignal`.
Use it when many independent flows share a limited resource: subprocess slots, database connections, API quota, filesystem pressure, or CPU-heavy local work.
Exports [#exports]
* `createSemaphore(capacity): Semaphore`
* `withPermit(semaphore, operation, options): Promise<SafeResult<Value>>`
* `createRateLimiter({ limit, intervalMs }): RateLimiter`
* `type Semaphore = { acquire, tryAcquire, withPermit, available, capacity, pending }`
* `type SemaphorePermit = { release, [Symbol.dispose] }`
* `type RateLimiter = { take, takeSafe, clear, pending }`
Example [#example]
```ts
import { createSemaphore, withPermit } from "yieldless/limiter";
import { safeTry } from "yieldless/error";
const gitSlots = createSemaphore(3);
const [error, result] = await withPermit(
gitSlots,
(signal) => safeTry(runGitCommand(repoPath, ["status"], { signal })),
{ signal },
);
```
Pace API calls:
```ts
import { createRateLimiter } from "yieldless/limiter";
const githubLimit = createRateLimiter({
limit: 2,
intervalMs: 1_000,
});
await githubLimit.take({ signal });
const response = await fetch(url, { signal });
```
Behavior notes [#behavior-notes]
* `createSemaphore(0)` throws a `RangeError`.
* `acquire()` waits until a permit is available or the signal aborts.
* `tryAcquire()` returns `null` instead of waiting.
* `withPermit()` always releases the permit in a `finally` block.
* The exported `withPermit()` helper expects tuple-returning work and normalizes acquisition or thrown failures into tuple errors.
* `createRateLimiter()` schedules calls into fixed-size windows.
* `rateLimiter.clear(reason)` rejects pending waiters.
Good [#good]
Protect the boundary, not every call site.
```ts
const subprocesses = createSemaphore(4);
export function runLimitedGit(args: readonly string[], signal: AbortSignal) {
return withPermit(
subprocesses,
(permitSignal) => runCommandSafe("git", args, { signal: permitSignal }),
{ signal },
);
}
```
Use `takeSafe()` when rate-limit waiting is part of a tuple flow.
```ts
const [limitError] = await githubLimit.takeSafe({ signal });
if (limitError) return [limitError, null] as const;
```
Avoid [#avoid]
Do not acquire a permit and forget to release it.
```ts
const permit = await semaphore.acquire();
await doWork();
```
Prefer scoped release.
```ts
await semaphore.withPermit((signal) => doWork(signal), { signal });
```
# yieldless/cache
`yieldless/cache` stores successful tuple load results and shares in-flight loads for the same key. It is useful for API clients, metadata readers, schema discovery, docs search, and any expensive async read where several callers can ask for the same thing.
It is intentionally small: no background worker, no global store, no runtime. The cache loads on demand, expires entries by TTL, evicts least-recently-used entries by size, and keeps errors out of the cache.
Exports [#exports]
* `createCache({ load, getKey, maxSize, ttlMs }): Cache`
* `type Cache = { get, refresh, delete, clear, has, stats, size }`
* `type CacheStats = { hits, misses, inFlight, size }`
* `type CacheGetOptions = { signal?: AbortSignal }`
Example [#example]
```ts
import { createCache } from "yieldless/cache";
import { fetchJsonSafe } from "yieldless/fetch";
const userCache = createCache({
ttlMs: 30_000,
maxSize: 500,
load: (userId: string, signal) =>
fetchJsonSafe(`/api/users/${userId}`, { signal }),
});
const [error, user] = await userCache.get("u_123", { signal });
```
Behavior notes [#behavior-notes]
* Only successful `[null, value]` results are cached.
* Concurrent `get()` calls for the same key share the same load.
* `refresh()` skips the stored value and starts or joins the in-flight load.
* `delete(key)` removes cached and in-flight work for the key, aborting an in-flight load.
* `clear()` removes all cached entries and aborts all in-flight loads.
* `ttlMs` defaults to no expiry. `maxSize` defaults to a very large limit.
* Reading a cached value refreshes its LRU position.
Good [#good]
Cache read-through data at the boundary.
```ts
const repoCache = createCache({
maxSize: 200,
ttlMs: 60_000,
load: (repoId: string, signal) => loadRepository(repoId, signal),
});
```
Use `refresh()` when a user explicitly asks for fresh data.
```ts
const result = await repoCache.refresh(repoId, { signal });
```
Avoid [#avoid]
Do not cache commands with side effects.
```ts
const cache = createCache({
load: (_id, signal) => createPullRequest(signal),
});
```
Prefer cache keys that describe pure reads.
```ts
const cache = createCache({
load: (repoId, signal) => loadRepositorySummary(repoId, signal),
});
```
# yieldless/batcher
`yieldless/batcher` coalesces nearby `load(key)` calls into one `loadMany(keys, signal)` call. It is useful when many screens, resolvers, or workflow steps can ask for related data in the same tick.
The module stays intentionally boring: it does not cache, it does not add a scheduler, and it does not require a runtime. It batches pending keys, maps results back by index, and returns tuple results to each caller.
Exports [#exports]
* `createBatcher({ loadMany, waitMs, maxBatchSize }): Batcher`
* `class MissingBatchResultError extends Error`
* `type Batcher = { load, clear, pending }`
* `type BatcherLoadOptions = { signal?: AbortSignal }`
Example [#example]
```ts
import { createBatcher } from "yieldless/batcher";
const users = createBatcher({
waitMs: 1,
maxBatchSize: 100,
loadMany: (ids: readonly string[], signal) =>
loadUsersByIds(ids, { signal }),
});
const [error, user] = await users.load("u_123", { signal });
```
Behavior notes [#behavior-notes]
* `waitMs` defaults to `0`, which batches calls queued in the same turn.
* `maxBatchSize` flushes a batch early when enough keys are waiting.
* `loadMany()` must return values in the same order as the input keys.
* If `loadMany()` returns `[error, null]`, every waiting caller receives that error.
* If a result is missing for an index, that caller receives `MissingBatchResultError`.
* A pending `load()` can be aborted before the batch flushes.
* `clear(reason)` resolves all pending loads with the reason.
Good [#good]
Use a batcher where the backend already supports bulk reads.
```ts
const labels = createBatcher({
loadMany: (ids, signal) => loadLabels(ids, signal),
});
```
Keep caching separate when you need both.
```ts
const labelCache = createCache({
load: (id, signal) => labels.load(id, { signal }),
});
```
Avoid [#avoid]
Do not use batching to hide write side effects.
```ts
const writes = createBatcher({
loadMany: (inputs, signal) => updateManyRecords(inputs, signal),
});
```
Prefer explicit write orchestration for mutations, and use `yieldless/batcher` for read-like keyed loading.
# yieldless/breaker
`yieldless/breaker` protects a dependency after repeated failures. It tracks a small state machine: `closed`, `open`, and `half-open`. When enough failures trip the breaker, new calls fail fast with `CircuitOpenError` until the cooldown passes.
Use it around external services, subprocess-heavy integrations, or any expensive boundary where repeated failures should stop causing more pressure.
Exports [#exports]
* `createCircuitBreaker(operation, options): CircuitBreaker`
* `class CircuitOpenError extends Error`
* `type CircuitBreakerState = "closed" | "half-open" | "open"`
* `type CircuitBreakerOptions = { failureThreshold, cooldownMs, successThreshold, shouldTrip, onStateChange }`
* `type CircuitBreaker = callable & { state, failureCount, reset }`
Example [#example]
```ts
import { createCircuitBreaker } from "yieldless/breaker";
import { fetchJsonSafe } from "yieldless/fetch";
const loadGitHubUser = createCircuitBreaker(
(_signal, login: string) =>
fetchJsonSafe(`https://api.github.com/users/${login}`),
{
failureThreshold: 3,
cooldownMs: 30_000,
},
);
const [error, user] = await loadGitHubUser("octocat");
```
Behavior notes [#behavior-notes]
* The breaker opens after `failureThreshold` tripping failures.
* While open, calls return `[new CircuitOpenError(), null]`.
* After `cooldownMs`, the next call moves the breaker to `half-open`.
* A half-open success closes the breaker once `successThreshold` is reached.
* A half-open failure opens it again.
* `shouldTrip(error)` lets you ignore expected failures such as validation errors.
* `reset()` closes the breaker and clears counts.
Good [#good]
Trip only on dependency failures, not user input.
```ts
const breaker = createCircuitBreaker(loadUser, {
failureThreshold: 3,
cooldownMs: 10_000,
shouldTrip: (error) => !(error instanceof ValidationError),
});
```
Expose breaker state for diagnostics.
```ts
logger.info("github breaker", { state: breaker.state });
```
Avoid [#avoid]
Do not use a circuit breaker for ordinary validation or authorization branches.
```ts
const guarded = createCircuitBreaker(validateUserInput, {
failureThreshold: 1,
cooldownMs: 60_000,
});
```
Use it for unstable boundaries where failing fast reduces damage.
# yieldless/singleflight
`yieldless/singleflight` prevents duplicate in-flight work from stampeding the same expensive operation. Calls with the same key share one promise while it is running, then the entry is removed when it settles.
This is useful for Electron preload APIs, CLIs, API clients, docs search, and cache warmers where several callers can ask for the same thing at once.
Exports [#exports]
* `singleFlight(operation, options): SingleFlight`
* `type SingleFlightOperation<Value> = (signal, ...args) => SafeResult<Value> | PromiseLike<SafeResult<Value>>`
* `type SingleFlightOptions = { getKey?, signal? }`
* `type SingleFlight = callable & { clear(...args), clearAll(), has(...args), size }`
Example [#example]
```ts
import { singleFlight } from "yieldless/singleflight";
const loadRepository = singleFlight(
async (signal, repoId: string) => readRepository(repoId, signal),
);
const [first, second] = await Promise.all([
loadRepository("yieldless"),
loadRepository("yieldless"),
]);
```
Only one `readRepository()` call runs for the duplicate key. Both callers receive the same tuple result.
Custom keys [#custom-keys]
The default key is `JSON.stringify(args)`. Pass `getKey` when your arguments include values that need a stable domain key.
```ts
const loadUser = singleFlight(
async (signal, request: { userId: string; refresh: boolean }) =>
readUser(request.userId, signal),
{
getKey: (request) => request.userId,
},
);
```
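One reason a domain key matters: the default `JSON.stringify(args)` key depends on property order, so two equivalent request objects can produce different keys and miss each other. A quick self-contained illustration (names are hypothetical):

```typescript
// Two logically identical requests whose properties arrive in different order.
const keyA = JSON.stringify([{ userId: "u1", refresh: false }]);
const keyB = JSON.stringify([{ refresh: false, userId: "u1" }]);

// The stringified keys differ, so the default key would not deduplicate them.
const defaultKeysMatch = keyA === keyB;

// A getKey that extracts the domain identity collapses both to one entry.
const getKey = (request: { userId: string; refresh: boolean }) => request.userId;
const domainKeysMatch =
  getKey({ userId: "u1", refresh: false }) ===
  getKey({ refresh: false, userId: "u1" });
```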
Behavior notes [#behavior-notes]
* Results are not cached after settlement.
* Thrown operation failures are normalized into tuple errors.
* `clear(...args)` aborts and removes one in-flight entry.
* `clearAll()` aborts and removes every in-flight entry.
* A parent `signal` aborts the shared operation for every active caller.
Good [#good]
Wrap expensive idempotent reads that often get requested twice at the same time.
```ts
const loadPullRequest = singleFlight(
(signal, owner: string, repo: string, number: number) =>
fetchPullRequest(owner, repo, number, signal),
{
getKey: (owner, repo, number) => `${owner}/${repo}#${String(number)}`,
},
);
```
Clear entries when a view or process is no longer interested.
```ts
loadPullRequest.clear(owner, repo, number);
```
Avoid [#avoid]
Do not use `singleFlight()` as a long-lived cache.
```ts
const loadUser = singleFlight(readUser);
// Later, expecting this to be cached:
await loadUser("same-user");
```
Entries are removed when the in-flight promise settles. Put durable caching in your own data layer.
# yieldless/schema
`yieldless/schema` keeps validation inside the same error model as the rest of the library.
Exports [#exports]
* `parseSafe(schema, input)`
* `parseAsyncSafe(schema, input)`
Supported schema shapes [#supported-schema-shapes]
* Objects with `safeParse()`
* Objects with `parse()`
* Objects with `safeParseAsync()`
* Objects with `parseAsync()`
Example with a safeParse() schema [#example-with-a-safeparse-schema]
```ts
import { parseSafe } from "yieldless/schema";
const [error, user] = parseSafe(userSchema, input);
if (error) {
return [error, null] as const;
}
```
Example with an async parser [#example-with-an-async-parser]
```ts
const [error, user] = await parseAsyncSafe(userSchema, input);
```
Why it exists [#why-it-exists]
Most validation libraries are already good at describing schemas. Yieldless does not try to replace them; it only adapts their results into the same tuple shape used across the rest of the library.
Good fits [#good-fits]
* HTTP request validation
* Environment parsing
* Normalizing database payloads
* Decoding IPC input
Good [#good]
Validate unknown input before it enters domain code.
```ts
const [error, input] = parseSafe(createUserSchema, await request.json());
if (error) {
return [new ValidationError("Invalid request body", { details: error }), null];
}
return createUser(input);
```
Use `parseAsyncSafe()` only when the schema itself performs asynchronous validation.
```ts
const [error, user] = await parseAsyncSafe(userSchema, input);
```
Avoid [#avoid]
Do not validate the same data repeatedly in inner functions.
```ts
function renderUser(input: unknown) {
const [error, user] = parseSafe(userSchema, input);
if (error) throw error;
return user.name;
}
```
Validate once at the boundary, then pass typed values through ordinary code.
# yieldless/router
`yieldless/router` turns tuple-native handlers into Hono-style JSON handlers.
Exports [#exports]
* `honoHandler(handler, options)`
Error classes [#error-classes]
* `HttpError`
* `BadRequestError`
* `UnauthorizedError`
* `ForbiddenError`
* `NotFoundError`
* `ConflictError`
* `ValidationError`
Handler shape [#handler-shape]
* `type TupleRouteHandler = (context: Context) => PromiseLike<SafeResult> | SafeResult`
Example [#example]
```ts
import { honoHandler, NotFoundError } from "yieldless/router";
export const getRepository = honoHandler(async (c) => {
const repo = await findRepository(c.req.param("id"));
if (repo === null) {
return [new NotFoundError("Repository not found"), null];
}
return [null, repo];
});
```
What the adapter does [#what-the-adapter-does]
| Input | Output |
| -------------------- | ----------------------------------------------- |
| Success tuple | `context.json(data, status)` |
| `HttpError` instance | Configured status code |
| Unknown error | Generic 500 |
| `options.mapError()` | Normalize custom domain errors into `HttpError` |
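The mapping in the table is small enough to sketch directly (assumed shapes and option names, not the library's implementation):

```typescript
type RouteResult<T> = readonly [Error, null] | readonly [null, T];

// Stand-in for the HttpError family: an Error that carries a status code.
class SketchHttpError extends Error {
  constructor(message: string, readonly status: number) {
    super(message);
  }
}

// The only thing this adapter needs from the framework context.
interface JsonContext {
  json(body: unknown, status: number): { body: unknown; status: number };
}

// Apply the table row by row: success -> json(data, status), HttpError ->
// its configured status, anything else -> 500, with mapError() run first.
function sketchHandle<T>(
  context: JsonContext,
  result: RouteResult<T>,
  options: {
    successStatus?: number;
    mapError?: (error: Error) => Error;
  } = {},
) {
  if (result[0] === null) {
    return context.json(result[1], options.successStatus ?? 200);
  }
  const mapped = options.mapError?.(result[0]) ?? result[0];
  const status = mapped instanceof SketchHttpError ? mapped.status : 500;
  return context.json({ error: mapped.message }, status);
}
```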
When this module is enough [#when-this-module-is-enough]
If your framework's context exposes a Hono-style `json()` method, this adapter is usually enough.
Good [#good]
Return domain or HTTP errors as tuples and let the adapter serialize them.
```ts
export const getUser = honoHandler(async (c) => {
const [error, user] = await loadUser(c.req.param("id"));
if (error) return [error, null];
if (user === null) {
return [new NotFoundError("User not found"), null];
}
return [null, user];
});
```
Use `mapError()` when your domain errors are not already `HttpError` instances.
```ts
honoHandler(handler, {
mapError: (error) =>
error instanceof DomainValidationError
? new BadRequestError(error.message)
: error,
});
```
Avoid [#avoid]
Do not throw routine route failures just to let a global error handler find them later.
```ts
if (user === null) {
throw new Error("User not found");
}
```
For expected misses, return a tuple error with an explicit HTTP shape.
# yieldless/ipc
Electron IPC is a good place for Yieldless because the boundary is inherently failure-heavy and the transport only accepts structured-clone-safe data.
Exports [#exports]
* `createIpcMain(ipcMain)`
* `createIpcRenderer(ipcRenderer)`
* `createIpcBridge(client, channels)`
* `createAbortableIpcMain(ipcMain)`
* `createAbortableIpcRenderer(ipcRenderer)`
* `createAbortableIpcBridge(client, channels)`
* `serializeIpcError(error)`
* `deserializeIpcResult(payload)`
Core types [#core-types]
* `IpcProcedure`
* `IpcContract`
* `IpcClient`
* `IpcBridge`
* `AbortableIpcClient`
* `AbortableIpcBridge`
* `SerializedIpcError`
Contract example [#contract-example]
```ts
import type { IpcProcedure } from "yieldless/ipc";
type Contract = {
getStatus: IpcProcedure<[directory: string], { output: string }, Error>;
};
```
Main process [#main-process]
```ts
const server = createIpcMain(ipcMain);
server.handle("getStatus", async (_event, directory) => {
return await runGitStatus(directory);
});
```
Renderer process [#renderer-process]
```ts
const client = createIpcRenderer(ipcRenderer);
const [error, result] = await client.invoke("getStatus", "/tmp/repo");
```
Abortable renderer requests [#abortable-renderer-requests]
When a renderer can abandon stale work, use the abortable helpers instead of pushing request IDs and cancel channels through your own app code.
```ts
const client = createAbortableIpcRenderer(ipcRenderer);
const bridge = createAbortableIpcBridge(client, ["getStatus"] as const);
const controller = new AbortController();
const result = await bridge.withSignal.getStatus(
controller.signal,
"/tmp/repo",
);
```
On the main-process side, the handler receives a shared `AbortSignal`:
```ts
const server = createAbortableIpcMain(ipcMain);
server.handle("getStatus", async (_event, signal, directory) => {
return await runCommandSafe("git", ["status", "--short"], {
cwd: directory,
signal,
});
});
```
Why the serialization layer matters [#why-the-serialization-layer-matters]
Electron can flatten thrown errors when they cross IPC. Yieldless avoids that by serializing tuple errors into plain objects before they cross the boundary, then decoding them back into tuple form on the receiving side.
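A hedged sketch of that round trip: flatten the error into plain, clone-safe data before it crosses the boundary, then rebuild an `Error` on the other side. Field names here are illustrative, not the library's wire format:

```typescript
// Clone-safe wire shape: plain data only, no prototypes or functions.
interface SketchSerializedError {
  readonly name: string;
  readonly message: string;
  readonly extras: Record<string, unknown>;
}

function sketchSerializeError(error: Error): SketchSerializedError {
  // Own enumerable properties (like a `code` added via Object.assign)
  // survive the spread; name and message are copied explicitly because
  // they are non-enumerable on Error instances.
  const { name, message, ...extras } = {
    ...error,
    name: error.name,
    message: error.message,
  };
  return { name, message, extras };
}

function sketchDeserializeError(payload: SketchSerializedError): Error {
  const error = new Error(payload.message);
  error.name = payload.name;
  Object.assign(error, payload.extras);
  return error;
}
```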
Good fits [#good-fits]
* React or Vue renderers calling main-process Git operations
* Preload bridges that expose only an allowlisted set of channels
* Desktop apps where you want one consistent error model from UI to subprocess
* Screens that need to cancel stale in-flight requests when the user navigates away
Good [#good]
Define the contract in one shared type module.
```ts
type GitContract = {
status: IpcProcedure<[repoPath: string], { stdout: string }, SerializedIpcError>;
fetch: IpcProcedure<[repoPath: string], { stdout: string }, SerializedIpcError>;
};
```
Expose only allowlisted bridge methods from preload code.
```ts
const bridge = createAbortableIpcBridge(client, [
"status",
"fetch",
] as const);
```
Use abortable IPC for views that can be replaced before the main-process work finishes.
Avoid [#avoid]
Do not throw rich `Error` objects across Electron IPC and expect every property to survive.
```ts
ipcMain.handle("status", async () => {
throw Object.assign(new Error("git failed"), { code: "E_GIT" });
});
```
Return tuple errors through Yieldless so they are serialized into clone-safe objects.
# yieldless/node
`yieldless/node` wraps the pieces of Node that backend tools and desktop apps touch constantly: filesystem calls and external commands.
Exports [#exports]
Filesystem [#filesystem]
* `accessSafe(path)`
* `readFileSafe(path, encoding?)`
* `writeFileSafe(path, contents, options?)`
* `readdirSafe(path)`
* `mkdirSafe(path, options?)`
* `rmSafe(path, options?)`
* `statSafe(path)`
Processes [#processes]
* `runCommand(file, args?, options?)`
* `runCommandSafe(file, args?, options?)`
* `runShellCommand(command, options?)`
* `runShellCommandSafe(command, options?)`
* `CommandError`
* `CommandTimeoutError`
* `CommandOutputLimitError`
`CommandOptions` accepts:
* `cwd`, `env`, `input`, `signal`, and `windowsHide`
* `timeoutMs` for command deadlines
* `maxOutputBytes` to stop noisy commands before they consume too much memory
* `onStdout(chunk)` and `onStderr(chunk)` for live output
* `killSignal` when aborted commands need a signal other than Node's default
Filesystem example [#filesystem-example]
```ts
import { readFileSafe } from "yieldless/node";
const [error, contents] = await readFileSafe(".git/HEAD");
```
Child-process example [#child-process-example]
```ts
import { runCommandSafe } from "yieldless/node";
const [error, result] = await runCommandSafe(
"pnpm",
["test"],
{
cwd: workspacePath,
timeoutMs: 60_000,
maxOutputBytes: 1024 * 1024,
signal,
},
);
```
Successful results include `{ command, args, cwd, durationMs, stdout, stderr, exitCode, signal }`, which makes logging and diagnostics easier without parsing thrown errors.
runCommand() vs runCommandSafe() [#runcommand-vs-runcommandsafe]
* `runCommand()` returns command metadata plus captured output, and throws on non-zero exit
* `runCommandSafe()` wraps that behavior into a tuple
`CommandError` includes the command, args, duration, output, exit code, signal, and Node error code when one exists.
Exec-style shell command strings [#exec-style-shell-command-strings]
Prefer `runCommandSafe(file, args)` when you can. It keeps argument boundaries explicit.
Use `runShellCommandSafe()` only when shell syntax is the point: pipes, redirects, environment expansion, or command strings provided by a trusted developer tool.
```ts
import { runShellCommandSafe } from "yieldless/node";
const [error, result] = await runShellCommandSafe(
"pnpm test -- --runInBand",
{
cwd: workspacePath,
timeoutMs: 60_000,
onStdout: (chunk) => {
process.stdout.write(chunk);
},
},
);
```
Do not pass user input into a shell command string. If the command contains user-provided values, use `runCommandSafe(file, args)` instead.
Live output [#live-output]
Commands still capture output by default, but `onStdout` and `onStderr` let you stream progress to logs, terminals, or a UI while the command runs.
```ts
const [error, result] = await runCommandSafe("npm", ["run", "build"], {
cwd: workspacePath,
timeoutMs: 120_000,
onStdout: (chunk) => appendBuildLog(chunk),
onStderr: (chunk) => appendBuildLog(chunk),
});
if (error) {
appendBuildLog(error.stderr);
}
```
Output limits [#output-limits]
Use `maxOutputBytes` when a command can print unbounded logs. If the combined output exceeds the limit, the command is aborted and returns `CommandOutputLimitError` with partial captured output.
```ts
import { CommandOutputLimitError, runCommandSafe } from "yieldless/node";
const [error, result] = await runCommandSafe("npm", ["test"], {
maxOutputBytes: 512 * 1024,
timeoutMs: 60_000,
});
if (error instanceof CommandOutputLimitError) {
logger.warn({ stdout: error.stdout }, "test output exceeded the capture limit");
}
```
Cancellation [#cancellation]
If you pass an `AbortSignal`, Yieldless forwards it to Node's native child-process signal handling and waits for the subprocess to close before it settles the wrapper promise.
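A hedged sketch of that contract using Node's own `signal` option on `spawn()` (illustration only; the real wrapper also captures output and normalizes failures into tuples):

```typescript
import { spawn } from "node:child_process";

// Abort via Node's native signal handling, but settle only on "close",
// so the promise never resolves while the subprocess is still alive.
function sketchRunUntilClose(
  file: string,
  args: readonly string[],
  signal: AbortSignal,
): Promise<{ aborted: boolean; exitCode: number | null }> {
  return new Promise((resolve) => {
    const child = spawn(file, [...args], { signal });
    child.on("error", () => {
      // The AbortError surfaces here; "close" still fires afterwards.
    });
    child.on("close", (exitCode) => {
      resolve({ aborted: signal.aborted, exitCode });
    });
  });
}
```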
Good [#good]
Use filesystem tuple helpers at the file boundary.
```ts
const [readError, contents] = await readFileSafe(configPath);
if (readError) return [readError, null] as const;
```
Use `runCommandSafe()` when non-zero exit status is a normal operational failure.
```ts
const [error, result] = await runCommandSafe("node", ["--check", filePath], {
cwd: workspacePath,
signal,
});
if (error instanceof CommandError) {
logger.warn({ durationMs: error.durationMs, stderr: error.stderr }, "syntax check failed");
}
```
Avoid [#avoid]
Do not shell-concatenate user input.
```ts
await runCommandSafe("sh", ["-c", `git -C ${repoPath} status`]);
```
Pass the executable and args separately so Node handles argument boundaries.
# yieldless/test
`yieldless/test` contains tiny helpers for testing async code that uses promises and `AbortSignal`. They are deliberately independent from Vitest, Jest, or Node's test runner.
Use these helpers to make async tests deterministic without changing application code or installing a fake runtime.
Exports [#exports]
* `deferred(): Deferred`
* `flushMicrotasks(times): Promise`
* `createTestSignal(): TestSignal`
* `createManualClock(start): ManualClock`
* `type Deferred = { promise, resolve, reject }`
* `type ManualClock = { now, pending, sleep, tick, runAll }`
* `type TestSignal = { controller, signal, abort }`
Example [#example]
```ts
import { createManualClock, deferred, flushMicrotasks } from "yieldless/test";
const ready = deferred();
ready.resolve("done");
await expect(ready.promise).resolves.toBe("done");
const clock = createManualClock();
let settled = false;
void clock.sleep(100).then(() => {
settled = true;
});
clock.tick(100);
await flushMicrotasks();
expect(settled).toBe(true);
```
Behavior notes [#behavior-notes]
* `deferred()` exposes a promise plus its `resolve` and `reject` functions.
* `flushMicrotasks()` awaits `Promise.resolve()` one or more times.
* `createTestSignal()` returns a controller, signal, and convenience `abort()` method.
* `createManualClock().sleep()` is abort-aware and resolves only when `tick()` or `runAll()` reaches its time.
* `createManualClock()` does not patch global timers.
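The clock is easy to picture as a list of pending wakeups that `tick()` drains. A simplified sketch without abort support (names are illustrative):

```typescript
// Minimal manual clock: sleep() queues a wakeup, tick() resolves due ones.
function sketchManualClock(start = 0) {
  let now = start;
  const pending: { at: number; wake: () => void }[] = [];
  return {
    get now() {
      return now;
    },
    get pendingCount() {
      return pending.length;
    },
    sleep(ms: number): Promise<void> {
      return new Promise<void>((wake) => {
        pending.push({ at: now + ms, wake });
      });
    },
    tick(ms: number): void {
      now += ms;
      for (let i = pending.length - 1; i >= 0; i -= 1) {
        if (pending[i].at <= now) {
          const [entry] = pending.splice(i, 1);
          entry.wake();
        }
      }
    },
  };
}
```

Because nothing here touches `setTimeout()`, the clock only controls code that received `sleep` as a dependency.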
Good [#good]
Use manual clocks for code that already accepts a sleep function or clock dependency.
```ts
const clock = createManualClock();
const running = waitForReady({ sleep: clock.sleep });
clock.runAll();
await running;
```
Use controlled signals when testing cleanup.
```ts
const testSignal = createTestSignal();
testSignal.abort(new Error("stop"));
```
Avoid [#avoid]
Do not mix the manual clock with real `setTimeout()` and expect it to control global time.
```ts
const clock = createManualClock();
setTimeout(resolve, 100);
clock.tick(100);
```
Inject the clock as a dependency instead of monkey-patching global timers.
# Checkout Flow
Checkout is a good Yieldless fit because it crosses several failure-heavy boundaries without needing a framework runtime: user input, inventory reads, payment APIs, rate limits, and background receipts.
This recipe keeps those boundaries explicit. The service code still reads like ordinary TypeScript, but the painful edges return tuples and accept cancellation.
The moving parts [#the-moving-parts]
* `yieldless/router` turns the tuple service into an HTTP response
* `yieldless/schema` validates checkout input
* `yieldless/cache` and `yieldless/batcher` keep inventory reads efficient
* `yieldless/limiter` protects a payment API quota
* `yieldless/breaker` fails fast while the payment provider is unhealthy
* `yieldless/queue` stores receipt work for a local worker
* `yieldless/fetch` keeps remote API calls in tuple form
Checkout service [#checkout-service]
```ts
import { createBatcher } from "yieldless/batcher";
import { CircuitOpenError, createCircuitBreaker } from "yieldless/breaker";
import { createCache } from "yieldless/cache";
import { safeTry } from "yieldless/error";
import { HttpStatusError, fetchJsonSafe } from "yieldless/fetch";
import { createRateLimiter } from "yieldless/limiter";
import { createQueue } from "yieldless/queue";
import { BadRequestError, honoHandler } from "yieldless/router";
import { parseSafe } from "yieldless/schema";
interface CheckoutInput {
readonly customerId: string;
readonly idempotencyKey: string;
readonly items: readonly {
readonly sku: string;
readonly quantity: number;
}[];
}
interface InventoryItem {
readonly priceCents: number;
readonly sku: string;
readonly stock: number;
}
interface PaymentIntent {
readonly id: string;
readonly status: "authorized" | "requires_action";
}
interface ReceiptJob {
readonly customerId: string;
readonly paymentId: string;
}
export function createCheckoutService(apiBaseUrl: string) {
const receiptJobs = createQueue({ capacity: 1_000 });
const paymentQuota = createRateLimiter({ limit: 50, intervalMs: 60_000 });
const inventoryBatcher = createBatcher({
waitMs: 2,
maxBatchSize: 100,
loadMany: async (skus, signal) => {
const [error, inventory] = await fetchJsonSafe(
`${apiBaseUrl}/inventory?skus=${encodeURIComponent(skus.join(","))}`,
{ timeoutMs: 2_000, signal },
);
if (error) {
return [error, null] as const;
}
const bySku = new Map(inventory.map((item) => [item.sku, item]));
return [
null,
skus.map(
(sku) => bySku.get(sku) ?? { sku, priceCents: 0, stock: 0 },
),
] as const;
},
});
const inventory = createCache({
ttlMs: 15_000,
maxSize: 2_000,
load: (sku, signal) => inventoryBatcher.load(sku, { signal }),
});
const createPayment = createCircuitBreaker(
(signal, input: { amountCents: number; idempotencyKey: string }) =>
fetchJsonSafe(`${apiBaseUrl}/payments/intents`, {
method: "POST",
headers: {
"content-type": "application/json",
"idempotency-key": input.idempotencyKey,
},
body: JSON.stringify({ amountCents: input.amountCents }),
timeoutMs: 5_000,
signal,
}),
{
failureThreshold: 3,
cooldownMs: 20_000,
shouldTrip: (error) =>
!(error instanceof HttpStatusError) || error.status >= 500,
},
);
return {
receiptJobs,
async checkout(input: unknown, signal: AbortSignal) {
const [inputError, checkout] = parseSafe(checkoutSchema, input);
if (inputError) {
return [inputError, null] as const;
}
const loadedItems = await Promise.all(
checkout.items.map((item) => inventory.get(item.sku, { signal })),
);
const inventoryError = loadedItems.find(([error]) => error !== null)?.[0];
if (inventoryError) {
return [inventoryError, null] as const;
}
const items = loadedItems.map(([, item]) => item as InventoryItem);
const unavailable = checkout.items.find((requested, index) => {
const available = items[index]?.stock ?? 0;
return available < requested.quantity;
});
if (unavailable !== undefined) {
return [new BadRequestError(`${unavailable.sku} is out of stock`), null] as const;
}
const amountCents = checkout.items.reduce(
(total, item, index) =>
total + item.quantity * (items[index]?.priceCents ?? 0),
0,
);
const [quotaError] = await paymentQuota.takeSafe({ signal });
if (quotaError) {
return [quotaError, null] as const;
}
const [paymentError, payment] = await createPayment({
amountCents,
idempotencyKey: checkout.idempotencyKey,
});
if (paymentError instanceof CircuitOpenError) {
return [
new Error("Payments are temporarily unavailable. Please try again soon."),
null,
] as const;
}
if (paymentError) {
return [paymentError, null] as const;
}
const [receiptError] = await receiptJobs.offer(
{
customerId: checkout.customerId,
paymentId: payment.id,
},
{ signal },
);
if (receiptError) {
return [receiptError, null] as const;
}
return [
null,
{
paymentId: payment.id,
status: payment.status,
},
] as const;
},
};
}
const checkoutService = createCheckoutService("https://api.example.com");
export const postCheckout = honoHandler(
async (c) => {
const [bodyError, body] = await safeTry(c.req.json());
if (bodyError) {
return [new BadRequestError("Invalid checkout JSON"), null] as const;
}
return await checkoutService.checkout(body, c.req.raw.signal);
},
{ successStatus: 201 },
);
```
Receipt worker [#receipt-worker]
```ts
async function runReceiptWorker(signal: AbortSignal) {
while (!signal.aborted) {
const [takeError, job] = await checkoutService.receiptJobs.take({ signal });
if (takeError) {
return;
}
const [sendError] = await fetchJsonSafe("/internal/receipts/send", {
method: "POST",
body: JSON.stringify(job),
headers: { "content-type": "application/json" },
timeoutMs: 5_000,
signal,
});
if (sendError) {
console.error("failed to send receipt", sendError);
}
}
}
```
Why this is a good fit [#why-this-is-a-good-fit]
* Cart validation returns a normal HTTP validation response.
* Inventory reads batch together and cache briefly, which helps busy product pages.
* Payment quota is explicit before the payment call.
* The circuit breaker stops a payment outage from becoming a retry storm.
* Receipt delivery is queued so checkout can finish without waiting on email.
Avoid: hiding business rules behind a generic pipeline [#avoid-hiding-business-rules-behind-a-generic-pipeline]
```ts
return await checkoutRuntime.run(input, {
validate: true,
cache: true,
rateLimit: true,
payment: true,
queueReceipt: true,
});
```
That hides the parts reviewers most need to see: stock checks, idempotency, payment error handling, and what happens after the customer pays.
# Customer Import
Bulk imports are familiar to almost every product team. They look simple until a customer uploads a file with thousands of rows, duplicate emails, invalid data, and a slow CRM integration.
This recipe keeps the import understandable:
* validate each row as data
* check duplicates in batches
* limit outbound CRM calls
* publish progress as rows complete
* stop cleanly when the user cancels
Import service [#import-service]
```ts
import { createBatcher } from "yieldless/batcher";
import { safeTry, safeTrySync } from "yieldless/error";
import { fetchJsonSafe } from "yieldless/fetch";
import { mapLimit } from "yieldless/all";
import { createRateLimiter } from "yieldless/limiter";
import { createPubSub } from "yieldless/pubsub";
import { parseSafe } from "yieldless/schema";
interface CsvCustomerRow {
readonly email: string;
readonly name: string;
readonly plan: "free" | "pro" | "enterprise";
}
type ImportEvent =
| { readonly type: "started"; readonly total: number }
| { readonly type: "row-imported"; readonly row: number; readonly email: string }
| { readonly type: "row-skipped"; readonly row: number; readonly reason: string }
| { readonly type: "finished"; readonly imported: number; readonly skipped: number };
export function createCustomerImporter() {
const events = createPubSub({ replay: 25 });
const crmQuota = createRateLimiter({ limit: 120, intervalMs: 60_000 });
const existingCustomers = createBatcher({
waitMs: 2,
maxBatchSize: 250,
loadMany: async (emails, signal) => {
const [error, existing] = await customerRepository.existsByEmail(
emails,
signal,
);
if (error) {
return [error, null] as const;
}
const existingSet = new Set(existing);
return [null, emails.map((email) => existingSet.has(email))] as const;
},
});
async function importRow(
row: unknown,
index: number,
signal: AbortSignal,
) {
const rowNumber = index + 1;
const [parseError, customer] = parseSafe(customerRowSchema, row);
if (parseError) {
events.publish({
type: "row-skipped",
row: rowNumber,
reason: "Invalid customer data",
});
return [null, { imported: 0, skipped: 1 }] as const;
}
const [duplicateError, exists] = await existingCustomers.load(
customer.email,
{ signal },
);
if (duplicateError) {
return [duplicateError, null] as const;
}
if (exists) {
events.publish({
type: "row-skipped",
row: rowNumber,
reason: "Customer already exists",
});
return [null, { imported: 0, skipped: 1 }] as const;
}
const [quotaError] = await crmQuota.takeSafe({ signal });
if (quotaError) {
return [quotaError, null] as const;
}
const [crmError] = await fetchJsonSafe("/crm/customers", {
method: "POST",
body: JSON.stringify(customer),
headers: { "content-type": "application/json" },
timeoutMs: 4_000,
signal,
});
if (crmError) {
return [crmError, null] as const;
}
events.publish({
type: "row-imported",
row: rowNumber,
email: customer.email,
});
return [null, { imported: 1, skipped: 0 }] as const;
}
return {
events,
async importCsv(file: File, signal: AbortSignal) {
const [readError, text] = await safeTry(file.text());
if (readError) {
return [readError, null] as const;
}
const [parseError, rows] = safeTrySync(() => parseCustomerCsv(text));
if (parseError) {
return [parseError, null] as const;
}
events.publish({ type: "started", total: rows.length });
const [importError, results] = await mapLimit(
rows,
importRow,
{
concurrency: 8,
signal,
},
);
if (importError) {
return [importError, null] as const;
}
const summary = results.reduce(
(total, row) => ({
imported: total.imported + row.imported,
skipped: total.skipped + row.skipped,
}),
{ imported: 0, skipped: 0 },
);
events.publish({ type: "finished", ...summary });
return [null, summary] as const;
},
};
}
```
Streaming progress to a UI [#streaming-progress-to-a-ui]
```ts
const importer = createCustomerImporter();
const progress = importer.events.subscribe();
const controller = new AbortController();
const running = importer.importCsv(uploadedFile, controller.signal);
void running.finally(() => progress.close());
for await (const event of progress) {
updateImportScreen(event);
}
const [error, summary] = await running;
```
The UI does not need to know about batching, rate limiting, or CRM failures. It gets ordinary domain events.
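The shape of that event stream is easy to sketch. Here is a minimal callback-based pub/sub with a replay buffer (illustrative only; the library's subscriptions are async iterables, as the snippet above shows):

```typescript
// Minimal replay pub/sub: late subscribers first receive the last N events.
function sketchPubSub<T>(replay: number) {
  const buffer: T[] = [];
  const listeners = new Set<(event: T) => void>();
  return {
    publish(event: T): void {
      buffer.push(event);
      if (buffer.length > replay) buffer.shift();
      for (const listener of listeners) listener(event);
    },
    subscribe(listener: (event: T) => void): () => void {
      for (const event of buffer) listener(event); // replay history first
      listeners.add(listener);
      return () => {
        listeners.delete(listener);
      };
    },
  };
}
```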
Why this is a good fit [#why-this-is-a-good-fit]
* Invalid rows become skipped rows instead of crashing the whole import.
* Duplicate checks batch together instead of hitting the database once per row.
* CRM calls run with bounded concurrency and a rate limit.
* A single `AbortSignal` lets the user cancel the import from the UI.
* Progress is an async subscription, so it works for web sockets, server-sent events, Electron IPC, or tests.
Avoid: one promise per row [#avoid-one-promise-per-row]
```ts
await Promise.all(rows.map((row) => importCustomer(row)));
```
That pattern looks tidy until a customer uploads 20,000 rows. Use `mapLimit()` whenever the input size is decided by a customer rather than by you.
# Resilient Service Flow
This recipe shows the shape Yieldless is best at: an HTTP request that validates input, performs a few pieces of I/O, retries the flaky part, and returns a normal JSON response.
The moving parts [#the-moving-parts]
* `yieldless/schema` validates input without throwing
* `yieldless/fetch` keeps HTTP calls in tuple form
* `yieldless/retry` handles transient I/O noise
* `yieldless/task` keeps sibling work under one cancellation signal
* `yieldless/router` turns tuple results into a plain response
Route handler [#route-handler]
```ts
import { safeTry } from "yieldless/error";
import { parseSafe } from "yieldless/schema";
import { safeRetry } from "yieldless/retry";
import { NotFoundError, honoHandler } from "yieldless/router";
import { runTaskGroup } from "yieldless/task";
export const getRepository = honoHandler(async (c) => {
const [paramsError, params] = parseSafe(repositoryParamsSchema, c.req.param());
if (paramsError) {
return [paramsError, null];
}
const [repoError, repo] = await safeRetry(
async (_attempt, signal) => safeTry(loadRepository(params.id, signal)),
{ maxAttempts: 3, baseDelayMs: 100 },
);
if (repoError) {
return [repoError, null];
}
if (repo === null) {
return [new NotFoundError("Repository not found"), null];
}
const payload = await runTaskGroup(async (group) => {
const refs = group.spawn((signal) => loadRefs(repo.path, signal));
const status = group.spawn((signal) => loadStatus(repo.path, signal));
return {
id: repo.id,
refs: await refs,
status: await status,
};
});
return [null, payload];
});
```
Why this holds up well in production [#why-this-holds-up-well-in-production]
* Validation failures never take the exception path.
* If `loadRefs()` fails, `loadStatus()` is aborted immediately.
* Retry timers are cancellable because they run through `AbortSignal`.
* The handler body stays linear. There is no framework-specific DSL to learn.
Good variation: add a remote dependency [#good-variation-add-a-remote-dependency]
```ts
import { fetchJsonSafe, HttpStatusError } from "yieldless/fetch";
const [metadataError, metadata] = await safeRetry(
(_attempt, signal) =>
fetchJsonSafe(metadataUrl(repo.id), {
timeoutMs: 3_000,
signal,
}),
{
maxAttempts: 3,
shouldRetry: (error) =>
!(error instanceof HttpStatusError) || error.status >= 500,
},
);
if (metadataError) {
return [metadataError, null];
}
```
The retry policy is attached to the flaky network call, not to the whole request.
Avoid: retrying the whole handler [#avoid-retrying-the-whole-handler]
```ts
export const getRepository = honoHandler((c) =>
safeRetry(() => loadWholeRepositoryResponse(c), {
maxAttempts: 3,
}),
);
```
That repeats validation, routing decisions, and any side effects. Retry the noisy boundary instead.
Rules worth keeping [#rules-worth-keeping]
* Validate early and return early.
* Retry only the noisy boundary, not the entire request.
* Spawn sibling work only when both tasks should die together.
* Map domain misses to explicit HTTP errors like `NotFoundError`.
# Bounded Batch Work
This recipe is for the common "do this for every item" path: refresh every repo, fetch every avatar, resize every image, or inspect every file. The work is independent, but starting it all at once can make a user's machine or an upstream service miserable.
Use `mapLimit()` when you want tuple errors, shared cancellation, stable output order, and a hard ceiling on active work.
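That contract is worth seeing concretely. A simplified sketch of a bounded mapper with the same guarantees (no abort wiring; not the library's implementation):

```typescript
type ItemResult<T> = readonly [Error, null] | readonly [null, T];

// Simplified bounded mapper: N workers pull the next index, results keep
// input order, and the first tuple error stops new work from starting.
async function sketchMapLimit<T, R>(
  items: readonly T[],
  mapItem: (item: T, index: number) => Promise<ItemResult<R>>,
  concurrency: number,
): Promise<ItemResult<R[]>> {
  const results = new Array<R>(items.length);
  let next = 0;
  let failure: Error | null = null;
  async function worker(): Promise<void> {
    while (failure === null && next < items.length) {
      const index = next++;
      const result = await mapItem(items[index], index);
      if (result[0] !== null) {
        failure = result[0];
        return;
      }
      results[index] = result[1];
    }
  }
  await Promise.all(
    Array.from({ length: Math.min(concurrency, items.length) }, worker),
  );
  return failure === null ? [null, results] : [failure, null];
}
```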
Refresh several repositories [#refresh-several-repositories]
```ts
import { mapLimit } from "yieldless/all";
import { runCommandSafe } from "yieldless/node";
interface Repository {
readonly id: string;
readonly path: string;
}
export async function refreshRepositories(
repositories: readonly Repository[],
signal: AbortSignal,
) {
return await mapLimit(
repositories,
async (repository, _index, itemSignal) => {
const [fetchError] = await runCommandSafe("git", ["fetch", "--prune"], {
cwd: repository.path,
signal: itemSignal,
});
if (fetchError) {
return [fetchError, null] as const;
}
const [statusError, status] = await runCommandSafe(
"git",
["status", "--short"],
{
cwd: repository.path,
signal: itemSignal,
},
);
if (statusError) {
return [statusError, null] as const;
}
return [
null,
{
id: repository.id,
status: status.stdout,
},
] as const;
},
{
concurrency: 3,
signal,
},
);
}
```
Why this is friendlier [#why-this-is-friendlier]
* Output order matches `repositories`, so UI code does not need to sort the result back into place.
* The first tuple error aborts in-flight work and stops starting new items.
* A parent `AbortSignal` can cancel the whole batch when the user navigates away.
* `concurrency` is explicit, so expensive work stays polite by default.
Good variation: stream the input [#good-variation-stream-the-input]
If repositories come from an async source, use `yieldless/iterable` instead of building a large array first.
```ts
import { mapAsyncLimit } from "yieldless/iterable";
const [error, summaries] = await mapAsyncLimit(
readRepositories(workspacePath),
(repository, _index, signal) => inspectRepository(repository, signal),
{
concurrency: 3,
signal,
},
);
```
Avoid: unbounded fan-out [#avoid-unbounded-fan-out]
```ts
await Promise.all(
repositories.map((repository) =>
runCommandSafe("git", ["fetch"], { cwd: repository.path }),
),
);
```
That can launch dozens or hundreds of subprocesses. Use `mapLimit()` or `mapAsyncLimit()` so the user’s machine stays responsive.
# Repository Indexer
This recipe shows a realistic in-process worker pipeline: a user selects repositories, the app indexes them, progress is streamed to the UI, remote metadata is cached, owner lookups are batched, and local Git subprocesses stay bounded.
The point is not to build a new runtime. The point is to keep the moving parts visible:
* `yieldless/queue` accepts work with backpressure
* `yieldless/pubsub` broadcasts progress
* `yieldless/task` runs workers under one cancellation signal
* `yieldless/limiter` protects API quota and local subprocess capacity
* `yieldless/cache` avoids repeated metadata reads
* `yieldless/batcher` collapses nearby owner lookups
* `yieldless/retry` handles transient remote failures
* `yieldless/fetch` keeps remote calls in tuple form
* `yieldless/node` wraps local Git commands as tuples
The indexer [#the-indexer]
```ts
import { createBatcher } from "yieldless/batcher";
import { createCache } from "yieldless/cache";
import { HttpStatusError, fetchJsonSafe } from "yieldless/fetch";
import { createPubSub } from "yieldless/pubsub";
import { createQueue } from "yieldless/queue";
import {
createRateLimiter,
createSemaphore,
withPermit,
} from "yieldless/limiter";
import { runCommandSafe } from "yieldless/node";
import { safeRetry } from "yieldless/retry";
import { runTaskGroup } from "yieldless/task";
interface RepositoryMetadata {
readonly id: string;
readonly ownerId: string;
readonly path: string;
}
interface Owner {
readonly id: string;
readonly name: string;
}
interface IndexJob {
readonly repositoryId: string;
}
type IndexEvent =
| { readonly type: "queued"; readonly repositoryId: string }
| { readonly type: "started"; readonly repositoryId: string }
| {
readonly type: "indexed";
readonly repositoryId: string;
readonly ownerName: string;
readonly dirty: boolean;
}
| {
readonly type: "failed";
readonly repositoryId: string;
readonly message: string;
};
interface RepositoryIndexerOptions {
readonly apiBaseUrl: string;
readonly workerCount?: number;
}
function messageFrom(error: unknown): string {
return error instanceof Error ? error.message : String(error);
}
export function createRepositoryIndexer(options: RepositoryIndexerOptions) {
const workerCount = options.workerCount ?? 4;
const queue = createQueue<IndexJob>({ capacity: workerCount * 25 });
const events = createPubSub<IndexEvent>({ replay: 25 });
const apiQuota = createRateLimiter({ limit: 60, intervalMs: 60_000 });
const gitSlots = createSemaphore(workerCount);
async function apiGet<T>(path: string, signal: AbortSignal) {
const [quotaError] = await apiQuota.takeSafe({ signal });
if (quotaError) {
return [quotaError, null] as const;
}
return await fetchJsonSafe<T>(`${options.apiBaseUrl}${path}`, {
timeoutMs: 3_000,
signal,
});
}
const metadata = createCache({
ttlMs: 60_000,
maxSize: 2_000,
load: (repositoryId: string, signal: AbortSignal) =>
safeRetry(
(_attempt, attemptSignal) =>
apiGet<RepositoryMetadata>(`/repositories/${repositoryId}`, attemptSignal),
{
maxAttempts: 3,
baseDelayMs: 150,
shouldRetry: (error) =>
!(error instanceof HttpStatusError) || error.status >= 500,
signal,
},
),
});
const owners = createBatcher({
waitMs: 2,
maxBatchSize: 50,
loadMany: async (ownerIds: readonly string[], signal: AbortSignal) => {
const ids = encodeURIComponent(ownerIds.join(","));
const [error, values] = await apiGet<readonly Owner[]>(`/owners?ids=${ids}`, signal);
if (error) {
return [error, null] as const;
}
const byId = new Map(values.map((owner) => [owner.id, owner] as const));
return [
null,
ownerIds.map((id) => byId.get(id) ?? { id, name: "Unknown owner" }),
] as const;
},
});
async function inspectGitStatus(
repository: RepositoryMetadata,
signal: AbortSignal,
) {
return await withPermit(
gitSlots,
(scopedSignal) =>
runCommandSafe("git", ["status", "--short"], {
cwd: repository.path,
signal: scopedSignal,
}),
{ signal },
);
}
async function processJob(job: IndexJob, signal: AbortSignal) {
events.publish({
type: "started",
repositoryId: job.repositoryId,
});
const [metadataError, repository] = await metadata.get(job.repositoryId, {
signal,
});
if (metadataError) {
events.publish({
type: "failed",
repositoryId: job.repositoryId,
message: messageFrom(metadataError),
});
return;
}
const [ownerError, owner] = await owners.load(repository.ownerId, {
signal,
});
if (ownerError) {
events.publish({
type: "failed",
repositoryId: job.repositoryId,
message: messageFrom(ownerError),
});
return;
}
const [statusError, status] = await inspectGitStatus(repository, signal);
if (statusError) {
events.publish({
type: "failed",
repositoryId: job.repositoryId,
message: messageFrom(statusError),
});
return;
}
events.publish({
type: "indexed",
repositoryId: repository.id,
ownerName: owner.name,
dirty: status.stdout.trim().length > 0,
});
}
async function runWorker(signal: AbortSignal) {
while (!signal.aborted) {
const [takeError, job] = await queue.take({ signal });
if (takeError) {
return;
}
await processJob(job, signal);
}
}
return {
events,
close(): void {
queue.close();
},
async enqueue(repositoryId: string, signal?: AbortSignal) {
const result = await queue.offer({ repositoryId }, { signal });
if (result[0] === null) {
events.publish({ type: "queued", repositoryId });
}
return result;
},
async run(signal?: AbortSignal): Promise<void> {
try {
await runTaskGroup(
async (group) => {
const workers = Array.from({ length: workerCount }, () =>
group.spawn((workerSignal) => runWorker(workerSignal)),
);
await Promise.all(workers);
},
{ signal },
);
} finally {
events.close();
}
},
};
}
```
Using it from a route or action [#using-it-from-a-route-or-action]
```ts
const indexer = createRepositoryIndexer({
apiBaseUrl: "https://api.example.com",
workerCount: 4,
});
const progress = indexer.events.subscribe();
const controller = new AbortController();
const running = indexer.run(controller.signal);
for (const repositoryId of selectedRepositoryIds) {
const [enqueueError] = await indexer.enqueue(
repositoryId,
controller.signal,
);
if (enqueueError) {
controller.abort(enqueueError);
break;
}
}
indexer.close();
for await (const event of progress) {
renderIndexProgress(event);
}
await running;
```
In a real UI, the progress subscription often lives in a websocket, server-sent events stream, or Electron IPC handler. The shape stays the same: subscribe, render events, and abort when the user leaves.
Why this is easier to operate [#why-this-is-easier-to-operate]
* The queue is bounded, so user-selected work cannot grow memory forever.
* Worker count is explicit, so local Git subprocesses stay polite.
* API quota waits happen before remote calls, not after the upstream complains.
* Metadata reads are cached, but failed reads are not stored.
* Owner lookups batch together while still returning one result per job.
* Retry policy sits around the flaky remote read, not around the whole pipeline.
* Progress is just an async subscription, so it can feed logs, UI, or tests.
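The batching behavior is easy to picture: nearby single-key loads are parked for one timer tick, served by a single `loadMany`-style call, and resolved individually. This hypothetical `miniBatcher` sketches the idea behind `yieldless/batcher`, not its API:

```typescript
// Minimal sketch of a call-collapsing batcher.
function miniBatcher<K, V>(
  loadMany: (keys: K[]) => Promise<V[]>,
  waitMs = 0,
) {
  let pending: { key: K; resolve: (v: V) => void; reject: (e: unknown) => void }[] = [];
  let timer: ReturnType<typeof setTimeout> | null = null;
  function flush(): void {
    const batch = pending;
    pending = [];
    timer = null;
    // One bulk call serves every caller collected during the window.
    loadMany(batch.map((entry) => entry.key)).then(
      (values) => batch.forEach((entry, i) => entry.resolve(values[i])),
      (error) => batch.forEach((entry) => entry.reject(error)),
    );
  }
  return function load(key: K): Promise<V> {
    return new Promise<V>((resolve, reject) => {
      pending.push({ key, resolve, reject });
      timer ??= setTimeout(flush, waitMs);
    });
  };
}
```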
Smaller variations [#smaller-variations]
Process a fixed list without a long-running queue [#process-a-fixed-list-without-a-long-running-queue]
If you already have the whole list and do not need progress fanout, `mapLimit()` is smaller.
```ts
import { mapLimit } from "yieldless/all";
const [error, summaries] = await mapLimit(
selectedRepositoryIds,
(repositoryId, _index, signal) => indexOneRepository(repositoryId, signal),
{
concurrency: 4,
signal,
},
);
```
Keep only the cache and batcher [#keep-only-the-cache-and-batcher]
If the pipeline is overkill, keep the read model helpers.
```ts
const repository = await metadata.get(repositoryId, { signal });
const owner = await owners.load(ownerId, { signal });
```
Add a circuit breaker to an optional dependency [#add-a-circuit-breaker-to-an-optional-dependency]
If indexing can continue without feature flags or recommendations, wrap that optional remote call with a circuit breaker and use a fallback when it is open.
```ts
import { CircuitOpenError, createCircuitBreaker } from "yieldless/breaker";
import { fetchJsonSafe } from "yieldless/fetch";
const loadRecommendations = createCircuitBreaker(
(signal, repositoryId: string) =>
fetchJsonSafe(`/recommendations/${repositoryId}`, {
timeoutMs: 2_000,
signal,
}),
{
failureThreshold: 3,
cooldownMs: 15_000,
},
);
const [recommendationError, recommendations] =
await loadRecommendations(repositoryId);
if (recommendationError instanceof CircuitOpenError) {
return [null, []] as const;
}
if (recommendationError) {
return [recommendationError, null] as const;
}
return [null, recommendations] as const;
```
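The state machine a breaker runs for you is small: count consecutive failures, open after a threshold, fail fast during the cooldown, and close again on the next success. A hypothetical `miniBreaker` sketch of that machine (promise-based rather than tuple-based, for brevity):

```typescript
// Minimal sketch of the circuit-breaker state machine.
class MiniCircuitOpenError extends Error {
  constructor() {
    super("Circuit is open.");
    this.name = "MiniCircuitOpenError";
  }
}

function miniBreaker<A extends unknown[], R>(
  fn: (...args: A) => Promise<R>,
  failureThreshold: number,
  cooldownMs: number,
) {
  let failures = 0;
  let openedAt = 0;
  return async (...args: A): Promise<R> => {
    if (failures >= failureThreshold && Date.now() - openedAt < cooldownMs) {
      // Fail fast without touching the struggling dependency.
      throw new MiniCircuitOpenError();
    }
    try {
      const value = await fn(...args);
      failures = 0; // any success closes the circuit again
      return value;
    } catch (error) {
      failures += 1;
      if (failures >= failureThreshold) openedAt = Date.now();
      throw error;
    }
  };
}
```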
Avoid: hiding the pipeline inside a generic worker framework [#avoid-hiding-the-pipeline-inside-a-generic-worker-framework]
```ts
const indexer = makeRuntime({
retry: true,
cache: true,
batch: true,
queue: true,
workers: 4,
});
```
That looks shorter, but the operational choices disappear. Yieldless works best when the important limits remain close to the work they protect.
# Electron Git Client
Yieldless fits Electron well because the important boundaries in an Electron app are all failure-heavy: the renderer asks for work through IPC, the main process touches the filesystem, and the main process launches long-running child processes like `git clone`.
Main-process contract [#main-process-contract]
```ts
import type { IpcProcedure } from "yieldless/ipc";
type GitContract = {
cloneRepository: IpcProcedure<
[url: string, directory: string],
{ path: string },
Error
>;
getStatus: IpcProcedure<[directory: string], { output: string }, Error>;
};
```
Main-process implementation [#main-process-implementation]
```ts
import { createAbortableIpcMain } from "yieldless/ipc";
import { runTaskGroup } from "yieldless/task";
import { runCommandSafe } from "yieldless/node";
const ipc = createAbortableIpcMain(ipcMain);
const windowLifecycle = new AbortController();
ipc.handle("cloneRepository", async (_event, requestSignal, url, directory) => {
return await runTaskGroup(async (_group, signal) => {
const [cloneError] = await runCommandSafe(
"git",
["clone", url, directory],
{ signal },
);
if (cloneError) {
return [cloneError, null];
}
return [null, { path: directory }];
}, {
signal: AbortSignal.any([windowLifecycle.signal, requestSignal]),
});
});
```
If the window is torn down or a parent task group is canceled, the subprocess is aborted through Node's native child-process signal handling.
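That cancellation path is plain Node: `child_process.spawn` accepts an `AbortSignal` and kills the child when it aborts. A minimal sketch of the underlying mechanism, using a hypothetical `runAbortable` wrapper:

```typescript
import { spawn } from "node:child_process";

function runAbortable(
  file: string,
  args: string[],
  signal: AbortSignal,
): Promise<number | null> {
  return new Promise((resolve, reject) => {
    const child = spawn(file, args, { signal });
    // When the signal aborts, Node kills the child and emits "error"
    // with an AbortError; normal exits land in "close".
    child.on("error", reject);
    child.on("close", (code) => resolve(code));
  });
}
```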
Preload bridge [#preload-bridge]
```ts
import { createAbortableIpcBridge, createAbortableIpcRenderer } from "yieldless/ipc";
const client = createAbortableIpcRenderer(ipcRenderer);
export const gitBridge = createAbortableIpcBridge(client, [
"cloneRepository",
"getStatus",
] as const);
```
Renderer usage [#renderer-usage]
```ts
const controller = new AbortController();
const [error, result] = await window.gitBridge.withSignal.cloneRepository(
controller.signal,
"git@github.com:binbandit/yieldless.git",
"/tmp/yieldless",
);
if (error) {
showToast(error.message);
return;
}
openRepository(result.path);
```
Why this boundary feels cleaner [#why-this-boundary-feels-cleaner]
* The renderer never depends on Electron's lossy thrown-error conversion.
* The main process can use the same tuple style it uses everywhere else.
* Long-running Git subprocesses can be canceled as soon as the UI no longer cares about them.
Good additions [#good-additions]
Deduplicate status reads if several panels can request the same repository at once.
```ts
import { singleFlight } from "yieldless/singleflight";
const getStatus = singleFlight(
(signal, directory: string) =>
runCommandSafe("git", ["status", "--short"], {
cwd: directory,
signal,
}),
);
```
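Single flight is a small pattern on its own: concurrent calls with the same key share one in-flight promise instead of starting duplicate work. A hypothetical `miniSingleFlight` sketch of the idea (not the `yieldless/singleflight` API):

```typescript
// Minimal sketch of single-flight deduplication.
function miniSingleFlight<V>(run: (key: string) => Promise<V>) {
  const inFlight = new Map<string, Promise<V>>();
  return (key: string): Promise<V> => {
    const existing = inFlight.get(key);
    if (existing) return existing; // join the in-flight call
    const promise = run(key).finally(() => {
      // Once settled, drop the entry so the next call runs fresh work.
      inFlight.delete(key);
    });
    inFlight.set(key, promise);
    return promise;
  };
}
```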
Wait for one app or window event with cleanup.
```ts
import { onceEventSafe } from "yieldless/event";
const [closeError] = await onceEventSafe(mainWindow, "closed", {
signal: windowLifecycle.signal,
});
```
Avoid [#avoid]
Do not expose arbitrary IPC channels to the renderer.
```ts
contextBridge.exposeInMainWorld("ipc", ipcRenderer);
```
Build a small preload bridge with `createIpcBridge()` or `createAbortableIpcBridge()` so the renderer only sees the operations it is allowed to call.
# Electron PR Review Workbench
If you are building a GitHub PR review app in Electron, Yieldless fits best in the main process, preload bridge, and service layer. It is less useful as the shape of your React component state.
That distinction matters:
* Use tuples while you are still doing failure-heavy work: GitHub API calls, local Git reads, diff parsing, cache reads, IPC boundaries.
* Once the renderer receives the result, fold it into ordinary screen state and move on.
Main-process contract [#main-process-contract]
```ts
import type { IpcProcedure } from "yieldless/ipc";
type ReviewContract = {
loadPullRequest: IpcProcedure<
[owner: string, repo: string, number: number],
{
summary: PullRequestSummary;
files: PullRequestFile[];
threads: ReviewThread[];
diffSummary: string;
},
Error
>;
};
```
Main-process implementation [#main-process-implementation]
```ts
import { all } from "yieldless/all";
import { createAbortableIpcMain } from "yieldless/ipc";
import { err, ok, safeTry } from "yieldless/error";
import { runCommandSafe } from "yieldless/node";
import { runTaskGroup } from "yieldless/task";
const ipc = createAbortableIpcMain(ipcMain);
const appLifecycle = new AbortController();
ipc.handle("loadPullRequest", async (_event, requestSignal, owner, repo, number) => {
return await runTaskGroup(async (_group, signal) => {
const [fetchError, payload] = await all([
() => safeTry(github.loadPullRequest(owner, repo, number)),
() => safeTry(github.loadPullRequestFiles(owner, repo, number)),
() => safeTry(github.loadReviewThreads(owner, repo, number)),
() =>
runCommandSafe(
"git",
["diff", "--stat", "origin/main...HEAD"],
{ cwd: repositoryRoot, signal },
),
]);
if (fetchError) {
return err(fetchError);
}
const [summary, files, threads, diff] = payload;
return ok({
summary,
files,
threads,
diffSummary: diff.stdout,
});
}, {
signal: AbortSignal.any([appLifecycle.signal, requestSignal]),
});
});
```
The important thing here is not the exact helper list. It is the layering:
* `yieldless/error` keeps the boundary readable with `ok()` and `err()`.
* `yieldless/all` loads the PR payload in parallel.
* `yieldless/task` gives the whole request one shared cancellation signal.
* `yieldless/node` keeps Git subprocess failures in the same tuple contract.
Preload bridge [#preload-bridge]
```ts
import { createAbortableIpcBridge, createAbortableIpcRenderer } from "yieldless/ipc";
const client = createAbortableIpcRenderer(ipcRenderer);
export const reviewBridge = createAbortableIpcBridge(client, [
"loadPullRequest",
] as const);
```
Renderer state [#renderer-state]
This is the point where Yieldless should usually stop being the dominant shape.
```ts
import { match } from "yieldless/error";
type PullRequestPayload = {
summary: PullRequestSummary;
files: PullRequestFile[];
threads: ReviewThread[];
diffSummary: string;
};
type PullRequestScreenState =
| { kind: "idle" }
| { kind: "loading" }
| { kind: "ready"; data: PullRequestPayload }
| { kind: "error"; message: string };
async function loadPullRequest(
signal: AbortSignal,
owner: string,
repo: string,
number: number,
): Promise<PullRequestScreenState> {
const result = await window.reviewBridge.withSignal.loadPullRequest(
signal,
owner,
repo,
number,
);
return match(result, {
ok: (data) => ({ kind: "ready", data }),
err: (error) => ({ kind: "error", message: error.message }),
});
}
```
That keeps the renderer normal. Components render `idle`, `loading`, `ready`, or `error`. They do not need to carry `[error, value]` through every prop.
Why this fits PR review tools well [#why-this-fits-pr-review-tools-well]
* Pull-request work crosses many noisy boundaries: GitHub HTTP, local Git, caches, and Electron IPC.
* Cancellation matters because users jump between PRs quickly, and stale loads should stop promptly instead of competing with the new one.
* The renderer wants stable view state, not a low-level transport shape.
Good additions [#good-additions]
Deduplicate duplicate PR loads while a user is switching tabs or refreshing panels.
```ts
import { singleFlight } from "yieldless/singleflight";
const loadPullRequestPayload = singleFlight(
(signal, owner: string, repo: string, number: number) =>
loadPullRequestFromGitHub(owner, repo, number, signal),
{
getKey: (owner, repo, number) => `${owner}/${repo}#${String(number)}`,
},
);
```
Use bounded iterable work when processing many files in a diff.
```ts
import { mapAsyncLimit } from "yieldless/iterable";
const [diffError, files] = await mapAsyncLimit(
github.streamPullRequestFiles(owner, repo, number),
(file, _index, signal) => enrichFile(file, signal),
{
concurrency: 6,
signal,
},
);
```
Avoid [#avoid]
Do not keep tuples as your component model.
```ts
function PullRequestView(props: {
result: SafeResult<PullRequestPayload, Error>;
}) {
// Every child now has to know tuple mechanics.
}
```
Fold the tuple once into `idle`, `loading`, `ready`, and `error` state. UI code should speak in screen states.
Rule of thumb [#rule-of-thumb]
* Use Yieldless in the main process and preload boundary.
* Convert tuples into domain state at the renderer edge.
* Do not force tuple results deep into component trees.
# Command Runner
Many products eventually need to run a local tool: a project build, a formatter, a video transcode, a document converter, or a test command. The hard part is rarely starting the process. The hard part is keeping the UI responsive, stopping cleanly when the user cancels, and returning enough detail to explain what happened.
This recipe builds a small command runner for developer tools and desktop apps.
Command runner service [#command-runner-service]
```ts
import {
CommandOutputLimitError,
CommandTimeoutError,
type CommandResult,
runCommandSafe,
runShellCommandSafe,
} from "yieldless/node";
import { createPubSub } from "yieldless/pubsub";
type CommandEvent =
| { readonly type: "started"; readonly label: string }
| { readonly type: "stdout"; readonly chunk: string }
| { readonly type: "stderr"; readonly chunk: string }
| { readonly type: "finished"; readonly durationMs: number }
| { readonly type: "failed"; readonly message: string };
export function createCommandRunner(workspacePath: string) {
const events = createPubSub<CommandEvent>({ replay: 20 });
async function runScript(scriptName: string, signal: AbortSignal) {
events.publish({ type: "started", label: `pnpm run ${scriptName}` });
const [error, result] = await runCommandSafe(
"pnpm",
["run", scriptName],
{
cwd: workspacePath,
maxOutputBytes: 2 * 1024 * 1024,
onStderr: (chunk) => events.publish({ type: "stderr", chunk }),
onStdout: (chunk) => events.publish({ type: "stdout", chunk }),
signal,
timeoutMs: 120_000,
},
);
if (error) {
events.publish({ type: "failed", message: describeCommandError(error) });
return [error, null] as const;
}
events.publish({ type: "finished", durationMs: result.durationMs });
return [null, result] as const;
}
async function runTrustedPipeline(signal: AbortSignal) {
return await runShellCommandSafe("pnpm lint && pnpm test", {
cwd: workspacePath,
maxOutputBytes: 4 * 1024 * 1024,
onStderr: (chunk) => events.publish({ type: "stderr", chunk }),
onStdout: (chunk) => events.publish({ type: "stdout", chunk }),
signal,
timeoutMs: 180_000,
});
}
return {
events,
runScript,
runTrustedPipeline,
};
}
function describeCommandError(error: Error) {
if (error instanceof CommandTimeoutError) {
return `Timed out after ${error.timeoutMs}ms`;
}
if (error instanceof CommandOutputLimitError) {
return `Stopped after ${error.maxOutputBytes} bytes of command output`;
}
return error.message;
}
```
`runScript()` accepts the script name as an argument, so Node keeps argument boundaries intact. `runTrustedPipeline()` uses a shell command string because the `&&` syntax is intentionally shell-specific and controlled by the application.
Showing progress in a UI [#showing-progress-in-a-ui]
```ts
const runner = createCommandRunner(projectPath);
const controller = new AbortController();
const subscription = runner.events.subscribe();
const running = runner.runScript("build", controller.signal);
void running.finally(() => subscription.close());
for await (const event of subscription) {
if (event.type === "stdout" || event.type === "stderr") {
appendLog(event.chunk);
}
if (event.type === "failed") {
showToast(event.message);
}
}
const [error, result] = await running;
```
The UI does not need to know how the process is spawned. It subscribes to ordinary events and owns cancellation through one `AbortController`.
Returning a useful result [#returning-a-useful-result]
```ts
function summarize(result: CommandResult) {
return {
command: result.command,
durationMs: result.durationMs,
ok: result.exitCode === 0,
outputPreview: result.stdout.slice(-4_000),
};
}
```
The command result includes the command, args, working directory, duration, stdout, stderr, exit code, and signal. That makes audit logs and support screens possible without reparsing the terminal transcript.
Good [#good]
Use `runCommandSafe(file, args)` when any part of the command came from a user, a project setting, or a database row.
```ts
await runCommandSafe("ffmpeg", [
"-i",
inputPath,
"-frames:v",
"1",
thumbnailPath,
], {
maxOutputBytes: 512 * 1024,
signal,
timeoutMs: 30_000,
});
```
Use `runShellCommandSafe()` for trusted shell syntax that is easier to read as one command.
```ts
await runShellCommandSafe("pnpm lint && pnpm test", {
cwd: workspacePath,
signal,
timeoutMs: 180_000,
});
```
Avoid [#avoid]
Do not put user-controlled values into a shell command string.
```ts
await runShellCommandSafe(`ffmpeg -i ${inputPath} ${thumbnailPath}`);
```
Use separate arguments instead. It reads a little longer, but it avoids shell injection and quoting edge cases.
# Read IDs and Fetch Records
This recipe covers a common first Yieldless workflow:
1. Read a file with one ID per line.
2. Split and validate the lines.
3. Turn each ID into a number.
4. Use `forEach()` to fetch one record at a time.
5. Accumulate the successful records into an array.
Use this version when you want sequential requests and the first bad line or failed request should stop the whole job.
Input file [#input-file]
```txt
101
102
# blank lines and comments are ignored
103
```
Complete recipe [#complete-recipe]
```ts
import type { SafeResult } from "yieldless/error";
import { fetchJsonSafe } from "yieldless/fetch";
import { forEach } from "yieldless/iterable";
import { readFileSafe } from "yieldless/node";
interface Customer {
readonly id: number;
readonly email: string;
}
interface LoadCustomersOptions {
readonly apiBaseUrl: string;
readonly signal?: AbortSignal;
}
class InvalidCustomerIdError extends Error {
readonly line: number;
constructor(line: number, value: string) {
super(`Expected a positive integer on line ${String(line)}, got "${value}".`);
this.name = "InvalidCustomerIdError";
this.line = line;
}
}
function parseCustomerIds(
contents: string,
): SafeResult<number[], InvalidCustomerIdError> {
const ids: number[] = [];
for (const [index, line] of contents.split(/\r?\n/).entries()) {
const value = line.trim();
if (value === "" || value.startsWith("#")) {
continue;
}
const id = Number(value);
if (!Number.isInteger(id) || id <= 0) {
return [new InvalidCustomerIdError(index + 1, value), null] as const;
}
ids.push(id);
}
return [null, ids] as const;
}
function customerUrl(apiBaseUrl: string, id: number): URL {
return new URL(`/customers/${String(id)}`, apiBaseUrl);
}
export async function loadCustomersFromIdFile(
filePath: string,
options: LoadCustomersOptions,
): Promise<SafeResult<Customer[], Error>> {
const [readError, contents] = await readFileSafe(filePath);
if (readError) {
return [readError, null] as const;
}
const [parseError, ids] = parseCustomerIds(contents);
if (parseError) {
return [parseError, null] as const;
}
const customers: Customer[] = [];
const [loadError] = await forEach(
ids,
async (id, _index, signal) => {
const [fetchError, customer] = await fetchJsonSafe(
customerUrl(options.apiBaseUrl, id),
{
headers: { accept: "application/json" },
timeoutMs: 5_000,
signal,
},
);
if (fetchError) {
return [fetchError, null] as const;
}
customers.push(customer);
return [null, undefined] as const;
},
{ signal: options.signal },
);
if (loadError) {
return [loadError, null] as const;
}
return [null, customers] as const;
}
```
Why forEach fits [#why-foreach-fits]
`forEach()` is sequential. The next ID is not fetched until the current worker returns `[null, undefined]`.
That gives you three useful properties:
* The returned `customers` array matches the file order.
* The API receives one request at a time.
* The first parse, fetch, status, JSON, timeout, or abort error stops the job.
The worker receives a scoped `signal`. Pass that signal into real I/O such as `fetchJsonSafe()` so cancellation can do useful work.
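The sequential guarantee is simple enough to sketch. This hypothetical `miniForEach` shows the shape, without the scoped-signal plumbing the real helper adds:

```typescript
// Minimal sketch of a sequential, stop-on-first-error forEach.
type Tuple<T> = readonly [Error, null] | readonly [null, T];

async function miniForEach<T>(
  items: readonly T[],
  worker: (item: T, index: number) => Promise<Tuple<void>>,
): Promise<Tuple<void>> {
  for (const [index, item] of items.entries()) {
    // The next item does not start until this one returns [null, ...].
    const [error] = await worker(item, index);
    if (error) {
      return [error, null] as const;
    }
  }
  return [null, undefined] as const;
}
```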
Calling the recipe [#calling-the-recipe]
```ts
const controller = new AbortController();
const [error, customers] = await loadCustomersFromIdFile("customer-ids.txt", {
apiBaseUrl: "https://api.example.com",
signal: controller.signal,
});
if (error) {
console.error(error);
} else {
console.log(customers);
}
```
Bounded parallel version [#bounded-parallel-version]
When each ID maps to one result and the API can handle a few concurrent requests, `mapAsyncLimit()` is shorter because it collects the output array for you.
```ts
import { mapAsyncLimit } from "yieldless/iterable";
const [loadError, customers] = await mapAsyncLimit(
ids,
(id, _index, signal) =>
fetchJsonSafe(customerUrl(apiBaseUrl, id), {
headers: { accept: "application/json" },
timeoutMs: 5_000,
signal,
}),
{
concurrency: 4,
signal: parentSignal,
},
);
```
`mapAsyncLimit()` still preserves input order in the returned array.
Common additions [#common-additions]
Add `safeRetry()` inside the worker when one customer fetch can fail transiently:
```ts
import { safeRetry } from "yieldless/retry";
const [fetchError, customer] = await safeRetry(
(_attempt, attemptSignal) =>
fetchJsonSafe(customerUrl(apiBaseUrl, id), {
timeoutMs: 5_000,
signal: attemptSignal,
}),
{
maxAttempts: 3,
baseDelayMs: 150,
signal,
},
);
```
Add `createRateLimiter()` when the remote API has a quota. Create the limiter once outside the loop, then take a slot inside each worker before the fetch:
```ts
import { createRateLimiter } from "yieldless/limiter";
const quota = createRateLimiter({
limit: 60,
intervalMs: 60_000,
});
const [quotaError] = await quota.takeSafe({ signal });
if (quotaError) {
return [quotaError, null] as const;
}
```
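The bookkeeping behind a quota like this is a sliding window of timestamps. This hypothetical `miniRateLimiter` sketches only the non-blocking check; the real limiter also knows how to wait for the next free slot:

```typescript
// Minimal sketch of sliding-window rate-limit bookkeeping.
function miniRateLimiter(limit: number, intervalMs: number) {
  const taken: number[] = [];
  return function tryTake(now = Date.now()): boolean {
    // Drop timestamps that have left the window.
    while (taken.length > 0 && now - taken[0] >= intervalMs) {
      taken.shift();
    }
    if (taken.length >= limit) {
      return false; // quota exhausted for this window
    }
    taken.push(now);
    return true;
  };
}
```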
Add `createCache()` when repeated IDs should reuse previous successful loads instead of making duplicate HTTP requests.
# Simple Recipes
These recipes are intentionally tiny. Each one shows one common task, the module to reach for, and the basic tuple branch you should expect to write.
Use these when the larger recipes feel like too much ceremony for the problem in front of you.
Read a text file [#read-a-text-file]
```ts
import { readFileSafe } from "yieldless/node";
const [error, contents] = await readFileSafe("config.json");
if (error) {
return [error, null] as const;
}
return [null, contents] as const;
```
Use this at Node file boundaries where missing files, permissions, or bad paths are normal operational failures.
Parse JSON without throwing [#parse-json-without-throwing]
```ts
import { safeTrySync } from "yieldless/error";
const [error, value] = safeTrySync(() => JSON.parse(contents) as Config);
if (error) {
return [error, null] as const;
}
return [null, value] as const;
```
Use `safeTrySync()` for sync code that might throw.
Fetch JSON with a timeout [#fetch-json-with-a-timeout]
```ts
import { fetchJsonSafe } from "yieldless/fetch";
const [error, user] = await fetchJsonSafe(
`https://api.example.com/users/${userId}`,
{
headers: { accept: "application/json" },
timeoutMs: 5_000,
signal,
},
);
if (error) {
return [error, null] as const;
}
return [null, user] as const;
```
Use this when success means "the HTTP status was OK and the response body was valid JSON."
Process items one at a time [#process-items-one-at-a-time]
```ts
import { forEach } from "yieldless/iterable";
const processed: ProcessedItem[] = [];
const [error] = await forEach(
items,
async (item, _index, signal) => {
const [itemError, processedItem] = await processItem(item, signal);
if (itemError) {
return [itemError, null] as const;
}
processed.push(processedItem);
return [null, undefined] as const;
},
{ signal },
);
if (error) {
return [error, null] as const;
}
return [null, processed] as const;
```
Use `forEach()` when you want order, backpressure, and a stop-on-first-error flow.
Map items with bounded concurrency [#map-items-with-bounded-concurrency]
```ts
import { mapAsyncLimit } from "yieldless/iterable";
const [error, thumbnails] = await mapAsyncLimit(
imagePaths,
(path, _index, signal) => renderThumbnail(path, signal),
{ concurrency: 4, signal },
);
if (error) {
return [error, null] as const;
}
return [null, thumbnails] as const;
```
Use this when each item produces one output and you want more throughput without unbounded `Promise.all()`.
Retry one flaky call [#retry-one-flaky-call]
```ts
import { fetchJsonSafe } from "yieldless/fetch";
import { safeRetry } from "yieldless/retry";
const [error, user] = await safeRetry(
(_attempt, attemptSignal) =>
fetchJsonSafe(`https://api.example.com/users/${userId}`, {
timeoutMs: 3_000,
signal: attemptSignal,
}),
{
maxAttempts: 3,
baseDelayMs: 150,
signal,
},
);
if (error) {
return [error, null] as const;
}
return [null, user] as const;
```
Retry the unreliable boundary, not the whole business flow.
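The retry loop itself is small: re-run the attempt while it returns an error tuple, doubling the delay between attempts. A hypothetical `miniRetry` sketch of that loop, without the signal plumbing and `shouldRetry` hook the real helper adds:

```typescript
// Minimal sketch of tuple-based retry with exponential backoff.
type Result<T> = readonly [Error, null] | readonly [null, T];

async function miniRetry<T>(
  attempt: (attemptNumber: number) => Promise<Result<T>>,
  maxAttempts: number,
  baseDelayMs: number,
): Promise<Result<T>> {
  let last: Result<T> = [new Error("No attempts were made."), null] as const;
  for (let n = 1; n <= maxAttempts; n++) {
    last = await attempt(n);
    if (last[0] === null) return last; // success ends the loop
    if (n < maxAttempts) {
      // Exponential backoff: base, 2x base, 4x base, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** (n - 1)));
    }
  }
  return last; // the final error tuple
}
```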
Cache a read-through lookup [#cache-a-read-through-lookup]
```ts
import { createCache } from "yieldless/cache";
import { fetchJsonSafe } from "yieldless/fetch";
const users = createCache({
maxSize: 500,
ttlMs: 60_000,
load: (userId, signal) =>
fetchJsonSafe(`https://api.example.com/users/${userId}`, {
signal,
}),
});
const [error, user] = await users.get(userId, { signal });
if (error) {
return [error, null] as const;
}
return [null, user] as const;
```
Successful loads are cached. Failed loads are returned but not stored.
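That "failures are not stored" rule is the whole trick: a value only enters the map after the load resolves without an error tuple. A hypothetical `miniReadThroughCache` sketch of the idea (no TTL or size limit, unlike the real helper):

```typescript
// Minimal sketch of a read-through cache that never stores failures.
type Loaded<V> = readonly [Error, null] | readonly [null, V];

function miniReadThroughCache<V>(load: (key: string) => Promise<Loaded<V>>) {
  const values = new Map<string, V>();
  return async function get(key: string): Promise<Loaded<V>> {
    const hit = values.get(key);
    if (hit !== undefined) {
      return [null, hit] as const; // served from cache
    }
    const [error, value] = await load(key);
    if (error) {
      return [error, null] as const; // surfaced to the caller, never cached
    }
    values.set(key, value);
    return [null, value] as const;
  };
}
```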
Poll until a job is ready [#poll-until-a-job-is-ready]
```ts
import { fetchJsonSafe } from "yieldless/fetch";
import { poll } from "yieldless/timer";
class JobNotReadyError extends Error {
constructor() {
super("Job is not ready yet.");
this.name = "JobNotReadyError";
}
}
const [error, job] = await poll(
async (_attempt, attemptSignal) => {
const [fetchError, current] = await fetchJsonSafe(
`https://api.example.com/jobs/${jobId}`,
{
signal: attemptSignal,
},
);
if (fetchError) {
return [fetchError, null] as const;
}
return current.status === "ready"
? [null, current] as const
: [new JobNotReadyError(), null] as const;
},
{
intervalMs: 1_000,
timeoutMs: 30_000,
signal,
shouldContinue: (error) => error instanceof JobNotReadyError,
},
);
if (error) {
return [error, null] as const;
}
return [null, job] as const;
```
Use `poll()` when "not ready yet" is expected and the caller still needs a firm timeout. `poll()` repeats while the operation returns an error tuple that `shouldContinue` accepts, then stops when it returns `[null, value]`, hits a different error, or runs out of time.
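The loop a poller runs is worth seeing once. This hypothetical `miniPoll` sketches it, not the exact `yieldless/timer` API: repeat while the error is one `shouldContinue` accepts, and stop on success, a different error, or the deadline:

```typescript
// Minimal sketch of a tuple-based poll loop with a firm deadline.
type Polled<T> = readonly [Error, null] | readonly [null, T];

async function miniPoll<T>(
  operation: (attempt: number) => Promise<Polled<T>>,
  intervalMs: number,
  timeoutMs: number,
  shouldContinue: (error: Error) => boolean,
): Promise<Polled<T>> {
  const deadline = Date.now() + timeoutMs;
  for (let attempt = 1; ; attempt++) {
    const result = await operation(attempt);
    const [error] = result;
    if (error === null || !shouldContinue(error)) {
      return result; // ready, or a real failure worth surfacing
    }
    if (Date.now() + intervalMs > deadline) {
      return [new Error("Polling timed out."), null] as const;
    }
    await new Promise((r) => setTimeout(r, intervalMs));
  }
}
```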
Run a command safely [#run-a-command-safely]
```ts
import { runCommandSafe } from "yieldless/node";
const [error, result] = await runCommandSafe("git", ["status", "--short"], {
cwd: workspacePath,
timeoutMs: 10_000,
maxOutputBytes: 512 * 1024,
signal,
});
if (error) {
return [error, null] as const;
}
return [null, result.stdout] as const;
```
Use `runCommandSafe(file, args)` when command failure is something your app should display, log, or recover from.
Need a full version? [#need-a-full-version]
* [Beginner Tutorial](https://binbandit.github.io/yieldless/docs/guides/beginner-tutorial/) builds a complete file-to-API workflow step by step.
* [Read IDs and Fetch Records](https://binbandit.github.io/yieldless/docs/recipes/read-ids-fetch-records/) is a focused recipe for `readFileSafe()`, `forEach()`, and `fetchJsonSafe()`.
* [Examples](https://binbandit.github.io/yieldless/docs/guides/examples/) has larger compositions once these single-feature recipes feel comfortable.