The core extensibility model in Phoenix: serial and parallel pipelines, hooks, pre hooks, pipeline context, and caching.
Core Concept
Pipelines are the single most important architectural concept in Phoenix. They are the primary unit of work, the primary extension point, and the mechanism through which virtually all application logic flows. Understanding pipelines is prerequisite to understanding everything else.
Pipelines provide a clean, composable model for extending and replacing behaviors without conflicts.
Every pipeline defines strict input and output types. A pipeline takes a well-defined input, processes it through a default implementation and any registered hooks, and produces a well-defined output. This type safety ensures that all participants in a pipeline chain are working with compatible data.
Almost all controllers and API endpoints in Phoenix delegate their work to a specific pipeline. When a request arrives, the controller extracts the relevant input, calls the appropriate pipeline, and returns the result. This means that to change the behavior of any endpoint, you simply hook into its pipeline rather than modifying controller code.
Pipelines can call other pipelines, and this pattern is actively encouraged. By composing pipelines, you build extensible chains of behavior where each step can be independently hooked, replaced, or augmented by any plugin in the system.
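By way of illustration, here is a sketch of one pipeline delegating to another. The pipeline and model names (GetOrderTotalPipeline, CalculateTaxPipeline, Order, OrderTotal) are hypothetical, invented for this example and not part of the framework:

```csharp
// Hypothetical composition example: an order pipeline that delegates
// tax calculation to a second pipeline.
public class GetOrderTotalPipeline
    : SerialPipeline<GetOrderTotalPipeline, Order, OrderTotal>
{
    public override async ValueTask<OrderTotal> ExecuteDefaultAsync(
        Order input, IPipelineContext context, CancellationToken token = default)
    {
        // Delegate to the inner pipeline. Any plugin that hooks
        // CalculateTaxPipeline automatically affects this result too.
        var tax = await CalculateTaxPipeline.ExecuteAsync(input, context);
        return new OrderTotal { Subtotal = input.Subtotal, Tax = tax };
    }
}
```

Because the inner pipeline is invoked through its static ExecuteAsync, every hook registered on it by any plugin participates in the outer pipeline's result.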
Each hook receives the original input and the previous hook's result, producing a new result passed to the next hook.
Serial pipelines are the default and most common pipeline type. In a serial pipeline, the default logic runs first, and then each registered hook runs one after another in sequence. Each hook receives the original input along with the result from the previous step, allowing it to transform, augment, or completely replace the result before passing it along.
To create a serial pipeline, your class derives from SerialPipeline<TSelf, TInput, TOutput>. The three type parameters are:
| Parameter | Description |
|---|---|
| TSelf | The pipeline class itself (enables the static generic pattern) |
| TInput | The type of data the pipeline accepts as input |
| TOutput | The type of data the pipeline returns as its result |
Override ExecuteDefaultAsync to provide the pipeline's default behavior. This is the logic that runs when no default hook has replaced it.
public class MyCustomPipeline : SerialPipeline<MyCustomPipeline, int, SomeModel>
{
public override ValueTask<SomeModel> ExecuteDefaultAsync(
int input, IPipelineContext context, CancellationToken token = default)
{
// Your default logic goes here.
// This runs first, before any hooks.
var result = new SomeModel { Id = input, Name = "Default" };
return new ValueTask<SomeModel>(result);
}
}
Pipelines expose static methods for execution. You do not need to instantiate the pipeline class yourself — the framework handles resolution and hook ordering.
// Standard execution — throws on failure
var result = await MyCustomPipeline.ExecuteAsync(1, context);
// Safe execution — returns success flag instead of throwing
var (IsSuccess, Result) = await MyCustomPipeline.ExecuteSafelyAsync(1, context);
When to use ExecuteSafelyAsync
Use ExecuteSafelyAsync when a pipeline failure is an expected scenario and you want to handle it gracefully without propagating an exception. The returned tuple gives you an IsSuccess boolean and the Result value, making conditional handling straightforward.
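As a minimal sketch of handling that tuple, using the MyCustomPipeline example from above:

```csharp
var (IsSuccess, Result) = await MyCustomPipeline.ExecuteSafelyAsync(1, context);
if (IsSuccess)
{
    // Use the pipeline result as normal.
    Console.WriteLine(Result.Name);
}
else
{
    // Failure was an expected scenario; fall back without an exception.
    Result = new SomeModel { Id = -1, Name = "Fallback" };
}
```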
Hooks are the most common way to customize pipeline behavior. A hook is a class that runs after the pipeline's default logic (or after the previous hook in the chain). Each hook receives two key pieces of data: the original pipeline input, and the result produced by the previous step (either the default logic or the prior hook).
This design means hooks form a chain. Each hook can inspect the previous result, modify it, enrich it, or replace it entirely before passing it along. Multiple plugins can each register their own hook on the same pipeline, and they will all execute in order without conflicting.
public class MyCustomHook : MyCustomPipeline.Hook
{
public override ValueTask<SomeModel> ExecuteAsync(
int input,
SomeModel previousResult,
IPipelineContext context,
CancellationToken token = default)
{
// Modify or replace the previous result
previousResult.Name = "Modified by MyCustomHook";
return new ValueTask<SomeModel>(previousResult);
}
}
The declarative approach (deriving from Pipeline.Hook) is the recommended pattern. Because hooks are standalone classes, they are easy to find by searching the codebase, easy to unit test in isolation, and easy to understand when reading the code.
Hook ordering follows plugin registration order. If Plugin A registers a hook before Plugin B, Plugin A's hook will run first and Plugin B's hook will receive Plugin A's result as its previousResult.
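To make the ordering concrete, here is a sketch with two hypothetical hooks (the hook class names and their logic are invented for this example):

```csharp
// Registered by Plugin A (loaded first): runs immediately after the default logic.
public class AddTimestampHook : MyCustomPipeline.Hook
{
    public override ValueTask<SomeModel> ExecuteAsync(
        int input, SomeModel previousResult,
        IPipelineContext context, CancellationToken token = default)
    {
        previousResult.Name += " @ " + DateTime.UtcNow.ToString("u");
        return new ValueTask<SomeModel>(previousResult);
    }
}

// Registered by Plugin B (loaded second): receives Plugin A's result,
// so the timestamp appended above is already present in previousResult.Name.
public class UppercaseNameHook : MyCustomPipeline.Hook
{
    public override ValueTask<SomeModel> ExecuteAsync(
        int input, SomeModel previousResult,
        IPipelineContext context, CancellationToken token = default)
    {
        previousResult.Name = previousResult.Name.ToUpperInvariant();
        return new ValueTask<SomeModel>(previousResult);
    }
}
```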
Default Hooks
A Default Hook completely replaces the pipeline's default implementation. Instead of augmenting or modifying the result after the default logic runs, a Default Hook becomes the new default logic. The original ExecuteDefaultAsync is bypassed entirely.
This is a powerful mechanism for scenarios where the built-in behavior is not suitable and you need to provide an entirely different implementation. For example, if the core platform calculates tax using a simple flat-rate model, a tax plugin could register a Default Hook to replace that with integration to a third-party tax service.
public class MyReplacementLogic : MyCustomPipeline.DefaultHook
{
public override ValueTask<SomeModel> ExecuteAsync(
int input,
IPipelineContext context,
CancellationToken token = default)
{
// This REPLACES the original ExecuteDefaultAsync entirely.
// The pipeline's built-in logic will NOT run.
var result = new SomeModel
{
Id = input,
Name = "Completely replaced by plugin"
};
return new ValueTask<SomeModel>(result);
}
}
When to use Default Hooks vs Regular Hooks
Use a Default Hook when the built-in behavior is fundamentally unsuitable and must be replaced wholesale. Use a regular Hook when you want to inspect, modify, or enrich the result that the default logic (or an earlier hook) produced.
Note: Only one Default Hook can be active per pipeline. If multiple plugins register Default Hooks on the same pipeline, the last one registered wins.
Instead of creating a standalone hook class, you can register hooks imperatively inside your plugin's OnStartup method. The framework provides two methods for this:
| Method | Purpose |
|---|---|
| AppendHook() | Adds a hook to the end of the current hook chain (runs after all previously registered hooks) |
| ReplaceDefaultHook() | Replaces the default implementation (equivalent to a DefaultHook class) |
public class MyPlugin : PhoenixPlugin
{
public override void OnStartup(IPluginContext context)
{
// Append a hook using a lambda
MyCustomPipeline.AppendHook(
async (input, previousResult, ctx, token) =>
{
previousResult.Name += " (enhanced by plugin)";
return previousResult;
});
// Replace the default implementation
MyCustomPipeline.ReplaceDefaultHook(
async (input, ctx, token) =>
{
return new SomeModel
{
Id = input,
Name = "Fully replaced default"
};
});
}
}
Class-Based Hooks Are Preferred
While imperative hooks work perfectly well, class-based (declarative) hooks are preferred in most scenarios. The reasons are practical: standalone hook classes are easy to find by searching the codebase, easy to unit test in isolation, and easy to understand when reading the code.
Use imperative hooks for quick prototyping or truly trivial one-liner modifications.
Pre Hooks run before the pipeline's default logic executes. They are a gate that can inspect the input and decide one of three things: halt the pipeline and return a custom result, proceed with a modified input, or proceed with the input unchanged.
Pre Hooks are ideal for validation, authorization checks, input sanitization, or short-circuiting when a cached or pre-computed result is available.
public class ValidateInputPreHook : MyCustomPipeline.PreHook
{
public override ValueTask<PreHookResult<int, SomeModel>> ExecuteAsync(
int input,
IPipelineContext context,
CancellationToken token = default)
{
// Option 1: Halt — stop pipeline, return custom result
if (input < 0)
{
return Halt(new SomeModel { Id = -1, Name = "Invalid input" });
}
// Option 2: ProceedWithInput — continue with modified input
if (input == 0)
{
return ProceedWithInput(1); // default to 1 instead of 0
}
// Option 3: Proceed — continue as normal, no changes
return Proceed();
}
}
| Pre Hook Decision | Effect |
|---|---|
| Halt | Pipeline is skipped entirely; the custom result is returned to the caller |
| Proceed | Pipeline executes with the input unchanged |
| ProceedWithInput | Pipeline executes with the modified input |
Input Alterations are a specialized type of Pre Hook that focus specifically on transforming the pipeline's input before it reaches the default logic. Unlike a full Pre Hook, an Input Alteration cannot halt the pipeline — it can only modify the input and let the pipeline continue.
This makes Input Alterations ideal for scenarios like pre-processing, data enrichment, or performing database lookups to supplement the input data before the main pipeline logic runs.
public class EnrichOrderInput : DoSomethingPipeline.InputAlteration
{
public override ValueTask<OrderInput> ExecuteAsync(
OrderInput input,
IPipelineContext context,
CancellationToken token = default)
{
// Perform a lookup or add supplemental data
// before the pipeline's default logic runs
input.TaxRate = GetCurrentTaxRate(input.Region);
input.DiscountCode = NormalizeDiscountCode(input.DiscountCode);
return new ValueTask<OrderInput>(input);
}
}
Input Alteration vs Pre Hook
An Input Alteration is simpler than a Pre Hook because it can only modify the input. If you need the ability to halt the pipeline or return a custom result, use a Pre Hook instead. Input Alterations are best when you always want the pipeline to run — you just want to ensure the input is complete and correct first.
In a Parallel Pipeline, all hooks run simultaneously rather than sequentially. Each hook operates independently, and the pipeline returns a collection of all results rather than a single chained result.
The classic use case is shipping rate calculation: when a customer views shipping options, you need to query UPS, FedEx, USPS, and possibly other carriers. There is no reason to wait for UPS to respond before asking FedEx — all queries can run in parallel, and the results are collected into a list of available shipping rates.
public class GetShippingRatesPipeline
: ParallelPipeline<GetShippingRatesPipeline, ShippingRequest, ShippingRate>
{
public override ValueTask<ShippingRate> ExecuteDefaultAsync(
ShippingRequest input, IPipelineContext context,
CancellationToken token = default)
{
// Default/fallback rate (e.g., flat rate shipping)
return new ValueTask<ShippingRate>(
new ShippingRate { Carrier = "Standard", Cost = 9.99m });
}
}
// Execution returns a collection of all results
IEnumerable<ShippingRate> rates = await
GetShippingRatesPipeline.ExecuteAsync(request, context);
When in doubt, use Serial
If you are unsure whether to use a Serial or Parallel pipeline, default to Serial. Serial pipelines are the standard pattern and work correctly in the vast majority of cases. Only use Parallel pipelines when you have a clear use case where hooks are truly independent and would benefit from concurrent execution.
IPipelineContext is the gateway to all state and resources available during pipeline execution. Every pipeline method receives a context parameter, giving hooks and default logic access to authentication state, request information, database connections, caching infrastructure, and dependency injection services.
The context is designed to be a single, consistent entry point so that pipeline code never needs to reach outside the pipeline framework for common resources.
The context fans out into six areas: auth state (user ID, customer ID, roles and permissions), request state (headers, cookies, endpoint path), database (EF Core DbContext for queries and writes), cache (distributed cache and scope cache), scheduler (background jobs and deferred work), and DI services (any registered service resolved via dependency injection).
Inside a pipeline or hook, the context is always available as a method parameter. Outside of pipelines (for example, in a controller that needs to create a context to call a pipeline), you can obtain an IPipelineContext through dependency injection or the [FromServices] attribute.
// Option 1: Dependency Injection (constructor)
public class MyController : Controller
{
private readonly IPipelineContext _context;
public MyController(IPipelineContext context)
{
_context = context;
}
}
// Option 2: [FromServices] attribute
public async Task<IActionResult> GetItem(
int id,
[FromServices] IPipelineContext context)
{
var result = await GetItemPipeline.ExecuteAsync(id, context);
return Ok(result);
}
// Inside a pipeline/hook, context is always a parameter
public override ValueTask<SomeModel> ExecuteDefaultAsync(
int input, IPipelineContext context, CancellationToken token)
{
// Access current user
var userId = context.Auth.CurrentUserId;
// Access request headers
var authHeader = context.Request.Headers["Authorization"];
// Access database
var db = context.Database;
// Access a DI service
var myService = context.GetService<IMyService>();
// ... use these resources to produce the result
return new ValueTask<SomeModel>(new SomeModel { Id = input });
}
| Property / Method | Provides Access To |
|---|---|
| context.Auth | Current User ID, Customer ID, role and permission checks |
| context.Request | HTTP headers, cookies, endpoint path, query parameters |
| context.Database | Entity Framework Core DbContext for queries and writes |
| context.Cache | IDistributedCache for manual cache operations |
| context.Scheduler | Background job scheduler for deferred/recurring tasks |
| context.GetService<T>() | Any service registered in the DI container |
Phoenix provides multiple caching strategies for pipelines, ranging from simple attribute-based distributed caching to manual cache control and request-scoped memory caching.
[DistributedCached]
The simplest way to cache a pipeline's output is to apply the [DistributedCached] attribute to the pipeline class. By default, this caches the result for 10 minutes. The cache key is automatically derived from the pipeline type and input value.
// Default: 10-minute cache
[DistributedCached]
public class GetProductPipeline
: SerialPipeline<GetProductPipeline, int, Product>
{
// ...
}
// Custom duration: 30-minute cache
[DistributedCached(CacheDuration = 30)]
public class GetCategoryTreePipeline
: SerialPipeline<GetCategoryTreePipeline, string, CategoryTree>
{
// ...
}
// Per-user cache: separate cache entry for each user
[DistributedCached(VaryByUser = true)]
public class GetUserDashboardPipeline
: SerialPipeline<GetUserDashboardPipeline, int, Dashboard>
{
// ...
}
| Property | Default | Description |
|---|---|---|
| CacheDuration | 10 (minutes) | How long the cached result remains valid |
| VaryByUser | false | If true, each user gets their own cache entry (keyed by User ID) |
For more control over cache behavior, you can access the IDistributedCache directly through the pipeline context. This is useful when you need to cache intermediate values, use custom keys, or implement conditional caching logic.
The context also provides a convenient Promiser-style method, ResolveAsync, which checks the cache for a given key and only executes the factory function if the key is not found.
// Direct IDistributedCache access
var cache = context.Cache;
var cachedValue = await cache.GetStringAsync("my-custom-key");
// Promiser-style: resolve from cache or compute
var product = await context.ResolveAsync(
"Product_" + productId,
async () =>
{
// This lambda only runs if the cache key is missing
return await FetchProductFromDatabase(productId);
});
[ScopeCached]
[ScopeCached] is a memory-only cache that lives for the duration of the current request or task scope. Unlike distributed caching, the data is never serialized or sent to an external cache store; it exists purely in the application's memory and is discarded when the scope ends.
This is an advanced and relatively rare pattern, useful when a pipeline is called multiple times within a single request and you want to avoid redundant computation without the overhead of distributed cache serialization.
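Assuming the attribute is applied the same way as [DistributedCached] (on the pipeline class), a usage sketch might look like this; the pipeline and model names are hypothetical:

```csharp
// Hypothetical example: cached in memory for the current request scope only.
// Repeated calls within the same request reuse the first result; nothing
// is serialized or written to the distributed cache store.
[ScopeCached]
public class GetCurrentUserSettingsPipeline
    : SerialPipeline<GetCurrentUserSettingsPipeline, int, UserSettings>
{
    // ...
}
```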
Choosing the Right Cache Strategy
As a rule of thumb: use [DistributedCached] for results that can be shared across requests, use manual caching through context.Cache or ResolveAsync when you need custom keys or conditional logic, and reserve [ScopeCached] for avoiding redundant computation within a single request.
When you need to find existing pipelines in the codebase, there are a few reliable strategies:
Search by Class Name
Search the codebase for classes or file names containing Pipeline. By convention, every pipeline class should include "Pipeline" in its name (e.g., GetProductPipeline, CalculateTaxPipeline).
Trace from the Endpoint
If you know which API endpoint or controller you want to customize, open its handler and look for the pipeline it calls. Controllers almost always delegate to a pipeline via PipelineName.ExecuteAsync(...).
Search for Base Classes
Search for : SerialPipeline or : ParallelPipeline to find all pipeline definitions in the solution. This is the most comprehensive approach.
Naming Convention
Every pipeline class in the Phoenix codebase should contain "Pipeline" in its name. This is a project-wide convention that ensures pipelines are always discoverable through simple text search. When creating new pipelines, always follow this convention.
Pipelines execute steps serially by default using the SerialPipeline pattern. Each step receives the output of the previous step, creating a predictable data flow. If any step throws an exception, the pipeline halts and the error propagates to the caller; with ExecuteSafelyAsync, the error is instead wrapped in a success/failure envelope. This prevents partial execution from leaving data in an inconsistent state. See Serial Pipelines.
Client-specific plugins can customize any pipeline. The hook system allows them to register pre-hooks and post-hooks on any pipeline: pre-hooks execute before the main pipeline logic and can modify inputs or short-circuit execution entirely, while post-hooks execute after and can modify outputs. Client customizations register hooks during plugin startup, and they run in priority order alongside hooks from other plugins. See Hooks and Plugin System.
Hooks are registered per-pipeline and execute in priority order. Pre-hooks can alter pipeline inputs (InputAlterations), add validation logic, or short-circuit execution by returning early. Post-hooks can transform outputs, trigger side effects like notifications, or log audit events. Default hooks provide base behavior that plugins can override. Imperative hooks allow one-off registrations for specific execution contexts. See Pre-Hooks and Default Hooks.
The order-to-cash flow involves multiple coordinated pipelines: the ERP connector syncs invoice data via its sync pipeline, PayInvoicePipeline orchestrates payment processing (validate invoice → calculate surcharge → create payment → authorize → capture), and the connector's reconciliation pipeline writes the payment back to the ERP to close the accounts receivable loop. Each pipeline supports hooks for client-specific customization. See Order-to-Cash Flow.