Testing, Reliability & Releases

Test infrastructure, test patterns, build pipelines, scalability, release workflows, branching strategy, and failure mode handling across the Clarity platform.


Test Infrastructure

All integration tests in Clarity derive from the TestBase foundation class provided by Core.Testing. This class provisions an in-memory database and configures the full pipeline context, giving each test complete isolation from external services and other tests.

TestBase Foundation Class

Configure(builder)

Override to register additional services, mock dependencies, or customize the DI container for a specific test class.

GetPipelineContext()

Returns a fully initialized pipeline context with all registered services, ready for executing pipelines in the test environment.

GetDisposableDatabaseContext()

Creates an isolated, in-memory database context that is automatically disposed after the test completes. Ensures zero cross-test contamination.
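
As a sketch of the Configure hook described above: the builder parameter type and the registered service names are assumptions for illustration, not the platform's actual API surface.

```csharp
using Microsoft.Extensions.DependencyInjection; // assumed DI abstractions
using NUnit.Framework;

public class PricingTests : TestBase
{
    // Hypothetical override: swap a real dependency for a stub in this test class.
    // ITaxService and StubTaxService are illustrative names.
    protected override void Configure(IServiceCollection builder)
    {
        base.Configure(builder);
        builder.AddSingleton<ITaxService, StubTaxService>();
    }
}
```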

Creating a Test Project

1. Create C# Project: Add a new C# test project inside the plugin directory. Use the NUnit or xUnit project template.

2. Reference Core.Testing: Add a project reference to Core.Testing to gain access to TestBase and all test utilities.

3. Derive from TestBase: Create test classes that inherit from TestBase. Override Configure() if needed.

4. Write [Test] Methods: Write test methods annotated with [Test]. Use the pipeline context and disposable database context.

Example: Ecommerce Integration Test

public class EcommerceTests : TestBase
{
    [Test]
    public async Task Pricing_WithValidData_ReturnsValidPrice()
    {
        // Arrange
        var context = GetPipelineContext();
        await using var db = context.GetDisposableDatabaseContext();
        db.Set<ProductPrice>().Add(new ProductPrice { /* test data */ });

        // Act
        var price = await CalculatePricesPipeline.ExecuteAsync(someInput, context);

        // Assert
        Assert.Multiple(() =>
        {
            Assert.That(price, Is.Not.Empty);
        });
    }
}

In-memory Database Isolation

Each call to GetDisposableDatabaseContext() provisions a fresh in-memory database. Tests run in parallel without conflicts, and all data is automatically cleaned up when the context is disposed.
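
The isolation guarantee can be sketched as a test in its own right. This is illustrative only: it reuses the ProductPrice entity from the example above and assumes an EF Core-style context where AnyAsync is available.

```csharp
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore; // assumed: provides AnyAsync on the context
using NUnit.Framework;

public class IsolationTests : TestBase
{
    [Test]
    public async Task DisposableContexts_DoNotShareData()
    {
        var context = GetPipelineContext();

        // First context: seed a row, then dispose it.
        await using (var db1 = context.GetDisposableDatabaseContext())
        {
            db1.Set<ProductPrice>().Add(new ProductPrice { /* test data */ });
            await db1.SaveChangesAsync();
        }

        // Second context: a fresh in-memory database, so the seeded row is absent.
        await using var db2 = context.GetDisposableDatabaseContext();
        Assert.That(await db2.Set<ProductPrice>().AnyAsync(), Is.False);
    }
}
```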

Test Coverage Strategy

The testing strategy prioritizes pipeline-level integration tests over isolated unit tests. This reflects the platform's architecture: pipelines orchestrate complex multi-step workflows, and testing them end-to-end provides higher confidence than testing individual components in isolation.

Core Payment Workflows: Authorize, capture, refund, void, partial payment, and declined transaction scenarios are tested against the Mock provider, validating the full payment state machine.

Pipeline Engine: Serial execution order, hook chaining, pre-hook short-circuit behavior, and error propagation via ExecuteSafelyAsync.

Connector Integration: Round-trip sync operations validated against sandbox ERP instances, verifying data integrity through the EkDB mapping layer.


Test Patterns

All tests in the Clarity platform follow the Arrange / Act / Assert pattern and use a strict naming convention to ensure readability and traceability across the test suite.

Arrange / Act / Assert

Arrange

Set up test data, initialize the pipeline context, create disposable database contexts, and seed any required entities.

Act

Execute the pipeline, service method, or operation under test. Capture the result or any thrown exceptions.

Assert

Verify the result matches expectations using NUnit assertions. Use Assert.Multiple for grouped checks.

Test Naming Convention

// Pattern: {CodeUnderTest}_{Scenario}_{ExpectedOutcome}

Naming Examples

Test Method Name | Code Under Test | Scenario | Expected Outcome
PricingPipeline_WithValidData_ReturnsExpectedPrice | PricingPipeline | Valid input data | Returns expected price
InventoryPipeline_WithNoWarehouse_ThrowsException | InventoryPipeline | No warehouse configured | Throws exception
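
Applied in code, the convention reads directly as intent. The following sketch assumes an NUnit project deriving from TestBase; the exception type, the pipeline call shape, and the someInput placeholder are illustrative, not taken from the actual plugin.

```csharp
using System;
using NUnit.Framework;

public class InventoryTests : TestBase
{
    // Name encodes {CodeUnderTest}_{Scenario}_{ExpectedOutcome}.
    [Test]
    public void InventoryPipeline_WithNoWarehouse_ThrowsException()
    {
        // Arrange: a pipeline context with no warehouse configured.
        var context = GetPipelineContext();

        // Act + Assert: the pipeline is expected to throw for the missing warehouse.
        // someInput is a placeholder, as in the ecommerce example earlier.
        Assert.ThrowsAsync<InvalidOperationException>(
            () => InventoryPipeline.ExecuteAsync(someInput, context));
    }
}
```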

Core Tests

The Core framework includes a comprehensive test suite that validates the foundational subsystems. These tests run on every build and serve as the quality gate for the pipeline engine, data model, expression builder, and contract generation.

SerialPipelineTests

Validates serial pipeline execution order, hook chaining behavior, and default hook registration. Ensures that hooks fire in the correct sequence and that pipelines propagate results correctly.

ParallelPipelineTests

Validates parallel pipeline execution, concurrent hook invocation, and result collection from multiple branches. Confirms that parallel results are aggregated correctly.

DataModelTests

Validates entity configuration, model builder mappings, schema separation, and relationship configuration. Ensures all data models are correctly mapped for both PostgreSQL and MSSQL.

ExpressoTests

Validates the Expresso expression builder, testing dynamic expression construction, predicate building, and filter composition for query operations.

ContractTests

Validates API contract generation, ensuring that endpoint contracts are correctly produced, serialized, and consistent across builds.


Plugin Tests

Each plugin can maintain its own test project, organized alongside the plugin source code. Plugin tests use the same TestBase foundation and follow the same Arrange/Act/Assert patterns as core tests.

Common Plugin Test Types

ModuleTokenTests

Validates that module tokens are correctly generated, resolved, and scoped to the appropriate plugin namespace.

ModulesIntegrationTests

Full integration tests that exercise plugin pipelines end-to-end with the in-memory database and pipeline context.

ModulesApiTests

Validates API endpoint behavior, request/response serialization, and contract adherence for plugin-exposed endpoints.

Building Plugin Tests

# Build with SkipRootOverlay to avoid conflicts with the main solution
dotnet build /p:SkipRootOverlay=true

The /p:SkipRootOverlay=true flag prevents the build from pulling in root-level overlay files, allowing plugin tests to be built in isolation without requiring the full solution to be compiled.

Test Workflow

Plugin tests follow the same pattern: derive from TestBase, obtain a pipeline context, execute a pipeline or service method, then assert on the results. The TestBase class handles all bootstrapping, DI registration, and database provisioning automatically.


Build Process

The Clarity build pipeline compiles both backend and frontend, packages them into Docker images, and produces deployment artifacts. The pipeline is orchestrated by Azure DevOps and runs on every commit to tracked branches.

Backend Build

dotnet build /p:SkipRootOverlay=true

Build Pipeline Stages

BuildBackend (dotnet build + test) → BuildFrontend (npm ci + build) → CreateArtifact (Docker + deploy.yml)

Docker Image Tagging

Every successful build produces Docker images tagged with both the commit hash (for traceability) and latest (for convenience). This allows pinning to a specific build or always pulling the most recent.

CI Integration

The build pipeline runs on Azure DevOps. It triggers automatically on push to tracked branches, runs all tests, builds Docker images, and publishes deployment artifacts.


Scalability

The Clarity platform is designed for horizontal scaling via Kubernetes. Both the backend and frontend are stateless, allowing replicas to be added or removed without coordination. Shared state is externalized to Redis and the database.

Scaling Components

Backend (Stateless HTTP API)

Each backend pod is a stateless ASP.NET Core process. Scale horizontally by increasing the replica count in the Kubernetes deployment. No session affinity required.

Frontend (Stateless Remix Server)

The Remix SSR server is stateless. Additional frontend pods handle more concurrent page renders. Static assets can be served from a CDN for further offloading.

Redis (Shared Cache)

Shared cache for multi-replica backends. Runs as a single instance sidecar by default; for production multi-replica deployments, consider an external Redis cluster for high availability.

Database (External)

Database scaling is provider-dependent. Azure SQL supports elastic pools for workload-based scaling. PostgreSQL supports read replicas for read-heavy workloads.
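
Scaling the stateless tiers described above comes down to a replica count in the deployment manifest. The excerpt below is a hypothetical sketch: the deployment name and replica count are illustrative, not taken from the actual manifests.

```yaml
# Hypothetical excerpt: scale the stateless backend by raising the replica count.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 4  # add or remove replicas freely; no session affinity is required
```

The same change applies to the frontend deployment, or a running deployment can be resized with kubectl scale.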

Scaling Architecture

Traffic flows from the Load Balancer (Ingress) to N backend pods and N frontend pods; the backend pods share the Redis cache and connect to the external database (SQL / PostgreSQL).

Resource Allocation

Kubernetes resource limits are configured per container in the deployment manifest. These limits define the maximum CPU and memory each container can consume, preventing any single container from starving others on the same node.

Container | Memory Limit | CPU Limit | Notes
Backend | 1.5Gi | 500m | Stateless, can scale horizontally
Frontend | 0.75Gi | 500m | SSR rendering, can scale horizontally
Redis | 128Mi | 100m | In-pod sidecar; consider external for multi-replica

Multi-Replica Redis Consideration

When scaling to multiple backend replicas, the in-pod Redis sidecar becomes per-pod rather than shared. For consistent caching across replicas, deploy an external Redis instance or cluster and update the connection string in the deployment secrets.
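
As an illustrative sketch of that change, switching to an external Redis might look like the following. The secret name, key name, and host are assumptions, not the platform's actual configuration.

```yaml
# Hypothetical excerpt: point all backend replicas at one external Redis.
apiVersion: v1
kind: Secret
metadata:
  name: backend-secrets
stringData:
  RedisConnection: "redis.internal.example.com:6379"  # assumed key name and host
```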


Release Process

Releases follow a commit-based image tagging strategy. Every merged commit produces a tagged Docker image that can be deployed via the Kubernetes deployment manifest. The deployment strategy is Recreate, replacing all pods at once.

Image Tagging

# Docker images are tagged with the commit hash
image-registry.../backend:{{commit}}
image-registry.../frontend:{{commit}}

# And also tagged as "latest" for convenience
image-registry.../backend:latest
image-registry.../frontend:latest

Deployment Manifest

# deployment.yml (excerpt)
apiVersion: apps/v1
kind: Deployment
spec:
  strategy:
    type: Recreate  # All pods replaced at once
  template:
    spec:
      containers:
        - name: backend
          image: image-registry.../backend:{{commit}}

Release Pipeline

Commit → PR + Review → Merge to develop → Build Pipeline → Docker Push (hash + latest) → Artifact (deploy.yml) → Deploy to VM

Bug Fixes

Bug fixes follow the same pipeline. Hotfix branches are created from the affected version, fixes are applied, and the corrected image flows through the identical build and deploy pipeline.

ERP Version Upgrades

ERP version upgrades are handled per-connector. Each connector plugin may need code changes or dependency updates to support a new ERP API version. These changes flow through the standard release pipeline.


Branching Strategy

The Clarity platform uses a multi-tier branching strategy coordinated across the main repository and its submodules. Branches follow a consistent naming convention and flow through a defined promotion path from feature development to production.

Branch Naming Conventions

Scope | Pattern | Example | Purpose
Project | projects/{xyz}/qa | projects/acme/qa | QA integration branch for a project
Project | projects/{xyz}/hotfix/... | projects/acme/hotfix/fix-pricing | Urgent fix for a specific project
Project | projects/{xyz}/feature/... | projects/acme/feature/new-checkout | Feature work scoped to a project
Plugin / Core | feature/... | feature/add-surcharge-pipeline | New feature development
Plugin / Core | hotfix/... | hotfix/payment-timeout | Critical fix for a plugin or core

Promotion Flow

feature/* (Development) → develop (Integration) → QA (Testing) → main (Production)

Submodule Branch Coordination

When working on features that span Core and Plugins, the submodule references must be updated to point to the correct branch in each submodule. Submodule pointers are committed as part of the parent repository.

PR Workflow

Pull requests are created within each repository (Core, Plugin, or Client). Cross-repo PRs are linked via references in the PR description. All PRs require review before merging to develop.


Failure Modes

The payment system defines comprehensive status models for tracking payment outcomes through their lifecycle. These statuses enable the platform to handle partial failures, retries, and error recovery at both the aggregate and individual source level.

Payment Status Aggregation

PENDING: Payment initiated, awaiting processing
COMPLETED: All sources captured successfully
PARTIALLY_COMPLETE: Some sources captured, others pending or failed
ERROR: Processing error occurred

Payment Method Source Status Lifecycle

Status | Description
Internal Pending | Created, not yet sent to provider
Pre-processing | Validation in progress
External Pending | Sent to provider, awaiting response
Authorized | Provider approved, funds held
Partial Authorized | Partially approved
Captured | Funds transferred
Refunded | Payment reversed
Voided | Authorization cancelled
Declined | Provider rejected
Internal Error | Platform error
External Error | Provider error

Surcharge Status Lifecycle

Pending, Completed, PartiallyCompleted, Abandoned, Assigned, Transient, Refunded, Declined, Cancelled, Authorized

Failure Handling Strategies

Decline Handling

When a payment is declined, the UpdateSurchargeForDeclinedPaymentPipeline is triggered to update surcharge records and notify the appropriate systems.

Partial Payments

When only some payment sources succeed, the aggregate status transitions to PartiallyComplete, allowing the system to track which sources still need resolution.
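
The aggregation rule can be sketched as follows. The enum members mirror the status tables above, but the helper itself is illustrative, not the platform's actual implementation.

```csharp
using System.Collections.Generic;
using System.Linq;

public enum SourceStatus { InternalPending, Captured, Declined, InternalError /* ... */ }
public enum PaymentStatus { Pending, Completed, PartiallyComplete, Error }

public static class PaymentStatusAggregator
{
    // Illustrative rule: any platform error wins, then all / some / no
    // sources captured map to Completed / PartiallyComplete / Pending.
    public static PaymentStatus Aggregate(IReadOnlyCollection<SourceStatus> sources)
    {
        if (sources.Any(s => s == SourceStatus.InternalError))
            return PaymentStatus.Error;

        var captured = sources.Count(s => s == SourceStatus.Captured);
        if (captured > 0 && captured == sources.Count)
            return PaymentStatus.Completed;
        if (captured > 0)
            return PaymentStatus.PartiallyComplete;
        return PaymentStatus.Pending;
    }
}
```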

Refund Processing

Refunds are processed via the RefundPaymentPipeline, which reverses the captured funds and updates all related status records.

Connector / ERP Errors

Connector and ERP errors are logged with full context and counted for monitoring. Transient errors may be retried automatically depending on the connector configuration.
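
A transient-retry wrapper might look like the following sketch. The backoff values and the isTransient predicate are assumptions, since the actual retry policy lives in connector configuration.

```csharp
using System;
using System.Threading.Tasks;

public static class ConnectorRetry
{
    // Sketch: retry an operation on transient failures with exponential backoff.
    // Callers supply isTransient to classify provider-specific errors.
    public static async Task<T> ExecuteWithRetryAsync<T>(
        Func<Task<T>> operation,
        Func<Exception, bool> isTransient,
        int maxAttempts = 3)
    {
        for (var attempt = 1; ; attempt++)
        {
            try
            {
                return await operation();
            }
            catch (Exception ex) when (isTransient(ex) && attempt < maxAttempts)
            {
                // Back off 1s, 2s, 4s, ... between attempts.
                await Task.Delay(TimeSpan.FromSeconds(Math.Pow(2, attempt - 1)));
            }
        }
    }
}
```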

Payment Status State Machine

Pending → Authorized → Captured → Completed (happy path)

Failure transitions: Pending → Declined; Authorized → Voided; Captured → Refunded

Frequently Asked Questions

What is the test coverage strategy?

The platform prioritizes pipeline-level integration tests that validate complete workflows end-to-end. This includes payment lifecycle tests (authorize through capture, refund, void), connector sync round-trips against sandbox ERPs, and pipeline engine tests validating hook execution order and error handling. Unit tests cover utility functions and data transformations. The Mock payment provider enables testing payment flows without external dependencies. See Test Infrastructure and Test Patterns.

How are releases managed?

Releases follow a structured process: feature branches merge to develop, undergo CI validation, then merge to a release branch for staging deployment and QA. Production deployments use Docker containers pushed to the container registry, with Kubernetes managing rollout. The build pipeline runs tests, generates container images, and tags releases. Hotfixes follow an expedited path from a hotfix branch directly to production. See Release Process.

What branching model does the platform use?

The platform uses a Git-flow model: feature branches for new work, develop as the integration branch, release branches for staging, and main/master reflecting production. Plugin submodules have their own Git repositories and branching, enabling independent development cycles. This structure supports parallel work across core, plugins, and client customizations without merge conflicts. See Branching Strategy.

What availability does the platform target, and how is it maintained?

The platform targets 99.98% uptime. Availability is monitored through Kubernetes health checks (liveness and readiness probes), application-level health endpoints, and external monitoring. Kubernetes automatically restarts unhealthy pods, and the stateless architecture ensures that pod restarts don't lose in-flight data. See Deployment & Operations.

How does the platform scale?

The platform scales horizontally through Kubernetes pod autoscaling. The stateless application design means additional instances serve requests immediately without session migration. Redis provides distributed caching, reducing database load. The pipeline architecture allows compute-intensive operations to be distributed. Resource limits are configurable per deployment. See Scalability and Resource Allocation.

What failure modes have been analyzed?

The platform has analyzed failure modes across all layers: database connectivity loss (automatic retry with circuit breaker), payment provider timeouts (configurable timeouts with graceful error states), connector sync failures (retry queues with dead-letter handling), and infrastructure failures (Kubernetes self-healing with pod restart policies). Each failure mode has a documented mitigation strategy and monitoring alert. See Failure Modes.

How is multi-tenant isolation verified?

Integration tests provision two or more tenant databases and execute operations within each tenant's context. The test suite verifies that queries, mutations, and pipeline operations in Tenant A's context never return or modify Tenant B's data. This includes testing EF Core global query filters, scoped DbContext resolution, JWT tenant claim validation, and background task tenant context isolation. Cross-tenant data leakage tests are a mandatory part of the test suite for any multi-tenant deployment. See Defense-in-Depth Isolation.