Repository layout, plugin architecture, deployment model, hosting stack, routing, configuration, and database support.
The Clarity Payment Hub repository follows a modular, three-tier architecture. Each directory serves a distinct role, with Git submodules connecting independent codebases into a single deployable solution.
Client
Entry point. Registers Core and all Plugins. Holds migrations and client-specific overrides.
Core
Foundation framework. Provides pipelines, IPlugin, auth, settings, CRUD, workflows, scheduling, and notifications.
Plugins
Add functionality. Each plugin is an independent project that implements IPlugin and registers its own services, models, and routes.
The dependency graph is strictly one-directional: Client depends on Core and Plugins; Plugins depend on Core; Core depends on nothing above it.
Plugins are registered in Client/Startup.cs using the fluent AddPhoenixPlugin<T>() extension method. Each call wires the plugin into the application builder, and the final .Build() call materializes the pipeline.
```csharp
// Client/Startup.cs (excerpt)
var app = builder
    .AddPhoenixPlugin<CMSPlugin>()
    .AddPhoenixPlugin<SitePlugin>()
    .AddPhoenixPlugin<CurrenciesPlugin>()
    .AddPhoenixPlugin<SalesCollectionsPlugin>()
    .AddPhoenixPlugin<PaymentsPlugin>()
    .AddPhoenixPlugin<MockPaymentsProviderPlugin>()
    // .AddPhoenixPlugin<NuveiPaymentsProviderPlugin>()
    .AddPhoenixPlugin<InvoicingPlugin>()
    .AddPhoenixPlugin<ProductsPlugin>()
    .AddPhoenixPlugin<ConnectCorePlugin>()
    .AddPhoenixPlugin<NetSuitePlugin>()
    .AddPhoenixPlugin<SysproPlugin>()
    .AddPhoenixPlugin<EpicorEaglePlugin>()
    .AddPhoenixPlugin<EpicorEclipsePlugin>()
    .AddPhoenixPlugin<InforSytelinePlugin>()
    .AddPhoenixPlugin<SapB1Plugin>()
    .AddPhoenixPlugin<ClientPlugin>()
    .Build();
```
```csharp
// Core/Phoenix.Core/Plugins/IPlugin.cs
public interface IPlugin
{
    string Name { get; }
    string Schema { get; }

    void RegisterServices(WebApplicationBuilder builder);
    void OnStartup(WebApplication app);
    void OnModelCreating(ModelBuilder builder);

    // Default interface implementation (C# 8+); plugins override as needed
    virtual Type[] GetAdditionalDatabaseTypes() => Array.Empty<Type>();
}
```
RegisterServices
DI registration. Each plugin adds its services, repositories, and handlers to the container.
OnModelCreating
EF Core model configuration. Each plugin defines its entity mappings and schema.
Build
Application is materialized. The service provider and middleware pipeline are finalized.
OnStartup
Post-build initialization. Plugins run seed data, start background tasks, and configure middleware.
Plugins are loaded in the exact order declared in Startup.cs. The ClientPlugin is registered last so that it can override any previously registered service. Each plugin receives its own EF Core schema, keeping database objects cleanly separated.
Backend (.NET)
Client + Core + Plugins compiled into a single process. Runs as ASP.NET Core on Kestrel.
Frontend (Node/Remix)
RemixUI application. Serves the UI and proxies API requests to the backend container.
Database (SQL)
External database. Connection string is injected per environment via Kubernetes secrets.
Optional (Redis, Elasticsearch)
Redis runs as a sidecar container; Elasticsearch is available for search capabilities. Both are provisioned locally via docker-compose.
[Diagram: ingress traffic flows through the Gateway via an HTTPRoute into the Kubernetes Pod, which runs the Backend (.NET, :8080), the Frontend (Node, :3000), and a Redis sidecar; the pod connects to an External DB and NFS Storage.]
```yaml
# Kubernetes/deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{acronym}}-{{env}}
  namespace: {{acronym}}-{{env}}
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: redis
          image: "redis"
        - name: backend
          image: image-registry.hq.clarityinternal.com/backend:{{commit}}
          env:
            - name: "ConnectionStrings__Clarity"
              valueFrom:
                secretKeyRef:
                  name: db
                  key: connectionString
            - name: "ASPNETCORE_ENVIRONMENT"
              value: "Production"
        - name: frontend
          image: image-registry.hq.clarityinternal.com/frontend:{{commit}}
          ports:
            - containerPort: 3000
          env:
            - name: "API_URL"
              value: "http://localhost:8080"
```
One namespace per project/environment (e.g. {{acronym}}-{{env}}). Backend and frontend containers share a pod. Redis runs as a sidecar. NFS PVC is mounted for file storage.
Each client deployment uses a dedicated database instance, providing complete data isolation at the database level. There is no shared database or row-level tenancy — each tenant's data is physically separated. Connection strings, credentials, and configuration are managed per-tenant. The Client project layer holds all customer-specific overrides, pipeline hooks, and configuration, ensuring that core and plugin updates can be applied independently without affecting client customizations.
The CI/CD pipeline (Azure DevOps or similar) builds, containerizes, and packages the application through three sequential stages.
BuildBackend (backend.dockerfile: Docker build + push) → BuildFrontend (frontend.dockerfile: Docker build + push) → CreateDeploymentArtifact (deployment.yml: token replacement)
BuildBackend
Builds the .NET solution using Kubernetes/backend.dockerfile. Pushes the image to the internal registry tagged with the commit hash and latest.
BuildFrontend
Builds the Remix application using Kubernetes/frontend.dockerfile. Same tagging strategy as backend.
CreateDeploymentArtifact
Performs token replacement on deployment.yml — substituting {{acronym}}, {{env}}, {{commit}}, {{subdomain}}, and {{domain}} — then publishes the artifact for deployment.
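The token-replacement step can be illustrated with a simple sed pass. This is a sketch only: the actual pipeline task and the variable values shown here are hypothetical, and the sample input stands in for Kubernetes/deployment.yml.

```shell
# Sketch of token replacement (hypothetical values; the real pipeline
# supplies these from build variables and reads Kubernetes/deployment.yml).
ACRONYM="abc"; ENV="prd"; COMMIT="a1b2c3d"

printf 'name: {{acronym}}-{{env}}\nimage: backend:{{commit}}\n' |
  sed -e "s/{{acronym}}/$ACRONYM/g" \
      -e "s/{{env}}/$ENV/g" \
      -e "s/{{commit}}/$COMMIT/g"
# → name: abc-prd
#   image: backend:a1b2c3d
```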
```shell
# Build and push backend image
docker build -t image-registry.hq.clarityinternal.com/backend:$(Build.SourceVersion) \
  -f Kubernetes/backend.dockerfile .
docker push image-registry.hq.clarityinternal.com/backend:$(Build.SourceVersion)

# Build and push frontend image
docker build -t image-registry.hq.clarityinternal.com/frontend:$(Build.SourceVersion) \
  -f Kubernetes/frontend.dockerfile .
docker push image-registry.hq.clarityinternal.com/frontend:$(Build.SourceVersion)

# Both images are also tagged with 'latest'
```
The platform is deployed on Kubernetes and is cloud-agnostic. No cloud-specific SDK or provider is hardcoded into the codebase.
Orchestration
Kubernetes (Deployment, Service, HTTPRoute, Namespace, PVC). Each project/environment gets its own namespace.
Container Registry
image-registry.hq.clarityinternal.com — images tagged by commit hash and latest.
Storage
NFS with storageClassName: nfs-client, ReadWriteMany access mode, mounted at /usr/share/phx for backend file operations.
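A claim matching this description might look like the following sketch. The storage class, access mode, and mount path are taken from the description above; the claim name and requested size are assumptions.

```yaml
# Sketch of an NFS-backed PVC (name and size are assumptions)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{acronym}}-{{env}}-storage
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
```

The backend container then mounts the claimed volume at /usr/share/phx via a volumeMounts entry.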
Database
External database. Connection string injected from Kubernetes secret db.connectionString. Provider is determined by the connection string format.
Gateway
Kubernetes gateway.networking.k8s.io/v1 HTTPRoute with hostname template {{subdomain}}.{{domain}}.com.
The Kubernetes HTTPRoute defines how incoming traffic is distributed between the backend and frontend services. URL rewriting is applied to API routes so the backend receives clean paths.
| Path Match | Service | Filters |
|---|---|---|
| /auth/microsoft/register/callback | nest-prd-backend | None |
| /auth/microsoft-callback | {{acronym}}-{{env}}-backend | None |
| /api/* | {{acronym}}-{{env}}-backend | URL rewrite (replacePrefixMatch: /) |
| / (default) | {{acronym}}-{{env}}-frontend | None |
The /api prefix is stripped before forwarding to the backend service, so backend controllers do not need to include /api in their route templates. Auth callback routes are handled directly by the backend without rewriting.
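Under the Gateway API (gateway.networking.k8s.io/v1), the /api and default rules might be expressed as follows. This is a sketch only: parentRefs are omitted, the route name follows the naming template used elsewhere in this document, and the service ports are taken from the hosting description.

```yaml
# Sketch of the HTTPRoute rules for /api and the default path
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: {{acronym}}-{{env}}
spec:
  hostnames:
    - "{{subdomain}}.{{domain}}.com"
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /api
      filters:
        - type: URLRewrite
          urlRewrite:
            path:
              type: ReplacePrefixMatch
              replacePrefixMatch: /
      backendRefs:
        - name: {{acronym}}-{{env}}-backend
          port: 8080
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: {{acronym}}-{{env}}-frontend
          port: 3000
```

The auth callback rules from the table above would be additional entries without a URLRewrite filter.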
Configuration follows ASP.NET Core conventions with environment-specific overrides. Secrets are injected at runtime via Kubernetes, keeping sensitive values out of the codebase.
| Variable | Source | Purpose |
|---|---|---|
| ConnectionStrings__Clarity | K8s Secret (db.connectionString) | Database connection string |
| AuthSettings__JwtKey | K8s Secret / ConfigMap | JWT signing key |
| AuthSettings__JwtIssuer | K8s Secret / ConfigMap | JWT token issuer URL |
| ASPNETCORE_ENVIRONMENT | Deployment manifest | Runtime environment (Production, Staging) |
| API_URL | Deployment manifest | Backend URL for frontend proxy (http://localhost:8080) |
ASP.NET Core merges configuration sources in order, with later sources overriding earlier ones:
```
// Configuration loading order (later overrides earlier)
1. appsettings.json                    // Base configuration
2. appsettings.{Environment}.json      // Environment-specific overrides
3. Environment variables               // Kubernetes-injected values
4. Kubernetes Secrets                  // Sensitive data (connection strings, JWT keys)
```
The double-underscore convention (ConnectionStrings__Clarity) maps to the nested JSON path ConnectionStrings:Clarity in appsettings.
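The mapping can be seen with a simple shell substitution. This is illustrative only; ASP.NET Core performs the translation internally when reading environment variables.

```shell
# ASP.NET Core translates "__" in environment variable names into the
# ":" hierarchy separator used by the configuration system.
env_key="ConnectionStrings__Clarity"
config_key=$(printf '%s' "$env_key" | sed 's/__/:/g')
echo "$config_key"   # → ConnectionStrings:Clarity
```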
Local development uses Docker Compose for infrastructure services and standard .NET / Node tooling for the application code.
Start Infrastructure Services
Spin up Redis and Elasticsearch using Docker Compose.
docker-compose up -d
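A minimal compose file for these two services might look like the following sketch. The image tags and Elasticsearch settings are assumptions; the repository's actual docker-compose.yml may differ.

```yaml
# Sketch only; the repository's actual docker-compose.yml may differ
services:
  redis:
    image: redis:7
    ports:
      - "6379:6379"
  elasticsearch:
    image: elasticsearch:8.13.0
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
    ports:
      - "9200:9200"
```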
Configure Backend
Copy the example settings file and configure your local connection string.
cp appsettings.Development.example.json appsettings.Development.json
# Edit appsettings.Development.json — set ConnectionStrings:Clarity
Set Connection String
Add your local database connection string to the settings file. Both PostgreSQL and MSSQL formats are supported.
```json
// appsettings.Development.json
{
  "ConnectionStrings": {
    "Clarity": "Server=localhost;Database=Clarity;User Id=sa;Password=..."
  }
}
```
Run Backend
Start the .NET backend from the Client project.
dotnet run --project Client
Run Frontend
Start the Remix development server from the RemixUI directory.
cd RemixUI
npm run dev
Kubernetes resource requests and limits are defined per container in the deployment manifest. These values govern scheduling and OOM kill thresholds.
| Container | Memory Limit | CPU Limit | Notes |
|---|---|---|---|
| Backend | 1.5Gi | 500m | .NET runtime, EF Core, plugin services |
| Frontend | 0.75Gi | 500m | Node.js Remix SSR |
| Redis | 128Mi | 100m | In-memory cache sidecar |
Total pod footprint is approximately 2.375Gi memory and 1100m CPU. The deployment uses a single replica by default.
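The totals can be checked directly from the per-container limits:

```shell
# Pod footprint from per-container limits (128Mi = 0.125Gi)
awk 'BEGIN { printf "memory: %.3fGi\n", 1.5 + 0.75 + 0.125 }'
echo "cpu: $((500 + 500 + 100))m"
# → memory: 2.375Gi
#   cpu: 1100m
```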
The Clarity Platform supports both PostgreSQL and Microsoft SQL Server through Entity Framework Core's provider-agnostic abstraction. The active database provider is determined at runtime based on the connection string format.
PostgreSQL
Open-source, preferred for new deployments.
Host=localhost; Port=5432; Database=Clarity; Username=postgres; Password=***
Microsoft SQL Server
Supported for existing enterprise installations.
Server=localhost; Database=Clarity; User Id=sa; Password=***; TrustServerCertificate=True
The platform inspects the connection string format at startup and configures the appropriate EF Core provider. No code changes are required to switch databases.
```csharp
// Simplified provider selection logic
if (connectionString.Contains("Host="))
{
    // PostgreSQL detected
    options.UseNpgsql(connectionString);
}
else
{
    // SQL Server (default)
    options.UseSqlServer(connectionString);
}
```
Schema Management
Each plugin owns its own database schema (e.g., payments, invoicing). Migrations are generated from the Client project which references all plugins.
Provider-Specific SQL
EF Core handles most differences automatically. Provider-specific SQL (e.g., JSON column types, full-text search) should use conditional compilation or provider checks.
Testing Across Providers
Both database providers are fully supported in all environments. Integration tests should be run against both providers to ensure compatibility.
Each client runs on a dedicated database instance with complete data isolation. The deployment model uses Docker containers orchestrated by Kubernetes, with per-tenant configuration managing connection strings, credentials, and feature flags. Client-specific customizations live in the Client layer and are applied via pipeline hooks, never modifying core or plugin code. See Deployment Model.
The backend runs on .NET 8 with ASP.NET Core and Entity Framework Core. .NET was chosen for its mature ecosystem, strong typing, excellent performance characteristics, enterprise-grade security libraries, and broad hosting compatibility. The frontend uses Remix v2 with React 18 and TypeScript, providing server-side rendering and a modern developer experience. See Hosting Stack.
The platform supports both PostgreSQL and MSSQL through Entity Framework Core's provider abstraction. Database provider selection is configuration-driven, allowing clients to use whichever database aligns with their existing infrastructure. Migrations, queries, and all data access work identically across both providers. See Dual Database Support and Database Details.
Resource limits are configured through Kubernetes resource specifications and application-level configuration. Memory and CPU limits are set per container, and horizontal pod autoscaling handles traffic spikes. The platform's stateless design means additional instances can be added without session management concerns. Redis handles distributed caching and session state. See Resource Limits.
Developers clone the repository, add plugin submodules, and run the application using standard .NET CLI or Visual Studio. Docker Compose provides local database and Redis instances. The Remix frontend runs in development mode with hot module replacement. Environment-specific configuration is managed through appsettings files and user secrets. See Local Development.
Clarity provides three layers of customization. Layer 1: Over 80% of customer needs are met through pure configuration — settings, feature flags, branding, and payment/connector preferences managed through the admin UI. Layer 2: Pipeline hooks allow code-level behavior changes without modifying core — a client registers pre-hooks or post-hooks on any pipeline from their Client layer. Layer 3: For unique requirements, a ClientPlugin registered last in Startup.cs can override services, add entities, and inject frontend routes. All layers are additive, ensuring core updates apply cleanly. See Customization Layers.
The evolution follows three phases. Phase 1: Fleet management automation (provisioning scripts, central dashboard, fleet-wide updates) — immediate ROI with zero architectural risk. Phase 2: Shared application tier with tenant resolution middleware, scoped DbContext, and database-per-tenant isolation. Phase 3: Full platform with self-service provisioning, tiered plans, and usage metering. The architectural foundations — pipeline context scoping, configuration-driven behavior, EF Core multi-tenancy support — are already in place. See Path to Multi-Tenancy.