
The orchestration dashboard · Next.js 16 · multi-tenant

nyxCore

The orchestration layer you do not have to build.

27 tRPC routers, 90 tables, a workflow engine with 20 template variables, discussions that verify file paths before the answer reaches your screen, six-phase self-repair with Ipcha-gated autonomy, Axiom RAG with authority tiers, BYOK across four providers, fail-closed multi-tenant RLS. All the boring infrastructure you keep almost-building, already built.

tRPC routers · 27
DB tables · 90
Built-in personas · 15+
LLM providers · 4
Template variables · 20
Status · Live

Savings — time, energy, budget · business outcomes

Tyche ran the math · Metis signed off · Ipcha objected

Fourteen hours back per architect, per team-week — and we can tell you where every hour comes from.

The reader nyxCore is built for is the architect running three projects at once — the one whose inbox fills with cross-team translation work, the one who signs the ADRs the rest of the team lives by. Engineers benefit downstream; these numbers measure the seat above them, because that seat is where the translation layer fans out.

~14 h

per architect · team-week

Six measured sources — context assembly, workflow resume, multi-provider consensus, self-repair auto-merges, provenance lookups, and backcheck debug-avoidance. A tech lead coordinating three teams lands closer to eighteen; a solo architect-of-one sits around eight.

~70 %

re-debate cut

Consolidation lifts architecture decisions into prompt hints the next workflow inherits. ADRs stop getting re-litigated on three separate project kickoffs. The effect compounds from month three.

~40 %

LLM spend cut

Cached input tokens, digest reuse across workflow steps, per-provider routing through tenant-scoped BYOK. Accounting runs to TTFT and cached-input granularity, so the number is billing-accurate — not model-price marketing.

3,156

persona runs · 95 % success

Real aggregate across fourteen personas, pulled live from the MCP at build time. Every other number on this site runs through this log — not a benchmark, the running ledger.

Where the fourteen hours come from

Context assembly · 3–5 h

Every workflow and discussion auto-assembles up to twenty template variables — CLAUDE.md, file tree, Axiom chunks, memory, consolidations, live schema. Doing this by hand ran 5–15 min per serious prompt, six to ten prompts a day.

Workflow resume · 2–3 h

AsyncGenerator + checkpoints. A run that paused, crashed, or ran past a context window picks up where it was, tokens intact. Average lost-run recovery was 30–60 min; nyxCore runs resume in under a minute.

Multi-provider consensus · 2 h

Instead of Slack-pinging two seniors and waiting — run consensus mode, three providers answer in parallel, Cael arbitrates in ninety seconds. Works for architecture calls, security gates, contract reviews.

Self-repair auto-merges · 2–3 h

Low-severity, high-Ipcha-score fixes close without a human review. Tiered autonomy keeps high-severity fixes in the human queue. Three-merge daily circuit breaker; never touches auth or schema.

Provenance lookups · ~1 h

Every action point links back to the discussion, insight, or note it came from. No more archaeology — the chain of custody is a click away.

Backcheck debug-avoidance · 0.5–1.5 h

Server-side file-path verification catches fabricated citations before they reach your editor. Each hallucinated reference that would have cost 15–30 min of debugging, caught instead.

Aristaeus · what the hours are for

Fourteen hours is not overtime reclaimed. It is the arithmetic difference between a week of context-switching and a week with runway to think. The hours fund architectural depth, mentorship, the research spike that becomes next quarter's runway, the decision-document that stops getting re-debated. The compounding kind of work that never shows up on a sprint board but shows up on the next one.

Metis · what keeps the hours honest

A team without review habits, without retros, without action-point tracking will not see these numbers. nyxCore replaces the substrate that carries those habits — it does not build the habits themselves. If you have the discipline, we hand the hours back with interest. If you do not, we hand you the scaffolding to build the discipline, and you do the work.

What aggregates at the board level

Shorter cycles, earlier demos, half the cost per shipped feature.

Architect-hour deltas aggregate into cycle-time deltas. The chain from customer conversation to shipped feature compresses — fewer translation hops, fewer waiting queues, fewer weeks lost between a signed ADR and its first pull request. These numbers come from four tenants that have shipped commercial features on the platform; the fifth is us, shipping this site.

~2.5×

faster · brief → shipped MVP

A four-feature MVP that used to take six weeks ships in two to three. Bootstrap turns intent into a typed brief without the usual hand-off cascade; workflows and self-repair carry it from brief to PR without a PM translating.

~3×

shorter customer → dev cycle

The Chinese-whispers chain — customer → PO → team lead → dev — collapses into a single Bootstrap session with the right personas already at the table. Five to ten business days become two or three.

~45 %

lower cost per shipped feature

2.5× velocity × 70 % less re-discovery × 40 % LLM-spend cut compounds into roughly 45 % less engineer-hour and token spend for the same feature. A €6 000 feature prices at around €3 300, all in.

+20 %

team time → runway · craft

About a fifth of team engineering time moves from context-switching and re-work into deep work, code-review depth, mentorship, and the architectural runway that funds next quarter's roadmap. The hours show up on the next sprint board, not on this one.

Ipcha note: the 2.5× MVP and 3× cycle numbers assume a team with a functioning product owner. nyxCore shortens the translation layer between customer, PO, tech lead, and developer — it does not eliminate the PO role. Teams without review habits, retros, or action-point tracking see the lower bound of every range, not the average. If your current bottleneck is hiring, nyxCore will not paper over that. If your bottleneck is translation, it collapses.

Bootstrap — intent → brief

Start before you have the brief

A scoping pipeline. Not a template sheet.

The bootstrap service interviews the project owner in plain language, extracts a typed brief as a working document, and never turns that brief into an immutable oracle. Readers stay forgiving so a wedged session can be recovered; writers enforce limits so the document does not drift.

Intent, not schema

The bootstrap scoper asks what you are actually building, in plain language. It does not demand a brief up front; it asks until it has one.

Structured brief

Extracted to a typed draft — goals, constraints, dependencies, extension slots. Stored as a working document the team edits, not an LLM one-shot.

Lenient reads, strict writes

The brief reader is forgiving on stale fields; the writer enforces limits. You can recover a wedged session instead of starting over.

Template shortcut or classify

A typed template prefix (template:greenfield, template:integration, …) skips the classifier entirely. Free-form intent goes through classification first.

Discussions — 3 modes · server-side backcheck

Chat that refuses to lie about your files

Three modes. File paths verified before they reach you.

Discussions speak to one provider, to all of them in parallel, or in consensus rounds. Every answer that claims to cite a file gets verified server-side against the repository. Every answer that hits a provider's output ceiling carries a truncation flag downstream steps cannot ignore.

Mode · Behaviour

single · One provider, one model. The clean path — Anthropic, OpenAI, Google, or Ollama. Streaming, axiom-optional, with project-role context injection (target / source / reference / infrastructure).

parallel · Every enabled provider answers the same turn. Side-by-side tabs. The UI surfaces token counts, latency, and truncation warnings per panel.

consensus · Multi-round: proposals fan out, synthesis collapses, Cael arbitrates. Used when the answer has to survive disagreement before it lands in an insight.

Server-side backcheck

Every answer claiming to cite a file is verified against the repository on the server before it reaches your browser. Broken refs get flagged inline, not silently passed through.

Context compression (CCE)

An 80 k-token budget manager compresses project context, axiom chunks, memory, and file tree before the prompt assembles. No silent drops; every compression step is logged.

Truncation detection

When a provider's reply hits its output ceiling, the response carries a truncation flag. The UI shows it; downstream steps refuse to treat a cut-off answer as complete.

Project roles

Projects attached to a discussion are tagged target, source, reference, or infrastructure. The prompt assembler uses the role to decide which wisdom to include first and which to cite only when asked.
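A minimal sketch of what a citation backcheck can look like — the path pattern, function names, and scoped directories here are illustrative assumptions, not nyxCore's actual implementation, which runs server-side against the checked-out repository:

```typescript
// Hypothetical sketch: extract file paths an LLM answer claims to cite,
// verify each against the set of real repository paths, and flag broken
// references instead of passing them through silently.
type Backcheck = { verified: string[]; broken: string[] };

// Illustrative pattern: paths under src/, docs/, or prisma/ with an extension.
const PATH_PATTERN = /(?:^|[\s`(])((?:src|docs|prisma)\/[\w./-]+\.\w+)/g;

function backcheckCitations(answer: string, repoPaths: Set<string>): Backcheck {
  const verified: string[] = [];
  const broken: string[] = [];
  for (const match of answer.matchAll(PATH_PATTERN)) {
    const path = match[1];
    (repoPaths.has(path) ? verified : broken).push(path);
  }
  return { verified, broken };
}
```

In practice the verification happens before the response streams to the browser, and broken refs are flagged inline rather than stripped.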

Workflows — streaming engine · 20 template vars

The engine, not the builder

Workflows stream. They pause, they resume, they remember.

The workflow engine is an AsyncGenerator that yields events the instant they happen. Every step has its own BYOK provider, its own retry curve, and its own place in a dependency graph that survives pause and resume. Twenty template variables pull context from the right seams — repos, database, axiom, memory — so prompts stop being strings and become compositions.

AsyncGenerator streaming

Every workflow step yields events as they happen — progress, text, retry, paused, done. Your dashboard updates the instant a token leaves the model.

Fan-out by regex

Split a source step's output by heading pattern; run the LLM once per section. Sub-outputs render as tabs with per-section download, copy, and digest.

Review gates

Review steps pause execution until a human approves the findings. YOLO mode lifts the gate for trusted loops; the default is deliberate.

Resume from paused

A run can live for days. Every checkpoint persists tokens, cost, state. Pick up where you left off; the graph knows.
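The pause-and-resume shape above can be sketched with a plain AsyncGenerator — a hedged illustration under assumed names (`StepEvent`, `Checkpoint`), not nyxCore's actual engine code:

```typescript
// Hypothetical sketch of an AsyncGenerator step runner: events are yielded
// the moment they happen, and a checkpoint after each step lets a paused or
// crashed run resume without re-running (or re-paying for) finished steps.
type StepEvent =
  | { kind: "progress"; step: string }
  | { kind: "done"; step: string; output: string };

type Checkpoint = { completed: Record<string, string> };

async function* runWorkflow(
  steps: { label: string; run: () => Promise<string> }[],
  checkpoint: Checkpoint,
): AsyncGenerator<StepEvent> {
  for (const step of steps) {
    if (checkpoint.completed[step.label] !== undefined) continue; // resume: skip finished steps
    yield { kind: "progress", step: step.label };
    const output = await step.run();
    checkpoint.completed[step.label] = output; // persist before moving on
    yield { kind: "done", step: step.label, output };
  }
}
```

A dashboard consumes this with `for await`, rendering each event as it arrives; resuming is just re-running the generator with the persisted checkpoint.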

Template variables — resolved in workflow-engine.ts

20 total

{{input}} · Workflow input map
{{steps.Label.content}} · Step output; digest preferred
{{steps.Label.notes}} · User review notes from checkpoint
{{steps.Label.section[N].content}} · Nth fan-out sub-output
{{consolidations}} · Prompt hints from linked patterns
{{memory}} · Curated workflow insights (hybrid search)
{{claudemd}} · CLAUDE.md / README from linked repos
{{fileTree}} · Directory listing from linked repos
{{docs}} · Combined docs from linked repos
{{project.wisdom}} · Auto-loaded consolidation + code patterns
{{axiom}} · Project Axiom RAG (mandatory / guideline / info)
{{database}} · Live PostgreSQL schema — tables, indexes, RLS
{{schema}} · Project schemas from the Schema Registry
{{schema.diff}} · Cross-schema structural comparison
{{ethics}} · Ipcha ethic-scoped insights only
{{fanOut.section}} · Current section (inside fan-out only)

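The substitution mechanics can be sketched in a few lines — a hedged illustration, not the actual resolver in workflow-engine.ts; the unknown-variable behaviour shown here is an assumption:

```typescript
// Hypothetical sketch of {{variable}} resolution: each placeholder is looked
// up in a resolver map; unknown variables are left intact rather than
// silently blanked, so a typo stays visible in the assembled prompt.
function resolveTemplate(
  template: string,
  vars: Record<string, string>,
): string {
  return template.replace(/\{\{([\w.[\]]+)\}\}/g, (whole, name: string) =>
    name in vars ? vars[name] : whole,
  );
}
```

The interesting part is not the regex but what fills the map: each variable is backed by its own loader (repo read, live schema introspection, hybrid search) before substitution runs.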
Review → insight → memory

Every workflow feeds the next

A closed learning loop. Humans in every gate.

Review steps do not trust the LLM. They surface ReviewKeyPoint findings; humans approve or edit them; the approved ones land as WorkflowInsight rows with embeddings; the next workflow picks them back up via hybrid search — 70 % pgvector, 30 % full-text. Cross-project consolidation lifts the patterns that show up in many projects so they become prompt hints for the next run.

  1. Review step

    Review-typed steps extract ReviewKeyPoint findings — severity, category, suggestion — from the preceding step's output.

  2. Human approves

    The SaveInsightsDialog shows each key point. Users edit, trim, or drop. Nothing persists until a human says yes.

  3. Insight persists

    Approved points become WorkflowInsight rows with an embedding (OpenAI text-embedding-3-small). Pain points auto-pair with matching strengths by category.

  4. Memory reads it back

    Next workflow selects memoryIds via MemoryPicker; the {{memory}} variable injects curated insights ranked by hybrid search — 70 % pgvector cosine plus 30 % tsvector FTS.

  5. Consolidation aggregates

    Cross-project consolidation extracts patterns with evidence, severity, and projectRefs. Those patterns feed {{consolidations}} for the next workflow — a closed learning loop.
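The 70/30 blend itself is just a weighted sum over normalised scores. A minimal sketch under assumed names — in nyxCore the scoring happens inside PostgreSQL (pgvector cosine plus tsvector ts_rank), not in application code:

```typescript
// Hypothetical sketch of hybrid ranking: blend a normalised vector-similarity
// score with a normalised full-text score, then sort. Weights mirror the
// documented 70 % pgvector / 30 % FTS split.
type Scored = { id: string; vector: number; fts: number };

function hybridRank(candidates: Scored[], wVector = 0.7, wFts = 0.3): string[] {
  return candidates
    .map((c) => ({ id: c.id, score: wVector * c.vector + wFts * c.fts }))
    .sort((a, b) => b.score - a.score)
    .map((c) => c.id);
}
```

The design point: an insight with a mediocre embedding match but an exact keyword hit can still outrank a vague semantic neighbour — that is what the FTS leg buys.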

Self-repair — six phases · tiered autonomy

Code status checks, wired into the pipeline

Detection, validation, patch, validation again. Ipcha gates every autonomy step.

Most auto-fix tools ship LLM patches directly to a PR and hope. nyxCore's self-repair runs CKB first, asks the LLM second, red-teams the findings with Ipcha, drafts a patch, re-validates against CKB, and only then opens the PR. Low-severity + high Ipcha score may auto-merge; high-severity always waits for a human. Circuit breaker: three auto-merges per day, one run per hour, never touches auth or the Prisma schema.

  Phase 1 · Scan

    The workflow enumerates targets in the chosen repository branch. Webhook-triggered or cron.

  Phase 1.5 · CKB analysis

    If the repo has a ProjectCkbIndex at status=ready, deterministic symbol + coupling analysis runs first. Repos without CKB skip gracefully.

  Phase 2 · LLM detection

    Semantic detection proposes findings, grounded in CKB structural context. Each finding carries severity + file range + evidence.

  Phase 2.5 · Ipcha validation

    Adversarial pass red-teams every proposed finding. Scores below threshold get dropped; the rest carry an ipchaScore that gates downstream autonomy.

  Phase 3 · Fix generation

    LLM drafts a patch per surviving finding. Writes are bounded — never touches prisma/schema.prisma or auth middleware by policy.

  Phase 3.5 · CKB fix validation

    Post-fix structural re-check. Any newly introduced compile break or coupling regression sends the fix back, not forward.

  Phase 4 · PR creation

    Each surviving fix lands as a GitHub PR with audit-log breadcrumbs: phase, action, duration, ipchaScore, model, tokens.

  Phase 5 · Tiered merge

    Low severity + high ipchaScore → auto-merge. High severity → human PR review required. Circuit breaker: max three auto-merges per day, one run per hour.

Kill switch: SELF_REPAIR_ENABLED=false. Never writes to prisma/schema.prisma or src/middleware.ts. Audit breadcrumbs at every phase; replayable from SelfRepairAuditLog.
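The tiered-merge gate reduces to a few ordered checks. A hedged sketch — the threshold value and function names are illustrative assumptions, not nyxCore's actual policy code:

```typescript
// Hypothetical sketch of the tiered-merge decision: only low severity plus a
// high Ipcha score may auto-merge; everything else queues for a human; a
// daily circuit breaker caps auto-merges regardless of score.
type Finding = { severity: "low" | "medium" | "high"; ipchaScore: number };

function mergeDecision(
  finding: Finding,
  autoMergesToday: number,
  maxPerDay = 3,          // documented circuit-breaker cap
  minIpchaScore = 0.8,    // illustrative threshold
): "auto-merge" | "human-review" {
  if (finding.severity !== "low") return "human-review";      // high severity always waits
  if (finding.ipchaScore < minIpchaScore) return "human-review";
  if (autoMergesToday >= maxPerDay) return "human-review";    // circuit breaker tripped
  return "auto-merge";
}
```

Note the ordering: the severity check comes first, so no Ipcha score, however high, can promote a high-severity fix past the human queue.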

Knowledge binding — Axiom + Schema Registry

Not all context weighs the same

Three authority tiers. Budget pressure cuts from the bottom.

Axiom RAG treats mandatory docs, guidelines, and background as three different classes of citizen. When the 80 k context budget is tight, informational chunks drop first, guidelines second, mandatory never. The Schema Registry binds live database structure — columns, indexes, RLS, triggers — into the same prompt pipeline, so {{schema}} is the ground truth, not a commented-out old dump.

Authority — mandatory

Compliance obligations, regulatory rules, signed policy — loaded first, cannot be dropped by context pressure. GDPR, ISO, legal standards.

Authority — guideline

Style guides, architecture conventions, team agreements. Consulted by every workflow; overridable only with a captured reason.

Authority — informational

General context — old decisions, background briefings. Loaded last, dropped first when the 80 k budget gets tight.
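The tier-aware budget pressure can be sketched as a drop-order loop — an illustration of the policy, not the CCE's actual code, and the drop-from-the-end heuristic is an assumption:

```typescript
// Hypothetical sketch of authority-tier trimming: informational chunks drop
// first, guidelines second, mandatory never — mandatory content survives even
// if the prompt stays over budget.
type Chunk = { tier: "mandatory" | "guideline" | "informational"; tokens: number };

const DROP_ORDER = ["informational", "guideline"] as const;

function fitToBudget(chunks: Chunk[], budget: number): Chunk[] {
  const kept = [...chunks];
  const total = () => kept.reduce((sum, c) => sum + c.tokens, 0);
  for (const tier of DROP_ORDER) {
    // Drop from the end of the list (illustrative: least-relevant last).
    for (let i = kept.length - 1; i >= 0 && total() > budget; i--) {
      if (kept[i].tier === tier) kept.splice(i, 1);
    }
  }
  return kept; // mandatory chunks are never candidates for removal
}
```

Each removal is a logged compression step in the real pipeline, so "no silent drops" holds even under pressure.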

Schema Registry — external DB bindings

Point nyxCore at a PostgreSQL, MySQL, or MongoDB connection string (encrypted at rest). Normalised schema flows into {{schema}}. Multiple registered schemas unlock {{schema.diff}} for cross-service structural review. Sync on interval, on webhook, or on demand. Your workflows know when your reality has changed.

BYOK — done properly

Your keys, your ceiling, your audit

Keys stay yours. Logs run deep enough to bill from.

Every tenant ships their own Anthropic, OpenAI, Google, and Ollama keys. They live encrypted with AES-256-GCM in the vault and decrypt only per request, tenant-scoped, through resolveProvider(). Usage flows into a log row per call with TTFT, cached input tokens, and USD cost — the same table runs billing, budgets, and forensics.

AES-256-GCM at rest

Provider keys live encrypted in the api_key_vault. The tenant crypto salt lives in env, never the DB. Decryption is per-request through resolveProvider().

Tenant-scoped resolution

Every LLM call resolves through the tenant's own keys. One tenant cannot spend another's budget; one compromised provider key reaches only the tenants that used it.

Usage logs down to TTFT

LlmUsageLog captures provider, model, feature, input/output tokens, cached input tokens, latency, time-to-first-token, cost in USD, workflow + step IDs. Billing and quota run from the same source.

Audit log on admin paths

Every action with admin scope — tenant switch, role change, key rotation, compliance export — is logged with user, IP, and the action's JSON payload. Queryable via the audit router.
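The sealing pattern looks roughly like this in Node's standard crypto module — a hedged sketch with illustrative function names and key-derivation choices, not nyxCore's actual vault code:

```typescript
import { createCipheriv, createDecipheriv, randomBytes, scryptSync } from "node:crypto";

// Hypothetical sketch of AES-256-GCM key sealing: a tenant-scoped key is
// derived from a master secret plus a per-tenant salt (held in env, never
// the DB), and GCM's auth tag rejects any tampered ciphertext on decrypt.
function sealKey(plaintext: string, masterSecret: string, tenantSalt: string) {
  const key = scryptSync(masterSecret, tenantSalt, 32); // 256-bit derived key
  const iv = randomBytes(12);                           // fresh GCM nonce per encryption
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const data = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { iv, data, tag: cipher.getAuthTag() };
}

function openKey(
  sealed: { iv: Buffer; data: Buffer; tag: Buffer },
  masterSecret: string,
  tenantSalt: string,
): string {
  const key = scryptSync(masterSecret, tenantSalt, 32);
  const decipher = createDecipheriv("aes-256-gcm", key, sealed.iv);
  decipher.setAuthTag(sealed.tag); // integrity check: throws on tampering
  return Buffer.concat([decipher.update(sealed.data), decipher.final()]).toString("utf8");
}
```

Because the salt lives only in env, a dumped database yields ciphertext without the material to derive any tenant's key.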

Code intelligence — where the code meets the dashboard

Findings that carry their own provenance

Where did this todo come from? The dashboard can always tell you.

Action points, consolidations, insights, and audit runs all link back to the discussion, review, or scan that produced them. You can open a compliance finding and follow the trail to the paragraph an engineer wrote in a discussion in May.

CKB index per project

project_ckb_indexes links a repo + branch to a maintained SCIP-like index. Webhook-refreshed on push; stats cache on the row. Your workflows get grounded code facts without re-indexing.

Action points with provenance

Auto-detected from discussions / reviews / notes with sourceDiscussionId, sourceInsightId, sourceNoteId. You can always walk from a todo back to the paragraph that caused it.

Compliance export → PR

A compliance report builds from action points + audit runs, signs the evidence, and lands in a branch with audit-log entry compliance_report.pr_created. Twenty frameworks, one export surface.

Schema Registry

Point nyxCore at an external PostgreSQL connection string; normalised schema flows into {{schema}} and {{schema.diff}}. Cross-service database changes become reviewable inside a workflow.

Architecture — the runtime stack

Boring infra, built by people who care

Nothing is exotic. Every layer is one you already trust.

nyxCore avoids exotic dependencies. Postgres is Postgres, Redis is Redis, Next.js is Next.js. The compound piece is how the layers talk — streaming, fail-closed, audit-logged, tenant-scoped.

Framework: Next.js 16 · App Router · TypeScript strict
Database: PostgreSQL 16 + pgvector 0.8.1 · HNSW
Queue / rate limit: Redis 7 · fail-open token bucket
Auth: NextAuth v5 · JWT · Resend-email magic links
Internal API: tRPC v11 · 27 routers · Zod
External API: REST + SSE under /api/v1
Host: Hetzner Frankfurt · Traefik · Let's Encrypt
Observability: Prometheus · Loki · Grafana · Alloy

Multi-tenancy at the database layer, not the application

Every tenant-scoped table has a PostgreSQL Row-Level Security policy — tenant_isolation — that filters by the session-level app.current_tenant setting. Request resolution sets the variable once per request, through tRPC middleware, before any query runs. If the app layer forgets to filter, the database still does. Fail-closed means requests without a valid tenant never see any rows.
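The fail-closed shape can be sketched as middleware — the function and context names are illustrative, not nyxCore's tRPC code, and the policy in the comment is a generic RLS example in the spirit of the one described above:

```typescript
// Hypothetical sketch of fail-closed tenant scoping. Database-side backstop
// (generic RLS example, one policy per tenant-scoped table):
//
//   CREATE POLICY tenant_isolation ON workflows
//     USING (tenant_id = current_setting('app.current_tenant')::uuid);
//
// Application-side: middleware sets the session variable once per request,
// before any query, and rejects requests with no resolved tenant outright.
type Ctx = { tenantId?: string };

function withTenant<T>(ctx: Ctx, query: (setup: string[]) => T): T {
  if (!ctx.tenantId) {
    throw new Error("no tenant resolved: fail closed"); // no tenant, no rows
  }
  // In a real stack this would run as a parameterised SET on the request's
  // database connection, not string interpolation.
  return query([`SET app.current_tenant = '${ctx.tenantId}'`]);
}
```

The point of the two layers: even if an application query forgets its WHERE clause, the session variable plus the policy means the database returns only that tenant's rows — or none at all.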

One deployable, one backup, one door

No microservice fan-out, no hosted control plane, no out-of-band cron box. Workflows, discussions, self-repair, billing, audit, MCP — one Next.js 16 standalone server, behind one Traefik router, next to one Postgres and one Redis. If the database walks out on a USB stick, your nyxCore installation walks with it.

Honest positioning — the anti-fits

Where this tool is wrong for you

The adversary’s disclosure. Read before you commit a weekend to the stack.

Ipcha wrote this section against docs/IPCHA_DISCLOSURES.md. Each card passes both the competitor test (would a competitor quote this as a win? no) and the anti-fit test (does this name a reader who should not adopt? yes).

Disclosure

Designed for teams with governance.

nyxCore assumes you want audit logs, roles, RLS, BYOK vaults, and a shared context that survives vacation. If your operating mode is a single user editing production at 2 a.m. with no review gate, the ceremony will slow you down — and speeding you up is the product. A solo-reader free tier ships mid-2026; until then the sign-up form is the waiting list.

Disclosure

Hosted tier ships mid-2026.

Today's only door is self-host. Production assumes PostgreSQL 16 + pgvector, Redis 7, a Node 20 runtime, Traefik-grade TLS termination, and an Anthropic / OpenAI / Google / Ollama key. The managed SaaS tier is on the Q3 2026 roadmap. If you cannot own the stack and cannot wait, shop something already managed.

Disclosure

The personas encode an opinion.

Aristaeus, Ipcha Mistabra, Harmonia, Metis — these are not neutral templates. They carry a specific way of thinking about review, shipping, and honesty. Adopting nyxCore means adopting the vocabulary before you can remix it. Teams that want fully neutral templates will feel railroaded; you cannot strip the opinion without stripping the product.

Get in — or get on the list

Two doors

Open the dashboard if you are already a tenant. Otherwise tell us what you want to build.

nyxCore is live at nyxcore.cloud, hosted on Hetzner Frankfurt. Tenants onboard by talking to us first so we can seat the right defaults. The access form routes through our own mailserver — no third-party email provider in the loop.

See the rest of the nyxCore ecosystem · Meet the pantheon