Capital Refinery
Vendor landscape · 2026 edition

Private equity portfolio monitoring software, mapped honestly.

iLEVEL, Allvue, Chronograph, and Cobalt cover most of institutional PE's post-close window. Below: what each one is built for, where the category's line falls, and the structural reason monitoring software is not — and was never going to be — a decision tool.

01

    The dominant monitoring stack

    Four vendors cover most of the institutional PE post-close window.

    Together they form the visibility layer for most institutional private capital. Each is mature, well-funded, and excellent inside its scope. Below we map what each one is built for, and the structural argument that applies to all of them.

    iLEVEL
    700+ firms

    SS&C / Solovis — the most widely deployed PE portfolio monitoring platform. KPI ingestion, dashboard rendering, fund-level rollups, board pack assembly.

    Allvue

    End-to-end front-to-back including monitoring, fund admin, and investor reporting. Strong on tightly integrated GP workflow.

    Chronograph

    GP-side and LP-side portfolio analytics with strong benchmarking and structured operator data collection.

    Cobalt LP

    FactSet's LP-focused portfolio monitoring and benchmarking platform — deep integration with LP reporting cadence.

    Where the line is

    All four are excellent at the job they were built to do. The question is whether 'render this quarter's KPIs' is the right unit of analysis for the post-close window — and the answer is: it is necessary, but it is not sufficient.

The structural argument

Visibility is not the same as decision capability.

Every team running iLEVEL or Allvue or Chronograph already has visibility. The harder question is what the visibility is for — and whether the platform that produces it is built around the right unit of analysis.

What teams think they have → What actually happens

01  A monitoring dashboard that flags when something is wrong.
    → Today's KPIs with no link back to entry assumptions or the thesis they invalidate.

02  A continuous re-test of every IC decision.
    → A quarterly board pack assembled by hand from the operator's PDF.

03  A unified view from operator data to investment thesis.
    → The IC memo on a shared drive, the KPIs on a dashboard, no structural connection between them.

04  A way to see which positions have the least runway.
    → A status colour on a tile that does not know how much time is left.

05  Institutional memory that survives team turnover.
    → The reasoning behind the deal living in the head of one principal who is now staffed on a new transaction.

FundCount and Standard Metrics describe the dominant pattern as quarterly reconstruction: manually rebuilding a refreshed model from the quarter's operator data and comparing it against a static IC narrative.

Quarterly review cycle · the rebuild tax
Net retention across cycles: 0%

01  Gather operator data · re-pull the board pack PDFs
02  Reconstruct context · re-read the IC memo from the drive
03  Rebuild the narrative · re-write the quarterly note
04  Present to IC · then it all resets next quarter

Every quarterly review starts from scratch. The analytical state from last quarter is gone — the team rebuilds it from the same source documents, every ninety days.

Rebuild tax · ~40% of analyst time

Why the dominant vendors cannot just 'add' decision integrity

The data model is the constraint.

The natural response is 'fine, the monitoring vendors will just ship a decision integrity feature.' That is not how this works. The reasons are structural.

01 / 03

The primary object is the KPI

Not the decision.

The monitoring stack's data model is 'operator submitted this number for this period.' It does not have a typed representation of the IC thesis to bind the KPI to. Adding decision validity would require rewriting the data model from the inside out — and the existing customer base does not need that rewrite.
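The gap can be made concrete with a minimal sketch of the two data models. The class names, fields, and thresholds below are illustrative assumptions for this sketch, not any vendor's actual schema:

```python
from dataclasses import dataclass

# The monitoring stack's primary object: an operator submitted this
# number for this period. Nothing in it points at a decision.
@dataclass(frozen=True)
class KpiSubmission:
    company: str
    metric: str           # e.g. "net_revenue_retention"
    period: str           # e.g. "2026-Q1"
    value: float

# A decision-integrity layer needs a typed thesis object the KPI can
# be bound to -- the object the monitoring data model does not have.
@dataclass(frozen=True)
class ThesisAssumption:
    decision_id: str      # the IC decision this assumption underwrites
    metric: str           # the KPI that tests it
    floor: float          # below this, the assumption is breached
    rationale: str        # why the IC relied on it

def breaches(sub: KpiSubmission, assumption: ThesisAssumption) -> bool:
    """Bind a submitted KPI to the assumption it tests."""
    return sub.metric == assumption.metric and sub.value < assumption.floor
```

The point of the sketch is the second class: without a typed assumption object, there is nothing for the KPI to invalidate.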

02 / 03

The cadence is event-driven on the period

Not on the assumption.

Monitoring is event-driven on the operator KPI. Decision integrity is event-driven on the assumption. They sit at different layers of the stack and answer different questions. Both are needed — and they have to live in different systems because the underlying triggers are different.
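The trigger difference can be sketched as one event handler. The `BoundAssumption` shape is hypothetical: monitoring would redraw a tile on every new period's number, while the decision layer fires only when an assumption flips state, which can happen mid-quarter:

```python
from dataclasses import dataclass

@dataclass
class BoundAssumption:
    label: str
    floor: float
    breached: bool = False

def on_kpi_arrival(metric_value: float, assumptions, notify) -> None:
    """Illustrative handler. The decision layer re-tests the bound
    assumptions and notifies only on a state *flip* -- the trigger is
    the assumption, not the reporting period."""
    for a in assumptions:
        now_breached = metric_value < a.floor
        if now_breached != a.breached:
            a.breached = now_breached
            notify(a)   # fires whenever the flip happens, mid-quarter included
```

Note that re-submitting the same value fires nothing: the event is the change in assumption state, not the arrival of a period's data.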

03 / 03

The customer is the LP, not the IC

The reporting cadence reflects this.

Monitoring software is built around the quarterly board-pack and the LP letter. The IC is a downstream consumer, not the primary user. A decision integrity layer has the IC as the primary user — and the cadence has to be continuous, not quarterly.

What changes when the layer exists

The dashboard does not go away.

iLEVEL still ingests. Allvue still rolls up. The board pack still gets produced. The difference is what sits next to the KPI.

01 / 03

Decision validity next to every KPI

A second number on every position.

Next to the KPI on the dashboard, a decision validity score: how defensible the original IC decision is, given everything that has happened since. It moves with the underlying data — not on a quarterly rebuild cycle.
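As a deliberately simplified sketch of what such a score could look like, assume each entry assumption is expressed as a metric floor (real theses are richer than this). The score recomputes whenever the underlying data moves, not on a rebuild cycle:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Assumption:
    metric: str
    floor: float    # the assumption holds while the KPI stays at or above this

def decision_validity(assumptions, latest: dict) -> float:
    """Illustrative score in [0, 1]: the share of the IC's entry
    assumptions that still hold against the latest KPI values.
    A metric with no fresh data counts as still holding -- that
    choice is an assumption of this sketch, not a standard."""
    held = sum(
        1 for a in assumptions
        if latest.get(a.metric, a.floor) >= a.floor
    )
    return held / len(assumptions)
```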

02 / 03

Time-to-consequence as a sortable metric

A third number above the KPI.

How long until the conditions the IC relied on are gone. The team can sort the entire book by this number — and the IC review reorders around runway, not around colour code.
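One possible, deliberately naive way to compute such a runway number is linear extrapolation of each position's drift toward its assumption floor. The positions and figures below are invented for illustration:

```python
def quarters_to_breach(value: float, floor: float, drift_per_quarter: float) -> float:
    """Quarters until a linearly drifting metric crosses its floor.
    Returns inf when the metric is not eroding; 0 when already below."""
    if drift_per_quarter >= 0:
        return float("inf")
    return max(0.0, (value - floor) / -drift_per_quarter)

# Invented book: (name, current value, assumption floor, drift per quarter).
book = [
    ("PortCo A", 1.08, 1.00, -0.01),
    ("PortCo B", 1.20, 1.00, -0.10),
    ("PortCo C", 0.98, 1.00, -0.02),
]

# Sort the whole book by runway, shortest first: the review reorders
# around time left rather than status colour.
ranked = sorted(book, key=lambda p: quarters_to_breach(p[1], p[2], p[3]))
```

Even this crude version changes the review order: the position already below its floor surfaces first, ahead of one whose headline KPI still looks healthiest.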

03 / 03

Structured handoff between teams

Institutional memory survives turnover.

The next analyst inherits the structured reasoning, not a folder of memos written for a committee that no longer exists. The decision integrity record is the team's institutional memory of why they did what they did.

Showcase · 2 min
Portfolio monitoring with the decision layer attached.

A walkthrough of the same KPI surface most teams already see — with the decision validity score, the time-to-consequence ranking, and the bound assumption sitting on every position.

Recorded against a real seeded portfolio. Every figure on screen is reproducible from the parsed source documents.

See decision validity next to your KPIs.

If your team runs iLEVEL, Allvue, or Chronograph, the natural next question is what governs the decision behind the KPIs. Bring us a position from your book and we will show you on real data.