Private equity portfolio monitoring software, mapped honestly.
iLEVEL, Allvue, Chronograph, and Cobalt cover most of institutional PE's post-close window. Below: what each one is built for, where the category's boundary falls, and the structural reason monitoring software is not, and was never going to be, a decision tool.
The dominant monitoring stack
Four vendors cover most of the institutional PE post-close window.
Together they form the visibility layer for most institutional private capital. Each is mature, well-funded, and excellent inside its scope. Below we map what each one is built for, and the structural argument that applies to all of them.
- iLEVEL (700+ firms; SS&C / Solovis): the most widely deployed PE portfolio monitoring platform. KPI ingestion, dashboard rendering, fund-level rollups, board pack assembly.
- Allvue: end-to-end, front-to-back platform covering monitoring, fund admin, and investor reporting. Strong on tightly integrated GP workflow.
- Chronograph: GP-side and LP-side portfolio analytics with strong benchmarking and structured operator data collection.
- Cobalt LP: FactSet's LP-focused portfolio monitoring and benchmarking platform, with deep integration into the LP reporting cadence.
Where the line is: all four are excellent at the job they were built to do. The question is whether 'render this quarter's KPIs' is the right unit of analysis for the post-close window, and the answer is: it is necessary, but it is not sufficient.
Visibility is not the same as decision capability.
Every team running iLEVEL or Allvue or Chronograph already has visibility. The harder question is what the visibility is for — and whether the platform that produces it is built around the right unit of analysis.
FundCount and Standard Metrics describe the dominant pattern as quarterly reconstruction: manually rebuilding last quarter's operator data into a refreshed model and comparing it against a static IC narrative.
The data model is the constraint.
The natural response is 'fine, the monitoring vendors will just ship a decision integrity feature.' That is not how this works. The reasons are structural.
The primary object is the KPI
Not the decision.
The monitoring stack's data model is 'operator submitted this number for this period.' It does not have a typed representation of the IC thesis to bind the KPI to. Adding decision validity would require rewriting the data model from the inside out — and the existing customer base does not need that rewrite.
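The data-model gap can be made concrete with a small sketch. Everything below is illustrative, not any vendor's schema: `KpiSubmission` stands in for the monitoring stack's primary object, and `Assumption` for the typed piece of IC thesis that has no home in that model.

```python
from dataclasses import dataclass

@dataclass
class KpiSubmission:
    # The monitoring stack's primary object:
    # "operator submitted this number for this period."
    company: str
    metric: str
    period: str
    value: float

@dataclass
class Assumption:
    # Hypothetical: what a decision-integrity layer would need as a
    # first-class object -- a typed piece of the IC thesis that a
    # KPI can bind to.
    text: str
    metric: str        # which operator KPI tests this assumption
    threshold: float   # the level at which the assumption fails
    direction: str     # "above" or "below"

def still_holds(assumption: Assumption, kpi: KpiSubmission) -> bool:
    """Bind a KPI to the assumption it tests and check validity."""
    if assumption.metric != kpi.metric:
        raise ValueError("KPI does not bind to this assumption")
    if assumption.direction == "above":
        return kpi.value >= assumption.threshold
    return kpi.value <= assumption.threshold

# Example: the IC underwrote 20%+ revenue growth; Q3 came in at 12%.
a = Assumption("Revenue grows 20%+ annually", "revenue_growth", 0.20, "above")
q3 = KpiSubmission("PortCo A", "revenue_growth", "2024-Q3", 0.12)
print(still_holds(a, q3))  # → False
```

The point of the sketch is the second class, not the function: today's monitoring schema has the first object and nothing to bind it to.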
The cadence is event-driven on the period
Not on the assumption.
Monitoring is event-driven on the operator KPI. Decision integrity is event-driven on the assumption. They sit at different layers of the stack and answer different questions. Both are needed — and they have to live in different systems because the underlying triggers are different.
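The difference in triggers can be shown with a minimal sketch; the event stream and all names here are hypothetical. The period-driven system fires once, at quarter close, while the assumption-driven system fires the moment an intra-quarter update crosses the IC threshold.

```python
# Illustrative event stream: two intra-quarter KPI updates, then quarter close.
events = [
    {"day": 30, "type": "kpi_update",   "metric": "churn", "value": 0.04},
    {"day": 55, "type": "kpi_update",   "metric": "churn", "value": 0.08},
    {"day": 90, "type": "period_close", "period": "Q1"},
]

def monitoring_triggers(stream):
    """Period-driven: the monitoring stack refreshes on period close."""
    return [e["day"] for e in stream if e["type"] == "period_close"]

def integrity_triggers(stream, metric, threshold):
    """Assumption-driven: fire the moment any update pushes the bound
    metric past the IC threshold, regardless of the calendar."""
    return [e["day"] for e in stream
            if e["type"] == "kpi_update"
            and e["metric"] == metric
            and e["value"] > threshold]

print(monitoring_triggers(events))                # [90]
print(integrity_triggers(events, "churn", 0.05))  # [55]
```

In this toy stream, the assumption breach surfaces on day 55; the quarterly cycle would not surface anything until day 90.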
The customer is the LP, not the IC
The reporting cadence reflects this.
Monitoring software is built around the quarterly board-pack and the LP letter. The IC is a downstream consumer, not the primary user. A decision integrity layer has the IC as the primary user — and the cadence has to be continuous, not quarterly.
The dashboard does not go away.
iLEVEL still ingests. Allvue still rolls up. The board pack still gets produced. The difference is what sits next to the KPI.
Decision validity next to every KPI
A second number on every position.
Next to the KPI on the dashboard, a decision validity score: how defensible the original IC decision is, given everything that has happened since. It moves with the underlying data — not on a quarterly rebuild cycle.
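As a hedged illustration of what such a score could look like, the fragment below defines decision validity as the fraction of bound IC assumptions still holding, recomputed against the latest data rather than on a quarterly rebuild. Real scoring would weight and decay assumptions; the names and thresholds are invented.

```python
def decision_validity(assumptions, latest):
    """Score in [0, 1]: how defensible the original IC decision is,
    given the latest value of each bound metric."""
    held = 0
    for metric, (threshold, direction) in assumptions.items():
        value = latest[metric]
        ok = value >= threshold if direction == "above" else value <= threshold
        held += ok
    return held / len(assumptions)

# Invented example: three underwritten assumptions, one now broken.
assumptions = {
    "revenue_growth": (0.20, "above"),
    "gross_margin":   (0.60, "above"),
    "churn":          (0.05, "below"),
}
latest = {"revenue_growth": 0.12, "gross_margin": 0.63, "churn": 0.04}
print(round(decision_validity(assumptions, latest), 2))  # → 0.67
```

Because the score is a pure function of the latest values, it updates whenever the underlying data does, which is the behavioral difference from a quarterly rebuild.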
Time-to-consequence as a sortable metric
A third number above the KPI.
How long until the conditions the IC relied on are gone. The team can sort the entire book by this number, and the IC review reorders around runway rather than colour code.
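A minimal sketch of the sort, under a strong simplifying assumption (linear extrapolation of the current per-period trend); all portfolio names and numbers are invented.

```python
def time_to_consequence(value, threshold, per_period_change):
    """Periods until `value` crosses `threshold` at the current trend.
    Returns infinity if the trend is flat or moving away."""
    if per_period_change == 0 or (threshold - value) / per_period_change <= 0:
        return float("inf")
    return (threshold - value) / per_period_change

# Toy book: margin level, the IC's floor, and the per-period trend.
book = [
    {"name": "PortCo A", "value": 0.18, "threshold": 0.10, "trend": -0.04},
    {"name": "PortCo B", "value": 0.30, "threshold": 0.10, "trend": -0.02},
    {"name": "PortCo C", "value": 0.25, "threshold": 0.10, "trend": +0.01},
]
for p in book:
    p["ttc"] = time_to_consequence(p["value"], p["threshold"], p["trend"])

book.sort(key=lambda p: p["ttc"])  # shortest runway first
print([(p["name"], round(p["ttc"], 1)) for p in book])
```

PortCo A surfaces first with roughly two periods of runway, even though its current level is nowhere near the worst in the book; that reordering is the point of the metric.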
Structured handoff between teams
Institutional memory survives turnover.
The next analyst inherits the structured reasoning, not a folder of memos written for a committee that no longer exists. The decision integrity record is the team's institutional memory of why they did what they did.
A walkthrough of the same KPI surface most teams already see — with the decision validity score, the time-to-consequence ranking, and the bound assumption sitting on every position.
The research behind this.
- Why Your Monitoring Dashboard Isn't a Decision Tool. The structural argument, expanded; an 11-minute read. Read →
- Time-to-Consequence: The Metric Your Portfolio Tools Don't Have. How to formalize the metric senior PMs already compute in their heads. Read →
- The Decision Integrity Gap. The full landscape analysis: the 6-layer stack and the missing 7th. Read →
- The 7 layers of the modern PE stack. Where monitoring sits in the broader picture, and what it depends on. Read →
See decision validity next to your KPIs.
If your team runs iLEVEL, Allvue, or Chronograph, the natural next question is what governs the decision behind the KPIs. Bring us a position from your book and we will show you, on real data, what that looks like.