Cost & ROI¶
Provides pre-engagement cost/time estimates and post-engagement efficiency metrics. Tracks resource consumption (tokens, compute time, requests) and calculates ROI indicators like cost-per-finding and findings-per-hour.
Pre-engagement estimate¶
Before starting an engagement, the estimator calculates expected resource consumption based on:
- Target URL: affects recon and discovery scope
- Testing mode: stealth (default), fast, or bug-bounty
- Options: whether recon, mobile, and LLM testing are included
The estimate produces expected time in hours, token consumption, and cost in EUR.
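As a rough sketch of how such an estimator could work: the baselines, add-on deltas, and mode multipliers below are illustrative assumptions, not the tool's actual coefficients (only the recon figures echo the breakdown shown later on this page).

```python
from dataclasses import dataclass

@dataclass
class CostEstimate:
    estimated_time_hours: float
    estimated_tokens: int
    estimated_cost_eur: float

# Hypothetical mode multipliers: stealth (the default) paces requests
# slowly; fast and bug-bounty trade stealth for throughput.
MODE_TIME_FACTOR = {"stealth": 1.0, "fast": 0.6, "bug-bounty": 0.8}

def estimate(mode: str = "stealth",
             include_recon: bool = True,
             include_mobile: bool = False,
             include_llm: bool = False,
             eur_per_million_tokens: float = 0.0) -> CostEstimate:
    # Assumed baseline: discovery + scan + testing + verification + report.
    time_min, tokens = 245, 2_200_000
    if include_recon:   # Phase 1 recon
        time_min, tokens = time_min + 30, tokens + 200_000
    if include_mobile:  # assumed mobile add-on
        time_min, tokens = time_min + 60, tokens + 400_000
    if include_llm:     # assumed LLM-testing add-on
        time_min, tokens = time_min + 45, tokens + 300_000
    time_min *= MODE_TIME_FACTOR[mode]
    cost = tokens / 1_000_000 * eur_per_million_tokens  # 0 on Claude Max
    return CostEstimate(round(time_min / 60, 1), tokens, round(cost, 2))
```

With the defaults (stealth, recon on, mobile and LLM off) this yields 2.4M tokens and a cost of 0 EUR at a zero per-token rate.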
Post-engagement metrics¶
After an engagement, actual consumption is recorded and compared against the estimate. The ROI calculation produces:
| Metric | Description |
|---|---|
| total_cost_eur | Total estimated cost in EUR cents |
| finding_count | Total findings discovered |
| critical_count | Critical severity findings |
| high_count | High severity findings |
| cost_per_finding_eur | Average cost per finding |
| time_hours | Total wall-clock time |
| efficiency_score | Findings per hour |
| skill_efficiency | Per-skill breakdown (findings, time, requests) |
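The derived fields follow directly from the recorded raw values. A minimal sketch, assuming time_hours is derived from the recorded wall-clock seconds and scores are rounded to two decimals (rounding behavior is an assumption):

```python
def roi_metrics(total_cost_eur: float, finding_count: int,
                wall_time_seconds: int) -> dict:
    """Derive the ratio metrics from recorded totals, guarding against
    division by zero for empty or instantaneous engagements."""
    time_hours = wall_time_seconds / 3600
    return {
        "cost_per_finding_eur":
            round(total_cost_eur / finding_count, 2) if finding_count else 0.0,
        "time_hours": round(time_hours, 1),
        "efficiency_score":
            round(finding_count / time_hours, 2) if time_hours else 0.0,
    }

# 14 findings over 3.8 hours (13680 s) gives an efficiency_score of 3.68,
# matching the example response below.
print(roi_metrics(0.0, 14, 13680))
```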
API endpoints¶
Pre-engagement estimate¶
| Parameter | Type | Default | Description |
|---|---|---|---|
| target | query | required | Target URL |
| mode | query | stealth | Testing mode |
| include_recon | query | true | Include Phase 1 recon |
| include_mobile | query | false | Include mobile testing |
| include_llm | query | false | Include LLM testing |
Response (CostEstimate):

```json
{
  "engagement_ref": "",
  "estimated_time_hours": 4.5,
  "estimated_tokens": 2400000,
  "estimated_cost_eur": 0,
  "breakdown": {
    "recon": {"time_min": 30, "tokens": 200000},
    "discovery": {"time_min": 25, "tokens": 300000},
    "scan": {"time_min": 15, "tokens": 150000},
    "testing": {"time_min": 180, "tokens": 1500000},
    "verification": {"time_min": 15, "tokens": 150000},
    "report": {"time_min": 10, "tokens": 100000}
  }
}
```
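The per-phase breakdown reconciles with the top-level totals; a quick consistency check over the example estimate above:

```python
# Per-phase breakdown copied from the example CostEstimate response.
breakdown = {
    "recon": {"time_min": 30, "tokens": 200_000},
    "discovery": {"time_min": 25, "tokens": 300_000},
    "scan": {"time_min": 15, "tokens": 150_000},
    "testing": {"time_min": 180, "tokens": 1_500_000},
    "verification": {"time_min": 15, "tokens": 150_000},
    "report": {"time_min": 10, "tokens": 100_000},
}

total_tokens = sum(phase["tokens"] for phase in breakdown.values())
total_minutes = sum(phase["time_min"] for phase in breakdown.values())

print(total_tokens)   # 2400000, matching estimated_tokens
print(total_minutes)  # 275 minutes, roughly the 4.5 estimated hours
```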
Cost with Claude Max
When using a Claude Max subscription (the default), estimated_cost_eur is 0 because there are no per-token charges. The token estimate is still useful for planning context window usage and session duration.
Record actual costs¶
| Parameter | Type | Description |
|---|---|---|
| tokens_in | int | Input tokens consumed |
| tokens_out | int | Output tokens consumed |
| docker_seconds | int | Docker container runtime |
| total_requests | int | HTTP requests sent to target |
| wall_time_seconds | int | Total wall-clock time |
| finding_count | int | Findings discovered |
Get ROI metrics¶
Response (CostROI):

```json
{
  "engagement_ref": "acme-2026-q1",
  "total_cost_eur": 0,
  "finding_count": 14,
  "critical_count": 2,
  "high_count": 5,
  "cost_per_finding_eur": 0.0,
  "time_hours": 3.8,
  "efficiency_score": 3.68,
  "skill_efficiency": [
    {"skill": "test-injection", "findings": 5, "time_min": 45, "requests": 1200},
    {"skill": "test-auth", "findings": 3, "time_min": 30, "requests": 450},
    {"skill": "test-access", "findings": 4, "time_min": 35, "requests": 600}
  ]
}
```
Interpreting efficiency¶
| efficiency_score | Rating | Meaning |
|---|---|---|
| > 3.0 | Excellent | High finding density relative to time |
| 1.5 - 3.0 | Good | Normal engagement pace |
| 0.5 - 1.5 | Average | Consider scope or methodology review |
| < 0.5 | Low | Target may be well-hardened, or scope too broad |
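The thresholds above can be expressed as a small classifier. Note the table leaves the boundary values (exactly 3.0, 1.5, 0.5) ambiguous; assigning them to the lower band here is an assumption:

```python
def rate_efficiency(score: float) -> str:
    """Map an efficiency_score (findings per hour) to the rating bands
    from the interpretation table."""
    if score > 3.0:
        return "Excellent"
    if score >= 1.5:
        return "Good"
    if score >= 0.5:
        return "Average"
    return "Low"

print(rate_efficiency(3.68))  # Excellent, per the example engagement
```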
The skill_efficiency breakdown shows which test skills produced the most value. This data helps optimize future engagements by allocating more time to high-yield skills.
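For instance, sorting the example skill_efficiency entries by findings per minute shows which skills yielded the most per unit of testing time:

```python
# skill_efficiency entries copied from the example CostROI response.
skills = [
    {"skill": "test-injection", "findings": 5, "time_min": 45, "requests": 1200},
    {"skill": "test-auth", "findings": 3, "time_min": 30, "requests": 450},
    {"skill": "test-access", "findings": 4, "time_min": 35, "requests": 600},
]

# Rank by per-minute yield, highest first.
ranked = sorted(skills, key=lambda s: s["findings"] / s["time_min"], reverse=True)
for s in ranked:
    print(f'{s["skill"]}: {s["findings"] / s["time_min"]:.3f} findings/min')
```

In this example test-access has the best per-minute yield even though test-injection found the most issues overall, which is exactly the kind of signal that informs time allocation on future engagements.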
Connections to other features¶
- Team Collaboration: when multiple pentesters work on the same engagement, Team Collaboration assignment data combined with cost data shows per-pentester efficiency
- Continuous Monitoring: recurring scans from Continuous Monitoring accumulate costs over time. The ROI feature tracks these separately to show the marginal cost of ongoing monitoring
- Learning Loop: skills with low efficiency scores and high finding counts in the Learning Loop may indicate areas where better payload recommendations could reduce testing time