Learning, Memory & Analytics¶
The bug bounty stack is designed to improve over time at the program level. RedPick persists both long-lived knowledge and compact session memory, then uses submission outcomes to change what gets tested next.
Long-Lived Program Knowledge¶
Each program has its own knowledge root directory. Common files include:
- program-knowledge.json
- attack-surface.json
- test-history.json
- techniques-log.json
- policy-rules.json
- bounty-calibration.json
This layer answers questions like:
- what has already been tested
- what was productive
- what was repeatedly unproductive
- what policy restrictions matter here
- what severities have historically paid well on this program
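To make the layer concrete, here is a minimal sketch of loading that knowledge for a program. The function name, the flat `{file-stem: parsed JSON}` shape, and the assumption that missing files are simply skipped are all illustrative, not RedPick's actual API:

```python
import json
from pathlib import Path

# Hypothetical loader: one knowledge root per program, holding the
# JSON files listed above. Schemas inside each file are not assumed.
def load_program_knowledge(knowledge_root: str) -> dict:
    """Load every known knowledge file for a program, skipping absent ones."""
    files = [
        "program-knowledge.json",
        "attack-surface.json",
        "test-history.json",
        "techniques-log.json",
        "policy-rules.json",
        "bounty-calibration.json",
    ]
    knowledge = {}
    for name in files:
        path = Path(knowledge_root) / name
        if path.exists():
            knowledge[name.removesuffix(".json")] = json.loads(path.read_text())
    return knowledge
```

A session can then answer "what has already been tested" by inspecting `knowledge["test-history"]` without replaying old logs.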
Compact Session Memory¶
Long raw logs are expensive to replay in future sessions, especially in perpetual operation. To avoid that, RedPick writes compact memory artifacts:
- session-memory.json
- discovery-digest.json
- candidate-findings.json
- next-tests.json
Per-session copies are also kept under memory/.
These artifacts are the handoff between one hunt and the next. They are optimized for short briefings, not archival completeness.
What Compact Memory Captures¶
Typical compact memory captures:
- tested surfaces and dead ends
- suspicious endpoints and parameters
- weak signals worth revisiting
- top next actions for the next wave or next session
- findings that are confirmed versus still pending
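The items above can be sketched as a compact handoff brief. The field names and the ten-item cap on next actions are assumptions for illustration; the real session-memory.json schema may differ:

```python
import json
from pathlib import Path

# Illustrative writer for a compact session brief. Field names below are
# assumptions, not the actual session-memory.json schema.
def write_session_memory(memory_dir: str, session_id: str,
                         tested: list, dead_ends: list,
                         weak_signals: list, next_actions: list) -> Path:
    """Persist a short handoff brief for the next hunt."""
    brief = {
        "session": session_id,
        "tested_surfaces": tested,
        "dead_ends": dead_ends,
        "weak_signals": weak_signals,       # worth revisiting later
        "next_actions": next_actions[:10],  # keep the briefing short
    }
    out = Path(memory_dir) / f"{session_id}-session-memory.json"
    out.write_text(json.dumps(brief, indent=2))
    return out
```

Truncating rather than appending is the point: the artifact is a briefing, not an archive.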
This is especially important in the Codex-heavy perpetual loop where support lanes are supposed to compress, not expand, the amount of context Claude must reload.
Passive Recon Feedback¶
Passive recon enriches program knowledge between active hunts.
Examples of stored signals:
- new subdomains from Certificate Transparency
- indications of version or deploy changes
- observations that suggest retesting previously fixed issues
Those signals are pushed into:
- observations
- next steps
- attack surface updates
This keeps the next hunt focused on fresh surface instead of static replay.
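A minimal sketch of that enrichment step, assuming a simple signal shape (`type` plus `host`) and the observation/next-step fields named above; both the heuristics and the schema are illustrative:

```python
# Hypothetical merge of passive recon signals into stored attack-surface
# state. Signal types mirror the examples above; names are illustrative.
def merge_recon_signals(surface: dict, signals: list) -> dict:
    """Fold passive recon signals into the stored attack surface."""
    known = set(surface.get("subdomains", []))
    for sig in signals:
        if sig["type"] == "ct_subdomain" and sig["host"] not in known:
            known.add(sig["host"])
            surface.setdefault("observations", []).append(
                f"new subdomain from CT: {sig['host']}")
        elif sig["type"] == "deploy_change":
            surface.setdefault("next_steps", []).append(
                f"retest {sig['host']} after deploy change")
    surface["subdomains"] = sorted(known)
    return surface
```

Only the delta survives into the next hunt, which is what keeps the session focused on fresh surface.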
Submission Outcome Learning¶
The post-submission learning loop updates program memory when a report status changes.
Accepted or resolved¶
When a finding is accepted or resolved, RedPick can:
- record the successful technique
- increment positive findings summary counters
- trigger cross-program technique transfer for similar tech stacks
- update payout calibration if bounty data exists
Duplicate¶
When a report is marked duplicate, RedPick records competition pressure for that program and vuln class. A duplicate does not just confirm that the bug was valid; it also signals that this hunting lane is crowded.
Informative or not applicable¶
When a report is rejected as non-actionable, RedPick can:
- record the technique as blocked for that program
- add the vuln class to learned exclusions
- stop spending future cycles on the same class unless new evidence justifies reopening it
Triaged¶
Triaged state is tracked as a responsiveness signal. Fast triage indicates a healthy operator-program fit and feeds payout and efficiency analysis.
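The four outcome branches above can be sketched as a single dispatch over report status. The status strings and knowledge-field names are assumptions about the stored schema, chosen to match the descriptions in this section:

```python
# Sketch of the outcome-learning dispatch described above. Status names
# and knowledge fields are assumptions, not RedPick's actual schema.
def apply_report_outcome(knowledge: dict, status: str,
                         technique: str, vuln_class: str) -> dict:
    if status in ("accepted", "resolved"):
        knowledge.setdefault("successful_techniques", []).append(technique)
        knowledge["positive_findings"] = knowledge.get("positive_findings", 0) + 1
    elif status == "duplicate":
        # Valid bug, crowded lane: raise competition pressure for this class.
        pressure = knowledge.setdefault("competition_pressure", {})
        pressure[vuln_class] = pressure.get(vuln_class, 0) + 1
    elif status in ("informative", "not_applicable"):
        knowledge.setdefault("blocked_techniques", []).append(technique)
        knowledge.setdefault("learned_exclusions", []).append(vuln_class)
    elif status == "triaged":
        # Responsiveness signal: fast triage suggests good operator-program fit.
        knowledge["triage_events"] = knowledge.get("triage_events", 0) + 1
    return knowledge
```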
Bounty Calibration¶
When payout data is present, RedPick updates bounty-calibration.json and computes average payout by severity for that program. This becomes a better local signal than market-wide averages alone.
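The per-severity average is straightforward to compute. A minimal sketch, assuming each recorded bounty carries a `severity` label and an `amount`; the input shape is hypothetical:

```python
from collections import defaultdict

# Illustrative payout calibration: average payout per severity from
# recorded bounties. Field names are assumptions.
def calibrate_payouts(bounties: list) -> dict:
    """bounties: [{"severity": "high", "amount": 2500.0}, ...]"""
    totals, counts = defaultdict(float), defaultdict(int)
    for b in bounties:
        totals[b["severity"]] += b["amount"]
        counts[b["severity"]] += 1
    return {sev: totals[sev] / counts[sev] for sev in totals}
```

For example, two high-severity payouts of 2000 and 1000 calibrate "high" to 1500 for that program, regardless of what the market-wide average says.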
Cross-Program Transfer¶
The learning loop can also project a successful technique onto similar programs if the source and target appear to share the same stack or surface type. This is a controlled way to turn one valid report into several targeted next tests elsewhere.
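A sketch of that projection, using a deliberately simple stack-overlap heuristic; the program shape, the `stack` field, and the suggestion format are all illustrative assumptions:

```python
# Hypothetical cross-program transfer: project a proven technique onto
# programs that appear to share the source program's stack.
def transfer_technique(source: dict, candidates: list,
                       technique: str) -> list:
    """Return next-test suggestions for programs sharing the source stack."""
    suggestions = []
    src_stack = set(source.get("stack", []))
    for prog in candidates:
        shared = src_stack & set(prog.get("stack", []))
        if shared:
            suggestions.append({
                "program": prog["name"],
                "technique": technique,
                "reason": f"shares stack: {', '.join(sorted(shared))}",
            })
    return suggestions
```

Each suggestion becomes a targeted next test on another program rather than a blind replay of the original report.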
Analytics Surfaces¶
The bug bounty dashboard also exposes aggregated views that help operators steer the overall strategy:
- Stats, for submission counts, severity mix, total bounty, and timing
- Earnings, for lifetime and recent payout views
- Pipeline, for current report status
- Signal Scores, for platform reputation and submission outcome quality
- hunting window and seasonal pattern endpoints, for timing analysis
These views are not only reporting outputs. They are decision inputs for what to hunt next and where the operator is currently most effective.
Why This Matters¶
Without memory and learning, bug bounty automation degrades into repeated cold starts and duplicate effort. RedPick instead treats every program as a persistent research target with:
- durable history
- compact handoff state
- policy memory
- evidence memory
- economic memory
That is what allows both the interactive session flow and the perpetual loop to stay productive over long time horizons.