# Reporting & Output Tools

## Report Generation

### BeDefended /report Skill

**Purpose:** Generate professional penetration testing reports

Consolidates all findings into client-ready reports.
```bash
/report               # Generate markdown report
/report --format html # HTML format
/report --format pdf  # PDF format
/report --hwg         # HWG compliance (Italian)
```
**Features:**

- Executive summary
- Finding tables
- Detailed vulnerability descriptions
- Full technical evidence propagation for pentest findings (raw HTTP request/response, headers, body)
- CVSS 4.0 scoring
- Remediation guidance
- References (CWE, OWASP, CVE)
### Evidence Propagation Standard
For pentests, report generation is not allowed to downgrade verified evidence into prose-only summaries. If a finding is HTTP-backed, the report pipeline must preserve:

- the full raw HTTP request
- the full raw HTTP response
- complete headers and complete body
- representative screenshots when the issue is visual or browser-driven
The same evidence package must remain consistent across the finding markdown, HedgeDoc/Outline notes, and generated .docx deliverables.
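A completeness check of this kind can be sketched in a few lines of Python. The field names (`http_backed`, `raw_request`, `raw_response`) are hypothetical and would need to match the actual finding schema:

```python
# Illustrative check that a finding's evidence package is complete.
# Field names ("http_backed", "raw_request", "raw_response") are
# assumptions -- adapt them to the real finding schema.
REQUIRED_EVIDENCE = ("raw_request", "raw_response")

def evidence_complete(finding: dict) -> bool:
    """Return True if an HTTP-backed finding carries full raw evidence."""
    if not finding.get("http_backed"):
        return True  # non-HTTP findings are exempt from this rule
    return all(finding.get(field) for field in REQUIRED_EVIDENCE)
```

Running such a check before report generation is one way to guarantee the "no downgrade" rule holds across all deliverables.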
## Data Format Conversion

### jq (JSON Query)

**GitHub:** stedolan/jq

**Purpose:** Parse and transform JSON

Extract and format data from JSON outputs.
```bash
# Extract all endpoints from test-plan.json
jq '.test_groups | keys[]' test-plan.json

# Pretty-print JSON
jq '.' raw-output.json > formatted.json

# Extract specific fields
jq '.[] | {endpoint: .endpoint, severity: .severity}' findings.json
```
**Common Patterns:**

```bash
jq '.[] | select(.severity == "Critical")' findings.json # Filter
jq '.[] | .endpoint' results.json                        # Extract field
jq 'group_by(.severity)' findings.json                   # Group by field
```
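When jq is unavailable, the filter pattern above translates directly to Python. A minimal sketch (not part of the toolchain) equivalent to `jq '.[] | select(.severity == "Critical")'`:

```python
# Python equivalent of: jq '.[] | select(.severity == "Critical")' findings.json
def filter_critical(findings: list[dict]) -> list[dict]:
    """Keep only findings whose severity is Critical."""
    return [f for f in findings if f.get("severity") == "Critical"]
```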
### Pandoc

**GitHub:** jgm/pandoc

**Purpose:** Universal document converter

Convert reports between formats.
```bash
docker run --rm -v "$(pwd)":/work -w /work pentest-tools \
  pandoc report.md -o report.pdf \
    --from markdown \
    --pdf-engine=xelatex \
    --variable mainfont="IBM Plex Sans"
```

Pandoc selects the PDF target from the `.pdf` output extension, and a custom `mainfont` requires the `xelatex` (or `lualatex`) engine.
**Conversions:**

- Markdown → PDF, HTML, Word, LaTeX
- HTML → Markdown, PDF
- Word → PDF, Markdown

**Output Formats:**

- PDF (with styling)
- HTML (for web)
- DOCX (for Word)
- LaTeX (for academic)
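For batch conversion, the same invocation can be built programmatically. A sketch that only constructs the argv (it assumes `pandoc` is on `PATH`, or can be prefixed with the docker invocation shown above); the helper name `pandoc_cmd` is hypothetical:

```python
# Sketch: build the pandoc command for converting a markdown report to PDF.
# Assumes pandoc is on PATH; prefix with the docker invocation otherwise.
def pandoc_cmd(src: str, dest: str, mainfont: str = "IBM Plex Sans") -> list[str]:
    """Return the argv for a markdown-to-PDF conversion."""
    return [
        "pandoc", src, "-o", dest,
        "--from", "markdown",
        "--pdf-engine=xelatex",  # required for a custom mainfont
        "--variable", f"mainfont={mainfont}",
    ]
```

The resulting list can be passed to `subprocess.run(...)` for each report in a loop.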
## Data Aggregation & Analysis

### Python Scripts

**Purpose:** Custom data aggregation and analysis
BeDefended includes helper scripts for report generation.
**Example:** Aggregate findings by severity

```python
import json
import glob

# Load findings exported as JSON (json.load cannot parse .md finding files)
findings = []
for file in glob.glob('findings/FINDING-*.json'):
    with open(file) as f:
        findings.append(json.load(f))

# Group by severity
by_severity = {}
for finding in findings:
    severity = finding['severity']
    if severity not in by_severity:
        by_severity[severity] = []
    by_severity[severity].append(finding)

# Print summary
for severity in ['Critical', 'High', 'Medium', 'Low']:
    count = len(by_severity.get(severity, []))
    print(f"{severity}: {count} findings")
```
## Visualization Tools

### Mermaid Diagrams

**GitHub:** mermaid-js/mermaid

**Purpose:** Create diagrams from text descriptions

Used throughout BeDefended docs for architecture visualization.
**Diagram Types:**

- Flowcharts (execution flow)
- Gantt charts (timeline)
- Sequence diagrams (interaction flow)
- Entity-relationship diagrams
- Class diagrams
**Example:**
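A minimal flowchart sketch of the engagement flow (illustrative only; node labels are paraphrased from the phases recorded in the timeline log, not taken from an actual diagram):

```mermaid
flowchart LR
    A[Context Init] --> B[Test Plan]
    B --> C[Testing Waves]
    C --> D[Findings]
    D --> E[Verification]
    E --> F[Report]
```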
## Spreadsheet Generation

### Python Libraries

**Purpose:** Generate Excel reports for stakeholders
```python
import openpyxl
from openpyxl.styles import Font, PatternFill

# Create workbook
wb = openpyxl.Workbook()
ws = wb.active
ws.title = "Findings"

# Headers
headers = ["Finding ID", "Title", "Severity", "CVSS", "Status"]
ws.append(headers)

# Add findings
for finding in findings:
    ws.append([
        finding['id'],
        finding['title'],
        finding['severity'],
        finding['cvss'],
        finding['status'],
    ])

# Format
for row in ws.iter_rows():
    for cell in row:
        cell.font = Font(name='IBM Plex Sans')

wb.save('findings-report.xlsx')
```
## Logging & Metrics

### Pentest Timeline (JSONL)

**Format:** JSON Lines (one JSON object per line)

`logs/pentest-timeline.jsonl` records every event:
```json
{"timestamp":"2026-03-13T10:00:00Z","phase":"Phase 0","event":"context-init-start"}
{"timestamp":"2026-03-13T10:05:00Z","phase":"Phase 0","event":"context-init-complete","duration_seconds":300}
{"timestamp":"2026-03-13T10:30:00Z","phase":"Phase 4","event":"finding-discovered","finding_id":"FINDING-001","severity":"Critical"}
{"timestamp":"2026-03-13T14:00:00Z","phase":"Phase 5","event":"finding-verified","finding_id":"FINDING-001","status":"verified"}
```
**Analysis:**

```bash
# Count events by hour
jq -s 'group_by(.timestamp[0:13]) | map({hour: .[0].timestamp[0:13], count: length})' logs/pentest-timeline.jsonl

# Find all Critical findings
jq 'select(.severity == "Critical")' logs/pentest-timeline.jsonl
```
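The same filter works in plain Python by reading the JSONL file line by line. A sketch (the helper name `critical_findings` is hypothetical):

```python
import json

def critical_findings(path: str) -> list[dict]:
    """Return all timeline events marked with Critical severity."""
    events = []
    with open(path) as f:
        for line in f:          # JSONL: one JSON object per line
            line = line.strip()
            if line:
                events.append(json.loads(line))
    return [e for e in events if e.get("severity") == "Critical"]
```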
### Agent Logs

**Location:** `logs/agent-wave-N.log`

Records each testing agent's execution:
```text
[AGENT-001] Wave 0 | Skill: test-injection:sqli
[AGENT-001] Testing endpoint: /api/v1/users?sort=
[AGENT-001] Found vulnerability: SQL Injection (time-based)
[AGENT-001] Creating FINDING-001
[AGENT-001] Completed: 5 endpoints tested, 1 finding
```
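Lines in this format can be split mechanically for per-agent metrics. A sketch assuming only the `[AGENT-NNN] message` shape shown above:

```python
import re

# Matches log lines of the form "[AGENT-001] message" (format shown above).
LINE_RE = re.compile(r"^\[(?P<agent>AGENT-\d+)\]\s+(?P<message>.*)$")

def parse_line(line: str):
    """Split an agent log line into agent ID and message; None if malformed."""
    m = LINE_RE.match(line.strip())
    return m.groupdict() if m else None
```

Grouping parsed lines by `agent` then gives per-agent activity counts without any extra tooling.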
## Archival & Delivery

### Git History

**Format:** Git commits for reproducibility

All reports and findings are committed to git:
```bash
git log --oneline
# Sample output:
# a1b2c3d Phase 6: Report generation complete
# x9y8z7w Phase 5: All findings verified
# m1n2o3p Phase 4: Testing complete, 12 findings
```
**Audit Trail:** Every finding creation, verification, and cleanup is logged in git history.
### Report Packaging

**Format:** ZIP archive for client delivery
```bash
# Package all deliverables
zip -r pentest-report.zip \
  report.pdf \
  findings/ \
  evidence/ \
  CLAUDE.md \
  logs/pentest-timeline.jsonl
```
**Contents:**
- Final report (PDF/HTML)
- Generated report artifacts (.docx where applicable)
- Individual finding files
- Evidence (HTTP requests/responses)
- Timeline (all events)
- Methodology documentation
## Summary Table
| Tool/Format | Purpose | Output |
|---|---|---|
| /report | Generate report | MD, HTML, PDF |
| jq | JSON processing | Filtered/transformed JSON |
| Pandoc | Document conversion | PDF, HTML, DOCX, LaTeX |
| Python | Custom aggregation | Excel, JSON, reports |
| Mermaid | Diagram creation | SVG diagrams |
| JSONL | Event timeline | Chronological log |
| Git | Version control | Audit trail |