📊 Security Metrics: Measuring What Actually Matters
"What gets measured gets managed. What doesn't get measured gets breached." — Hagbard Celine
"If you can't measure it, you can't prove it works. If you measure the wrong thing, you prove nothing. If you only measure what makes you look good, you're proving that you lie to yourself."
🍎 The Golden Apple: Vanity Metrics vs. Real Security (or: How to Lie with Statistics While Feeling Secure)
Security teams love metrics. Dashboards full of numbers. Colorful charts. Executive briefings with lines trending upward. It's security theater with better production values.
Most security metrics measure the wrong things. FNORD. Are you measuring security, or measuring the appearance of measuring security? (Asking for a friend. The friend is paranoia.)
Number of policies written? Irrelevant if they're not enforced (a PDF doesn't stop hackers). Security training completion percentage? Useless if employees still click phishing links (compliance ≠ comprehension). Number of vulnerability scans? Meaningless without patch deployment velocity (scanning for vulnerabilities you never fix is like diagnosing cancer and celebrating the diagnosis). The security-industrial complex sells tools that generate metrics proving you bought tools. Circular reasoning is circular.
Measure outcomes, not activities. Measure risk reduction, not effort expended. Or keep measuring inputs and wonder why the outputs are still bad. Your choice. Nothing is true.
Illumination for the certified: Vanity metrics make executives feel good (dopamine from green dashboards). Real metrics expose uncomfortable truths (like the fact that your security posture is held together with hope and duct tape). Choose truth over comfort; security depends on it. But truth hurts. Which is why most organizations choose comfort. And get breached. The cycle continues. FNORD.
🛡️ The Five Categories of Security Metrics That Matter
1. Detection and Response
How fast do you detect and stop attacks?
MTTD: Mean Time To Detect (hours/days). MTTR: Mean Time To Respond (hours). MTTR: Mean Time To Recover (hours). Yes, two different MTTRs share one acronym; be explicit about which one you report. (A computation sketch follows the five categories.)
2. Vulnerability Management
How fast do you remediate critical risks?
Time to patch critical CVEs: days from disclosure to deployment. Open high/critical vulnerabilities: absolute count, trending down.
3. Incident Trends
Are you improving or getting worse?
Incidents per month: trending down? Severity distribution: more critical or more informational? Repeat incidents: are you learning from failures?
4. Access Control
Who has access to what?
Accounts with excessive privileges: count and review frequency. Unused accounts: dormant credentials are risk. MFA coverage: percentage of critical systems.
5. Security Awareness
Do users fall for attacks?
Phishing simulation click rate: percentage clicking malicious links. Suspicious emails reported: a measure of user vigilance. Policy violations: incidents caused by user error.
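As a minimal sketch, here is how MTTD and MTTR fall out of plain incident timestamps. The record layout and numbers are hypothetical, not Hack23's actual schema; the point is that these metrics need nothing more than honest timestamps:

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records: when each incident actually began,
# when it was detected, and when it was resolved.
incidents = [
    {"occurred": "2025-03-01T02:00", "detected": "2025-03-03T09:30", "resolved": "2025-03-03T17:00"},
    {"occurred": "2025-03-10T11:00", "detected": "2025-03-10T11:45", "resolved": "2025-03-11T08:00"},
]

def hours_between(start: str, end: str) -> float:
    fmt = "%Y-%m-%dT%H:%M"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 3600

# MTTD: mean gap between compromise and detection.
mttd = mean(hours_between(i["occurred"], i["detected"]) for i in incidents)
# MTTR (respond): mean gap between detection and resolution.
mttr = mean(hours_between(i["detected"], i["resolved"]) for i in incidents)

print(f"MTTD: {mttd:.1f} hours, MTTR: {mttr:.1f} hours")
```

Add a "restored" timestamp and the same pattern yields Mean Time To Recover. No vendor dashboard required.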
Chaos illumination: Metrics drive behavior. Measure the wrong thing, get the wrong result. Measure vulnerabilities found? Teams stop looking. Measure vulnerabilities fixed? Teams hunt obsessively. This is Goodhart's Law in action: any observed statistical regularity tends to collapse once pressure is placed upon it for control purposes. Question every metric. Especially yours.
⏰ Leading vs. Lagging Indicators: The Paradox of Time
Lagging indicators tell you what happened. They're forensic evidence of your security posture—the crime scene photos of yesterday's decisions. Useful for autopsies. Not so useful for preventing the murder.
Leading indicators predict what will happen. They're the smoke before the fire, the tremor before the earthquake, the FNORD you almost didn't notice. They're uncomfortable because they reveal problems before they become disasters—and who wants to admit their house is on fire while there's still time to grab the extinguisher?
🔴 Lagging Indicators (Autopsy)
- Number of incidents: Counting corpses
- Breach impact: Calculating damage
- Time to remediate: How long you bled
- Compliance audit findings: Report card from last semester
- Security budget spent: How much you invested in yesterday
Looking backward feels safe. The disaster already happened. Nothing left to prevent.
🟢 Leading Indicators (Prevention)
- Vulnerability age distribution: How long known risks sit unpatched
- Patch deployment velocity: Speed from disclosure to protection (sketched after this list)
- Phishing simulation trends: User vigilance improving or declining?
- Security awareness engagement: Are people learning or clicking through?
- Access review completion rate: Proactive privilege hygiene
- Unpatched critical systems: Ticking time bombs still armed
Looking forward creates anxiety. The disaster hasn't happened yet. You could still prevent it. That means responsibility.
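To make the distinction concrete, a minimal sketch of patch deployment velocity, the leading indicator from the list above. The CVE records are illustrative sample data:

```python
from datetime import date

# Patch deployment velocity: days from CVE disclosure to the fix
# reaching production. Illustrative records, not real patch data.
patches = [
    {"cve": "CVE-2025-0001", "disclosed": date(2025, 2, 1), "deployed": date(2025, 2, 5)},
    {"cve": "CVE-2025-0042", "disclosed": date(2025, 2, 10), "deployed": date(2025, 3, 2)},
]

velocities = [(p["deployed"] - p["disclosed"]).days for p in patches]
print(f"mean {sum(velocities) / len(velocities):.1f} days, worst {max(velocities)} days")
```

The mean flatters you; the worst case is what an attacker exploits. Track both.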
TEMPORAL ILLUMINATION: Organizations love lagging indicators because they measure failure that's already happened—no pressure to prevent what's done. Leading indicators reveal failure in progress—uncomfortable truths about what you're ignoring right now. Which explains why dashboards show mostly lagging metrics with reassuring historical trends. The past is safe. The future is terrifying. FNORD.
🤖 Automation: Measuring Without Theater
Manual security metrics are security theater with spreadsheets. If a human has to manually collect the metric, the metric lies. Not because humans are dishonest (though some are), but because manual collection introduces:
- Selection bias: "I'll just check the systems that usually pass..."
- Temporal drift: Metrics from last Tuesday pretending to represent today
- Sampling errors: "I checked 10 of 1000 servers, close enough!"
- Political optimization: "Let's wait until after that patch deploys to run the report..."
- Effort resistance: "This takes 4 hours, we'll do it quarterly instead of weekly"
Automated metrics don't lie. They just reveal truths you'd rather not see.
🛠️ Automation Stack Examples
- GitHub Advanced Security: Continuous code scanning, secret detection, dependency alerts
- AWS Config Rules: Real-time infrastructure compliance monitoring
- OpenSSF Scorecard: Weekly supply chain security assessment
- SonarCloud Quality Gates: Every commit quality and security validation
- FOSSA License Scanning: Continuous SBOM generation and license compliance
- GuardDuty: Automated threat detection without human intervention
📊 Continuous Measurement Patterns
- Every commit: SAST, secret scanning, dependency checks
- Every deploy: Container vulnerability scanning, SBOM generation
- Every hour: Infrastructure configuration compliance
- Every day: Vulnerability age trending, patch status (collection sketched below)
- Every week: OpenSSF Scorecard, access reviews, security posture
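A minimal sketch of what this looks like in practice, pulling open Dependabot alerts from the GitHub REST API and computing their age (the daily slot above). It assumes a token with security-alert read access in GITHUB_TOKEN; hack23/cia stands in for any repository, and pagination and error handling are omitted:

```python
import os
from datetime import datetime, timezone

import requests  # pip install requests

TOKEN = os.environ["GITHUB_TOKEN"]
URL = "https://api.github.com/repos/hack23/cia/dependabot/alerts"

resp = requests.get(
    URL,
    headers={"Authorization": f"Bearer {TOKEN}", "Accept": "application/vnd.github+json"},
    params={"state": "open", "per_page": 100},
    timeout=30,
)
resp.raise_for_status()

now = datetime.now(timezone.utc)
for alert in resp.json():
    created = datetime.fromisoformat(alert["created_at"].replace("Z", "+00:00"))
    severity = alert["security_advisory"]["severity"]
    print(f"#{alert['number']:<5} {severity:<8} open {(now - created).days} days")
```

Run it on a schedule and diff the output over time: the trend, not the snapshot, is the metric.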
Think for yourself: If your security metric requires a human to manually collect it, ask why it's not automated. The answer usually reveals whether you're measuring security or measuring the appearance of measuring security. FNORD.
📊 Visualization: Dashboards That Tell Truth vs. Dashboards That Lie
Green dashboards are the opiate of security executives. "Everything's green! We're secure!" Translation: "I carefully selected metrics that make me look good."
Good security dashboards make you uncomfortable. They highlight problems. They trend risks. They show age distributions of unpatched vulnerabilities. They don't show "100% compliant" unless you're actually 100% compliant (you're not).
🎭 Dashboard Theater
- All green tiles: "Everything's fine!" (Narrator: It wasn't.)
- Percentage completions: "99% patched!" (1% = entire DMZ)
- Trend lines pointing up: "Improving!" (More scans ≠ more security)
- Compliance percentages: "97% compliant!" (3% = all authentication)
- Activity metrics: "Blocked 10M threats!" (9.999M = spam)
"Look at all these green numbers! We must be secure. Right? Right??"
✅ Truth Dashboards
- Red/Yellow/Green with context: "23 critical vulns, 19 >30 days old"
- Age distributions: Histograms showing how long risks persist (sketched after this list)
- Trend arrows (both directions): "MTTR improving, but MTTD worsening"
- Absolute counts: "4 unpatched critical systems in production"
- Ratio metrics: "800% net resolution rate (closed vs opened)"
Uncomfortable truths drive action. Comfortable lies drive breaches.
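For the age-distribution tile, a minimal sketch of the histogram logic. The ages (in days) are made-up sample data; a real version would feed in the alert ages collected by your automation:

```python
from collections import Counter

# Bucket open-vulnerability ages (days) into a truth-telling histogram
# instead of a single green percentage. Sample data, not live output.
ages = [3, 12, 17, 31, 45, 45, 62, 90, 120, 7, 2, 33]

def bucket(age: int) -> str:
    if age <= 7:
        return "0-7d"
    if age <= 30:
        return "8-30d"
    if age <= 90:
        return "31-90d"
    return ">90d"

histogram = Counter(bucket(a) for a in ages)
for label in ("0-7d", "8-30d", "31-90d", ">90d"):
    count = histogram[label]
    print(f"{label:>7} | {'#' * count} ({count})")
```

A board that sees "6 risks open 31-90 days" asks better questions than one that sees "92% remediated."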
VISUALIZATION WISDOM: If your security dashboard doesn't occasionally make executives uncomfortable, it's not showing security—it's showing what you want them to see. The best metrics reveal problems while there's still time to fix them. The worst metrics hide problems until it's too late. Choose discomfort over delusion. Nothing is true when everything is green. FNORD.
🎯 The KPI Framework: What Actually Matters
Key Performance Indicators separate signal from noise. If you measure everything, you measure nothing. Focus on metrics that:
- Drive business decisions: "Should we delay release to patch this?"
- Reveal risk trends: "Are we getting more or less secure over time?"
- Enable prioritization: "What should we fix first?"
- Validate investments: "Is this security tool actually working?"
- Support compliance: "Can we prove we're meeting requirements?"
🔴 Critical KPIs (Business Impact)
Revenue protection, regulatory compliance, operational continuity
- Critical Vulnerabilities in Production: Target: 0 (Zero tolerance)
- Mean Time To Remediate (MTTR): Target: <7 days for Critical
- MFA Coverage: Target: 100% for production systems
- Backup Success Rate: Target: ≥99.5%
- RTO Achievement Rate: Service restoration within targets
- License Policy Violations: Target: 0 (Legal risk)
🟠 Operational KPIs (Risk Management)
Service quality, development efficiency, proactive security
- Vulnerability Age Distribution: How long risks sit unpatched
- Patch Deployment Velocity: Days from disclosure to deployment
- Alert Resolution Rate: currently 800% net positive
- Scanner Coverage Ratio: % of repositories with automated security
- Secret Detection Coverage: currently 100% (0 bypassed)
- Build Success Rate: Development velocity indicator
OpenSSF Scorecard Targets: Hack23 aims for ≥9.5/10 across all repositories (currently: CIA 7.2, improving). This single composite metric encompasses 18 security categories from security policy to SAST to signed releases. One number, comprehensive security posture visibility.
KPI REALITY CHECK: Your KPIs reveal your priorities. If you measure training completion but not phishing click rates, you care about compliance paperwork, not actual user vigilance. If you measure vulnerabilities found but not vulnerabilities fixed, you care about looking thorough, not being secure. Your metrics expose your values. Choose metrics that expose your actual security, not your desired appearance.
📋 Hack23's Security Metrics Dashboard
Our metrics program focuses on risk reduction: ISMS-PUBLIC Repository | Security Metrics
- OpenSSF Scorecard Target: ≥7.0/10 — Supply chain security assessment across all repos (CIA: 7.2, Black Trigram, CIA CM)
- SLSA Level 3 Build Provenance — Cryptographically signed attestations for all releases (non-falsifiable provenance)
- CII Best Practices: Passing+ — Open source maturity badges (CIA, Black Trigram, CIA CM all passing)
- SonarCloud Quality Gates: Passed — Zero high/critical vulnerabilities, <3% duplication, ≥80% coverage target
- Vulnerability SLAs — Critical CVEs: 7 days, High: 30 days, Medium: 90 days (live tracking in GitHub Security)
- FOSSA License Compliance — Automated SBOM generation, continuous dependency license monitoring
- GitHub Advanced Security — Secret scanning, Dependabot alerts, CodeQL SAST on every commit
- AWS Security Services — GuardDuty threat detection, Security Hub findings, Config compliance, Inspector vulnerabilities
META-ILLUMINATION: Security metrics aren't about proving you're perfect—they're about proving you're improving. Trend matters more than absolute numbers. A system that was 60% secure last quarter and is 75% secure this quarter is safer than a system claiming 100% security with no evidence of measurement. Progress beats perfection. Honesty beats theater.
🔍 GitHub Security Metrics: Real-Time Transparency
Hack23 practices radical transparency through live public security metrics. Our GitHub Security Organization Overview exposes actual vulnerability management performance:
- Current MTTR: 8 days (target: <7 days) — Honest about where we stand
- Alert Resolution Rate: 800% net positive (closed vs opened) — Aggressive remediation
- Secret Detection: 0 secrets bypassed (100% blocked) — Zero tolerance enforcement
- Alert Age: 29 days average (needs improvement) — Acknowledging gaps drives fixes
- Total Closed Alerts: 228 across all repositories — Historical remediation proof
- Autofix Adoption: 0 (opportunity for automation improvement) — Transparency includes weaknesses
This is what real security metrics look like. Not all green. Not perfect. Not hiding problems. Measuring actual security posture with automated tooling that can't be gamed.
TRANSPARENCY ILLUMINATION: Publishing your real security metrics publicly is Chapel Perilous for organizations. It reveals you're not perfect. Competitors see your weaknesses. But it also proves you're honest about security—which builds more trust than performative perfection. Customers prefer "We patch critical vulnerabilities in 8 days and are working to improve" over "We're 100% secure, trust us." One is evidence. The other is marketing bullshit. FNORD.
📊 Compliance Monitoring Metrics
ISO 27001 A.5.36 demands systematic policy compliance monitoring. Compliance metrics prove you're following your own policies—or expose that you're not. Most organizations measure compliance once per year (audit season theater). Continuous compliance monitoring reveals drift in real-time.
🏛️ Framework Implementation Coverage
- ISO 27001:2022: 63% control coverage (target: 85% by 2026)
- NIST CSF 2.0: 85% function alignment (mature)
- CIS Controls v8.1: 91% IG1, 81% IG2, 65% IG3
- OpenSSF Scorecard: Average 7.8/10 (target: 9.5+)
Honest assessment beats checkbox compliance
📋 Policy Compliance Metrics
- Policy Review Currency: 100% (all policies reviewed on schedule)
- Vulnerability SLA Compliance: Critical <7d, High <30d, Medium <90d (checker sketched below)
- Backup Success Rate: 99.7% (exceeds 99.5% target)
- Change Success Rate: Tracked per Change Management policy
- Incident Response Time: 18 minutes average (improving from 22 min)
Continuous measurement reveals policy effectiveness
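A minimal sketch of that SLA check running as code rather than as an annual audit exercise. The thresholds mirror the policy above; the findings list is illustrative sample data:

```python
from datetime import date

# Policy SLA thresholds: Critical 7d, High 30d, Medium 90d.
SLA_DAYS = {"critical": 7, "high": 30, "medium": 90}

# Hypothetical open findings with their severity and open date.
findings = [
    {"id": "GHSA-xxxx", "severity": "critical", "opened": date(2025, 3, 1)},
    {"id": "GHSA-yyyy", "severity": "medium", "opened": date(2025, 1, 2)},
]

today = date.today()
for f in findings:
    age = (today - f["opened"]).days
    limit = SLA_DAYS[f["severity"]]
    status = "BREACH" if age > limit else "ok"
    print(f"{f['id']}: {f['severity']} age {age}d (SLA {limit}d) -> {status}")
```

Wire the BREACH lines into a ticket queue or a failing CI check and SLA compliance becomes continuous instead of audit-season theater.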
Compliance isn't a destination—it's a trajectory. Measuring compliance maturity over time reveals whether you're building security culture or just checking boxes for auditors. Progressive improvement beats static certification.
🎯 Vanity Metrics vs. Real Metrics
🎭 Vanity Metrics
- ❌ Number of security policies
- ❌ Training completion percentage
- ❌ Vulnerability scans performed
- ❌ Security tools purchased
- ❌ Compliance certifications held
- ❌ Security team size
Why they're useless: Measure activity, not outcome.
✅ Real Metrics
- ✅ Mean Time To Detect/Respond
- ✅ Phishing simulation click rate
- ✅ Critical vulnerabilities patched <48h
- ✅ False positive rate in alerts
- ✅ Repeat incidents (learning failure)
- ✅ Accounts with excessive privileges
Why they matter: Measure security posture improvement.
🔍 The Five Principles of Effective Security Metrics
- Measure Outcomes, Not Activities - "Vulnerabilities fixed" > "vulnerability scans run"
- Focus on Trends, Not Snapshots - Direction matters more than absolute numbers
- Make Metrics Actionable - Every metric should drive specific decisions
- Avoid Perverse Incentives - Don't measure what can be gamed without improving security
- Report Honestly - Metrics revealing problems are valuable—hiding problems is deadly
ULTIMATE ILLUMINATION: Security metrics should make you uncomfortable. If your dashboard shows only green, you're measuring the wrong things—or lying to yourself.
🎯 Metrics for Different Audiences
Different stakeholders need different metrics:
Executive Metrics
- Risk reduction over time
- Compliance status
- Incident trend (severity & frequency)
- Budget efficiency (risk/$)
Technical Team Metrics
- MTTD/MTTR by incident type
- Vulnerability age distribution
- Alert false positive rate
- Coverage gaps by asset type
Audit Metrics
- Control effectiveness evidence
- Policy compliance rates
- Access review completion
- Training completion & testing
🎯 Summary: Measure What Matters, Question Everything
Security metrics are windows into your defensive awareness. Like meditation revealing thought patterns, metrics reveal security patterns. Are you getting more secure over time? Where are you weakest? What should you fix next? Good metrics answer these questions. Bad metrics avoid them.
The five metric categories (the Law of Fives emerges naturally): detection and response, vulnerability management, incident trends, access control, security awareness. Each measures a different dimension of security reality. Together they form a complete picture - if measured honestly.
MTTD and MTTR show detection capability. Patch velocity shows vulnerability management. Incident trends show learning effectiveness. Phishing rates show awareness impact. But only if you're honest about the numbers.
Most security metrics are vanity theater. Dashboards showing only green lights. "Number of phishing emails blocked" (measuring your email filter, not your security). "Training completion percentage" (measuring compliance, not comprehension). "Vulnerability scans performed" (measuring activity, not outcomes).
Vanity metrics stroke egos. Real metrics drive improvement. Choose discomfort over delusion.
🍎 ULTIMATE REVELATION: THE MEASUREMENT PARADOX
Metrics measure security. But metrics also CREATE security culture—or destroy it. Measure vulnerabilities found? Teams stop looking (bad news punished). Measure vulnerabilities fixed? Teams hunt obsessively (problems rewarded). Measure false positive rates? Analysts tune alerts carefully. Ignore false positives? Alert fatigue kills detection.
You become what you measure. Choose metrics that reward the behavior you want:
- Measure time to remediation → Teams fix faster
- Measure phishing simulation trends → Users learn vigilance
- Measure repeat incidents → Organizations learn from failures
- Measure coverage gaps → Teams close blind spots
- Measure false positive rates → Analysts tune quality over quantity
But remember Goodhart's Law: "When a measure becomes a target, it ceases to be a good measure." Metrics optimized become gamed. Question every metric—especially when it makes you look good. The metric that feels comfortable is probably lying to you.
Are you paranoid enough about your metrics lying to you? If your dashboard shows only green, you're either: (a) perfectly secure (you're not), or (b) measuring the wrong things. FNORD.
Think for yourself, schmuck! Question your security metrics. Especially when they tell you what you want to hear. Security theater performs measurement. Real security measures reality.
🔮 The Future of Security Measurement
Security metrics are evolving from static dashboards to dynamic consciousness. Imagine security measurement as continuous observability of your defensive reality:
- AI-driven anomaly detection: Metrics that notice when patterns shift before humans can
- Predictive security posture: Leading indicators forecasting breaches months in advance
- Self-optimizing controls: Metrics feeding back into automated remediation
- Blockchain-verified metrics: Immutable audit trails proving compliance over time
- Real-time risk quantification: CVE impact automatically calculated against your architecture
- Continuous compliance monitoring: Policy drift detected within minutes, not months
The psychedelic vision: Security metrics become a living nervous system for your infrastructure. Not dashboards you check—awareness you inhabit. Not numbers you report—consciousness you experience. Measurement as meditation. Metrics as enlightenment.
But only if you measure truth instead of comfort. Only if you choose reality over theater. Only if you're paranoid enough to question whether your green dashboard is hiding red reality.
All hail Eris! All hail meaningful measurement!
Ready to implement measurable security outcomes? Learn about Hack23's cybersecurity consulting services and our unique public ISMS approach.
All hail Eris! All hail Discordia!
"Think for yourself, schmuck! Question your metrics—especially when they make you look good. Nothing is true when everything is green. Are you paranoid enough about what your dashboards aren't showing you?"
🍎 23 FNORD 5
— Hagbard Celine, Captain of the Leif Erikson
P.S. If this post made you uncomfortable about your security metrics, good. Discomfort drives improvement. Comfort drives breaches. Choose paranoia over complacency. Question authority—especially the authority of green dashboards telling you everything's fine. Because it's not. And admitting that is the first step to actual security. FNORD.