🗂️ Data Classification: Five Levels of Actually Giving a Damn
"Not all data is created equal. Some can be leaked. Some can't. Know the difference."
🍎 The Golden Apple: Treating All Data Equally Is Expensive and Stupid
Organizations protect everything equally—or nothing at all. Marketing blog posts get the same encryption as customer databases. Public documentation requires the same access controls as source code.
Data classification fixes this insanity.
Label data by sensitivity. Apply proportional controls. Encrypt secrets, not press releases. Restrict access to confidential data, not public websites. Security effort aligned with actual risk.
ILLUMINATION: Without classification, you're either over-protecting public data (wasting resources) or under-protecting secrets (inviting breaches). Both are expensive stupidity.
🛡️ The Five Levels of Data Classification
Level 1: Public
Designed for public disclosure.
Examples: Marketing materials, blog posts, public documentation. Controls: Minimal. Leak Impact: None—already public.
Level 2: Internal
Internal use, low sensitivity.
Examples: Internal wikis, meeting notes, org charts. Controls: Basic access control. Leak Impact: Embarrassment, not disaster.
Level 3: Confidential
Business-sensitive information.
Examples: Financial reports, customer lists, contracts. Controls: Encryption, role-based access, audit logging. Leak Impact: Competitive harm, regulatory penalties.
Level 4: Secret
Highly sensitive, restricted access.
Examples: Trade secrets, source code, cryptographic keys. Controls: Strong encryption, MFA, need-to-know access. Leak Impact: Severe competitive harm, security compromise.
Level 5: Top Secret
Mission-critical, existential risk.
Examples: Master keys, root credentials, M&A plans. Controls: Hardware tokens, air-gapped storage, executive-only access. Leak Impact: Company-ending.
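A minimal sketch of these levels as a typed label (Python for illustration; the names DataClassification and Asset are hypothetical, not part of any Hack23 tooling):

```python
# Hypothetical sketch: classification as structured metadata you can compare
# and enforce, not a sticker on a document.
from dataclasses import dataclass
from enum import IntEnum


class DataClassification(IntEnum):
    PUBLIC = 1        # designed for public disclosure
    INTERNAL = 2      # internal use, low sensitivity
    CONFIDENTIAL = 3  # business-sensitive
    SECRET = 4        # highly sensitive, restricted access
    TOP_SECRET = 5    # mission-critical, existential risk


@dataclass
class Asset:
    name: str
    owner: str
    classification: DataClassification


db = Asset("customer-db", "data-platform", DataClassification.CONFIDENTIAL)
blog = Asset("marketing-blog", "marketing", DataClassification.PUBLIC)

# Ordering falls out of IntEnum: higher level means stricter controls.
assert db.classification > blog.classification
```

Because the levels are ordered, "is this at least Confidential?" becomes a comparison instead of a judgment call.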
CHAOS ILLUMINATION: Most organizations have three real levels: Public, Internal, Secret. Call them Levels 1, 2, and 5 if you want military cosplay, but pragmatism beats methodology theater.
📋 Hack23's Data Classification Policy (Deep Dive)
Detailed classification framework: ISMS-PUBLIC Repository | Data Classification Policy
- Classification Criteria - Confidentiality, integrity, availability requirements, regulatory constraints, business impact
- Handling Requirements by Level - Storage (encryption), transmission (TLS/VPN), access (RBAC/MFA), disposal (secure deletion)
- Labeling Standards - Document headers, file metadata, email subject tags, visual markings (see the labeling sketch after this list)
- Access Control Matrix - Who can access what classification level, under what conditions
- Reclassification Process - When data becomes more/less sensitive, how to update classification
- Declassification Schedules - When confidential data becomes internal/public (contract expiration, product launch)
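A hedged sketch of what those labeling standards might look like as one helper used everywhere. The tag formats and function names are illustrative assumptions, not quotes from the actual policy:

```python
# Illustrative labeling helpers: render the same classification marking
# consistently across email subjects, document headers, and file metadata.

LEVELS = ("PUBLIC", "INTERNAL", "CONFIDENTIAL", "SECRET", "TOP_SECRET")


def label(level: str) -> str:
    """Canonical visual marking for a classification level."""
    if level not in LEVELS:
        raise ValueError(f"unknown classification: {level}")
    return f"[{level}]"


def email_subject(subject: str, level: str) -> str:
    """Prefix an email subject with its classification tag."""
    return f"{label(level)} {subject}"


def document_header(level: str) -> str:
    """Header line stamped at the top of generated documents."""
    return f"Classification: {level}. Handle per the Data Classification Policy."


print(email_subject("Q3 financial report", "CONFIDENTIAL"))
# -> [CONFIDENTIAL] Q3 financial report
```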
META-ILLUMINATION: Classification isn't bureaucracy—it's risk-based resource allocation. Spend security budget protecting secrets, not press releases.
🎯 Handling Requirements by Classification Level
| Requirement | Public | Internal | Confidential | Secret |
|---|---|---|---|---|
| Storage Encryption | No | Optional | Required | AES-256 |
| Transmission Encryption | No | TLS 1.2+ | TLS 1.3 | TLS 1.3 + VPN |
| Access Control | None | SSO Auth | RBAC + MFA | Need-to-know + MFA |
| Backup Retention | Variable | 30 days | 7 years | As required |
| Disposal | Standard delete | Delete + overwrite | Secure wipe | Crypto shred |
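The same table encoded as data, so a pipeline or policy-as-code check can read it instead of trusting users to remember it. The values mirror the table above; the structure and names are assumptions:

```python
# Handling requirements as data. Values mirror the table; the dictionary
# layout and lookup function are illustrative assumptions.

HANDLING = {
    "PUBLIC": {
        "storage": "none", "transit": "none", "access": "none",
        "retention": "variable", "disposal": "standard delete",
    },
    "INTERNAL": {
        "storage": "optional", "transit": "TLS 1.2+", "access": "SSO",
        "retention": "30 days", "disposal": "delete + overwrite",
    },
    "CONFIDENTIAL": {
        "storage": "required", "transit": "TLS 1.3", "access": "RBAC + MFA",
        "retention": "7 years", "disposal": "secure wipe",
    },
    "SECRET": {
        "storage": "AES-256", "transit": "TLS 1.3 + VPN", "access": "need-to-know + MFA",
        "retention": "as required", "disposal": "crypto shred",
    },
}


def minimum_controls(level: str) -> dict:
    """Minimum handling requirements for a classification level."""
    return HANDLING[level]


print(minimum_controls("CONFIDENTIAL")["transit"])  # TLS 1.3
```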
🔍 Common Classification Failures
❌ Over-Classification
Problem: Marking everything "Confidential" to be safe.
Result: Users ignore labels, share freely anyway. Classification becomes meaningless.
Fix: Default to lowest appropriate level. Most data is Internal, not Confidential.
❌ Under-Classification
Problem: Marking secrets as "Internal" because classification is inconvenient.
Result: Sensitive data leaked with insufficient controls.
Fix: Make classification easy, not optional. Automate where possible.
❌ Inconsistent Labeling
Problem: Same document marked differently in different systems.
Result: Confusion, incorrect handling, security gaps.
Fix: Single source of truth for classification, propagate consistently.
❌ No Declassification
Problem: Data marked Confidential stays Confidential forever.
Result: Security overhead keeps accumulating on data that no longer needs protection.
Fix: Declassification schedules, periodic review, lifecycle management.
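One way a declassification schedule might look in practice: every asset carries an optional downgrade date, and a periodic job flags anything past it. Field names and the downgrade target are assumptions for illustration:

```python
# Sketch of a declassification schedule and periodic review. Field names and
# the default downgrade target are illustrative assumptions.
from dataclasses import dataclass
from datetime import date
from typing import List, Optional


@dataclass
class ClassifiedAsset:
    name: str
    level: str                     # e.g. "CONFIDENTIAL"
    declassify_on: Optional[date]  # None = no scheduled downgrade
    downgrade_to: str = "INTERNAL"


def due_for_review(assets: List[ClassifiedAsset], today: date) -> List[ClassifiedAsset]:
    """Return assets whose declassification date has passed."""
    return [a for a in assets if a.declassify_on and a.declassify_on <= today]


inventory = [
    ClassifiedAsset("2023-vendor-contract", "CONFIDENTIAL", date(2024, 12, 31)),
    ClassifiedAsset("master-key-backup", "TOP_SECRET", None),
]

for asset in due_for_review(inventory, date.today()):
    print(f"Review {asset.name}: candidate for downgrade to {asset.downgrade_to}")
```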
ULTIMATE ILLUMINATION: Data classification is risk management, not bureaucracy. If the classification process hinders productivity without improving security, you're doing it wrong.
🎯 Practical Classification Implementation
Making classification work in reality:
- Default Classifications by System - Source code repos = Secret, Marketing CMS = Public, HR database = Confidential
- Automatic Labeling - Tag files based on storage location, creator role, content scanning
- Clear Guidelines - Decision tree: "Does this contain customer PII?" Yes = Confidential. "Is this public documentation?" Yes = Public. (See the sketch after this list.)
- Technical Enforcement - DLP policies block Secret data in email, encryption enforced for Confidential, access controls automatic by classification
- User Training - Teach why classification matters, not just how to apply labels
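A hedged sketch of that decision tree plus the per-system defaults, as one function. The system names and the fallback to Internal are illustrative assumptions, not a complete policy:

```python
# Illustrative classifier: walk the decision tree, then fall back to the
# default for the source system, then to INTERNAL.

DEFAULT_BY_SYSTEM = {
    "source-code-repo": "SECRET",
    "marketing-cms": "PUBLIC",
    "hr-database": "CONFIDENTIAL",
}


def classify(contains_customer_pii: bool,
             is_public_documentation: bool,
             source_system: str = "") -> str:
    """Apply the decision tree, then per-system defaults."""
    if contains_customer_pii:
        return "CONFIDENTIAL"
    if is_public_documentation:
        return "PUBLIC"
    return DEFAULT_BY_SYSTEM.get(source_system, "INTERNAL")


print(classify(contains_customer_pii=True, is_public_documentation=False))
# -> CONFIDENTIAL
print(classify(False, False, source_system="marketing-cms"))
# -> PUBLIC
```

The point of encoding it: the same logic can run in a DLP hook, a CI check, or a storage-provisioning pipeline, so enforcement doesn't depend on every user reading the policy.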
🎯 Conclusion: Classify to Protect Efficiently
Data classification aligns security effort with risk. Protect secrets intensely. Protect public data minimally. Save resources for what matters.
Five levels (or three pragmatic ones). Handling requirements proportional to sensitivity. Technical enforcement, not user compliance theater.
Over-classification wastes money. Under-classification invites breaches. Accurate classification enables efficient security.
All hail Eris! All hail Discordia!
"Think for yourself, schmuck! Question every classification—including this blog post (probably Public)."
🍎 23 FNORD 5
— Hagbard Celine, Captain of the Leif Erikson