SYSTEM_CONSOLE v2.4.0

Regulatory mapping

How key regulatory frameworks map to context governance controls: GDPR, EU AI Act, HIPAA, and SOC 2.

LAST_UPDATED: 2025-05

Regulatory compliance is not a separate track from governance; it is governance with specific requirements. This page maps key frameworks to the controls described in this blueprint. While this serves as a starting point, it is not legal advice, as specific requirements depend on jurisdiction, industry, and use case.

Key Takeaways

  • Most regulatory requirements map directly to controls already described in this blueprint.
  • The EU AI Act introduces risk-based obligations that go beyond data governance.
  • Compliance requires documented evidence, not just implemented controls.

GDPR

GDPR applies wherever personal data of individuals in the EU is processed, regardless of where the processing organisation is based. For AI systems, the main obligations concern lawful basis, data minimisation, purpose limitation, and data subject rights.

GDPR requirement → relevant blueprint control:

  • Lawful basis for processing → Document purpose per context source in the source registry
  • Data minimisation → Token budget controls; retrieve only what is needed for the query
  • Purpose limitation → ABAC domain scoping; context sources tied to declared use cases
  • Accuracy → Source versioning, freshness SLAs, conflict detection
  • Storage limitation / retention → Retention expiry on chunks; automated index pruning
  • Right to erasure → Tested deletion procedure for vector indexes and audit logs
  • Data subject access → Retrieval traces allow reconstruction of what data was used per user
  • Security of processing → RBAC/ABAC, encryption at rest and in transit, audit logging
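
Several of these rows (lawful basis, purpose limitation, storage limitation) reduce to metadata carried by each entry in the source registry. The sketch below is illustrative, assuming a hypothetical `ContextSource` record; the field names are not a fixed schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class ContextSource:
    """Hypothetical source registry entry carrying GDPR-relevant metadata."""
    source_id: str
    lawful_basis: str         # e.g. "legitimate_interest", "contract"
    purpose: str              # declared use case (purpose limitation)
    retention_days: int       # storage limitation
    contains_personal_data: bool

    def retention_expired(self, ingested: date, today: date) -> bool:
        """True when chunks from this source are due for index pruning."""
        return (today - ingested).days > self.retention_days

src = ContextSource("hr-handbook", "legitimate_interest",
                    "employee-self-service", 365, True)
print(src.retention_expired(date(2024, 1, 1), date(2025, 3, 1)))  # → True, past 365-day retention
```

An automated pruning job can then iterate the registry and drop expired chunks from both the source store and the vector index, satisfying the "storage limitation" row without manual tracking.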

EU AI Act

The EU AI Act introduces risk-based obligations for AI systems. High-risk systems (e.g., those used in employment, credit, healthcare, or critical infrastructure) face the most significant requirements.

High-risk system obligations

  • Risk management system: documented and maintained throughout lifecycle
  • Data governance: training and context data must meet quality and bias standards
  • Technical documentation: system design, capabilities, and limitations must be documented
  • Human oversight: meaningful human control mechanisms required
  • Accuracy and robustness: measurable and maintained over time
  • Logging: automatic logging of operations sufficient for post-hoc audit
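
The logging obligation calls for structured, append-only records of each operation. A minimal sketch, assuming an append-only JSON Lines file; the record fields are illustrative, not a schema mandated by the AI Act.

```python
import json
import time
import uuid

def log_operation(log_file, *, user_id: str, query: str,
                  sources: list, model_version: str) -> dict:
    """Append one structured record per retrieval/generation operation.

    The AI Act requires logging sufficient for post-hoc audit; the exact
    fields here are an assumption for illustration.
    """
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "user_id": user_id,
        "query": query,
        "sources": sources,            # which context sources were retrieved
        "model_version": model_version,
    }
    log_file.write(json.dumps(record) + "\n")  # one JSON object per line
    return record
```

Writing one self-describing JSON object per line keeps the log greppable and makes the later evidence requirement (queryable by user, time range, and source) straightforward to meet.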

Blueprint controls that map to AI Act

  • Source registry and classification model → data governance
  • Retrieval traces and audit store → logging requirement
  • Human-in-the-loop escalation paths → human oversight
  • Evaluation harness with quality metrics → accuracy and robustness
  • Incident runbook → risk management system

Note
General-purpose AI models (GPAI) used as components in downstream systems also carry provider obligations under the AI Act. If you deploy a third-party model, review what documentation the provider is required to supply.

HIPAA (US healthcare)

HIPAA applies when Protected Health Information (PHI) is involved. AI systems in healthcare settings must treat PHI as a restricted-class data type with specific safeguard requirements.

Administrative safeguards

  • Designated security officer
  • Workforce training
  • Access management policy

Physical safeguards

  • Workstation and device controls
  • Facility access controls

Technical safeguards

  • Unique user identification
  • Audit controls (logging)
  • Transmission encryption
  • PHI in context: Restricted classification, row-level controls, deletion SLA
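
The last point, row-level control over Restricted-class content, can be enforced as a filter between retrieval and the model context. A minimal sketch, assuming chunks carry a `classification` field and callers carry a set of clearances (both hypothetical names):

```python
# Illustrative row-level control: drop Restricted (PHI-bearing) chunks
# before they reach the model context unless the caller is cleared.

RESTRICTED = "restricted"   # classification assigned to PHI chunks

def filter_chunks(chunks: list, user_clearances: set) -> list:
    """Return only the chunks the caller is cleared to see."""
    return [c for c in chunks
            if c["classification"] != RESTRICTED
            or RESTRICTED in user_clearances]

chunks = [
    {"id": "c1", "classification": "public"},
    {"id": "c2", "classification": RESTRICTED},   # contains PHI
]
print([c["id"] for c in filter_chunks(chunks, user_clearances=set())])  # → ['c1']
```

Filtering after retrieval but before prompt assembly means a misconfigured index never leaks PHI into the context window, and the audit controls above can log every filtered access attempt.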

SOC 2

SOC 2 is a trust services framework audited against five Trust Services Criteria. Security is always in scope; for AI systems handling customer data, Availability is typically added.

Trust criterion → AI governance relevance:

  • Security → Access control (RBAC/ABAC), encryption, audit logging, incident response
  • Availability → Retrieval SLAs, pipeline monitoring, failover for context sources
  • Processing Integrity → Quality gates, evaluation harness, grounding and citation controls
  • Confidentiality → Classification model, least-privilege retrieval, credential scoping
  • Privacy → PII handling, retention, deletion, purpose limitation

Evidence requirements

Implementing a control is necessary, but usually insufficient on its own. Auditors require evidence that controls are operating effectively, so plan evidence collection from the start.

  • Retrieval audit logs must be queryable by user, time range, and source.
  • Access policy changes must have a review and approval trail.
  • Evaluation results must be stored and time-stamped.
  • Incident records must be retained for the relevant statutory period.
  • Data subject rights requests (deletion, access) must have documented fulfilment records.
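
The first requirement, audit logs queryable by user, time range, and source, is easy to satisfy if logs are stored as structured records. A sketch over JSON Lines records, assuming each record has `user_id`, `timestamp`, and `sources` fields (an illustrative schema, not a standard):

```python
import json

def query_audit_log(lines, *, user_id=None, start=None, end=None, source=None):
    """Yield audit records matching the given filters.

    `lines` is an iterable of JSON strings, one record per line; `start`
    and `end` bound the timestamp as a half-open interval [start, end).
    """
    for line in lines:
        rec = json.loads(line)
        if user_id is not None and rec["user_id"] != user_id:
            continue
        if start is not None and rec["timestamp"] < start:
            continue
        if end is not None and rec["timestamp"] >= end:
            continue
        if source is not None and source not in rec["sources"]:
            continue
        yield rec
```

An auditor's question ("what did user X retrieve from source Y in March?") becomes a one-line call rather than an ad-hoc log-scraping exercise, which is exactly the kind of repeatable evidence collection this section recommends planning from the start.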

Failure modes

  • ! Controls are implemented but not documented; audit fails despite good technical practice.
  • ! GDPR retention limits are applied to the source database but not to the vector index.
  • ! AI Act high-risk classification is not assessed before deployment.
  • ! SOC 2 evidence is collected retrospectively under time pressure, introducing gaps.
  • ! Right-to-erasure requests are tracked in a spreadsheet with no SLA enforcement.
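
The spreadsheet failure mode above is avoidable with even minimal SLA tracking. A sketch, assuming a 30-day fulfilment window (GDPR allows up to one month; the deadline and record fields here are illustrative):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

SLA = timedelta(days=30)   # illustrative fulfilment deadline

@dataclass
class ErasureRequest:
    subject_id: str
    received: datetime
    fulfilled: Optional[datetime] = None

    def overdue(self, now: datetime) -> bool:
        """True when the request is unfulfilled past the SLA deadline."""
        return self.fulfilled is None and now - self.received > SLA

def overdue_requests(requests, now):
    """Requests that have breached the SLA and need escalation."""
    return [r for r in requests if r.overdue(now)]
```

Running this check on a schedule and alerting on a non-empty result turns erasure handling into an enforced process, and the fulfilment timestamps double as the documented evidence the previous section calls for.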

Checklist

  • Applicable frameworks identified and mapped to system components.
  • Each context source has a documented lawful basis and purpose.
  • EU AI Act risk classification assessed and documented before deployment.
  • Evidence collection is automated where possible (logs, audit trails).
  • Data subject rights requests have a tracked, SLA-bound fulfilment process.