
FIPS 140-3 Compliance playbook

A practical path from scope to CMVP-ready evidence: boundary, approved mode, services mapping, documentation, and lab readiness.

Optimized for teams shipping crypto libraries, HSMs, secure elements, firmware, and embedded modules under the current CMVP program rules.

Author
Sorena AI
Published
Mar 4, 2026
Updated
Mar 4, 2026

Overview

FIPS 140-3 compliance is not a document-only exercise. It is an engineering and assurance program: you must define a defensible cryptographic module boundary, implement the required protections, and produce an evidence pack that an accredited CST laboratory can test and the CMVP can validate. This page gives a practical sequence that matches how strong teams actually get through validation.

Section 1

What FIPS 140-3 compliance means in practice

FIPS PUB 140-3 specifies security requirements for cryptographic modules and defines four increasing, qualitative security levels across 11 requirement areas. In practice, teams use the term compliance in two different ways: a vendor may design to the standard and claim internal compliance, or the team may pursue formal CMVP validation through a CST laboratory and receive a CMVP certificate.

That distinction matters because the CMVP FAQ expressly separates a vendor claim of compliance from a validated module. If a certificate does not exist, the module is not CMVP validated, even if the engineering team designed around FIPS 140-3 requirements.

  • Compliance is a design and evidence target
  • Validation is the formal CMVP outcome after CSTL testing and CMVP review
  • Plan against FIPS 140-3, not FIPS 140-2: remaining FIPS 140-2 certificates move to the CMVP historical list on 21 September 2026 and should not anchor new-system plans

Section 2

Step 1: Define the module and freeze the boundary

The module boundary is the foundation of the whole project. Interfaces, roles, services, SSP handling, self-tests, physical assumptions, and the Security Policy all depend on that boundary staying stable.

If the boundary changes in the middle of lab work, you usually have to rework documentation, test procedures, and sometimes even the selected security level. Treat the boundary as an architecture decision with explicit change control.

  • Define what code, firmware, hardware, and services are inside the module
  • List every interface that crosses the boundary, including admin paths, debug paths, and API entry points
  • Pin the tested operational environments and build identifiers early
  • Document any embedded or bound modules with exact names, certificate numbers, and versions
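One way to keep the boundary under explicit change control is to record it as structured data alongside the code. The sketch below is a minimal, hypothetical example; every module, interface, and certificate name in it is illustrative, not taken from any real validation.

```python
# Hypothetical sketch: capture the module boundary as data so it falls under
# the same review and change control as source code. All names are invented.
from dataclasses import dataclass

@dataclass(frozen=True)
class ModuleBoundary:
    module_name: str
    version: str
    components_inside: tuple   # code, firmware, and hardware in scope
    interfaces: tuple          # every path that crosses the boundary
    tested_environments: tuple # pinned operational environments and builds
    embedded_modules: tuple    # (name, certificate number, version) triples

boundary = ModuleBoundary(
    module_name="ExampleCryptoModule",
    version="1.0.0-build.42",
    components_inside=("libcrypto_core", "entropy_driver"),
    interfaces=("public_api", "admin_cli", "debug_uart"),
    tested_environments=("Linux 6.1 on x86_64",),
    embedded_modules=(("VendorRNG", "#0000", "2.1"),),
)

# frozen=True makes any boundary change produce a new object, so edits are
# explicit diffs in review rather than silent mutations.
assert "debug_uart" in boundary.interfaces
```

Because the record is frozen, a mid-project boundary change shows up as a reviewable diff, which supports the change-control discipline described above.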
Section 3

Step 2: Select the right security level and environment assumptions

Security levels are not procurement labels. They are assurance decisions tied to real deployment conditions, physical access risk, operator model, and the impact of SSP compromise.

Choose the level early because it changes engineering scope, evidence requirements, and lab expectations. A weak level decision usually shows up later as physical-security or operator-control rework.

  • Level 1 is the baseline entry point for many software modules and lower physical-risk deployments
  • Level 2 and Level 3 usually demand stronger physical and role-control discipline
  • Level 4 is for the harshest or least controlled environments and should be justified by actual threat assumptions
  • Write down the physical and operator assumptions so the level decision survives audit and customer review
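To make the level decision survive audit, it helps to record the assumptions next to the selection itself. The function below is a deliberately simplified illustration of mapping coarse deployment assumptions to a starting level; it is not how FIPS 140-3 levels are formally assigned, and the criteria and threshold names are assumptions for the sketch.

```python
# Illustrative only: a coarse mapping from deployment assumptions to a
# starting security level. Real level selection follows FIPS 140-3 itself.
def recommend_level(physical_access_risk: str, operator_trusted: bool) -> int:
    """Map simplified deployment assumptions to a candidate level."""
    if physical_access_risk == "hostile":
        return 4
    if physical_access_risk == "uncontrolled":
        return 3
    if not operator_trusted:
        return 2
    return 1

# Record the decision together with the assumptions that justify it.
decision = {
    "selected_level": recommend_level("controlled", operator_trusted=True),
    "assumptions": [
        "module deployed in a locked, access-controlled facility",
        "crypto officer role held by vetted operators",
    ],
}
assert decision["selected_level"] == 1
```

Keeping the assumptions in the same record as the selected level gives customer reviewers and auditors the rationale without a separate document hunt.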
Section 4

Step 3: Build an approved-mode design that you can prove

Approved mode is an operational state, not a slogan. Your services map, approved algorithms, self-tests, SSP protections, and mode indicators all have to support the same approved-mode story.

This is where current CMVP program material matters. The base standard is not enough by itself. Teams need the SP 800-140 series, the current Management Manual, and the current Implementation Guidance revision the lab will use.

  • Define approved mode entry and exit conditions
  • Map each service to approved security functions or clearly mark it as non-approved
  • Make service indicators and runtime behavior match the Security Policy
  • Ensure non-approved functions cannot be mistaken for approved security services
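One way to keep service indicators, the service map, and the Security Policy from drifting apart is to derive the runtime indicator from a single source of truth. The service names and algorithm labels below are hypothetical examples, not a normative approved-algorithm list.

```python
# Hypothetical service map: each service declares whether it is approved and
# which security function it uses, so the runtime indicator and the Security
# Policy table can both be generated from one structure.
SERVICES = {
    "encrypt_data":  {"approved": True,  "algorithm": "AES-256-GCM"},
    "sign_firmware": {"approved": True,  "algorithm": "ECDSA P-384"},
    "legacy_digest": {"approved": False, "algorithm": "MD5"},
}

def service_indicator(service: str) -> str:
    """Return the mode indicator a caller would observe for this service."""
    return "approved" if SERVICES[service]["approved"] else "non-approved"

assert service_indicator("encrypt_data") == "approved"
assert service_indicator("legacy_digest") == "non-approved"
```

With this shape, a non-approved function cannot silently present itself as an approved service: the indicator falls out of the same table the Security Policy documents.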
Section 5

Step 4: Build the documentation and test-evidence pack the CSTL will use

Validation speed depends heavily on documentation quality. The Security Policy, boundary diagrams, service inventory, SSP handling description, self-test behavior, and operational-environment definition must all align with what the CSTL will test.

Use version control and stable identifiers across the evidence pack. If the service list in the Security Policy does not match the code, logs, or test scripts, the CSTL will find it quickly and the project will slow down.

  • Prepare a Security Policy that matches the tested build and tested environments
  • Version every boundary diagram, service table, and build artifact
  • Package repeatable test materials, expected outputs, and failure-state evidence
  • Record the exact CMVP guidance baseline the submission was prepared against
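Stable identifiers across the evidence pack can be enforced mechanically by pinning each artifact to a content hash. The file names below are hypothetical; the point is the pattern, not a required pack layout.

```python
# Sketch: pin evidence artifacts to content hashes so the Security Policy,
# diagrams, and test materials can be checked against the tested build.
# Artifact names and contents are placeholders.
import hashlib

def artifact_id(name: str, content: bytes) -> str:
    """Stable identifier: artifact name plus SHA-256 of its content."""
    return f"{name}@sha256:{hashlib.sha256(content).hexdigest()[:16]}"

evidence_pack = {
    artifact_id("security_policy.pdf", b"...policy bytes..."),
    artifact_id("boundary_diagram.svg", b"...diagram bytes..."),
    artifact_id("selftest_log.txt", b"...expected outputs..."),
}

# Any edit to an artifact changes its identifier, surfacing drift between
# the evidence pack and what the CSTL actually tests.
assert all("@sha256:" in a for a in evidence_pack)
```

If the service list in the Security Policy changes, its hash changes with it, so a stale diagram or test script is caught before the lab finds it.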
Section 6

Step 5: Run the validation as a managed delivery process

The practical flow is vendor scope and evidence, CSTL testing, then CMVP review and validation. Teams that treat the submission as an ongoing delivery program do better than teams that treat it as a one-time documentation drop.

The current CMVP overview also distinguishes full five-year active validations from two-year interim validations. That matters for roadmap planning, customer messaging, and release governance.

  • Enter lab work only after the boundary, service map, and Security Policy are stable
  • Track CSTL questions and evidence deltas in a single controlled backlog
  • Keep validated version identifiers tied to release management and customer claims
  • Plan for certificate type, active-list duration, and any interim-validation implications
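The single controlled backlog for lab questions can be as simple as a structured log that flags which items touch evidence. The field names here are assumptions chosen for the sketch, not a prescribed CMVP or CSTL format.

```python
# Illustrative: one controlled backlog for CSTL questions and evidence
# deltas, tied to the module version under test. Fields are assumptions.
from datetime import date

backlog = []

def log_cstl_item(question: str, owner: str, affects_evidence: bool) -> None:
    """Record a lab question with an owner and an evidence-impact flag."""
    backlog.append({
        "question": question,
        "owner": owner,
        "affects_evidence": affects_evidence,
        "opened": date.today().isoformat(),
        "status": "open",
    })

log_cstl_item("Clarify zeroization timing for session keys",
              owner="crypto-lead", affects_evidence=True)

# Evidence-impacting items are the ones that gate resubmission.
open_evidence = [i for i in backlog
                 if i["affects_evidence"] and i["status"] == "open"]
assert len(open_evidence) == 1
```

Filtering on the evidence flag gives the delivery lead a direct view of which open lab questions will change the submission, which is the managed-delivery posture this step describes.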
Section 7

Step 6: Protect the validated state after release

The post-validation risk is uncontrolled change. Platform updates, compiler changes, dependency changes, and module refactors can break the evidence story even when the feature set looks similar.

A practical maintenance model classifies changes, gates releases that affect boundary or approved mode, and keeps the evidence pack current enough that audits and customer reviews stay predictable.

  • Pin validated versions, environments, and toolchains internally
  • Run validation-impact review for boundary changes, algorithm changes, SSP changes, and major platform changes
  • Refresh evidence for self-tests, SSP handling, and lifecycle controls on a defined cadence
  • Keep marketing and procurement language aligned to the exact validated scope
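The change-classification gate above can be sketched as a lookup that fails closed: any change type the team has not classified blocks the release until someone reviews it. The categories and outcomes below are simplified placeholders; a real program classifies changes against current CMVP guidance, not this table.

```python
# Sketch of a validation-impact gate. Categories and outcomes are
# simplified illustrations, not CMVP-defined change classes.
IMPACT_RULES = {
    "boundary":           "revalidation-review",
    "algorithm":          "revalidation-review",
    "ssp_handling":       "revalidation-review",
    "platform_major":     "revalidation-review",
    "non_crypto_feature": "letter-review",
    "docs_only":          "no-impact",
}

def classify_change(change_type: str) -> str:
    """Fail closed: unknown change types block the release until reviewed."""
    return IMPACT_RULES.get(change_type, "block-until-reviewed")

assert classify_change("boundary") == "revalidation-review"
assert classify_change("ui_tweak") == "block-until-reviewed"
```

Failing closed is the design choice that matters: the risky outcome is a platform or dependency change that nobody thought to classify, and the default should catch exactly that case.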
Primary sources

References and citations

csrc.nist.gov
Referenced sections
  • CMVP FAQ: official guidance on compliant-versus-validated claims and on embedded-module treatment.