Everyone is talking about the benefits of compliance automation now. Consultants sell it. Vendors promise it. Executives demand it. Yet most compliance automation today focuses on reporting. While important, superficial reporting, or even excellent reporting of poor security outcomes, is not helpful on its own. What separates marketing from actual security is automating control validation.
This is where compliance automation crashes into engineering reality, and where most initiatives die a quiet death in the gap between two worlds.
The Complexity Challenge
Compliance professionals live in a world of control reporting. Cryptographic protection, configuration management, and vulnerability scanning controls exist as abstract requirements that are elegant in their simplicity.
Engineers live in a world of extremely diverse and complex technology stacks, each with its own quirks and failure modes. This is where the abstract meets the concrete, and where most compliance automation efforts have historically failed.
The traditional approach treats these as separate problems. Compliance teams define policies, and engineering teams implement controls. Auditors verify compliance through techniques shrouded in mystery. Everyone pretends this works until there's a breach, Congress yells, "something must be done!", and the dysfunctional cycle repeats itself.
The Paradigm Shift - Treating Compliance Outcomes as Unit and Function Tests
Software engineering actually solved this problem decades ago; it just hasn't been applied to compliance. Through unit and function tests, every piece of code is verified to do what it claims to do.
The same principle applies to compliance controls. The only way to truly know that FIPS is enabled is to test it. The only way to truly know if STIGs are applied and vulnerabilities remediated or mitigated is to scan for them and validate the results.
Specific, comprehensive unit and function testing for compliance outcomes baked into the deployment and monitoring process is the only approach that scales across the complex, heterogeneous environments that modern organizations actually run. And it's the only way to bridge the gap between compliance requirements and engineering implementation.
The Engineering Approach to Compliance
Let's take a concrete example. Consider the challenge of validating that SC-13 (cryptographic protection), CM-3 (configuration management), and RA-5 (vulnerability scanning) are implemented for containers running a web server in AWS Elastic Container Service.
The compliance requirement is straightforward: use FIPS-approved cryptography, apply security configurations, scan for vulnerabilities. The engineering reality is messier: verify FIPS mode at both kernel and OpenSSL levels, validate STIG implementation across the entire stack, scan container images while filtering out false positives and adjusting for actual risk.
Here's how you build unit tests for compliance that actually work:
Testing FIPS Implementation
FIPS compliance is a cryptographic state that must be verified at multiple layers. The most reliable test exploits a fundamental property of FIPS mode: non-approved algorithms simply don't work.
```bash
# Test: OpenSSL FIPS mode
test_openssl_fips() {
    # In FIPS mode, MD5 is disabled and should fail
    if echo "test" | openssl md5 >/dev/null 2>&1; then
        echo "FAIL: OpenSSL FIPS mode not enabled (MD5 succeeded)"
        return 1
    fi
    echo "PASS: OpenSSL FIPS mode enabled"
    return 0
}
```
This simple test case is objective, repeatable, and impossible to fake. Either FIPS mode is properly implemented, or the test fails. No interpretation required.
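The OpenSSL check covers the library layer; FIPS mode should also be verified at the kernel level. On Linux, the kernel exposes its FIPS state via `/proc/sys/crypto/fips_enabled`. A minimal sketch of such a check (the flag path is parameterized here purely so the function can be exercised against a test fixture):

```shell
# Test: kernel FIPS mode (Linux)
# Accepts an optional path override for testing; defaults to the real flag.
test_kernel_fips() {
    local flag_file="${1:-/proc/sys/crypto/fips_enabled}"
    # If the flag is absent, the kernel has no FIPS support compiled in
    if [ ! -r "$flag_file" ]; then
        echo "FAIL: kernel FIPS flag not readable ($flag_file)"
        return 1
    fi
    if [ "$(cat "$flag_file")" = "1" ]; then
        echo "PASS: kernel FIPS mode enabled"
        return 0
    fi
    echo "FAIL: kernel FIPS mode not enabled"
    return 1
}
```

Running both checks together catches the common misconfiguration where OpenSSL is built with a FIPS provider but the kernel was never booted with `fips=1`.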
For STIG and vulnerability verification, apply the same type of test as the example above, but define specific failure thresholds and fail the test when those thresholds are exceeded. Tools like OpenSCAP and Grype can run directly in the pipeline to verify status. Grype is particularly convenient because it integrates with CISA KEV, VEX, Vulnrichment, and EPSS for easy triage within the pipeline itself.
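The thresholding idea can be sketched as a small helper that compares a finding count against an allowed maximum. The function and parameter names below are illustrative, and the commented Grype invocation is an assumption that Grype and jq are available in the pipeline image:

```shell
# check_vuln_threshold: fail when findings exceed the allowed maximum
# Usage: check_vuln_threshold FOUND MAX_ALLOWED [LABEL]
check_vuln_threshold() {
    local found="$1" max_allowed="$2" label="${3:-vulnerabilities}"
    if [ "$found" -gt "$max_allowed" ]; then
        echo "FAIL: $found $label found (max allowed: $max_allowed)"
        return 1
    fi
    echo "PASS: $found $label within threshold ($max_allowed)"
    return 0
}

# Example wiring (assumes Grype and jq are installed; image name is
# hypothetical):
#   criticals=$(grype registry.example.com/web:latest -o json \
#     | jq '[.matches[] | select(.vulnerability.severity=="Critical")] | length')
#   check_vuln_threshold "$criticals" 0 "critical CVEs"
```

Keeping the threshold logic separate from the scanner invocation makes the policy (how many findings of what severity are tolerable) explicit and reviewable, rather than buried in scanner flags.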
Putting It All Together: CI/CD Pipeline Integration
The individual tests mean nothing in isolation. The power comes from integrating them into your development workflows, where they run automatically on every build and deployment.
This pipeline runs the same way on every commit, whether you're using GitHub Actions, Jenkins, GitLab CI, or any other automation platform. The specific technology doesn't matter; only the principle does. Compliant outcomes become part of the development process.
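One platform-agnostic way to express that principle is a plain shell runner that any CI system can invoke as a build step. This is a minimal sketch; the test names, the `compliance_tests.sh` file, and the image name in the comments are hypothetical placeholders for the checks described above:

```shell
#!/usr/bin/env bash
# run_compliance_tests.sh - CI-agnostic compliance gate (sketch)
set -u

failures=0

# run_test NAME CMD [ARGS...] - run one compliance test, record failures
run_test() {
    local name="$1"; shift
    if "$@" >/dev/null 2>&1; then
        echo "PASS $name"
    else
        echo "FAIL $name"
        failures=$((failures + 1))
    fi
}

# compliance_gate - succeed only if every registered test passed
compliance_gate() {
    if [ "$failures" -gt 0 ]; then
        echo "Compliance gate failed: $failures test(s) failing"
        return 1
    fi
    echo "Compliance gate passed"
    return 0
}

# Example wiring (test functions sourced from earlier sections):
#   source ./compliance_tests.sh
#   run_test "SC-13 OpenSSL FIPS" test_openssl_fips
#   run_test "CM-3 STIG baseline" test_stig_baseline
#   run_test "RA-5 vuln scan"     test_vuln_scan registry.example.com/web:latest
#   compliance_gate || exit 1
```

Because the gate is just an exit code, GitHub Actions, Jenkins, and GitLab CI all treat a failing compliance test exactly like a failing unit test: the build goes red.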
Beyond the Theater
This approach scales across technology stacks precisely because it's based on verifiable technical properties rather than trust or documentation. Whether you're running Windows containers with IIS, Linux VMs with NGINX, or Kubernetes clusters with microservices, the same principle applies: test the actual implementation, not the intended implementation.
The compliance evidence generated by these tests is structured data that proves specific controls are implemented correctly. This data can be fed into policy engines for automated decision-making, and even directly into real-time customer facing dashboards, but that's another story entirely.
Conclusion
The hardest part of compliance automation is verifying that your systems actually implement the controls you claim they do. Treating compliance outcomes as standard unit and function tests empowers engineering teams to bake compliance into the build itself. Reporting then becomes as simple as letting your data tell your customers how trustworthy you are.