DevSec Hackathon: Creating Automated Security Tests for a Resilient Infrastructure

Bridging Development & Security for a Common Goal

When you bring a development team and a security architecture team together to solve a shared challenge, incredible things can happen. That’s exactly what we experienced in our recent DevSec Hackathon—a collaborative effort to ensure our reference environment, which we automatically reinstall rather than patch, remains stable and secure after every update.

Our mission was clear: develop automated security tests to confirm that our infrastructure functions seamlessly after each fresh reinstallation. Since reinstalling from the latest codebase eliminates drift but introduces unknowns, robust testing is essential to prevent breakages in critical areas like web services, network access, and security controls.

Why a Hackathon?

Instead of relying on incremental fixes, our approach requires frequent full reinstallations of our reference environment to minimise vulnerabilities. However, reinstallation risks breaking dependencies, misconfiguring settings, or introducing unforeseen failures.

The solution? Automated testing. By running validation tests after every rebuild, we ensure that security controls, network configurations, and critical services operate as expected, without relying on manual checks.

Building the Testing Framework

Key Roles & Contributions:

  • Tom R: Took the lead in deciding what tests needed to be created. His tasks included setting testing objectives, such as “Is the web server accessible over HTTPS?” and mapping out potential issues that might arise post-reinstallation.
  • Syed: Turned Tom’s test ideas into scripts. Working with Bash-based tools rather than the team’s usual Python environment, he built an automated testing pipeline that could run quickly and reliably.
  • Chris: Created a script to run all of the test scripts in parallel, and log the results. He also served as our quality assurance and validation specialist. He reviewed both the scripts and the reference environment itself for any errors, mismatches, or oversights that could lead to false positives (or negatives!) in our testing.
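To give a flavour of these checks, here is a minimal sketch of the kind of HTTPS-availability test Tom specified. The hostname, the 5-second timeout, and the script layout are illustrative assumptions, not the hackathon's actual code:

```shell
#!/usr/bin/env bash
# Sketch of a post-reinstall check: "Is the web server accessible over HTTPS?"
# The host passed on the command line is a placeholder for a real endpoint.
set -u

check_https() {
  local host="$1" status
  # -s silent, -o discard body, -w print only the status code, --max-time bounds the wait
  status=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 "https://$host" 2>/dev/null) || return 1
  # Treat any 2xx/3xx response as "accessible"
  [ "$status" -ge 200 ] && [ "$status" -lt 400 ]
}

# Run only when a host is supplied, e.g.: ./check_https.sh web.example.internal
if [ $# -gt 0 ]; then
  if check_https "$1"; then
    echo "PASS: $1 answers over HTTPS"
  else
    echo "FAIL: $1 not reachable over HTTPS" >&2
    exit 1
  fi
fi
```

A script like this exits non-zero on failure, which is what lets a runner or pipeline step detect a broken rebuild automatically.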

From Brainstorming to Bash Scripting

We kicked off with whiteboard sessions, mapping dependencies, identifying potential failure points, and structuring our test framework.

Transitioning from whiteboards to keyboards, the team adopted a Bash-first approach rather than Python, prioritising efficiency and speed. While outside our usual workflow, the shift encouraged a command-oriented mindset for infrastructure validation, and it gave the team a chance to learn a new skill set and explore a different scripting approach.
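The runner Chris built executed the test scripts in parallel and logged the results. A minimal Bash sketch of that pattern might look like the following; the directory layout, log format, and function name are assumptions for illustration:

```shell
#!/usr/bin/env bash
# Sketch of a parallel test runner: run every script in $TEST_DIR concurrently,
# record one PASS/FAIL line per script in $LOG_FILE, and fail if any test failed.
set -u

TEST_DIR="${TEST_DIR:-tests}"
LOG_FILE="${LOG_FILE:-results.log}"

run_all() {
  : > "$LOG_FILE"                       # truncate any previous results
  local script pid pids=()
  for script in "$TEST_DIR"/*.sh; do
    [ -e "$script" ] || continue        # no test scripts found
    (
      if bash "$script" >/dev/null 2>&1; then
        echo "PASS $script"
      else
        echo "FAIL $script"
      fi
    ) >> "$LOG_FILE" &                  # each test runs in its own background subshell
    pids+=("$!")
  done
  for pid in ${pids[@]+"${pids[@]}"}; do
    wait "$pid"                         # wait for every test to finish
  done
  ! grep -q '^FAIL' "$LOG_FILE"         # non-zero return if any test failed
}

# usage: run_all
```

Keeping each test in its own subshell means one misbehaving script cannot take the whole run down with it, and the log gives a single place to inspect after each rebuild.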

The Results: 30 Automated Security Tests and Counting

By the end of the hackathon, the team had built over 30 automated security tests based on 8 test templates—everything from checking HTTPS availability on the web server to validating that key security controls load correctly. Our long-term goal is to scale to 1,000 tests, covering every nook and cranny of our infrastructure.

While 30 tests may seem like a modest start, each test is crucial for ensuring that when we commit a change to the reference environment, we know that all core services and virtualisation components remain intact. This foundation paves the way for the comprehensive library of tests we plan to develop over time.

Why All This Matters

A typical approach to infrastructure updates involves patching existing systems. In our workflow, however, we choose to reinstall the reference environment from the latest codebase. While reinstallation reduces the risk of “drifting” configurations and lingering vulnerabilities, it also introduces fresh unknowns. If something is incorrectly configured in an environment that’s constantly being rebuilt, it may go unnoticed without thorough, automated security checks.

By integrating these tests into our pipeline, we ensure that every time we spin up the environment from scratch:

  1. Critical endpoints are accessible.
  2. Security controls and monitoring systems are working.
  3. Configuration errors don’t silently slip through the cracks.
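As a hedged sketch of how such a post-rebuild gate could be wired into a pipeline step (the runner command and log path here are placeholders, not our actual pipeline configuration):

```shell
#!/usr/bin/env bash
# Illustrative pipeline gate: run the test suite after a rebuild and block
# promotion of the environment if anything fails.
set -u

post_rebuild_gate() {
  local runner="$1" log="${2:-post-rebuild.log}" rc=0
  "$runner" > "$log" 2>&1 || rc=$?      # capture the suite's exit status
  if [ "$rc" -ne 0 ]; then
    echo "Rebuild validation FAILED (exit $rc); see $log" >&2
  else
    echo "All post-rebuild checks passed"
  fi
  return "$rc"
}

# Example (hypothetical runner script): post_rebuild_gate ./run-tests.sh
```

Because the gate propagates the suite's exit status, a CI system treats a failed validation run exactly like a failed build, so a broken reinstall never silently reaches production.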


Lessons Learned (and Next Steps)

  • Collaboration is Key: Bringing together different skill sets—development, security architecture, and QA—accelerated our progress and sparked new ideas.
  • Bash before Python: While it wasn’t our usual choice, Bash scripting let us quickly prototype and understand the nature of the tests, laying the groundwork for automating the same infrastructure checks in Python later. It also encouraged the team to think about testing from a simpler, command-oriented perspective.
  • Keep Iterating: With 30 tests now in place, we’re committed to iterating toward 1,000 tests. This won’t happen overnight, but each new test brings us one step closer to bulletproof confidence in our reference environment.


The DevSec hackathon was intense but incredibly productive—a testament to what happens when you let teams cross-pollinate ideas and focus on a common goal. We’re excited to keep building on this foundation, ensuring that future reinstalls of our reference environment pass every test we throw at them.

Final Thoughts: Securing the Future

Our DevSec Hackathon was more than just a coding sprint—it was a blueprint for resilience. By aligning development and security, we’ve built an automated safety net that prevents issues before they impact production.

As we scale our testing framework, we’re setting new standards for automated security validation—one test at a time. Here’s to the first 30 tests—and the 970 more to come!


