Atomic Red Team — Validating Controls
We have countless tools and systems that are supposed to detect bad actors, but do we know whether they actually work? If a threat actor attacked us today, would we find them before they caused harm?
The Atomic Red Team project helps solve this problem. It provides small, specific tests we can run to see how our systems detect and respond to individual attack techniques, telling us whether our defensive tools work as intended. Paired with published intelligence, it gives us a strong threat-modeling environment for validating that our tools are effective.
Atomics
Atomics are individual tests, each mapped to a specific MITRE ATT&CK technique ID, that exercise the detection and prevention controls we have implemented. Each atomic includes a description of the technique and one or more specific tests showing how to carry it out.
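Each atomic is defined in a YAML file in the project repository. An abridged sketch of what such a definition looks like (the field names follow the project's schema; the values here are illustrative, not a verbatim copy of the published test):

```yaml
attack_technique: T1003.001
display_name: "OS Credential Dumping: LSASS Memory"
atomic_tests:
  - name: Dump LSASS.exe Memory using ProcDump
    description: |
      Dumps LSASS process memory to a file using Sysinternals ProcDump.
    supported_platforms:
      - windows
    executor:
      name: command_prompt
      elevation_required: true
      command: |
        procdump.exe -accepteula -ma lsass.exe lsass_dump.dmp
```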
For example, the OS Credential Dumping: LSASS Memory (T1003.001) entry in the Atomic Red Team discusses why LSASS memory is a target for attackers, then presents a test that dumps the memory of the process with the procdump.exe executable.
The procdump.exe executable by itself is not malicious; it is a common debugging tool used to troubleshoot problems. However, once we gather a copy of the lsass.exe process's memory, we can carve secrets out of the dump. This action should be rare in the environment, and if it does happen, we want to be notified immediately.
The test itself is pretty straightforward: we download the procdump.exe tool first, then run the test using the information from the atomic.
procdump.exe -accepteula -ma lsass.exe lsass_dump.dmp
When running the test, we notice a lot of red text. Reading closer, we can see a ResourceUnavailable error.
Reviewing the Windows Defender logs shows that the attack was properly detected and prevented. Great news: the mitigations we have in place work!
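Confirming this can be scripted. Windows Defender writes detections to its Operational event log, where event ID 1116 indicates a detection and 1117 indicates a remediation action. A minimal sketch, assuming the events have already been exported to a list of dicts (the export format here is hypothetical, e.g. Get-WinEvent piped through ConvertTo-Json):

```python
# Defender Operational log: event ID 1116 = malware detected,
# event ID 1117 = remediation action taken.
DETECTION_IDS = {1116, 1117}

def defender_detections(events):
    """Filter an exported event list down to Defender detection events.

    `events` is assumed to be a list of dicts with an "EventID" key
    (a hypothetical export format; adapt to your actual pipeline).
    """
    return [e for e in events if e["EventID"] in DETECTION_IDS]

# Illustrative exported events:
exported = [
    {"EventID": 1116, "Message": "Detected suspicious LSASS access"},
    {"EventID": 1001, "Message": "Scan finished"},
]
hits = defender_detections(exported)
print(len(hits))  # 1
```

If the test ran but this search comes back empty, that itself is a finding: the control did not fire.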
Building an Attack Flow
When we leverage CTI for threat modeling, we can build a flow of the attacks a known threat actor uses. These flows chain multiple ATT&CK technique IDs to gain and expand access throughout an intrusion, and we can focus on those techniques to make sure we have detections in place.
Techniques for Volt Typhoon were recently posted in a Joint Cybersecurity Advisory outlining various ways this threat actor gains access and moves laterally within an environment.
In this example, we can see different technique IDs for WMI, PowerShell, and CMD. When we look the IDs up in the Atomic Red Team, we find published tests for each of them. We can run these tests and reference how the techniques were used by the threat actor within the JCA. For example, Volt Typhoon is known to use WMIC to gather information about systems; is that legitimate activity for our end users?
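Cross-referencing an advisory's technique IDs against the atomics we have staged locally can be scripted. A minimal sketch with hypothetical data (the ID list and coverage set below are illustrative placeholders, not the advisory's full contents):

```python
def coverage_gaps(advisory_ids, atomics_available):
    """Return advisory technique IDs with no local atomic test staged."""
    return sorted(set(advisory_ids) - set(atomics_available))

# Hypothetical subset of technique IDs pulled from the advisory:
# T1047 (WMI), T1059.001 (PowerShell), T1059.003 (Windows Command Shell),
# T1003.001 (LSASS Memory dumping).
advisory_ids = ["T1047", "T1059.001", "T1059.003", "T1003.001"]

# Hypothetical set of technique IDs we already have atomics staged for:
atomics_available = {"T1047", "T1059.001", "T1003.001"}

print(coverage_gaps(advisory_ids, atomics_available))  # ['T1059.003']
```

Each ID left over is a technique the threat actor is known to use that we have not yet tested against our controls.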
When we run this test, we can see that it completes without being blocked by Windows Defender. While the information gathered poses no direct risk, is this an action our end users actually take? Should legitimate users be doing this, or could it indicate that an account is compromised?
Now we can check our other detection tools to determine how they respond.
- Can we find this activity in logs? Where are those stored?
- Do any tools alert on this activity? Should they?
- Can we change settings to better detect this activity as it occurs?
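Answering the first question can start with a simple search over collected process-creation logs. A sketch, assuming the logs have been exported as one JSON object per line with a "CommandLine" field (a hypothetical format; adapt it to whatever your SIEM or log pipeline actually stores):

```python
import json

# Substrings that suggest WMIC-based reconnaissance; tune for your environment.
SUSPECT_SUBSTRINGS = ("wmic", "process call create")

def find_wmic_activity(log_lines):
    """Yield parsed log records whose command line mentions WMIC."""
    for line in log_lines:
        record = json.loads(line)
        cmd = record.get("CommandLine", "").lower()
        if any(s in cmd for s in SUSPECT_SUBSTRINGS):
            yield record

# Illustrative exported log lines:
sample = [
    '{"User": "alice", "CommandLine": "wmic os get Caption,Version"}',
    '{"User": "bob", "CommandLine": "notepad.exe report.txt"}',
]
hits = list(find_wmic_activity(sample))
print([h["User"] for h in hits])  # ['alice']
```

Even a crude search like this tells us whether the activity is visible in our logs at all, which is the prerequisite for building an alert on it.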
Conclusion
Threat modeling with published intelligence is one of the most effective ways to prove our systems can properly detect and respond to known threats. By leveraging published TTPs and the Atomic Red Team project, we can identify the gaps between the techniques threat actors are known to use and what our tools can actually detect and prevent.