One skill the Zscaler Deception team has honed is analyzing adversary tactics, techniques, and toolsets and hypothesizing how deception could disrupt the adversary playbook.
What we want to talk about today is a concept we internally call “Deception Engineering”.
It’s a play on Detection Engineering: in addition to writing rules to detect a specific technique or threat, security teams can also focus on how to deceive the adversary executing that technique or threat. We see it as an important component of Detection Engineering.
Deception Engineering is the process of building, testing, and implementing deception-based defenses within the enterprise to disrupt adversary operations and playbooks.
Deception engineering broadens the scope of detection engineering to FORCE adversary behavior in addition to detecting threat behavior.
The fundamental idea is for defenders to leverage their knowledge and control of the environment they are defending to guide adversary choices within it.
One of the main advantages of the deception approach is its simplicity. Deception use cases are easy to reason about because the underlying concept of “decoys” or “fakes” is itself extremely simple: decoy credentials, decoy files, decoy servers, and decoy user accounts are all examples.
If defenders can control the motivation of the adversary and the choices they make in an environment, the resulting telemetry is high-fidelity: there are very few legitimate reasons for anyone to touch a deceptive asset.
The deception engineering process is quite simple to adopt. Broadly, it looks like this; we’ve described each step in detail below.
1. Tool / Technique / Tactic / Use Case: Describe the tool, technique, or tactic you wish to deception engineer. For broader use cases, like “Ransomware Detection”, you may need to iterate over this process multiple times.
2. List all attack tools that can execute the technique or tactic.
3. Describe the steps of the attack as if executed manually.
4. Hypothesize deception opportunities against the attack steps.
5. Identify the parameters that must be configured so your deception opportunities mimic real implementations.
6. OpSec Safety Considerations: Call out any opsec safety considerations, as not all deception opportunities are opsec safe.
7. Choose techniques viable in production, with due consideration to operational challenges.
8. Determine what telemetry you can gather if an adversary acts on the deception.
9. Decide whether this is a first- or second-order detection.
10. Define test cases and measure their efficacy.
First-order detection: the adversary interacts with a deceptive component that directly generates alert telemetry.
Second-order detection: the adversary acts on deceptive information that leads them to a deceptive component, which then generates telemetry.
Our approach is to create a specification that addresses each of these steps before we take the step of rolling out defenses to production.
Let’s apply the deception engineering process with an example.
For brevity, we have excluded a few steps in the process, but each step is an important consideration and must be documented when planning your deception approach.
Review the MITRE ATT&CK technique details for Group Policy Preferences saved passwords (T1552.006, Unsecured Credentials: Group Policy Preferences).
The GPP saved passwords issue has been around for close to a decade. The feature allowed Active Directory administrators to manage local administrator passwords via Group Policy.
However, the policy configuration XML file that maintained the details of the policy stored the local administrator password in the file in encrypted form, and the AES decryption key was made publicly available. Any domain user, regardless of privilege, could simply search for the file, decrypt the password, and gain administrator access to all the machines the policy applied to.
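To make the exposure concrete, here is a minimal sketch of what an attacker pulls out of a vulnerable preference file. The XML fragment and account name are hypothetical, and the AES decryption step itself is deliberately omitted so the sketch stays dependency-free; only the file format and the cpassword extraction are illustrated.

```python
import base64
import xml.etree.ElementTree as ET

# Representative (hypothetical) Groups.xml fragment; real files live under
# the domain's SYSVOL share inside the relevant policy GUID folder.
GROUPS_XML = b"""<Groups>
  <User name="LocalAdmin">
    <Properties action="U" userName="LocalAdmin"
                cpassword="ek0jLcZ0v2dS0huAnnoFjg"/>
  </User>
</Groups>"""

def extract_cpasswords(xml_bytes):
    """Return (userName, cpassword) pairs found in a GPP preference file."""
    root = ET.fromstring(xml_bytes)
    return [(p.get("userName"), p.get("cpassword"))
            for p in root.iter("Properties") if p.get("cpassword")]

creds = extract_cpasswords(GROUPS_XML)
for user, cpass in creds:
    # GPP strips base64 '=' padding; restore it before decoding.
    ciphertext = base64.b64decode(cpass + "=" * (-len(cpass) % 4))
    # The real attack finishes by decrypting `ciphertext` with AES-256-CBC
    # using the fixed key Microsoft published; that step is omitted here.
    print(user, "->", len(ciphertext), "ciphertext bytes")
```

Any low-privileged domain user can read SYSVOL, which is why this extraction requires no special access at all.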
We now know for a fact that the Conti ransomware group had GPP password extraction in its technique arsenal, as revealed in its leaked playbook. Here's a screenshot from the playbook.
Many organizations have already addressed this problem. But through a deception engineering lens, this is a missed detection opportunity.
If a Conti ransomware affiliate were to target an organization, can we deceive them when they execute this tactic? Can we fool them into believing they have access to legitimate credentials via GPP? Can we divert their attention as they execute their playbook and capture telemetry about their presence? And what telemetry can we give the defender so that they are able to act quickly to stop the threat?
Let’s find out.
We want to deception engineer a specific technique, in this case GPP, so we have built a minimal representation of the specific steps we need to address to implement our solution.
The next step in this process is to identify tools used to exploit this technique. The idea is to study and document how these tools work under the hood so we can abstract the attack flow to build an attack story. This is especially important if we want to build tool-agnostic deception use cases.
Here we have included two tools, get-gpppassword.ps1 and get-decrypt.py.
Once we have understood how these tools work, we should be able to extract the attack story. Take a look at how we have abstracted the attack.
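The abstracted, tool-agnostic attack story can be sketched in code: every GPP tool ultimately walks SYSVOL, finds preference files, and pulls cpassword values. The file names below reflect the GPP preference types known to carry cpassword attributes; the SYSVOL path is whatever share path the attacker has mounted, and the decryption step is noted but not implemented.

```python
import os
import xml.etree.ElementTree as ET

# GPP preference files that may carry a cpassword attribute.
GPP_FILES = {"Groups.xml", "Services.xml", "ScheduledTasks.xml",
             "DataSources.xml", "Drives.xml", "Printers.xml"}

def find_gpp_cpasswords(sysvol_root):
    """Steps 1-2 of the story: walk a mounted SYSVOL share and
    collect (path, userName, cpassword) for every hit."""
    hits = []
    for dirpath, _dirs, files in os.walk(sysvol_root):
        for name in files:
            if name not in GPP_FILES:
                continue
            path = os.path.join(dirpath, name)
            try:
                tree = ET.parse(path)
            except ET.ParseError:
                continue  # skip malformed files, as real tools do
            for props in tree.iter("Properties"):
                if props.get("cpassword"):
                    hits.append((path, props.get("userName"),
                                 props.get("cpassword")))
    # Step 3 (not shown): decrypt each cpassword with the published AES
    # key, then try the recovered credentials against reachable hosts.
    return hits
```

Abstracting the flow this way means any deception we design against these steps will catch every tool that implements them, not just the two we studied.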
Now that you have the story in place, map each step of it to possible deception opportunities. This step requires some creative thinking.
Now, what telemetry can you actually get if you were to implement these deception opportunities?
Because of the way Group Policy works, it can be very difficult to reliably distinguish malicious access to the decoy Groups.XML from routine activity. But the setup is still valuable, because we can use it as a lure.
Think of lures as pointers that tell the adversary: “Hey, here are some interesting credentials, and here’s where you can use them. Why don’t you go try a login?”
This results in what we call a “second-order detection”. Our deceptive setup doesn’t generate telemetry unless the adversary acts on the information we have planted in AD.
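A minimal sketch of building such a lure follows. The decoy account name is hypothetical, and the cpassword value here is a placeholder: in a real deployment it must be a blob encrypted with the published GPP AES key, so that attacker tooling decrypts it cleanly to the password actually configured on your decoy systems.

```python
import xml.etree.ElementTree as ET

def build_decoy_groups_xml(decoy_user, cpassword_blob):
    """Assemble a decoy Groups.xml lure to plant in SYSVOL.
    `cpassword_blob` must be a properly GPP-encrypted value so the
    lure survives decryption by real attack tools."""
    groups = ET.Element("Groups")
    user = ET.SubElement(groups, "User",
                         name=decoy_user,
                         changed="2015-05-08 09:30:14")  # plausibly old timestamp
    ET.SubElement(user, "Properties",
                  action="U",
                  userName=decoy_user,
                  cpassword=cpassword_blob)
    return ET.tostring(groups, encoding="unicode")

# Hypothetical decoy account; the placeholder must be replaced with a
# real encrypted blob before deployment.
xml_lure = build_decoy_groups_xml("backupadmin", "PLACEHOLDER_CPASSWORD")
print(xml_lure)
```

Small realism details matter here: an old `changed` timestamp and a believable account name are the kinds of parameters the specification step asks you to identify, since a freshly created, oddly named lure can tip off a careful adversary.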
So the idea is: if you see a login attempt against the decoy systems you have set up, using the credentials you configured in the Groups.XML file, there’s only one way that could have happened.
Someone is exploiting GPP.
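The detection side of this can be sketched very simply. The event records below are simplified dicts and the decoy account and host names are hypothetical; a real pipeline would consume Windows Security logon events (e.g. 4624/4625) from a SIEM, but the logic is the same: any authentication touching a planted asset is, by construction, an alert.

```python
# Accounts and hosts planted via the decoy Groups.xml (hypothetical names).
DECOY_ACCOUNTS = {"backupadmin"}
DECOY_HOSTS = {"filesrv-decoy-01"}

def alert_on_decoy_logins(events):
    """Return high-fidelity alerts for logons touching decoy assets."""
    alerts = []
    for ev in events:
        if (ev.get("account", "").lower() in DECOY_ACCOUNTS
                or ev.get("target_host", "").lower() in DECOY_HOSTS):
            alerts.append({
                "severity": "critical",
                "reason": "logon using deceptive GPP credential or decoy host",
                "source_ip": ev.get("source_ip"),
                "account": ev.get("account"),
            })
    return alerts

sample_events = [
    {"account": "jdoe", "target_host": "filesrv-01", "source_ip": "10.0.0.5"},
    {"account": "backupadmin", "target_host": "filesrv-decoy-01",
     "source_ip": "10.0.0.99"},
]
alerts = alert_on_decoy_logins(sample_events)
```

Because the decoy account exists nowhere except in the lure, this second-order detection carries essentially no false-positive load: the only way to learn the credential is to extract and decrypt the planted Groups.XML.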
The last step of this process is to test your deception setup. Use the tools you documented in the previous steps.
Deception Engineering empowers defenders to control adversary choices in their network.
Where existing approaches can be overwhelming, complex, and prone to false positives, it gives defenders an alternative that is easier to understand, lower on false positives, and genuinely fun to set up and implement.
Do you want to brainstorm a deception engineering use case?
Write to me!