Red Teaming is the process of using tactics, techniques and procedures (TTPs) to emulate a real-world threat, with the goal of measuring the effectiveness of the people, processes and technologies used to defend an environment.
— Joe Vest & James Tubberville
Red teams challenge the assumptions underlying the security of a system (e.g. that patching alone, installing an IPS, or continuous monitoring is sufficient) by emulating the TTPs of an APT against said system.
Red teaming differs from penetration testing in that it measures the overall response of the entire system's security posture: not only the vulnerabilities or misconfigurations present, but also how the personnel and security systems respond to and handle an intrusion. In a penetration test, only a limited subset of the system (a single technology stack) is tested for vulnerabilities, and the responsiveness of the security team is not measured.
In addition, unlike penetration tests, whose purpose is often simply to obtain the highest possible privileges and identify (hopefully) all vulnerabilities, a red team engagement has a specific objective, such as obtaining access to a particular financial asset. Obtaining the highest privileges in a domain is not the top priority, because the goal is not to “get root/domain admin,” but to compromise the system the way a real adversary would. To achieve the objective, red teamers remain stealthy and avoid unnecessary actions (“If Bob from Accounting can access the objective, then that’s all they’ll do”). As such, red teaming assesses a system more holistically and realistically than a penetration test does.
- joaoviictorti/RustRedOps: collection of red teaming techniques and malware written in Rust
Engagement Planning
Scoping
Unlike penetration tests, whose scope specifies what to include, the scope of a red team engagement is more concerned with what to exclude, since red teaming is more comprehensive.
Threat Model
Red teamers must build a profile of the potential threat against the assets. They act as if they were the actual adversary, emulating its intent (e.g. attaining a specific objective), TTPs, habits, etc. This is easier if the organization knows who is targeting them, though smaller companies tend not to have an accurate idea.
Breach Model
A model of how the red team will breach and gain access to the target system. The red team either achieves this by itself (through reconnaissance and an initial compromise) or with the assistance of the organization (an assume-breach / ceded-access scenario). Typically, if the red team starts by trying to gain access on its own and has not succeeded after 25% of the engagement time has elapsed, it should switch to an assume-breach scenario, since the engagement is about testing the security posture of the system rather than looking for vulnerabilities.
Rules of Engagement
The rules of engagement (RoE) are a set of rules restricting the engagement stipulated by the organization and often contain the following:
- Engagement objective(s)
- Time span
- Target(s) of the engagement (e.g. IP ranges, domains, etc)
- Legal & regulatory requirements
- Restrictions
- Emergency contacts
For physical red teaming scenarios, red team operators should obtain a “get-out-of-jail letter” signed by the client, so that if the intrusion is reported, the red teamers are not apprehended or prosecuted by the authorities.
Record Keeping
Keep a record of activity (e.g. terminal logs, 1 FPS screen recordings) so that, if a real intrusion occurs during the engagement, the DFIR team can distinguish between the red team and any actual intruders.
C2 frameworks also record activities and offer default reporting templates.
For the best record, note the timestamps at which tools are used throughout the engagement and take screen recordings.
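As a minimal sketch of timestamped terminal logging (the log file name and line format here are arbitrary choices, not a standard), each command can be wrapped in a small shell function that records a UTC timestamp before running it:

```shell
#!/bin/sh
# Sketch: prepend a UTC timestamp to every command run during the engagement,
# so the DFIR team can correlate red-team actions with their own telemetry.
# LOGFILE and the log format are hypothetical choices.
LOGFILE="${LOGFILE:-oplog.txt}"

log_run() {
    # Record timestamp + command line, then execute the command unchanged.
    printf '%s | %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$*" >> "$LOGFILE"
    "$@"
}

log_run uname -a
```

Alternatives include `script`(1) for full terminal capture (its timing flags differ between util-linux and BSD implementations), and most C2 frameworks log operator commands automatically.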
Data Handling
Don’t do stupid shit and don’t view sensitive data (e.g. personal data, medical records, etc). Do not actively compromise the CIA triad of the client’s system unless specified in the RoE.
Costs
Make sure to account for costs of the engagement beforehand:
- travel
- cost of commercial tools (licenses)
- infrastructure costs (e.g. C2 servers, domain names)
- time spent planning, in meetings, researching, setting up infrastructure, etc.
Post-engagement & Reporting
Try not to report a red team engagement like a penetration test (where the main content consists of vulnerabilities and remediations). Instead, incorporate the following:
- Attack Narrative: chronological observations of the system’s vulnerabilities & security posture (e.g. after exploitation) during the engagement
- Recommendations: Self-explanatory. Make sure to discuss with the blue team first to figure out what actually happened on their side before giving any recommendations. For instance, the blue team may not have noticed the logs, or may have noticed them but failed to act in time; the recommendations would certainly differ between the two cases.
- Indicators of Compromise (IoC): While they may not fit in the main observations of the report, IoCs can be included as an appendix as proof of the red team's activity. IoCs may include domain names, IP addresses, specific information regarding the assets, hashes, filenames, etc.
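For file-based IoCs, hashes of dropped tooling can be recorded as it is deployed; a minimal sketch, assuming a GNU/busybox `sha256sum` and with all file names being hypothetical examples:

```shell
#!/bin/sh
# Sketch: record the name and SHA-256 hash of every file the red team drops
# on target, for inclusion in the IoC appendix. Paths are hypothetical.
IOC_FILE="ioc_hashes.txt"

printf 'demo payload\n' > dropped_tool.bin   # stand-in for a dropped implant
sha256sum dropped_tool.bin >> "$IOC_FILE"    # appends "<hash>  dropped_tool.bin"
```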