Welcome to the final installment of our blog series for IT professionals transitioning into security! In Part 10, we wrap up John’s Red Team and Blue Team training journey with Microsoft Defender XDR and Microsoft Sentinel by introducing the concept of Purple Teaming. This is the next evolution in John’s learning path – a collaborative approach that blends Red and Blue team efforts. In this post, we’ll explain what Purple Teaming is in simple terms, how it bridges offensive and defensive work, and why it’s valuable for building a mature security practice. We’ll also explore practical examples of Purple Teaming using Microsoft Defender XDR and Sentinel on John’s test machine test001, including attack simulations, detection validation, tuning analytics rules, and fostering teamwork between Red and Blue.

By the end of this post, you’ll understand how Purple Teaming improves your organization’s detection quality and response capability. More importantly, you’ll see how practicing Purple Teaming can help an IT pro like you grow into a security leader. Let’s dive in with a friendly, accessible look at Purple Teaming and how John can continue his journey beyond Red and Blue!

What is Purple Teaming?

Purple Teaming might sound like a new team, but it’s really about collaboration. In simple terms, Purple Teaming means having the Red Team (attackers) and Blue Team (defenders) work together as one unit toward a common goal. Instead of operating in separate silos or playing a competitive “red vs. blue” game, a Purple Team approach joins forces to improve the organization’s security. As one expert puts it, purple teaming is “an amalgamation of the blue and red teams into a single team to provide value to the business” – the traditionally separate offensive and defensive groups collaborating on one unified goal.

Think of Purple Teaming as a bridge between Red and Blue. In earlier parts of this series, John learned how to think like an attacker (Red Team exercises) and detect and respond to attacks (Blue Team operations). Now, Purple Teaming allows John to blend these skills. During a Purple Team exercise, Red and Blue team members share information openly, align their objectives, and even sit side-by-side (literally or figuratively) to test and refine the organization’s defenses. This cooperative strategy forges a connection between Red and Blue team members that strengthens the security posture of the entire organization. Importantly, Purple Teaming is not necessarily a separate team you need to hire for.

In many cases, it’s a mindset and a process where existing Red and Blue resources collaborate. In a small company, it might be the same people switching hats from Red to Blue. In larger organizations, there might be dedicated Red and Blue teams, with a “Purple Team” function ensuring they work in concert. The key is that everyone works as a coordinated team rather than as adversaries. This breaks the old notion that the success of one side (Red’s successful breach or Blue’s successful detection) means the failure of the other. In Purple Teaming, success is measured by how much the organization’s defenses improve – it’s a win-win for security rather than a competition.

To summarize in simple terms, Purple Teaming is the Red Team and Blue Team working together, continuously learning from each other to make the organization more secure. Now that we know what it is, let’s consider why this approach is valuable.

Why Purple Teaming is Valuable

1. Unified Goals and Better Alignment: The primary value of Purple Teaming is aligning the goals of attackers and defenders. Instead of the Red Team trying to “win” by finding holes and the Blue Team trying to “win” by catching intrusions, both teams share the same goal: finding and fixing weaknesses. This unified approach means Red Team activities directly inform Blue Team improvements. As a result, there’s no finger-pointing or secrecy – it’s all about making the organization safer. The Red Team can still simulate attacks, but now they do it in a way that helps the Blue Team learn and adapt in real time. Blue Teamers, in turn, are eager to see the Red Team succeed in finding gaps because that gives them opportunities to strengthen defenses. This shift in mindset creates a positive feedback loop of continuous improvement.

2. Continuous Improvement of Detection and Response: Purple Teaming enables a continuous cycle of testing and tuning your security controls. Each simulated attack (Red) is like a live test of your detection and response (Blue). Did our systems catch it? If yes, great – can we respond faster or automate the response? If no, why not – and how can we detect it next time? The organization steadily improves detection quality and response capabilities by repeatedly going through this cycle. Over time, your detection rules in Microsoft Sentinel get refined and cover more attack techniques, and your Microsoft Defender XDR alerts become more accurate. It’s like sharpening a sword: every exercise hones the blade. In a mature security practice, this continuous honing is crucial. Threats evolve constantly, so having a Purple Team mindset ensures your defenses evolve in tandem by regularly validating them against real-world tactics.

3. Breaking Down Silos and Fostering Collaboration: In many IT and security teams, people operate in silos – the network team, the sysadmins, the security analysts all doing their own thing. Purple Teaming encourages breaking down those silos, at least between offense and defense. Red Team folks often have an “attacker mindset” and deep knowledge of how systems can be broken; Blue Team folks have lots of experience with monitoring and incident response. When these perspectives combine, each side better appreciates the other’s challenges. This collaboration improves communication, skills, and teamwork. For example, a Red Teamer might learn more about what telemetry is available (or missing) in Microsoft Defender, which could influence how they design their next test. A Blue Teamer might learn about a new hacking tool or technique from the Red side, helping them recognize it in the future. Over time, this cross-pollination builds a stronger, more versatile team. In John’s case, by practicing Purple Teaming, he’s effectively training himself to think like an attacker and a defender – a well-rounded security professional!

4. Efficient Use of Resources: Not every organization has the luxury of large dedicated Red and Blue teams. Often, especially for those transitioning from IT, you might be a “team of one” or part of a small security team wearing multiple hats. Purple Teaming is a cost-effective approach because it leverages the people you have without needing an adversarial setup. Some companies even combine roles into a formal Purple Team that coordinates attack and defense. This can reduce the need to bring in external penetration testers frequently or make those engagements more effective by ensuring internal teams are prepared to collaborate. In short, Purple Teaming can be scaled to the resources available – from John running solo exercises in his lab to a full enterprise exercise with multiple teams – and it focuses effort where it matters most.

5. Measurable Improvements and Maturity: Purple Teaming provides measurable outcomes demonstrating security maturity. Each time Red and Blue collaborate on a scenario, they can document what was learned and improved. Over time, these results can be mapped against frameworks like the MITRE ATT&CK® matrix to show coverage of attack techniques. You can see, for example, that “We tested 10 critical attack techniques this quarter, detected 8 of them initially, added detections for the other 2, and now we cover all 10.” This metrics-driven improvement turns a basic security program into a mature one. It helps justify security investments and demonstrates the value of teamwork to management. Purple Teaming is a key stepping stone that takes an organization from reactive firefighting to proactive, structured defense.

Now that we’ve covered the why, let’s get hands-on and see how John can practice Purple Teaming using Microsoft Defender XDR and Microsoft Sentinel in his environment.

Purple Teaming in Practice with Microsoft Defender XDR and Sentinel

To illustrate Purple Teaming, we’ll follow John as he performs a small Purple Team exercise in his lab. Recall that John has a test machine, test001, which is onboarded to Microsoft Defender for Endpoint (part of Microsoft’s XDR suite) and connected to Microsoft Sentinel (a cloud SIEM/SOAR) for log analysis. In previous Red Team training, John learned how to simulate attacks on test001, and in Blue Team training, he set up Sentinel analytics to detect suspicious activities. Now, he will combine these skills.

A Purple Team exercise typically involves a few key steps: planning an attack scenario, executing the simulation (Red Team’s part), monitoring and detecting (Blue Team’s part), and then analyzing together and tuning defenses. Let’s walk through these steps with practical examples:

1. Planning an Attack Simulation (Red + Blue Together)

Every Purple Team exercise starts with planning. John doesn’t do this alone – even though he’s one person in a lab scenario, he’s playing both roles, so to speak. In a real setting, the Red and Blue folks would brainstorm together. The planning stage is about deciding which attack technique or scenario to test. A great way to choose is using the MITRE ATT&CK framework, which lists common tactics and techniques adversaries use. For instance, John might identify that he hasn’t tested how well his tools detect lateral movement or credential dumping. The team picks a technique that is relevant and realistic for their environment.

John decides to simulate a simple but insightful attack: a malicious PowerShell execution that downloads a file from the internet – a pattern often seen in malware attacks (this maps to MITRE ATT&CK technique T1059.001: PowerShell). This choice is good because it exercises both endpoint detection (Defender for Endpoint should log the PowerShell activity) and Sentinel’s ability to catch suspicious command-line usage. The planning is done collaboratively: John’s “Red side” identifies the command to run (the attack), and his “Blue side” sets expectations for what logs or alerts should result. If John had a colleague, they’d agree on the goals: e.g., “We want to see if our systems catch a PowerShell script that downloads an executable. If not, we’ll create a detection for it.”

John could use an existing attack script or tool but opts to use Atomic Red Team, an open-source library of small tests for adversary techniques. Atomic Red Team provides ready-made scripts to simulate specific tactics safely. For example, it has an atomic test for PowerShell download and execution. Using a predefined atomic test ensures John executes a known quantity (safer and easier to repeat). Of course, one can also perform manual testing – e.g., writing a one-liner PowerShell command to fetch a file – but Atomic Red Team gives a structured approach with a library of techniques to choose from. (In case you’re curious, Atomic Red Team is maintained by Red Canary and maps tests directly to MITRE technique IDs, making it easy to pick a technique and execute a test for it.)
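To make this concrete, here’s a minimal sketch of what that setup could look like with the Invoke-AtomicRedTeam PowerShell module. The install commands and paths follow the project’s public documentation, but verify them against the current docs before running anything, and only ever run atomic tests on a disposable lab machine like test001:

```powershell
# Sketch: install Atomic Red Team and run a T1059.001 test on a lab machine.
# Based on the project's documented installer (verify against the current
# docs at https://github.com/redcanaryco/invoke-atomicredteam before use).
# May require an elevated prompt and a relaxed execution policy.
IEX (IWR 'https://raw.githubusercontent.com/redcanaryco/invoke-atomicredteam/master/install-atomicredteam.ps1' -UseBasicParsing)
Install-AtomicRedTeam -getAtomics

# Load the execution framework (default install path shown).
Import-Module "C:\AtomicRedTeam\invoke-atomicredteam\Invoke-AtomicRedTeam.psd1" -Force

# See which tests exist for T1059.001 (PowerShell) before running anything.
Invoke-AtomicTest T1059.001 -ShowDetailsBrief

# Check prerequisites, then execute a single test by number.
Invoke-AtomicTest T1059.001 -TestNumbers 1 -CheckPrereqs
Invoke-AtomicTest T1059.001 -TestNumbers 1

# Remove any artifacts the test left behind.
Invoke-AtomicTest T1059.001 -TestNumbers 1 -Cleanup
```

The -ShowDetailsBrief and -CheckPrereqs switches let John inspect exactly what a test will do before executing it – handy input for the planning conversation between the Red and Blue sides.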

Plan Summary: John plans to run a PowerShell-based attack on test001 that should trigger an alert. The Red Team’s success criterion is simply executing the attack; the Blue Team’s is detecting it (via Microsoft Defender for Endpoint or Sentinel logs). Both sides will then analyze what happened and close any gaps.

2. Executing the Attack Simulation (Red Team Action)

With the plan in place, John now puts on his Red Team hat and executes the simulated attack on test001. He uses the atomic test for PowerShell’s download-and-execute behavior. In practice, this might be as simple as opening PowerShell on test001 (with administrator rights if needed) and running a command that tries to download a file from a remote server. For example, an atomic test may use PowerShell’s System.Net.WebClient to download a file (often a harmless file for testing) and save it to disk. This mimics common malware behavior, where PowerShell is leveraged to fetch payloads from the internet.
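For illustration, the behavior being simulated boils down to something like the following one-liner. The URL and destination path are placeholders (point them at a harmless file you control), not the exact contents of any particular atomic test:

```powershell
# Sketch of the simulated behavior: PowerShell fetching a file from the web,
# as malware droppers often do. URL and path below are placeholders.
$url  = 'https://example.com/harmless-test-file.txt'
$dest = "$env:TEMP\purple-team-test.txt"
(New-Object System.Net.WebClient).DownloadFile($url, $dest)
```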

John runs the test. From his perspective as the Red Team, the attack has now executed on the endpoint. Perhaps the script even tries to run or save the downloaded file – either way, it’s malicious-like activity. In a real Red Team engagement, he’d note precisely what actions were performed (time, process names, etc.) so they can be correlated later.

During this execution, Microsoft Defender for Endpoint (part of Defender XDR) actively monitors test001. The Defender agent on the machine will log the PowerShell activity, and if the behavior is known to be suspicious, it may even trigger an alert on its own. For instance, Defender has heuristics for detecting when PowerShell is used to download files or when scripts are obfuscated. John doesn’t interfere with that – he lets the tools do what they’re designed to do. The beauty of Purple Teaming is that he wants the Blue side to catch him (unlike a stealthy Red engagement where he might try to evade detection).

Let’s say John’s simulated attack triggers an alert in Microsoft Defender. Perhaps Defender flags it as “Suspicious PowerShell behavior: PowerShell downloading a file.” Immediately, this alert would surface as a security incident in the Defender XDR portal. Because John has integrated Defender XDR with Microsoft Sentinel, that incident can also flow into Sentinel, or at least the raw event logs will be there. Now, it’s time for John to switch to his Blue Team role and see what was detected.

(On the other hand, it’s possible that the built-in defenses trigger no immediate alert. That’s okay, too – the Blue Team’s job will be to hunt through the logs to find the activity. We’ll cover that scenario as well.)

3. Detecting and Monitoring the Attack (Blue Team Action)

Now wearing the Blue Team hat, John checks Microsoft Defender and Sentinel to assess the outcome of the simulation. This is where the detection validation happens.

If an Alert Was Triggered: John finds that Microsoft Defender for Endpoint did raise an alert for the PowerShell download activity. Great! This means the built-in security controls recognized the behavior as malicious or at least suspicious. In Microsoft Defender XDR, he sees an incident listed, perhaps containing details like the process (powershell.exe), the command line used, the user who ran it, and the fact that a file was downloaded from an external URL.

John opens Microsoft Sentinel to view the logs and incidents. In Sentinel, he might see an incident corresponding to this alert, since Sentinel is connected to the Defender data. He can drill down to see the timeline of events: for example, Event ID 4688 (process creation) logs show PowerShell launching, along with any command-line parameters captured. He might also see specific Sentinel analytics rule matches if he had any custom rules for such behavior. Perhaps John had previously created a Sentinel analytics rule to catch any process that uses keywords like “DownloadFile” or “Invoke-WebRequest” (both common in PowerShell web downloads). If so, that rule would have fired and been listed, confirming that Sentinel successfully detected the activity alongside Defender’s alert. In a Purple Team exercise, seeing these detections fire is encouraging because it validates that the defenses are working as intended.

John reviews the details and confirms that the simulated attack triggered the alert. For example, in the Sentinel logs, he finds that an event recorded the PowerShell command line containing “DownloadFile”, which matches his test. The Blue Team perspective would document this: “Attack X executed at 2:15 PM was detected by Defender and Sentinel analytics rule Y.” This is a win for detection. Still, the exercise isn’t over – now they consider the response and any gaps.

If No Alert Was Triggered (or Partial Detection): Let’s consider the alternative: suppose Microsoft Defender did not flag the activity outright. This could happen if the simulation were subtle or not recognized as malicious by the default heuristics. In that case, it’s even more valuable as a learning moment. John (Blue Team) doesn’t see an alert, so he goes hunting in Microsoft Sentinel. Using Sentinel’s powerful log search (KQL queries), he searches for evidence of the attack on test001. He might query the SecurityEvent table for Event ID 4688 (process creation events), where the process name is powershell.exe and the command line contains keywords like “DownloadFile” or the URL he knows was used. Sure enough, his query surfaces an event: it shows PowerShell ran with the suspicious command. This confirms the activity occurred and was logged, but no alert was generated.
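A hunt along those lines might look like the following KQL, run from Sentinel’s Logs blade. The table and column names are the standard SecurityEvent schema; adjust the computer name and time window to your environment, and note that the CommandLine field is only populated when process command-line auditing is enabled on the endpoint:

```kusto
// Hunt for the simulated attack: PowerShell process creations on test001
// whose command line suggests a web download.
SecurityEvent
| where TimeGenerated > ago(24h)
| where EventID == 4688
| where Computer startswith "test001"
| where NewProcessName endswith "powershell.exe"
| where CommandLine contains "DownloadFile" or CommandLine contains "Invoke-WebRequest"
| project TimeGenerated, Computer, Account, NewProcessName, CommandLine
| order by TimeGenerated desc
```

If the Microsoft 365 Defender connector is enabled, the DeviceProcessEvents table offers similar (often richer) process telemetry straight from the Defender agent, so the same hunt can be expressed against that table as well.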

Finding the footprints of the attack without an alert is why Purple Teaming is useful – it uncovers detection gaps. John notes, “We didn’t catch the PowerShell download attempt with an alert.” Now, he can proceed to improve the defenses (which we’ll do in the next step). The key for now is that the Blue Team could still detect the activity after the fact via hunting, thanks to having centralized logs in Sentinel. This is why having a SIEM like Sentinel is so valuable: even if an automated alert is missed, a human analyst can query historical data to find malicious actions.

Monitoring and Collaboration: Throughout this detection phase, in a real scenario, the Red Team member who executed the attack would sit with the Blue Team analyst (or at least communicate). They’d confirm things like the exact time of the attack, what should be showing up in logs, etc. John, who is doing both roles, can simulate this by double-checking the timeline. This collaboration ensures that the Blue Team isn’t blindly searching for a needle in a haystack – the Red Team can point them right to the needle. It’s a very efficient way to validate security monitoring.

John also considers the response aspect. If this were an actual attack, what would the Blue Team do now? They’ve detected a malicious PowerShell. An incident responder might want to stop the threat (kill the process, isolate the machine) and investigate further. Microsoft Defender XDR has options to isolate the machine or remediate the threat. In Sentinel, one could trigger a Logic App playbook for a response. John notes whether his team has any automated response in place. Let’s say, for now, that detection was the focus and response will be addressed in the improvement step.
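As a sketch of what such automation does under the hood: machine isolation can be driven through the Microsoft Defender for Endpoint API, which is what a playbook’s Defender actions ultimately call. The example below assumes you have already registered an app with the Machine.Isolate permission and hold an OAuth token in $token and the device’s Defender machine ID in $machineId – both hypothetical variables here:

```powershell
# Rough sketch: isolate a machine via the Microsoft Defender for Endpoint API.
# Assumes $token holds a valid OAuth token for an app granted the
# Machine.Isolate permission, and $machineId is the device's ID in Defender.
$body = @{
    Comment       = "Purple Team exercise: isolating test001 after a simulated attack"
    IsolationType = "Full"   # "Selective" keeps Outlook/Teams/Skype working
} | ConvertTo-Json

Invoke-RestMethod `
    -Method Post `
    -Uri "https://api.securitycenter.microsoft.com/api/machines/$machineId/isolate" `
    -Headers @{ Authorization = "Bearer $token" } `
    -ContentType "application/json" `
    -Body $body
```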

4. Analyzing Results and Tuning Defenses (Purple Team Collaboration)

After the simulation and detection phases, it’s time for the Purple Team to huddle and analyze the results together. This is where Red and Blue genuinely discuss what happened and decide on changes.

John sits down to answer some questions: Did everything go as expected? Were we able to detect the attack easily? If an alert was fired, was it timely and accurate? If no alert, how can we create one? Could we respond quickly enough? The answers will guide how to tune and improve their security controls.

Let’s break down the possible outcomes from the PowerShell test and how John (as part of the Purple Team) would address them:

  • Case A: Detected by Defender XDR and Sentinel: In this case, the team confirmed that the simulated attack was caught. That’s good news – the existing detection rules worked. But the Purple Team’s job isn’t done. They still review whether the detection was efficient and whether any noise or false positives occurred. Suppose the Sentinel analytics rule that fired was overly broad (maybe it triggers on any PowerShell usage of WebClient, which could catch some admin scripts as well). John might decide to tune the analytics rule to be more precise – for example, only trigger if the command is run by a user account that generally wouldn’t run such scripts, or exclude known safe scripts. Tuning reduces false positives, making future alerts more trustworthy. This fine-tuning is integral to Purple Teaming: you refine the detection logic based on actual observations.

Additionally, the team considers response improvements. Since an alert was generated, how quickly was action taken? If this were real, would the SOC team be notified immediately? Maybe John realizes that while the alert was logged, no one would see it until checking Sentinel or Defender manually. That’s a gap – so he might configure an alert notification (email or Teams message) for such incidents or, even better, automate a response. For instance, John can create a Sentinel playbook (an automated workflow) that triggers when this alert occurs. The playbook could isolate the machine test001 or kill the PowerShell process automatically, then send a notification to the security team. Implementing this automated response can significantly reduce response time – sometimes even mitigating the threat before a human gets involved. John tests this by rerunning the simulation to see if the playbook kicks in and successfully stops the process. When it does, he knows the team’s response capability has leveled up!

  • Case B: Not initially detected by rules (had to hunt it): In this scenario, the Purple Team identified a detection gap. The analysis here is straightforward: we need to create or enhance a detection rule so this activity doesn’t go unnoticed next time. With input from both the Red and Blue perspectives, John proceeds to create a new analytics rule in Microsoft Sentinel specifically for this scenario. For example, he builds a KQL query that looks for process creation events where PowerShell runs with suspicious parameters (like DownloadFile or other markers of a web download), as sketched below. He tests this query against the logs (it successfully finds the event from the simulation) and then saves it as a scheduled analytics rule that will generate an alert if triggered in the future. He might also adjust Microsoft Defender’s settings where possible – for instance, ensuring that Cloud-Delivered Protection and Tamper Protection are on (since those can improve detection of script-based attacks).
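A sketch of what that detection query might look like is below. The keyword list and the account exclusion are illustrative examples, not a vetted detection – tune both against your own environment’s normal activity before saving the rule:

```kusto
// Candidate scheduled analytics rule: PowerShell launched with command-line
// markers of a web download. Keyword list and exclusion are illustrative.
SecurityEvent
| where EventID == 4688
| where NewProcessName endswith "powershell.exe" or NewProcessName endswith "pwsh.exe"
| where CommandLine has_any ("DownloadFile", "DownloadString", "Invoke-WebRequest", "Net.WebClient")
// Example tuning: exclude a known-good automation account (hypothetical name).
| where Account !~ @"CONTOSO\svc-patching"
| project TimeGenerated, Computer, Account, CommandLine
```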

After creating the new detection logic, John reruns the atomic test (or a similar one) to validate that his new alert works – essentially re-testing the scenario. This time, bingo! On its next scheduled run, Microsoft Sentinel fires an alert for the malicious PowerShell activity, just as intended. The gap has been closed: Purple Teaming took a weakness (a missing alert) and turned it into a strength (a new detection in place).

Furthermore, the team discusses whether there are broader lessons. If one PowerShell attack slipped by, are there others? This might lead to a review of all their detection rules around scripting or other MITRE techniques. Commonly, one exercise triggers a broader tuning effort. They might also schedule additional atomic tests for credential dumping (MITRE technique T1003) or lateral movement (T1021) to check those detections (a sketch of queuing those follow-up tests appears after this case list). In John’s case, he makes a note to test a credential dumping tool like Mimikatz on test001 next, to ensure Defender XDR catches it (Defender for Endpoint typically does detect Mimikatz, but it’s good to verify in the lab). Each test either validates existing security controls or highlights something to fix – both outcomes are beneficial.

  • Case C: Detected, but too slowly or with confusion: Another possibility is that something was detected, but the team’s response or understanding wasn’t smooth. Maybe the alert fired, but it wasn’t clear what it meant, or the team spent too long analyzing it. Purple Teaming is also about improving processes and people, not just technology. If John found the alert difficult to interpret, he could improve the alert’s metadata or playbook to include helpful information (like mapping it to the MITRE technique, adding runbook instructions, etc.). If the team’s communication was slow, they might establish a better protocol for Purple Team exercises, like having a live chat open during the test where Red announces “attack started” and Blue confirms “alert received” in real time. This kind of practice prepares everyone for actual incidents – it builds muscle memory for communication.
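Picking up Case B’s note about broader coverage, here’s a rough sketch of how John might queue those follow-up atomic tests. The technique IDs map to MITRE ATT&CK (T1003.001 covers Mimikatz-style LSASS credential dumping); exact test availability varies, which is why the sketch lists tests before running them, and everything stays on the lab machine:

```powershell
# Sketch: queue follow-up atomic tests for the next Purple Team session.
# Lab machine only. List tests first, check prerequisites, run, clean up.
$techniques = @('T1003.001', 'T1021.002')   # LSASS dumping, SMB lateral movement
foreach ($technique in $techniques) {
    Invoke-AtomicTest $technique -ShowDetailsBrief   # see which tests exist
    Invoke-AtomicTest $technique -CheckPrereqs       # verify prerequisites are met
    Invoke-AtomicTest $technique                     # run all tests for the technique
    Invoke-AtomicTest $technique -Cleanup            # remove any artifacts left behind
}
```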

After these adjustments, John updates his documentation of the exercise. He logs what was done, what was detected, and what changes were made. Over time, these notes form a library of tested scenarios and a checklist of the organization’s detection coverage. Some teams even create a table or dashboard that tracks different attack techniques and whether they are detected (and by which tool). In Microsoft Sentinel, John could use a workbook or content from the community (like the Atomic Red Team Sentinel workbook) to help visualize detection coverage. This is an optional but powerful way to measure improvement.

Key point: This tuning phase exemplifies the essence of Purple Teaming—collaborative improvement. Red and Blue examine the evidence together and decide on the next steps. There’s no blame, only learning. The outcome is stronger defenses and an even more in-sync team.

5. Repeating and Expanding (Continuous Purple Teaming)

While this was one exercise, Purple Teaming is meant to be an ongoing practice. John realizes that there are many techniques and scenarios to explore. Today, it was malicious PowerShell; next week, it might be a phishing email scenario or testing ransomware behavior in a controlled setting. The idea is to continuously test different angles of attack and ensure the defenses hold up.

He schedules regular Purple Team sessions (even if it’s just himself in the lab or with a colleague). For example, once a month, they might pick a new set of Atomic Red Team tests or a realistic attack chain to simulate. They might even emulate known threat actors by stringing together multiple steps (initial access, execution, persistence, etc.) to see if they can detect each stage. Microsoft Defender XDR and Sentinel are great platforms for end-to-end testing because Defender covers multiple domains (endpoint, identity, cloud apps, etc.) and Sentinel aggregates everything in one place. John can simulate an email-based attack (phishing) that drops malware on test001, then see how Defender for Office 365, Defender for Endpoint, and Sentinel all work in concert to catch the various stages.
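As one example of what “working in concert” can look like in Sentinel, a cross-domain query could correlate a phishing email with PowerShell activity by the recipient shortly afterward. This is a sketch under assumptions: it presumes the Microsoft 365 Defender connector is streaming the EmailEvents and DeviceProcessEvents tables into the workspace, that AccountUpn is populated on process events, and the filters and one-hour window are purely illustrative:

```kusto
// Sketch: flagged email followed by PowerShell activity from the recipient
// within an hour. Join keys and thresholds are illustrative; adjust to fit.
EmailEvents
| where TimeGenerated > ago(1d)
| where ThreatTypes has "Phish"
| project EmailTime = TimeGenerated, Recipient = tolower(RecipientEmailAddress), Subject
| join kind=inner (
    DeviceProcessEvents
    | where FileName =~ "powershell.exe"
    | project ProcTime = TimeGenerated, DeviceName, Upn = tolower(AccountUpn), ProcessCommandLine
) on $left.Recipient == $right.Upn
| where ProcTime between (EmailTime .. (EmailTime + 1h))
| project EmailTime, Subject, Recipient, ProcTime, DeviceName, ProcessCommandLine
```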

Over time, this continuous Purple Teaming builds a robust detection and response framework. John’s analytics rules library grows, his team’s confidence increases, and they start thinking proactively. Purple Teaming often naturally leads to more threat hunting because once baseline detections are strong, the team can hunt for more subtle signs of threats that might not have explicit alerts yet. It’s a virtuous cycle: simulation leads to detection improvements, which leads to more curiosity and hunting, which leads to catching even stealthier behavior, and so on.

Key Takeaways from Purple Teaming

Let’s highlight the key lessons John (and we) learned about Purple Teaming in this journey:

Key Takeaway 1: Purple Teaming is about collaboration, not competition. Red and Blue work together with a shared goal—improving security. This joint approach bridges the traditional gap between “attackers” and “defenders,” aligning their efforts to strengthen the organization’s defenses.

Key Takeaway 2: Continuous Testing = Continuous Improvement. By regularly simulating attacks (using tools like Atomic Red Team or manual techniques) and validating detections in Microsoft Defender XDR and Sentinel, Purple Teams create a feedback loop that steadily enhances detection quality. Gaps are identified and fixed, and false positives are trimmed, leading to more effective and accurate alerts.

Key Takeaway 3: Better Response and Team Readiness. Purple Team exercises don’t stop at detection; they also improve response. Teams can practice their incident response plans in a low-stakes setting, implement automation (like Sentinel playbooks) to react faster, and ensure everyone knows their role when an actual attack hits. The result is a lower mean time to detect and respond, limiting potential damage.

Key Takeaway 4: Skill Growth and Communication. Participating in Purple Teaming helps individuals grow. Blue Team members learn attacker tactics and how to think like hackers, while Red Team members learn the challenges of defense and how to make their attacks more effective (or help improve defenses). It breaks down communication barriers—people start speaking a common language of security. It’s the perfect training ground for someone like John to become proficient in offense and defense.

Key Takeaway 5: Building a Mature Security Program. Finally, Purple Teaming is a hallmark of a mature security practice. It demonstrates proactiveness and dedication to improving security continuously. It’s cost-effective, making the most of in-house talent and tools. By adopting Purple Team principles, organizations (even small ones) can elevate their security posture significantly without huge budgets – just by using structured collaboration and the tools at hand (like Defender XDR and Sentinel, which many already have in their Microsoft 365/Azure stack).

Conclusion: From IT Pro to Security Leader – The Journey Continues

As John wraps up this final Purple Team exercise on test001, he looks back at how far he’s come. Not long ago, he was an IT professional stepping gingerly into cybersecurity. Throughout this series, he learned to think like a Red Team attacker and mastered Blue Team techniques to detect and respond using Microsoft’s powerful tools. Now he has brought it all together as a Purple Team practitioner.

Purple Teaming has shown John that security isn’t just about attackers vs. defenders – it’s about working together to outsmart real adversaries. By practicing these collaborative exercises, he’s not only improved his environment’s security but also transformed his own skills. He’s become the kind of security professional who understands both sides of the equation, capable of leading a team through complex threat scenarios and making things safer.

For readers like you following in John’s footsteps, this is a new beginning. Your journey doesn’t end here. The beautiful thing about cybersecurity is that it’s a continuous learning process. Purple Teaming is your gateway to keep learning and leveling up. You don’t need a big budget or a formal title to start doing it – even in a small IT shop or on your home lab, you can apply these principles. Over time, you’ll build intuition on both offense and defense, setting you apart as a security leader who can bridge gaps between teams and focus everyone on what matters.

So, as you close this blog series and step into your real-world environment, be like John: stay curious, stay collaborative, and keep testing yourself and your systems. Every simulated attack that you turn into a learning opportunity is one less real attack that can surprise you. Every detection rule you fine-tune makes an attacker’s life more difficult. And every time you share knowledge with colleagues (Red or Blue), the organization grows stronger.

Thank you for joining us throughout this Red and Blue Team training journey using Microsoft Defender XDR and Microsoft Sentinel. We hope you found it enlightening and empowering. Now practice Purple Teaming with a friendly attitude and a hacker’s curiosity. With each exercise, you’re not just guarding the network – you’re growing into the kind of security professional who leads by example. This is how you become a steadfast defender of the digital realm, one collaborative step at a time.

Keep learning, keep collaborating, and remember: We are stronger together as a Purple Team. Good luck on your journey to becoming a security leader!

Thanks,

John Sr.