
The breach is confirmed. Every decision made in the next 24 hours either contains the damage or compounds it. Most security teams know what to do technically. What breaks down is coordination: who's talking to whom, on which channel, and whether that channel is already compromised.

This guide gives CISOs and IR leads a phase-by-phase breakdown of effective data breach response, covering the calls, decisions, and communication protocols that determine how fast you get back to business.

 

Hour 0–1: Confirm, Declare, and Stand Up Command

Before any containment action, you need a verified incident. Responding to a false positive wastes critical time and burns team trust. Confirming a real breach before declaring also means you have documented evidence to anchor the entire response timeline.

Confirm the breach with specific evidence: which system, which account, what indicators of compromise. Note who made the determination and at what time. Log this. It's the first entry in your compliance and insurance record, and everything that follows depends on it.
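As a concrete illustration, here's what that first entry might look like if it were captured as a structured record, for example in an append-only JSON Lines file. The field names and values below are placeholders, not a prescribed schema.

```python
import json
from datetime import datetime, timezone

# Illustrative first entry in the incident record; every field here is a placeholder.
declaration = {
    "event": "breach_confirmed",
    "confirmed_at": datetime.now(timezone.utc).isoformat(),
    "confirmed_by": "j.doe (SOC lead)",                          # who made the determination
    "affected_system": "vpn-gw-02",                              # which system
    "affected_account": "svc-backup",                            # which account
    "indicators": ["203.0.113.45", "suspicious OAuth grant"],    # indicators of compromise observed
    "initial_access_estimate": "2024-06-01T03:10Z",              # best current guess, revise later
}

# Append-only log: every later decision and action gets its own timestamped entry in the same file.
with open("incident_timeline.jsonl", "a") as log:
    log.write(json.dumps(declaration) + "\n")
```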

Once confirmed, assess initial scope. What data may have been accessed? Which systems are affected? How long ago did the initial access occur? These answers are rarely complete at the one-hour mark, but the goal isn't completeness. It's a working hypothesis that guides your first containment decisions.

Declare internally and trigger IR plan activation. The declaration step is often assumed but rarely documented in advance: who is authorized to declare, and what does that declaration trigger? If your organization hasn't defined this, you'll lose time to consensus-building when time is already short. With a dedicated platform, IR activation time drops from an industry average of roughly five hours to under one hour. That difference is felt immediately.

Stand up your incident command structure. Assign four roles if they aren't pre-assigned: Incident Commander, Technical Lead, Comms Lead, and Legal/Compliance Liaison. Every decision about who talks to whom, who authorizes containment actions, and who briefs the board flows from this structure. If roles are decided under fire rather than in peacetime, expect delays and conflicting instructions.
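One lightweight way to make those assignments unambiguous is to keep them in a small, machine-readable structure that lives in your out-of-band environment. The sketch below is illustrative only; the names, roles, and contact details are placeholders.

```python
# Hypothetical peacetime role assignments; names and numbers are placeholders.
# The out-of-band phone number matters: activation can't depend on corporate email or chat.
INCIDENT_ROLES = {
    "incident_commander":       {"primary": "A. Rivera", "backup": "S. Chen",  "oob_phone": "+1-555-0101"},
    "technical_lead":           {"primary": "M. Okafor", "backup": "D. Patel", "oob_phone": "+1-555-0102"},
    "comms_lead":               {"primary": "L. Park",   "backup": "R. Gomez", "oob_phone": "+1-555-0103"},
    "legal_compliance_liaison": {"primary": "T. Nguyen", "backup": "J. Ali",   "oob_phone": "+1-555-0104"},
}

for role, contact in INCIDENT_ROLES.items():
    print(f"{role}: {contact['primary']} (backup: {contact['backup']}, OOB: {contact['oob_phone']})")
```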

The most overlooked action in the first hour is the one with the highest downstream consequence: assess whether your primary communications channels are compromised. SSO credential theft means attackers who have logged in once may now have access to every system tied to those credentials, including Slack, Microsoft Teams, and corporate email. The risk today isn't hackers hacking in. They're logging in. This is exactly what happened during the Suncor breach: the response team was on a conference call coordinating their response to a ransomware attack, and an attacker spoke up on the line asking how they expected to respond when he was sitting there listening. Assume your coordination channels are monitored and move response communications to an out-of-band environment immediately. Don't coordinate a breach response over the same systems the attackers may be reading.
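What that assessment can look like in practice: pull a sign-in export from your identity provider and look for the compromised account authenticating from places it shouldn't. The column names and country baseline below are assumptions about a generic CSV export, not any specific IdP's format.

```python
import csv

# Placeholder baseline: countries where this account normally signs in from.
EXPECTED_COUNTRIES = {"US", "CA"}

def suspicious_signins(path: str, account: str):
    """Flag sign-ins for a compromised account from outside the expected countries."""
    hits = []
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):              # assumed columns: user, timestamp, ip, country, app
            if row["user"] != account:
                continue
            if row["country"] not in EXPECTED_COUNTRIES:
                hits.append((row["timestamp"], row["ip"], row["country"], row["app"]))
    return hits

# Any hit against Slack, Teams, or email is a reason to move coordination off that channel now.
for ts, ip, country, app in suspicious_signins("signin_export.csv", "svc-backup"):
    print(f"{ts}  {app:<20} {ip}  ({country})")
```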

Preserve forensic integrity before moving on. Don't wipe, reimage, or restart affected systems yet unless they present an immediate operational risk. Every system touched before forensic imaging is potential evidence degraded.
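One common way to preserve that integrity evidence is to hash each forensic image immediately after capture and record the digest in the incident timeline. A minimal sketch, assuming the image has already been captured to a file (the path is a placeholder):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a captured image in chunks so large files don't exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        while chunk := fh.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Record the digest in the incident timeline so the image can later be shown to be untampered.
print(sha256_of("vpn-gw-02.disk.img"))
```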

 

Hour 1–4: Contain, Activate, and Move Everything Out-of-Band

Isolate affected systems from the network without shutting them down. Revoke compromised credentials and force MFA re-authentication across the environment. Block known attacker indicators of compromise at the perimeter and endpoint. Preserve logs from your SIEM, endpoint detection, network traffic, and identity systems. These are the forensic foundation for everything that follows, including any regulatory investigation or insurance claim.
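For the IOC blocking step, a small script that normalizes and deduplicates indicators into a block list can save time and avoid typos at the perimeter. This sketch handles IP addresses and CIDR ranges only; the indicator values are placeholders, and hashes or domains would go to other controls.

```python
import ipaddress

# Raw indicators from threat intel or the forensic team; values here are placeholders.
raw_iocs = ["203.0.113.45", "198.51.100.0/24", "203.0.113.45", "not-an-ip"]

blocklist = set()
for item in raw_iocs:
    try:
        blocklist.add(ipaddress.ip_network(item, strict=False))  # accepts single IPs and CIDR ranges
    except ValueError:
        print(f"skipping non-IP indicator: {item}")              # hashes/domains need a different control

# Write one entry per line for import into whatever firewall or EDR you use.
with open("ioc_blocklist.txt", "w") as fh:
    for net in sorted(blocklist, key=str):
        fh.write(f"{net}\n")
```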

Technical containment and team activation need to run in parallel here. Many organizations focus on containment and let coordination lag, then discover two hours in that half the team received notifications through a compromised channel.

Activate your incident response playbook. This step hinges on a question that should have been answered long before the incident: is your playbook accessible outside your primary environment? A playbook stored in SharePoint, Google Drive, or an internal wiki isn't accessible if those systems are compromised or shut down as part of containment. If it lives in the same system being attacked, it's a liability. Your playbook needs to live in an out-of-band environment, ready before anything happens.

Stand up your virtual bunker. This is the secure, out-of-band coordination hub where all response activity gets centralized from this moment forward. No Slack. No corporate email. No Teams calls. Every decision, every finding, every action taken gets documented in one place. That centralized record is what regulators, insurers, and auditors will want to review, and it's what separates a coordinated response from one that falls apart under pressure. See how out-of-band communications work in practice.

Assemble your full IR team now. Internal stakeholders should include security, legal, IT leadership, executive communications, and HR if employee data is involved. External parties include your IR retainer firm if you have one under contract, your cyber insurance carrier, and outside counsel. Manual call trees at 2 a.m. aren't a reliable activation method. We built our quad-band mass notifications for exactly this scenario: simultaneous text, email, voice, and push alerts that get distributed team members into the virtual bunker fast. Review what ShadowHQ Notify provides for team activation.

Begin documenting the incident timeline immediately. What happened, when it was discovered, who found it, what actions have been taken, and who authorized them. This documentation is your compliance and insurance record, and it starts now, not after containment.

 

Hour 4–12: Scope the Damage, Manage Up, and Start the Notification Clock

Forensic investigation deepens now. The first priority is identifying what data was accessed or exfiltrated, because the data classification determines which notification obligations apply: personally identifiable information, protected health information, financial records, intellectual property. Each category triggers different regulatory timelines.
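A rough way to keep that mapping visible to the whole team is a simple classification-to-obligation table. The entries below are illustrative only; actual obligations depend on jurisdiction, contracts, and the facts of the breach, and should be confirmed with counsel.

```python
# Illustrative mapping only; confirm every entry with outside counsel for your situation.
NOTIFICATION_MAP = {
    "PII":       ["GDPR (72h to supervisory authority)", "US state breach notification laws"],
    "PHI":       ["HIPAA Breach Notification Rule"],
    "financial": ["State breach notification laws", "SEC disclosure rules (public companies, if material)"],
    "IP":        ["Contractual obligations", "possible SEC materiality assessment"],
}

def obligations(categories: set[str]) -> list[str]:
    """Collect the regimes potentially triggered by the data classes confirmed so far."""
    found = []
    for cat in categories:
        found.extend(NOTIFICATION_MAP.get(cat, []))
    return sorted(set(found))

print(obligations({"PII", "PHI"}))
```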

Confirming the access vector is just as urgent. You can't close a door you haven't found yet, and containment can't be considered complete until you know how the attacker got in. Assume the attacker is still present until forensic evidence proves otherwise. That assumption drives monitoring posture for the next 12 to 48 hours.

Legal and compliance review begins in parallel with forensic work, not after it. Notification obligations run on their own clocks. GDPR's 72-hour reporting requirement to supervisory authorities starts from when you became aware of the breach, not when the investigation is complete. HIPAA, state breach notification laws, and SEC disclosure rules each have their own timelines and scope definitions. Outside counsel should be engaged now to begin mapping these obligations and to establish attorney-client privilege over investigation findings, which becomes material if litigation follows.
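Because the clock runs from awareness, it's worth computing and posting the deadline explicitly rather than letting everyone do the math in their heads. A minimal sketch, with a placeholder awareness time:

```python
from datetime import datetime, timedelta, timezone

# The GDPR clock runs from awareness of the breach, not from the end of the investigation.
aware_at = datetime(2024, 6, 3, 14, 30, tzinfo=timezone.utc)   # placeholder awareness timestamp
gdpr_deadline = aware_at + timedelta(hours=72)

remaining = gdpr_deadline - datetime.now(timezone.utc)
print(f"GDPR notification deadline: {gdpr_deadline.isoformat()}")
print(f"Time remaining: {remaining}")
```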

Notify your cyber insurance carrier. Failure to notify promptly is one of the most common ways organizations inadvertently jeopardize their own coverage. Your policy likely specifies a notification window; your outside counsel can confirm it. This call also opens up carrier resources: IR firm support, forensic specialists, and legal guidance that may be covered under your policy.

Executive and board communications require care at this stage. Prepare a brief, factual status update: what's confirmed, what's still under investigation, what actions are being taken, and what the projected timeline looks like. No speculation. No minimizing. No premature claims about containment status. Establish a single spokesperson for each audience. Fragmented executive communications during a breach create secondary problems: contradictory statements, misplaced confidence, and confused stakeholders making decisions based on incomplete information.

Assess third-party exposure. Were partner organizations, vendors, or customers affected by this breach? Do your contracts or applicable law create downstream notification obligations? This analysis takes time and requires legal input, which is another reason to have counsel engaged by this phase rather than later.

Begin drafting external notification language now, but don't send it yet. Work with legal and communications to prepare accurate, appropriately scoped language for customer or public notification. External notification timing is a legal and strategic decision, and having draft language ready prevents a last-minute scramble when the call is made.

 

Hour 12–24: Stabilize, Notify, and Set Up for the Long Haul

Confirm containment is holding with active monitoring, not assumption. Is the attacker evicted, or simply quiet? Continue monitoring SIEM alerts, EDR telemetry, network traffic, and identity systems for lateral movement or new indicators of compromise. Validate that no additional systems have been affected since initial containment. Document every monitoring action taken and its outcome.
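One way to make "containment is holding" testable rather than assumed: filter the alert export for any known indicator that fires after the containment timestamp. The column names, timestamps, and indicator values below are placeholders for whatever your EDR or SIEM actually produces.

```python
import csv
from datetime import datetime, timezone

CONTAINMENT_AT = datetime(2024, 6, 3, 20, 0, tzinfo=timezone.utc)   # placeholder containment time
KNOWN_IOCS = {"203.0.113.45", "198.51.100.7"}                        # indicators from the investigation

def post_containment_hits(path: str):
    """Return alerts after the containment timestamp that match known indicators."""
    hits = []
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):                               # assumed columns: timestamp, indicator, host
            ts = datetime.fromisoformat(row["timestamp"])            # assumes ISO-8601 with a UTC offset
            if ts > CONTAINMENT_AT and row["indicator"] in KNOWN_IOCS:
                hits.append(row)
    return hits

# Any hit here means containment is not holding, and the incident commander needs to know.
print(len(post_containment_hits("edr_alerts_export.csv")))
```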

Most significant data breaches take days or weeks to fully investigate and remediate. The goal at this stage isn't closing the incident. It's achieving a defensible containment posture, executing required notifications, and setting the team up for a sustainable response.

Begin remediation planning. Patching, credential resets, and architecture changes should be documented and prioritized by risk. Some remediation steps can wait for full forensic completion; some can't. Your technical lead and outside counsel together determine what's urgent versus what should wait for the investigation to fully close. Document what has been planned versus what has been completed. This is material for insurance and regulatory purposes.

Execute required notifications. If the 72-hour GDPR clock is running, regulator notification goes out now. Submit the formal claim to your cyber insurance carrier. Notify affected individuals if the legal threshold is met and scope is sufficiently confirmed. Contact law enforcement if the incident warrants it. The FBI's IC3 and CISA are appropriate contacts for significant incidents, particularly those involving critical infrastructure or ransomware.

Stakeholder reporting is a distinct workstream from incident documentation. The board and executive team need a status brief: what happened, current containment status, what's known and unknown, and projected timeline for resolution. This is a factual summary designed to enable informed decision-making at the leadership level. Generating this report without pulling the IR team off the active incident is exactly what a structured reporting export is built for. Our platform details cover the audit trail and reporting functionality that supports both the internal brief and the compliance record.

Establish shift management before fatigue sets in. A 24-hour incident often becomes a two-week investigation. If the team that handled the first 24 hours is still running on adrenaline and caffeine by hour 30, decision quality degrades. Set up shift rotations, clarify decision-making authority for off-hours situations, and define escalation protocols for the teams coming on. Brief incoming shifts in the virtual bunker, in the same out-of-band environment where the entire response has been documented. The brief is already there. The timeline is already documented. Incoming team members get up to speed without a phone call at 3 a.m.

Begin a running list for post-incident review. The question to document now is: what do you wish you had prepared before this incident? The gaps you discover in the first 24 hours are the gaps your tabletop exercises should have found. Write them down while they're visible.

 

What Separates Teams That Respond Well from Teams That Don't

Preparation before the incident determines performance during it. Every pattern we see across our customers points back to the same principle: the teams that built the infrastructure before the worst phone call of their careers handled that call well. The teams that assumed their existing tools would hold together under attack found out otherwise.

An out-of-band communication channel that's already stood up means you don't lose the first hour figuring out how to coordinate when your primary tools are compromised. If your incident response coordination lives in the same environment that's compromised, you're responding blind. Our incident preparedness planning guide covers a structured approach to building this before anything happens.

Playbooks that live outside your primary systems can actually be run during a crisis. Pre-assigned roles mean nobody spends the first hour negotiating who declares the incident, who briefs the board, or who calls the insurer. These are peacetime decisions, not ones to sort out under attack.

Regular tabletop exercises reduce surprises during real incidents. The coordination gaps, the missing contacts, the playbook steps that don't match reality: these surface during a tabletop, not during an actual breach. Across our customers, 85% run regular tabletop exercises, compared to roughly 40% industry-wide. That difference shows up directly in response quality. Running a tabletop is more accessible than most teams assume.

Fast activation gets the right people in the room before critical decisions are made without them. When activation takes five hours instead of under one, containment doesn't start at hour two. It starts at hour six. A single source of truth for documentation gives you a clean record from minute one. That record is the legal and compliance trail regulators and insurers will want to see, and it's what protects you and your organization if the response itself ever comes under scrutiny.

No team handles a breach well on the first try without preparation. The teams that respond from a position of strength have built the infrastructure, run the exercises, and established the protocols before anything happened. If your current setup relies on corporate email or Slack when things go wrong, that's worth examining before your next incident, not during it.

 

Take the ShadowHQ Readiness Assessment to see where your response posture stands today, or book a 30-minute demo to see the virtual bunker in action.

 

See The Virtual Bunker For Yourself