Ransomware isn’t just about encrypted files. It’s about supply chains, business continuity, and board-level risk. Below are ten significant incidents from a range of sectors, each with a brief “what happened,” why it mattered, and an actionable takeaway you can apply today.
1) Colonial Pipeline (2021) — Fuel for the fire
What happened: DarkSide affiliates breached a legacy VPN account that lacked MFA, forcing a days-long shutdown of the largest refined-fuels pipeline in the U.S. Colonial paid about $4.4M; the DOJ later seized roughly $2.3M of that payment (63.7 BTC).
Why it mattered: A single identity gap spiraled into physical-world disruption and national media coverage. Do this now: Enforce phishing-resistant MFA (FIDO2/passkeys) on all remote access, disable dormant accounts, and require short-lived, re-authenticated sessions for admin applications. (Reuters)
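A small audit script can surface exactly the kind of account that bit Colonial: dormant remote-access identities with no MFA. This is a minimal sketch, assuming you can export a CSV with account, last_login (ISO 8601), and mfa_enrolled columns from your identity provider or VPN; the file and column names are illustrative.

```python
import csv
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(days=90)       # treat 90+ days without a login as dormant
NOW = datetime.now(timezone.utc)

def audit_remote_access(path: str) -> None:
    """Flag remote-access accounts that are dormant or have no MFA enrolled."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            last_login = datetime.fromisoformat(row["last_login"])
            if last_login.tzinfo is None:              # assume UTC if the export is naive
                last_login = last_login.replace(tzinfo=timezone.utc)
            dormant = NOW - last_login > STALE_AFTER
            no_mfa = row["mfa_enrolled"].strip().lower() != "true"
            if dormant or no_mfa:
                reasons = []
                if dormant:
                    reasons.append(f"no login since {row['last_login']}")
                if no_mfa:
                    reasons.append("no MFA enrolled")
                print(f"REVIEW {row['account']}: {', '.join(reasons)}")

if __name__ == "__main__":
    audit_remote_access("remote_access_accounts.csv")   # hypothetical IdP/VPN export
```

Feed it a fresh export on a schedule; anything it flags should either get MFA or get disabled.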
2) JBS (2021) — Meat processor, global shock
What happened: Ransomware disrupted operations across North America and Australia; JBS confirmed it paid $11 million in Bitcoin.
Why it mattered: It showed how concentrated supply chains (a small number of large plants) amplify cyber risk. Do this now: Separate production networks from corporate IT, and rehearse manual failovers for payments and logistics.
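One way to keep that corporate/production boundary honest is to review exported firewall rules against the intended design. Below is a minimal sketch, assuming you can export rules as structured records; the zone labels and field names are illustrative, not from any specific product.

```python
# The only flows intended to be allowed into the plant/OT zone.
ALLOWED_INTO_OT = {
    ("jump-host-subnet", "ot-plant"),   # operator access via a hardened jump host
    ("historian-dmz", "ot-plant"),      # one-way data collection path
}

def review_rules(rules: list[dict]) -> list[dict]:
    """Return allow rules that let any unapproved zone reach the OT zone."""
    return [
        r for r in rules
        if r["dst_zone"] == "ot-plant"
        and r["action"] == "allow"
        and (r["src_zone"], r["dst_zone"]) not in ALLOWED_INTO_OT
    ]

# Example rule export (structure and values are illustrative).
rules_export = [
    {"src_zone": "corp-lan", "dst_zone": "ot-plant", "action": "allow", "port": 445},
    {"src_zone": "jump-host-subnet", "dst_zone": "ot-plant", "action": "allow", "port": 22},
]
for rule in review_rules(rules_export):
    print(f"FLAG: {rule['src_zone']} -> {rule['dst_zone']} port {rule['port']} is broader than the design")
```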
3) Kaseya VSA / REvil (2021) — One vendor, many victims
What happened: REvil abused Kaseya’s VSA RMM platform to push ransomware to MSPs and their customers. Fewer than 60 direct Kaseya customers were hit, but an estimated 800 to 1,500 downstream businesses were affected.
Why it mattered: Software supply-chain leverage turned a single flaw into global downtime. Do this now: Treat update/orchestration tools as crown jewels: strict allow-listing, JIT administration, code-signing validation, and out-of-band emergency revocation paths.
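Before an RMM or update server is allowed to push anything, the package can be checked against digests approved out of band. A minimal sketch follows; the package name and digest are placeholders, and in practice you would also verify the vendor’s code signature.

```python
import hashlib
from pathlib import Path

# Digests of update packages approved out of band (values are placeholders).
APPROVED_SHA256 = {
    "agent-hotfix-7.2.1.msi": "0000000000000000000000000000000000000000000000000000000000000000",
}

def sha256_of(path: Path) -> str:
    """Stream the file so large installers don't need to fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def gate_deployment(package: Path) -> bool:
    """Refuse to stage a package whose digest is not on the approved list."""
    expected = APPROVED_SHA256.get(package.name)
    if expected is None or sha256_of(package) != expected:
        print(f"BLOCK {package.name}: digest not on approved list")
        return False
    print(f"OK {package.name}: digest matches approved list")
    return True
```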
4) Norsk Hydro / LockerGoga (2019) — “We won’t pay” at industrial scale
What happened: LockerGoga ransomware forced the aluminum giant onto manual processes; early costs reached roughly $40 million. Hydro publicly refused to pay and recovered its systems without a ransom.
Why it mattered: A rare large-manufacturing case study in resilient response and communications. Do this now: Build OT/plant runbooks for manual-mode operation, and rehearse transparent stakeholder communications.
5) Garmin / WastedLocker (2020) — When SaaS downtime grounds planes
What happened: Ransomware took down Garmin services (including aviation database updates); reports pointed to a roughly $10 million payment to restore operations.
Why it mattered: It showed how software availability can affect regulated services, not just fitness data. Do this now: Test restore times for critical SaaS/self-hosted services and maintain a dormant “golden” recovery path.
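Restore time is only real if it is measured. Here is a minimal sketch, assuming you already have a scripted restore into a sandbox (restore_to_sandbox.sh is a placeholder); run it on a schedule and compare elapsed time to an explicit RTO.

```python
import subprocess
import time

RTO_SECONDS = 4 * 3600  # example recovery-time objective: 4 hours

def timed_restore_drill(restore_cmd: list[str]) -> None:
    """Run a scripted restore into an isolated environment and compare it to the RTO."""
    start = time.monotonic()
    result = subprocess.run(restore_cmd)
    elapsed = time.monotonic() - start
    ok = result.returncode == 0 and elapsed <= RTO_SECONDS
    print(f"{'PASS' if ok else 'FAIL'}: restore took {elapsed/3600:.1f} h "
          f"against a {RTO_SECONDS/3600:.0f} h RTO")

if __name__ == "__main__":
    # Placeholder restore script; substitute your own drill command.
    timed_restore_drill(["./restore_to_sandbox.sh", "--latest-backup"])
```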
6) NotPetya (2017) — A wiper wearing a ransom mask
What happened: A destructive worm spread through a poisoned software update and crippled businesses worldwide. Maersk alone reported $200–300 million in losses, and global damage estimates run to roughly $10 billion. (Merck later settled a $1.4B insurance dispute tied to the incident.)
Why it mattered: “Ransomware” can be purely destructive; supply-chain trust and segmentation are what save you. Do this now: Enforce application allow-listing, signed-update provenance, and network isolation that can survive an AD compromise.
7) MGM Resorts & Caesars (2023) — Social engineering meets ransomware
What happened: Social engineering of help desks yielded access to identity providers. Caesars reportedly paid about $15 million; MGM’s remediation and downtime were estimated at around $100 million in impact. ALPHV/BlackCat and “Scattered Spider” were linked to the attacks.
Why it mattered: It proved that identity and process failures beat fancy exploits. Do this now: Phishing-resistant MFA, number matching, strict help-desk factor-reset procedures, and JIT (not standing) admin access. (Cybersecurity Dive)
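Help-desk factor resets are where these attacks start, so the reset workflow itself can enforce two live controls before anything changes. A minimal sketch of that gate (the ticket ID and checks are illustrative, matching the "ticket + callback" rule in the checklist at the end):

```python
def approve_factor_reset(ticket_id: str, callback_confirmed: bool) -> bool:
    """Gate an MFA factor reset on two live, independent checks:
    a ticket opened from the verified corporate account and a callback
    to the phone number already on file."""
    checks = {
        "ticket opened from the verified corporate account": bool(ticket_id.strip()),
        "callback to the number on file confirmed": callback_confirmed,
    }
    missing = [name for name, ok in checks.items() if not ok]
    if missing:
        print("DENY reset; missing: " + "; ".join(missing))
        return False
    print(f"ALLOW reset for ticket {ticket_id}")
    return True

# Example: caller has a ticket but the callback has not been completed yet.
approve_factor_reset("HD-10421", callback_confirmed=False)
```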
8) Change Healthcare (2024) — Healthcare’s single point of failure
What happened: ALPHV/BlackCat crippled claims and pharmacy rails. Blockchain watchers flagged an apparent $22 million payment. A third party later claimed ALPHV “exit-scammed,” underscoring the real-world reality that paying guarantees nothing.
Why it mattered: One intermediary’s downtime rippled across pharmacies, providers, and patients nationwide. Do this now: Mandate MFA on every remote access path, legacy ones included; maintain business-continuity plans for clearinghouse outages; and put contractual SLAs in place for vendor incidents. (JAMA Network)
9) CDK Global / BlackSuit (2024) — 15,000 dealerships, paper forms
What happened: Ransomware at CDK Global took down dealer management systems used by roughly 15,000 auto dealerships; attribution pointed to BlackSuit, with reports of a large ransom demand and a payment tracked on-chain.
Why it mattered: A single SaaS hub outage halted the daily operations of an entire industry. Do this now: Map SaaS platform dependencies, require tabletop evidence from vendors, and build manual contingencies for sales, repairs, and payments.
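Mapping dependencies does not need special tooling to start; even a plain inventory will surface platforms whose outage stops more than one business function. A minimal sketch with illustrative names:

```python
from collections import defaultdict

# Business functions mapped to the platforms they depend on (names are illustrative).
DEPENDENCIES = {
    "vehicle sales":   ["dealer-management-saas", "payments-gateway"],
    "service/repairs": ["dealer-management-saas", "parts-ordering-portal"],
    "payroll":         ["hr-saas"],
    "financing":       ["dealer-management-saas", "credit-bureau-api"],
}

def single_points_of_failure(deps: dict[str, list[str]]) -> dict[str, list[str]]:
    """Return platforms whose outage would take down more than one business function."""
    impact = defaultdict(list)
    for function, platforms in deps.items():
        for platform in platforms:
            impact[platform].append(function)
    return {p: fns for p, fns in impact.items() if len(fns) > 1}

for platform, functions in single_points_of_failure(DEPENDENCIES).items():
    print(f"{platform}: outage halts {', '.join(functions)} -> needs a manual contingency")
```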
10) CNA Financial (2021) — The “don’t make me say the number” payment
What happened: Reports indicated CNA paid $40 million to regain control after a ransomware attack, one of the largest known corporate ransom payments at the time.
Why it mattered: It showed how the economics of downtime can tip decisions toward paying. Do this now: Consult counsel on sanctions risk, pre-negotiate IR vendors and insurers, and decide whether you could ride out weeks of partial outage.
The patterns you can act upon today
- Identity > endpoint: Kill push-only and legacy authentication; use FIDO2/passkeys for admins and remote access, and watch for MFA-fatigue patterns.
- Backups that actually restore: Immutable or offline copies, regular drills, and recovery-time objectives measured in hours (not days).
- Segment to contain failure: Assume an IdP/AD takeover; ring-fence OT, payments, CI/CD, and file-transfer systems.
- Vendor & SaaS due diligence: SBOM/provenance for update channels; demand incident SLAs, a contact tree, and tabletop exercise receipts from key vendors.
- Least privilege and JIT: Eliminate standing global admin, time-box elevation, and re-authenticate sensitive consoles (see the sketches after this list).
- Detection of identity abuse: Alert on bursts of MFA denials followed by a single approval, new factor enrollments, rapid group-membership grants, and spikes in OAuth app consents (see the sketches after this list).
- Help-desk hardening: Two-channel verification before any factor reset; publish a “we will never DM you for an approval” policy.
- Crisis communications and legal channels: Decide now how you will reach customers, regulators, and partners if systems are down, and how you will handle the ransom decision.
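On the least-privilege point, the heart of JIT is a short-lived, re-authenticated grant rather than standing group membership. The sketch below shows the shape of that logic only; a real deployment would lean on your identity platform’s privileged-access tooling, and all names here are illustrative.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

MAX_ELEVATION = timedelta(hours=1)  # time-box: no grant outlives this window

@dataclass
class ElevationGrant:
    user: str
    role: str
    reason: str
    expires_at: datetime

# Active grants, keyed by (user, role); standing admin membership stays empty.
_grants: dict[tuple[str, str], ElevationGrant] = {}

def request_elevation(user: str, role: str, reason: str, reauthenticated: bool) -> ElevationGrant:
    """Issue a short-lived admin grant only after a fresh re-authentication."""
    if not reauthenticated:
        raise PermissionError("fresh MFA re-authentication required before elevation")
    grant = ElevationGrant(user, role, reason,
                           expires_at=datetime.now(timezone.utc) + MAX_ELEVATION)
    _grants[(user, role)] = grant
    return grant

def is_elevated(user: str, role: str) -> bool:
    """Check a grant at use time; expired grants are swept, never silently renewed."""
    grant = _grants.get((user, role))
    if grant is None:
        return False
    if datetime.now(timezone.utc) >= grant.expires_at:
        del _grants[(user, role)]
        return False
    return True

grant = request_elevation("jdoe", "GlobalAdmin", "change MX record", reauthenticated=True)
print(is_elevated("jdoe", "GlobalAdmin"), grant.expires_at)
```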
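On the identity-abuse detection point, the core of an MFA-fatigue alert is simple: a burst of denials followed by an approval. Below is a minimal sketch over pre-parsed sign-in events; the field names and thresholds are illustrative, and the same pattern extends to new-factor enrollments and consent grants.

```python
from datetime import datetime, timedelta

DENY_BURST = 5                         # this many denials...
BURST_WINDOW = timedelta(minutes=10)   # ...inside this window, then an approval

def flag_mfa_fatigue(events: list[dict]) -> list[str]:
    """Flag users whose burst of MFA denials is followed by a single approval.
    Events are assumed time-sorted with keys: user, result ('deny'/'approve'), ts."""
    alerts: list[str] = []
    denies: dict[str, list[datetime]] = {}
    for e in events:
        user, ts = e["user"], e["ts"]
        recent = [t for t in denies.get(user, []) if ts - t <= BURST_WINDOW]
        if e["result"] == "deny":
            recent.append(ts)
            denies[user] = recent
        elif e["result"] == "approve":
            if len(recent) >= DENY_BURST:
                alerts.append(f"{user}: {len(recent)} denials then an approval at {ts:%H:%M} -> investigate")
            denies[user] = []
    return alerts

# Synthetic example: six denials at 02:00-02:05, then an approval at 02:07.
events = [
    {"user": "jdoe", "result": "deny", "ts": datetime(2024, 5, 1, 2, 0) + timedelta(minutes=i)}
    for i in range(6)
] + [{"user": "jdoe", "result": "approve", "ts": datetime(2024, 5, 1, 2, 7)}]
print("\n".join(flag_mfa_fatigue(events)))
```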
A quick reference table
| Incident | Year | Business impact (examples) | Key lesson | Source |
|---|---|---|---|---|
| Colonial Pipeline | 2021 | Fuel disruption; $4.4M paid; $2.3M recovered | MFA everywhere; retire legacy VPNs | Axios |
| JBS | 2021 | Operations halted; $11M paid | Segment plants; manual fallbacks | Reuters |
| Kaseya / REvil | 2021 | 800–1,500 downstream firms impacted | Treat update/orchestration as crown jewels | Kaseya Helpdesk |
| Norsk Hydro | 2019 | ~$40M early costs; refused to pay | Transparent recovery; OT runbooks | BankInfoSecurity |
| Garmin | 2020 | Multi-day SaaS/aviation downtime; ~$10M reported | Test SaaS restore times | WIRED |
| NotPetya (Maersk, etc.) | 2017 | Maersk $200–300M; ~$10B global damage | Supply-chain trust and segmentation, not just backups | Forbes |
| MGM / Caesars | 2023 | MGM ~$100M hit; Caesars ~$15M paid | Identity and help-desk discipline; phishing-resistant MFA | Reuters |
| Change Healthcare | 2024 | National claims/pharmacy disruption; $22M payment reported | Vendor single points of failure need real BCPs | WIRED |
| CDK Global | 2024 | ~15,000 dealers disrupted; BlackSuit attribution | SaaS contingency planning at industry scale | Reuters |
| CNA Financial | 2021 | $40M reported ransom | Pre-decide breach economics and sanctions posture | Bloomberg |
Executive checklist (printable)
- 100% of admins on FIDO2/passkeys; SMS disabled for admins
- Standing global admin = 0; JIT elevation with re-auth
- Backup/restore drill within the last 90 days; offline copy verified
- Conditional access at the IdP + short sessions for critical consoles
- Help-desk factor resets: two live controls (ticket + callback)
- Vendor tabletop proof for top-10 SaaS/platform dependencies
- DLP/egress alerts on bulk downloads via file transfer/EDI
- Clear ransom decision tree (counsel + insurer + regulators)