What Can We Learn from the Largest Data Breaches of the Decade?

Between 2015 and 2025, mega-breaches changed how we think about risk. Here's a simple, practical guide based on what went wrong.

A 60-second tour of "the big ones"

  • Yahoo (2013/2014, disclosed 2016-2017): ultimately all 3 billion accounts were affected, still the largest single breach on record. (Reuters)

  • Equifax (2017): data on 147.9M Americans exposed via an unpatched Apache Struts flaw; an expired certificate delayed detection. (Oversight Committee)

  • Marriott/Starwood (2014-2018): 339M guest records, including passport numbers; later fined by the UK ICO. (Hunton Andrews Kurth)

  • Capital One (2019): a cloud/WAF misconfiguration enabled the data theft; the attacker was later convicted. (Department of Justice)

  • SolarWinds (2020): a supply-chain intrusion via trojanized Orion updates hit government agencies and businesses worldwide. (CISA)

  • Colonial Pipeline (2021): ransomware disrupted fuel distribution; the company paid $4.4M, and the FBI later recovered a portion of it. (Wikipedia)

  • MGM Resorts & Caesars (2023): social engineering of help desks led to operational disruptions and roughly $100 million in costs at MGM. (SEC)

  • 23andMe (2023): credential stuffing exposed data on ~6.9M users through interconnected features. (The HIPAA Journal)

  • MOVEit (2023): a single managed-file-transfer (MFT) zero-day rippled across 2,700+ organizations and some 93M people; a supply-chain reality check. (Wikipedia)

  • AT&T (2024): data on ~73M current and former account holders surfaced in a dark-web dataset, with records dating back years. (Reuters)

  • Change Healthcare / UnitedHealth (2024): ransomware plus a critical server without MFA; ~190M Americans potentially affected, and a $22 million ransom was paid. (AP News)

  • Snowflake customer breaches (2024-2025): data stolen from multiple customers (e.g., Ticketmaster, Santander) using stolen credentials. Not a platform exploit, but a third-party identity story. (ThreatDown by Malwarebytes)

Not every headline above is a traditional "breach" (Cambridge Analytica, for example, was data misuse), but the governance lesson is the same: over-collected data plus weak controls equals massive consequences. (Federal Trade Commission)

Ten lessons security teams can put into practice the very next day

  1. Patch like your brand depends on it (because it does). Equifax shows how a missed critical patch (Apache Struts, CVE-2017-5638) and an expired TLS-inspection certificate that blinded detection turned a fixable vulnerability into a disaster. Do now: track time-to-patch for critical vulns (SLA of 7 days or less), automate vuln-to-ticket, and watch exploit telemetry, not just version numbers; see the KEV sketch below. (Oversight Committee)
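
A minimal sketch of exploit-aware patch tracking: it pulls CISA's public Known Exploited Vulnerabilities (KEV) feed and flags open findings that are past their KEV remediation due date. The `open_findings` list and its fields are hypothetical stand-ins for your scanner's export; the feed URL and JSON fields (`cveID`, `dueDate`) match the published catalog at the time of writing.

```python
import datetime
import json
import urllib.request

KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
           "known_exploited_vulnerabilities.json")

def load_kev() -> dict:
    """Fetch CISA's KEV catalog and index it by CVE ID."""
    with urllib.request.urlopen(KEV_URL) as resp:
        catalog = json.load(resp)
    return {v["cveID"]: v for v in catalog["vulnerabilities"]}

# Hypothetical export from your vuln scanner: CVE + first-seen date.
open_findings = [
    {"cve": "CVE-2017-5638", "first_seen": "2025-01-02"},
]

kev = load_kev()
today = datetime.date.today()
for f in open_findings:
    entry = kev.get(f["cve"])
    if not entry:
        continue  # not in KEV; handle under the normal patch SLA
    due = datetime.date.fromisoformat(entry["dueDate"])
    age = (today - datetime.date.fromisoformat(f["first_seen"])).days
    status = "OVERDUE" if today > due else "within SLA"
    print(f'{f["cve"]}: open {age}d, KEV due {due} -> {status}')
```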

  2. If there's no business reason to keep it, delete it. AT&T's leaked dataset included 65.4M former account holders; old data lingers, and so does liability. Do now: enforce per-system data retention, automate the purge of stale PII, and prove it by logging deletions (sketch below). (Reuters)
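
A sketch of enforced retention with provable deletion. The `customers` and `deletion_log` tables and the `last_active` column are hypothetical; the pattern is what matters: purge past-retention PII and write the audit record in the same transaction.

```python
import sqlite3
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 730  # example policy: purge PII inactive for two years

def purge_stale_pii(conn: sqlite3.Connection) -> int:
    """Delete past-retention rows and log the deletion atomically."""
    cutoff = (datetime.now(timezone.utc)
              - timedelta(days=RETENTION_DAYS)).isoformat()
    with conn:  # one transaction: purge and audit trail together
        rows = conn.execute(
            "SELECT id FROM customers WHERE last_active < ?", (cutoff,)
        ).fetchall()
        conn.execute("DELETE FROM customers WHERE last_active < ?", (cutoff,))
        conn.execute(
            "INSERT INTO deletion_log (deleted_count, cutoff, ran_at)"
            " VALUES (?, ?, ?)",
            (len(rows), cutoff, datetime.now(timezone.utc).isoformat()),
        )
    return len(rows)
```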

  3. Treat third-party suppliers as part of your attack surface. SolarWinds and MOVEit proved that supplier risk is your risk. Do now: keep a software bill of materials (SBOM), require coordinated disclosure and patch SLAs in contracts, and segment and monitor file-transfer and management software (SBOM-parsing sketch below). (CISA)
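
SBOMs only help if you actually read them. A sketch that parses a CycloneDX-format JSON SBOM (one common SBOM standard) and lists component names and versions so they can be matched against advisories; the file path and the component match are illustrative placeholders.

```python
import json

def list_components(sbom_path: str):
    """Yield (name, version, purl) for each component in a CycloneDX JSON SBOM."""
    with open(sbom_path) as fh:
        sbom = json.load(fh)
    for comp in sbom.get("components", []):
        yield comp.get("name"), comp.get("version"), comp.get("purl")

# Example: flag a vendor component named in a hypothetical advisory.
for name, version, purl in list_components("vendor-sbom.json"):
    if name == "MOVEit Transfer":  # illustrative string match, not a real lookup
        print(f"Review advisory exposure: {name} {version} ({purl})")
```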

  4. The cloud is secure; misconfigurations aren't. Capital One's theft rode a WAF/instance-metadata misconfiguration path (SSRF). Do now: enforce least-privilege IAM, block instance-metadata access by default, and run continuous misconfiguration drift checks (IMDSv2 sketch below). (Department of Justice)
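
The Capital One path abused SSRF to reach the EC2 instance-metadata service. On AWS, requiring IMDSv2 (session-token-based metadata access) blunts that class of SSRF. A sketch using boto3, assuming credentials with `ec2:ModifyInstanceMetadataOptions`:

```python
import boto3

ec2 = boto3.client("ec2")

def require_imdsv2(instance_id: str) -> None:
    """Force IMDSv2 so plain SSRF GETs can't read instance metadata."""
    ec2.modify_instance_metadata_options(
        InstanceId=instance_id,
        HttpTokens="required",      # reject tokenless (IMDSv1) requests
        HttpPutResponseHopLimit=1,  # keep tokens from leaving the instance
        HttpEndpoint="enabled",
    )

# Sweep every running instance in the account/region (paginated).
paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
):
    for reservation in page["Reservations"]:
        for inst in reservation["Instances"]:
            require_imdsv2(inst["InstanceId"])
            print("IMDSv2 enforced on", inst["InstanceId"])
```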

  5. Identity is the new perimeter, so close every gap.

    • MFA across the board, including service and admin accounts (Change Healthcare lacked MFA on the critical server).

    • Detect token abuse and session hijacking (Okta's support-system breach leaked session tokens via HAR files).
      Do now: FIDO2 for admins, conditional access, short-lived tokens, token-signing-key monitoring, and fast session revocation (detection sketch below). (AP News)
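
A toy sketch of one session-hijack signal: the same session ID suddenly presented from a never-seen IP and user agent. Real detections belong in your IdP/SIEM; the event field names here are hypothetical.

```python
from collections import defaultdict

# session_id -> set of (ip, user_agent) fingerprints seen so far
seen = defaultdict(set)

def check_event(event: dict) -> None:
    """Flag a session token presented with a never-seen client fingerprint."""
    fp = (event["ip"], event["user_agent"])
    prior = seen[event["session_id"]]
    if prior and fp not in prior:
        print(f"ALERT: session {event['session_id']} reused from {fp}; "
              "revoke and force re-authentication")
    prior.add(fp)

events = [
    {"session_id": "s1", "ip": "198.51.100.7", "user_agent": "Chrome"},
    {"session_id": "s1", "ip": "203.0.113.9", "user_agent": "curl"},  # hijack-like
]
for e in events:
    check_event(e)
```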

  6. Assume your IdP and email can be targeted. Microsoft's Storm-0558 incident used forged tokens to read email from ~25 organizations, and the U.S. CSRB called it preventable. Do now: independent cloud-identity reviews, externalized key custody where feasible, rigorous key hygiene and rotation, and detection of anomalously signed tokens (key-check sketch below). (Microsoft)
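
Storm-0558 worked because tokens signed with the wrong key were accepted. A minimal detection sketch using the PyJWT library: inspect each incoming JWT's `kid` header against an allow-list of currently published signing keys and alert on anything else. The key IDs are placeholders, and this complements (never replaces) full signature verification.

```python
import jwt  # PyJWT

VALID_KIDS = {"2025-rotation-key-1", "2025-rotation-key-2"}  # placeholder IDs

def check_signing_key(token: str) -> bool:
    """Alert if a token claims a signing key we don't currently publish."""
    header = jwt.get_unverified_header(token)  # reads header only, no trust implied
    kid = header.get("kid")
    if kid not in VALID_KIDS:
        print(f"ALERT: token signed with unexpected kid={kid!r}; "
              "investigate possible forged-token activity")
        return False
    return True  # still verify the signature before accepting the token
```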

  7. People and process beat "perfect tech." MGM's cascading outages began with phone-based social engineering of the help desk. Do now: harden the help desk (no resets over chat/phone without high-assurance re-proofing), run "assume-breach" tabletop drills, and use just-in-time admin elevation. (Reuters)

  8. Ransomware resilience is a business problem. Colonial Pipeline and Change Healthcare show that paying a ransom guarantees neither fast recovery nor data safety. Do now: offline, immutable backups; tested hour-zero runbooks; and segmentation that keeps crown-jewel operations running under pressure (backup-check sketch below). (Wikipedia)
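
Backups only count if they are immutable and fresh. A sketch that checks an S3 backup bucket for Object Lock (write-once retention) and for a recent-enough newest object; the bucket, prefix, and RPO target are placeholders.

```python
from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client("s3")
BUCKET, PREFIX = "backups-prod", "db/"  # placeholder names
MAX_AGE = timedelta(hours=24)           # example RPO target

# 1) Immutability: this call raises if Object Lock was never enabled.
lock = s3.get_object_lock_configuration(Bucket=BUCKET)
print("Object Lock:", lock["ObjectLockConfiguration"]["ObjectLockEnabled"])

# 2) Freshness: the newest backup object must fall within the RPO window.
newest = None
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        if newest is None or obj["LastModified"] > newest:
            newest = obj["LastModified"]

if newest is None or datetime.now(timezone.utc) - newest > MAX_AGE:
    print("ALERT: no backup within the RPO window; trigger the runbook")
else:
    print("Latest backup OK:", newest.isoformat())
```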

  9. Incident transparency matters (and regulators are watching).
    Marriott's ICO fine and Facebook's $5 billion FTC penalty show that privacy programs, disclosure quality, and vendor diligence are board-level issues.
    Do now: pre-write comms templates, set up legal review lanes, and maintain regulator contact trees. (GDPR Register)

  10. Design for credential stuffing: assume your users reuse passwords. 23andMe shows how interconnected features (e.g., relatives/ancestry connections) can amplify exposure. Do now: MFA by default (with friction to opt out), breached-password checks, dynamic throttling, and feature-level abuse limits (breached-password sketch below). (The HIPAA Journal)
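
A sketch of a breached-password check at signup or reset using the Pwned Passwords range API. The service only ever sees the first five hex characters of the SHA-1 hash (k-anonymity), so the password itself never leaves your side.

```python
import hashlib
import urllib.request

def is_breached(password: str) -> bool:
    """k-anonymity lookup against the Pwned Passwords range API."""
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode()
    # Each response line is "<35-hex-char suffix>:<breach count>"
    return any(line.split(":")[0] == suffix for line in body.splitlines())

if is_breached("password123"):
    print("Reject: password appears in known breach corpora")
```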

A realistic 90-day schedule (use this as a checklist)

Weeks 1-2: Stabilize and assess risk

  • Build a crown-jewels map (systems, data, and third parties).

  • Turn on MFA org-wide, especially for admins, service accounts, and VPN/SSO.

  • Patch or mitigate all known exploited vulnerabilities (the CISA KEV list) and retire legacy remote-admin and MFT tools. (Wikipedia)

Weeks 3-6: Reduce blast radius

  • Enforce least privilege and remove stale access; use JIT admin elevation.

  • Segment critical workloads and set up break-glass accounts with hardware keys.

  • Implement retention and deletion for old customer data (prove it with deletion logs). (Reuters)

Weeks 7-10: Detect and respond faster

  • Deploy key/token abuse detection (IdP, O365/Google Workspace).

  • Require the help desk to verify caller identity; block resets over chat or voice unless high-assurance factors are used.

  • Run a ransomware tabletop: restore from backups, validate RTO/RPO, and test comms to customers/regulators. (Reuters)

Weeks 11-13: Secure the supply chain

  • Inventory third-party software and services; get SLOs and patch SLAs in writing.

  • Gate vendor connectivity with policy-based network controls and behavioral monitoring.

  • Segment MFT/automation software. (CISA)

Metrics that actually move the needle

  • MTTD/MTTR for identity anomalies and egress spikes

  • % of critical vulns patched within 7/15 days (internet-facing/internal); see the sketch after this list

  • Admin accounts with phishing-resistant MFA (target: 100%)

  • Token-signing key hygiene (rotation, HSM usage, anomaly alerts) (U.S. Department of Homeland Security)

  • Third-party assurance: % of vendors with SBOMs, vuln SLAs, and incident clauses

  • Data minimization: % of systems enforcing automated PII deletion after X days (Reuters)
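
These metrics are easy to compute once findings are exported. A sketch of the patch-SLA metric, assuming a hypothetical scanner export of critical findings with open/close dates and an internet-facing flag:

```python
from datetime import date

# Hypothetical scanner export: (opened, closed, internet_facing)
findings = [
    (date(2025, 3, 1), date(2025, 3, 5), True),
    (date(2025, 3, 1), date(2025, 3, 20), True),
    (date(2025, 3, 2), date(2025, 3, 10), False),
]

def sla_compliance(findings, external_days=7, internal_days=15) -> float:
    """% of critical findings remediated within the applicable SLA."""
    met = sum(
        (closed - opened).days <= (external_days if external else internal_days)
        for opened, closed, external in findings
    )
    return 100 * met / len(findings)

print(f"Critical vulns patched within SLA: {sla_compliance(findings):.0f}%")
```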

Final takeaway

The most prominent incidents weren't "zero-days versus perfect defenses." They were basic gaps: unpatched systems, data kept forever, soft help desks, optional MFA, and vendor sprawl. The good news: the fixes are concrete and measurable.
