Top 10 Social Engineering Attacks in 2026 (and the Red Flags People Missed)
By Irene Holden
Last Updated: January 9, 2026

Too Long; Didn't Read
AI-driven impersonation (deepfake video and AI-cloned voice), Business Email Compromise (BEC) and vendor/invoice fraud, help-desk vishing, spear-phishing, and insider-assisted attacks top the 2026 list - they succeed because attackers exploit human and process gaps more than technical holes. These plays helped shape a landscape where reported cybercrime losses reached about $16.6 billion in 2024, with BEC responsible for over half of social-engineering losses (roughly 21,500 complaints and about $2.9 billion in reported losses). High-profile examples range from a $25.6M deepfake-enabled wire transfer to multi-million-dollar vendor frauds and insider-enabled exposures.
You’ve probably seen those “Top 10 Biggest Hacks” posts that feel like a SportsCenter reel for cybercrime: huge dollar losses, famous company logos, dramatic headlines. They’re a useful hook, but they’re also a little misleading. Just like a dunk on a highlight reel hides the boring missed pass that decided the game, most breach headlines hide the quiet, human-sized mistake that actually turned the ball over.
The scoreboard vs. the quiet turnover
On the scoreboard, the numbers are brutal. The FBI’s Internet Crime Complaint Center reports that cybercrime victims collectively lost about $16.6 billion in 2024 alone, according to an analysis of FBI-confirmed cybercrime losses. Drill down and you see that Business Email Compromise (BEC) is one of the biggest drivers of financial damage. A social-engineering breakdown from DeepStrike notes that BEC is responsible for more than half of social-engineering-related losses, with nearly 21,500 BEC complaints and about $2.9 billion in reported losses in a single year (DeepStrike’s social engineering statistics).
Why “Top 10” lists can trick your brain
The paradox is that lists like this can make you feel informed while still hiding the real lesson. Each “incident” on a top-10 list is actually dozens of tiny, human decisions: a rushed approval, a help-desk exception, a skipped callback, a permissions pop-up someone clicked without thinking. As one intelligence team put it, “the use and impact of GenAI is very likely to shape the cyber threat landscape to an even greater extent… Social engineering will remain one of the most exploited threat vectors.” - ZeroFox Intelligence. The big stories are AI-deepfake heists and mega-ransomware events, but the decisive moment is almost always a subtle fake-out that could have been stopped.
Watching the tape like a defender
This article is the film room, not the sizzle reel. For each attack, you’ll get the “highlight” that made headlines, but we’ll spend more time hitting pause and rewind on the exact second things went wrong: the setup, the human decision, the red flags everyone missed, and the simple control or habit that would have changed the play. The goal isn’t fear; it’s pattern recognition. These are learnable skills, and they’re exactly what entry-level analysts, help-desk techs, and future security engineers are hired for: seeing past the scoreboard to the quiet, fixable moments that keep your organization off the highlight reel in the first place.
Table of Contents
- Why the highlight reel misleads
- Nucamp Cybersecurity Bootcamp
- Deepfake all-hands video scam (Arup)
- Help-desk vishing that led to ransomware (MGM & Caesars)
- AI-cloned voice scams
- Vendor email takeover (Grand Rapids Public Schools)
- Toyota Boshoku business-email compromise
- Google & Facebook Rimasauskas invoice fraud
- Children’s Healthcare of Atlanta BEC
- RSA SecurID breach via spear-phishing
- Insider bribery and social engineering (Coinbase)
- Spot-the-scam checklist
- Frequently Asked Questions
Check Out Next:
If you want to get started this month, the learn-to-read-the-water cybersecurity plan lays out concrete weekly steps.
Nucamp Cybersecurity Bootcamp
Before you can break down game film, you need to know what you’re looking at. That’s essentially what Nucamp’s cybersecurity path is built for: giving beginners and career-switchers a structured way to understand how real attacks unfold, instead of just scrolling past another “Top 10 Hacks” headline. It’s designed so you can keep your day job, learn the language of security, and start spotting the quiet human mistakes that decide most incidents.
The setup: a 15-week, beginner-friendly path
Nucamp’s cybersecurity track runs over 15 weeks, split into three intensive four-week courses plus a transition week: Cybersecurity Foundations, Network Defense and Security, and Ethical Hacking. The format is 100% online, with weekly live workshops of about four hours in small groups (capped around 15 students) and a total time commitment of roughly 12 hours per week including self-paced work. Tuition when paid in full is about $2,124, with a modest registration fee and options for early-bird or installment pricing, which puts it well below the five-figure price tags many bootcamps charge for similar timelines.
| Program | Duration | Approx. Tuition | Format |
|---|---|---|---|
| Nucamp Cybersecurity Path | 15 weeks (part-time) | $2,124 (paid in full) | Online, evenings/weekends |
| Typical Cybersecurity Bootcamp | 16-24 weeks | $10,000+ | Full-time or hybrid |
What you actually work on in the “film room”
Each of the three courses tackles a different part of the play. Cybersecurity Foundations builds core concepts like the CIA triad, common attack types, and policy/compliance - basically, how things should work. Network Defense and Security picks up after that bad click or fake phone call, covering protocols, vulnerabilities, firewalls, IDS/IPS, VPNs, and segmentation so one social-engineering success doesn’t become a full breach. Ethical Hacking then lets you see the game from the attacker’s sideline: doing reconnaissance, vulnerability assessment, and tightly controlled exploitation exercises under explicit legal and ethical constraints, so you learn how real attackers build convincing pretexts without ever crossing the line yourself.
Credentials and cert prep that hiring managers recognize
As you progress, you earn Nucamp’s CySecurity, CyDefSec, and CyHacker certificates, which map well to entry-level expectations for security analysts, SOC roles, and junior pen testers. The curriculum is also aligned to industry exams like CompTIA Security+, GIAC GSEC, and EC-Council CEH, all of which test your understanding of social engineering and defensive controls. A global snapshot from the Cybersecurity Ventures almanac highlights millions of unfilled cyber roles worldwide, which is why structured, cert-aligned training has become such a common on-ramp for career changers.
Outcomes, reputation, and why it fits career-switchers
On the outcomes side, Nucamp reports a graduation rate of about 75% for this path and a community reputation anchored by a 4.5/5 Trustpilot rating across 398 reviews, with the vast majority rated five stars. The program has also been recognized by outlets like Fortune as a “Best Overall Cybersecurity Bootcamp,” which helps when you’re trying to stand out for that first analyst or IT security role. Combined with career services - portfolio support, mock interviews, and job-search coaching - you get more than just highlight reels of scary attacks; you get guided reps in slowing the tape down, reading human-driven attacks in context, and responding in ways that are both technically sound and legally and ethically solid.
Deepfake all-hands video scam (Arup)
Every top-10 list needs a jaw-dropping clip, and the deepfake “all-hands” scam that hit a multinational firm in Hong Kong is exactly that. In 2024, an employee joined what looked like a routine video conference with several familiar faces, including the company’s CFO. Everyone sounded right, moved naturally, and referenced real projects. By the end of the call, the employee had authorized a series of urgent transfers totaling about $25.6 million. Only later did investigators conclude that every “colleague” on that call was an AI-generated deepfake - the only real person was the victim.
The highlight: a fake all-hands that felt real
On the surface, this looked like a normal high-stakes meeting: a senior executive personally walking a trusted employee through “confidential” wire transfers that had to be completed immediately. Attackers had cloned the CFO and other leaders using stolen video and audio, then stitched those synthetic identities into a live conference. Instead of a sketchy email or obvious scam link, the request came from a polished video call - the kind of channel most people instinctively trust more than text.
Off-the-ball movement: how attackers set up the play
Behind that one spectacular play was a lot of quiet prep. Attackers likely harvested public talks, internal recordings, and social media clips to train generative models that could mimic executive voices and faces. They gathered org charts, project names, and internal jargon so the conversation would feel natural. Security pros have been warning that “the days of scattered phishing emails with bad grammar and obvious red flags are gone” as AI lets attackers match tone, accent, and style with unsettling precision - as one expert told SC Media in a debrief on recent brand attacks.
“Attackers aren’t just breaking in, they’re advancing at an alarming rate using AI... the days of scattered phishing emails with bad grammar and obvious red flags are gone.” - Subject-matter expert, SC Media
The red flags that vanished in the moment
When you slow the tape down, a few things still stand out. Large, unusual transfers were initiated only in a live meeting, with no matching purchase orders, tickets, or signed approvals. The “CFO” was suddenly hands-on with payment operations they normally delegate, and the whole scenario was wrapped in secrecy: this was “too sensitive” to verify with anyone else. There may also have been subtle glitches - slight audio delays, odd eye contact, or executives appearing from unfamiliar environments - that felt off but were easy to ignore under pressure.
Changing the defensive playbook, ethically
Defending against this isn’t about spotting pixel-level artifacts; it’s about refusing to let any single channel, no matter how realistic, override your normal controls. That means hard rules like dual approval and documented backup for high-value wires, plus out-of-band verification any time an executive asks for money movement or security changes - call back on a known-good number, ping them in your official chat, or confirm through your ticketing system. It also means training: running safe simulations with synthetic media, teaching teams that “voice and video ≠ identity,” and reinforcing that it’s not just allowed but expected to pause, verify, and say, “I need to follow the process,” even when the person on screen looks exactly like your CFO.
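To make “process overrides channel” concrete, here’s a minimal Python sketch of a payment-release gate. Everything in it - the field names, the $50,000 threshold, the `WireRequest` shape - is an illustrative assumption for this article, not Arup’s actual system or any specific product’s API:

```python
from dataclasses import dataclass, field

@dataclass
class WireRequest:
    """Hypothetical wire-transfer request sitting in a payments queue."""
    amount_usd: float
    requested_via: str                # e.g. "video_call", "email", "ticket"
    ticket_id: str | None = None      # matching PO/ticket, if any
    approvers: set[str] = field(default_factory=set)
    oob_verified: bool = False        # callback on a known-good number logged?

HIGH_VALUE_THRESHOLD = 50_000         # assumed policy threshold, not a standard

def release_allowed(req: WireRequest) -> tuple[bool, str]:
    """Approve only when process, not the requester's channel, says yes."""
    if req.amount_usd >= HIGH_VALUE_THRESHOLD:
        if req.ticket_id is None:
            return False, "no matching ticket/PO - live requests alone never qualify"
        if len(req.approvers) < 2:
            return False, "dual approval required for high-value wires"
        if not req.oob_verified:
            return False, "out-of-band callback not recorded"
    if req.requested_via in {"video_call", "voice_call"} and not req.oob_verified:
        return False, "voice/video is one signal, not identity - verify out of band"
    return True, "ok"

# The Arup-style scenario: a live video call, huge amount, no ticket, no callback
suspect = WireRequest(amount_usd=25_600_000, requested_via="video_call")
print(release_allowed(suspect))  # fails the first check - no deepfake detection needed
```

The point of the sketch is that the deepfake never gets a vote: the request dies on missing paperwork before anyone has to judge pixels.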
Help-desk vishing that led to ransomware (MGM & Caesars)
Some of the most expensive “plays” on the ransomware highlight reel don’t start with malware at all - they start with a friendly voice on the phone. That’s what happened to hospitality giants MGM Resorts and Caesars Entertainment in 2023, when a group often called Scattered Spider reportedly used phone-based social engineering to talk their way past the front line: the IT help desk. From there, ransomware and extortion followed, costing the companies millions in disruption and recovery, as chronicled in post-incident reviews like Integrity360’s roundup of major attacks.
The highlight: ransomware built on a fake support call
On the scoreboard, this looks like a “sophisticated ransomware operation.” On the tape, the first move is surprisingly simple: a phone call to the help desk. Attackers phoned in, claimed to be legitimate employees who were locked out ahead of some urgent task, and convinced support staff to reset passwords, bypass or reset multi-factor authentication (MFA), or grant new access. They didn’t need zero-days; they needed believable stories. Public data from LinkedIn, company bios, and previous breaches gave them real names, roles, and manager chains to sprinkle into their pretexts, turning a cold call into what felt like a normal, stressful support situation.
The script break: where the defense bent
Slow the tape down and you see a few subtle but critical breaks in the playbook. Caller ID or callback numbers were treated as partial proof of identity, even though both are easy to spoof. Help-desk staff “flexed” verification procedures to be helpful when the caller sounded panicked or important. High-impact actions - like resetting MFA or granting access to sensitive tools - went through without any out-of-band verification, such as checking with the user’s manager or messaging them on an internal channel. Research into emerging social-engineering tactics notes that this exact help desk & support impersonation pattern is becoming a go-to move for attackers, especially as AI tools make it trivial to gather convincing details and even generate scripts ahead of time, a trend highlighted in Memcyco’s 2026 social engineering tactics report.
“Be politely paranoid. If you receive a request for a sensitive action… verify who they say they are with a second verification.” - Rachel Tobac, CEO, SocialProof Security
Defensive adjustments for future analysts and support pros
For defenders - and especially for anyone starting in IT support or junior security roles - the lesson isn’t “never help users.” It’s that sensitive actions must ride on process, not on how convincing a caller sounds. That means hard requirements for dual verification before resetting MFA or privileged accounts (for example, confirming through an internal chat or ticketing system), strict scripts that can’t be bypassed for “urgent exceptions,” and least-privilege controls so a single reset doesn’t quietly open the whole network. Practicing realistic vishing scenarios in training and labs helps you build the muscle to do exactly what Tobac recommends: stay helpful, but politely paranoid, so you don’t let a smooth pretext turn into the first domino in a ransomware chain.
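One way to make “politely paranoid” stick is to encode the script so urgency literally isn’t an input to the decision. Below is a toy Python sketch under assumed check names - not any real help-desk product’s workflow:

```python
# Checks a technician must record before a reset; the names are illustrative assumptions.
REQUIRED_CHECKS = (
    "employee_id_matches_hr_record",   # pulled from HR systems, not caller-supplied
    "callback_on_directory_number",    # we call them back; caller ID doesn't count
    "second_channel_confirmation",     # manager ping, internal chat, or ticket
)

def mfa_reset_permitted(completed_checks: set[str], caller_sounds_urgent: bool) -> bool:
    """Urgency is accepted as an argument only to document that it changes nothing."""
    del caller_sounds_urgent  # pressure never shortens the checklist
    return all(check in completed_checks for check in REQUIRED_CHECKS)

# A panicked "executive" who has only supplied a name and employee ID:
print(mfa_reset_permitted({"employee_id_matches_hr_record"}, caller_sounds_urgent=True))  # False
```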
AI-cloned voice scams
When people talk about AI in cyberattacks, this is the clip they think of: the phone rings, and it’s your boss, your bank, or your kid, sounding exactly right and demanding something urgent. Over the last couple of years, attackers have started using AI-cloned voices in real scams, from “CEO” calls ordering last-minute wires to fake “family emergencies” asking for fast money. The FBI even issued a public warning about AI-generated voice phishing that impersonated senior U.S. officials, combining texts and robocalls to push people into sharing credentials and one-time codes, as analyzed in detail by BlackFog’s breakdown of the FBI alert.
Why the fake voice beats your instincts
If you pause the tape here, the scary part isn’t the tech; it’s the psychology. Attackers use cloned voices to slam on your emotional brakes: fear (“your account is compromised”), panic (“your relative is in trouble”), or obligation (“the CEO needs this done now”). At the same time, most of us still treat a familiar voice as proof of identity, so once the sound matches, we stop questioning the story. Threat-intel teams have been blunt about where this is going:
“In 2026, the use and impact of GenAI is very likely to shape the cyber threat landscape to an even greater extent than observed in 2025... Social engineering will remain one of the most exploited threat vectors.” - ZeroFox Intelligence
Slowing the tape: red flags in AI-voice scams
On replay, AI-voice scams usually show the same tells, just dressed up in a better accent:
- A “known” person calling from a completely new number and insisting you stay on that call or text thread.
- Pressure to move fast and keep it quiet: “do this before the meeting,” “don’t loop anyone else in,” “we’ll fix the paperwork later.”
- Requests that don’t fit the role, like a CEO personally chasing down wire details or a bank “verifying” your one-time MFA code.
- Refusal when you suggest calling back on the main office number or confirming via your usual corporate chat.
Defensive playbook: treat voice as just one signal
The adjustment, for both everyday users and future analysts, is to stop treating voice as a magic key. Make “I’ll call you back on the official number” your default for anything involving money, access, or sensitive data. Lock in strong financial workflows that always require written documentation and dual approval for wires, no matter who calls. In corporate environments, teach explicitly that voice ≠ identity, and empower help desks, finance teams, and managers to slow things down, cross-check through known-good channels, and log suspicious calls. That shift from trusting the highlight (a perfect-sounding voice) to reviewing the full play (context, channel, and process) is exactly the kind of habit security teams look for when they hire.
Vendor email takeover (Grand Rapids Public Schools)
Not every costly play looks dramatic from the stands. In 2025, Grand Rapids Public Schools in Michigan reportedly lost about $2.8 million in a vendor impersonation scam that barely moved the needle on the news cycle but is the kind of attack security teams worry about most. Attackers quietly compromised a school employee’s email account, watched ongoing conversations with an insurance provider, and then slipped in modified payment instructions so that legitimate insurance payments were diverted into a fraudulent bank account.
The move that decided the game wasn’t a big “you’ve been hacked” moment. It was a tiny change to bank account details, inserted into the middle of an existing, trusted email thread. Because the messages came from a real internal address and fit perfectly into an ongoing conversation, the finance team treated them as routine. This is a textbook example of account takeover and thread hijacking, a pattern that shows up again and again in incident write-ups and in resources like Teramind’s collection of real BEC examples.
| Step | Normal Vendor Payment Flow | Hijacked BEC Flow | Risk Point |
|---|---|---|---|
| Initiation | Invoice received from known vendor contact | Email sent from compromised internal account in existing thread | Trust in familiar thread and sender |
| Change Request | Rare changes to bank details, via formal process | “Updated” account info embedded in normal-looking message | Subtle bank detail change goes unnoticed |
| Verification | Callback or secondary check with vendor | No out-of-band verification performed | Single-channel trust in email |
| Payment | Funds sent to long-standing, verified account | Funds wired to attacker-controlled account | Loss realized; detection delayed |
When you pause the tape, the red flags become easier to see. A long-standing payment destination suddenly changed without a clear business reason. The only “approval” for that change came via email, instead of a vendor portal or contract system. No one picked up the phone to call the insurance provider on a known-good number and confirm, “Did you really switch banks?” And because the attackers were already inside a trusted mailbox, they could time their request to hit during busy periods, when staff were more likely to click through and less likely to question the details.
For defenders and career-switchers looking at roles in security or finance-adjacent IT, this case is a reminder that email threads are not sacred ground. Even government and law-enforcement guidance has started highlighting how account takeover fraud often hinges on quiet changes to payment details made through compromised email, leading the FBI to issue specific alerts about this kind of pattern, as noted in legal analyses of rising account takeover fraud tied to email compromise. Strong controls here look boring but powerful: always verifying bank detail changes out-of-band, requiring multi-person approval for payee changes in your accounting system, and training staff to treat “this looks like the same thread as always” as just one signal, not a guarantee that the play is safe.
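As a sketch, the “boring but powerful” control reads like a list of blockers that a payee change must clear. The record fields and channel names below are assumptions for illustration, not the district’s actual accounting system:

```python
from dataclasses import dataclass

@dataclass
class PayeeChange:
    vendor: str
    new_account: str
    source_channel: str      # "email_thread", "vendor_portal", "contract_system"
    oob_callback_done: bool  # called the vendor on the number from the master record?
    approver_count: int

def payee_change_blockers(change: PayeeChange) -> list[str]:
    """Return outstanding blockers; the change proceeds only when the list is empty."""
    blockers = []
    if change.source_channel == "email_thread":
        blockers.append("bank-detail changes are never accepted from email alone")
    if not change.oob_callback_done:
        blockers.append("no callback to the vendor on a known-good number")
    if change.approver_count < 2:
        blockers.append("multi-person approval required for payee changes")
    return blockers

# The GRPS-style request: mid-thread email, no callback, one busy approver
grps_style = PayeeChange("insurance_provider", "NEW-ACCT-123", "email_thread", False, 1)
for blocker in payee_change_blockers(grps_style):
    print("blocked:", blocker)
```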
Toyota Boshoku business-email compromise
On the surface, the Toyota Boshoku case looks like a classic big-number highlight: in 2019, this auto parts supplier in the Toyota group reportedly lost about $37 million after a Business Email Compromise (BEC) scam slipped past internal controls. An employee in accounting received a convincing message that appeared to come from a trusted business partner, asking to update the bank account used for large electronic payments. The message landed during a busy period, looked professional, and was treated as routine - until the money was gone.
The highlight: one email that changed the score
In football terms, this wasn’t a trick play; it was a simple fake handoff that worked because everyone was already moving in the wrong direction. Attackers posed as a legitimate supplier, used polished corporate language, and requested a “normal” operational change: update the bank details before the next big transfer. The dollar amounts, project context, and timing all fit what Toyota Boshoku staff expected to see, which is why the fraudulent instructions sailed through. Case roundups like the one from Gatefy’s review of famous social engineering attacks point to this incident as a textbook example of how BEC exploits trust in familiar business processes rather than technical vulnerabilities.
Rewinding the tape: where the process broke
If you pause and step through the play, the weak points come into focus. A major financial change - altering the destination account for multi-million-dollar payments - was initiated and approved based solely on an email, without any secondary verification through a known phone number, vendor portal, or contract system. Established dual-authorization rules were bent or bypassed under time pressure. The language in the email may have been just generic enough to be slightly off from the real contact’s usual style, but in a rush, no one stopped to re-read it critically.
| Control Point | What Should Happen | What Reportedly Happened | Impact |
|---|---|---|---|
| Bank Detail Change | Requested via formal vendor process with verification | Requested via standalone email to accounting | Fraudulent account accepted as valid |
| Approval Workflow | Dual approval and documentation required | Single employee acted on email under time pressure | No independent sanity check |
| Out-of-Band Verification | Callback or cross-check with known vendor contact | No callback; email treated as authoritative | Millions wired to attacker-controlled account |
Lessons for future defenders and analysts
For anyone entering cybersecurity, finance IT, or GRC roles, this incident is a reminder that some of the most important defenses are procedural, not technical. Bank details are “passwords for money” and should be treated with the same caution: mandatory callbacks to verified numbers, multi-person approvals for changes, and a hard rule that no one - not even a senior executive or long-time vendor - can bypass those steps via email alone. BEC remains one of the most financially damaging forms of cybercrime, and being the person in the room who can slow the tape, spot a suspicious change request, and point back to policy is exactly the kind of off-the-ball awareness hiring managers look for in junior security and risk professionals.
Google & Facebook Rimasauskas invoice fraud
On any list of wild social engineering plays, the long-running invoice hustle that hit Google and Facebook stands out. Between 2013 and 2015, Lithuanian national Evaldas Rimasauskas and his co-conspirators reportedly siphoned off more than $100 million from the tech giants by impersonating one of their real hardware vendors, Quanta Computer. They didn’t brute-force accounts or drop zero-days; they registered a company with Quanta’s name, forged contracts and invoices that looked exactly like the real thing, and convinced accounts payable teams to wire huge sums to bank accounts they controlled.
The setup: building a fake “real” vendor
Instead of a one-off phishing email, this was a full-on fake vendor operation. The scammers incorporated a business with the same name as Quanta Computer, opened bank accounts in its name, and created near-perfect copies of purchase orders, contracts, and letterheads. They targeted employees at Google and Facebook who were already used to processing large, legitimate payments to the real Quanta. When the forged invoices arrived, they referenced actual projects and realistic dollar amounts, so they blended seamlessly into everyday workflow. Analyses like the Infosec Institute roundup of famous social engineering attacks call out this case precisely because it shows how far criminals will go to imitate trusted vendors.
Slow-motion review: real vendor vs. fake vendor
| Signal | Legitimate Quanta Relationship | Rimasauskas’ Fake Setup | What Should Have Happened |
|---|---|---|---|
| Company Entity | Known Taiwanese manufacturer with established contracts | Newly registered company using the same name in another jurisdiction | Legal and procurement review of new entities, even with familiar names |
| Bank Accounts | Verified accounts on file, rarely changed | Fresh accounts in different countries, “urgently” adopted | Out-of-band confirmation with vendor before accepting new banking details |
| Communication Channel | Vendor portals, known contacts, controlled email domains | Emails and documents sent outside normal vendor-management channels | Treat emails alone as insufficient to change payment destinations |
| Documentation Trail | Contracts stored in official systems and workflows | Forged PDFs and letters attached directly to emails | Cross-check with contract repositories before paying large invoices |
When you replay the tape, the pattern is less “Hollywood hack” and more a series of small process failures. A look-alike corporate entity sailed through onboarding. New international bank accounts were added based only on emailed instructions. Invoices and “official” letters that lived entirely in inboxes were never reconciled against the contract systems that should have been the single source of truth. It’s exactly the kind of multi-step, trust-exploiting social engineering that has led organizations like the RSA Conference to describe social engineering as a top cybersecurity threat, even for companies with world-class technical controls.
Defensive habits this play exposes
For aspiring analysts, GRC specialists, or security engineers, the Rimasauskas scam is a case study in why vendor and payment processes matter as much as firewalls. Centralizing vendor onboarding, separating purchasing from payment approval, requiring multi-person sign-off on changes to banking information, and mandating callback or portal-based verification for any new account details would all have made this con much harder to pull off. Just as important is culture: teaching finance and procurement teams that a familiar logo or name in an email is just one signal, not the whole story, and that it’s not only acceptable but expected to pause the play, pick up the phone, and verify before sending seven or eight figures out the door.
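The reconciliation habit at the heart of this - pay only what the system of record confirms - fits in a few lines. The vendor-master shape and placeholder account strings below are purely illustrative assumptions:

```python
# System of record for vendors; in real life this lives in procurement/ERP, not a dict.
vendor_master = {
    "Quanta Computer": {"account": "TW-REAL-ACCOUNT-001", "country": "TW"},
}

def invoice_payable(vendor_name: str, invoice_account: str, invoice_country: str) -> bool:
    record = vendor_master.get(vendor_name)
    if record is None:
        # Unknown entity, even with a familiar name: route to onboarding review
        return False
    # Same name but a different account or jurisdiction = hold and verify out of band
    return record["account"] == invoice_account and record["country"] == invoice_country

# A Rimasauskas-style invoice: right name, fresh account in another country
print(invoice_payable("Quanta Computer", "LV-FRESH-ACCOUNT-999", "LV"))  # False
```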
Children’s Healthcare of Atlanta BEC
Some plays swing on star power, and that’s exactly what happened when Children’s Healthcare of Atlanta reportedly lost about $3.6 million in a Business Email Compromise (BEC) scheme. In 2025, a scammer impersonated the hospital system’s CFO and convinced the accounts payable team to reroute payments for a regular vendor into an attacker-controlled bank account. No malware, no network exploit - just a very convincing “message from the top” that slipped past the usual checks.
The fake play call from the top
According to incident summaries like Hempstead’s review of famous phishing incidents, the attacker posed as the CFO and emailed accounts payable with “updated” banking details for a familiar payee. Because the name and role at the top of the message matched a real executive, staff treated the request as a high-priority directive rather than just another change form. The fact that this came through a normal-looking internal email channel made it feel like business as usual, even though the content was anything but.
Pause and rewind: authority bias in action
When you slow the tape, the turning point is less technical and more psychological: authority bias. The sender appeared to be the CFO, so established processes were relaxed or skipped altogether. A single email triggered a critical change to bank details, and no one hit pause to verify through a separate channel or involve a second approver. This is exactly the kind of pattern broken down in social-engineering explainers like Sprocket Security’s overview of prominent social engineering attacks, where attackers lean hard on urgency and status to push people past their normal skepticism.
| Signal | Normal Expectation | What Happened | Risk Introduced |
|---|---|---|---|
| Who’s Asking | CFO sets policy, rarely edits individual vendor details | “CFO” personally directs a bank account change | Staff assume status overrides normal checks |
| Channel | Formal vendor or finance system for changes | Change requested entirely via email | Email treated as authoritative on its own |
| Workflow | Dual approval and documented verification | Single team acts on request without callback | No independent confirmation before funds move |
| Urgency | Routine processing timelines | Implied high priority from an executive | Pressure to “just get it done” reduces scrutiny |
Controls that take status out of the equation
The big defensive lesson here is that good process has to apply equally to everyone, especially executives. That means codifying “no VIP exceptions” for high-risk actions: changes to vendor bank details must always go through a formal workflow with multi-person approval and out-of-band verification to a known contact, whether the request appears to come from an intern or the CFO. On the technical side, extra protections around executive accounts - strong authentication, strict email authentication (SPF/DKIM/DMARC), and clear banners when messages originate outside the organization - help employees spot look-alike or spoofed messages before they act. For career-switchers heading into security or finance-adjacent roles, being able to recognize how authority and urgency can short-circuit process - and to design controls that keep everyone honest to the playbook - is a core part of keeping incidents like this off your organization’s scoreboard.
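Most mail gateways record those SPF/DKIM/DMARC verdicts in an `Authentication-Results` header, which you can inspect with Python’s standard `email` module. The message below - including the look-alike `chi1drens-example.org` domain - is fabricated for the sketch:

```python
import re
from email import message_from_string

raw = """\
From: "CFO Jane Doe" <jane.doe@chi1drens-example.org>
Authentication-Results: mx.example.org;
 spf=fail smtp.mailfrom=chi1drens-example.org;
 dkim=none; dmarc=fail header.from=chi1drens-example.org
Subject: Updated banking details - urgent

Please update the vendor account before noon.
"""

msg = message_from_string(raw)
results = msg.get("Authentication-Results", "")

# Pull out the spf/dkim/dmarc verdicts; a gateway usually acts on these,
# but surfacing them to users (e.g., via a warning banner) helps humans too.
verdicts = dict(re.findall(r"(spf|dkim|dmarc)=(\w+)", results))
if verdicts.get("dmarc") != "pass":
    print("flag: DMARC failed - treat any payment instruction in this mail as unverified")
```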
RSA SecurID breach via spear-phishing
Among all the social-engineering plays that security professionals still study, the breach of RSA’s SecurID program is near the top of the film reel. In 2011, attackers used a small, targeted spear-phishing campaign to compromise systems at RSA, ultimately stealing sensitive information related to its SecurID two-factor authentication tokens. The incident is estimated to have cost RSA about $66 million in direct and indirect losses and triggered an enormous clean-up operation for customers who relied on SecurID, as detailed in case studies like the Police1 analysis of the RSA compromise.
The highlight: “2011 Recruitment Plan” and a single bad click
The initial attack didn’t look like a nation-state operation; it looked like a slightly odd internal email. A small group of RSA employees received a message with the subject line “2011 Recruitment Plan” and an attached Excel file. The email was reportedly sent to a distribution list that wasn’t high-profile, which made it seem less suspicious. When one employee opened the attachment and enabled active content, a then-unknown Adobe Flash vulnerability inside the spreadsheet was triggered, installing the Poison Ivy remote access tool. From that foothold, attackers escalated privileges, moved laterally, and eventually exfiltrated data tied to SecurID token generation, as reconstructed in technical breakdowns and training decks like those shared on Slideshare’s RSA data-breach case study.
Replaying the crucial seconds: red flags in context
| Signal | Normal Internal Email | RSA Spear-Phish | Defensive Read |
|---|---|---|---|
| Subject & Audience | HR or leadership sends recruitment plans to relevant managers | “2011 Recruitment Plan” sent to a small, mixed group | Ask: “Am I the right person to receive this?” |
| Content Style | Summary in body, attachments for detail | Thin email body; value locked in the attachment | Treat attachment-only value as higher risk |
| Attachment Behavior | Static docs, no need to enable macros/active content | Requires enabling active content / plugins | Never enable active content unless absolutely necessary |
| Sender Pattern | Known HR or leadership contacts | Unusual sender for recruitment material | Verify unexpected sensitive docs with the supposed sender |
Why this still matters for today’s phishing defense
From a 2026 viewpoint, the mechanics of that spear-phish look almost quaint compared to today’s AI-polished emails, but the underlying play is the same: a believable topic, sent to just the right people, asking them to open an attachment and ignore one or two small “this is weird” feelings. Modern attackers now use generative tools to remove obvious spelling mistakes and tune messages to specific roles, a trend highlighted in real-world social engineering roundups from outlets like Dark Reading’s coverage of real-life social engineering. That makes the human habit of reading for context even more important.
For aspiring security analysts and defenders, the RSA case is a reminder that one successful phish can be enough to start a cascade, even at a security company. Practical mitigations combine technical and human layers: attachment sandboxing and strong endpoint detection to catch exploits, strict default blocking of active content in office documents, and regular, ethical phishing simulations so employees build the reflex to pause, question, and verify before opening or enabling anything unexpected. The skill you’re really training is that film-room instinct: zooming out from a convincing subject line to ask, “Should I even be in this conversation, and is this how we normally share this kind of information?”
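A gateway-style rule for “block active content by default” can start as simply as flagging macro-capable extensions and double extensions. The extension list below is an illustrative starting point, not an official or exhaustive set:

```python
from pathlib import Path

# File types that can carry macros or other active content - assumptions for the sketch.
ACTIVE_CONTENT_EXTS = {".xls", ".xlsm", ".xlsb", ".doc", ".docm", ".pptm", ".html", ".js"}

def should_quarantine(filename: str) -> bool:
    """Quarantine by default; release requires sender verification, not a user click."""
    suffixes = [s.lower() for s in Path(filename).suffixes]
    if suffixes and suffixes[-1] in ACTIVE_CONTENT_EXTS:
        return True                   # sandbox/detonate before delivery
    return len(suffixes) > 1          # double-extension trick, e.g. "plan.pdf.exe"

print(should_quarantine("2011 Recruitment Plan.xls"))  # True - hold for sandboxing
```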
Insider bribery and social engineering (Coinbase)
Not every big cyber play is an outsider breaking through the perimeter. In one widely discussed case, cryptocurrency exchange Coinbase reportedly faced around $200 million in exposure when attackers blended classic social engineering with something more direct: bribing insiders to abuse their legitimate access. Public reporting and post-mortems, like the rundown in Medium’s review of costliest cyber incidents, describe a pattern where employees with valuable permissions were quietly approached, groomed, and offered money to help attackers get what they wanted.
The play: recruiting someone who already has the keys
Instead of hammering login pages, attackers started by mapping Coinbase’s organization from the outside: roles on LinkedIn, hints in job postings, and technical stack details from public docs. They then reached out to carefully chosen employees on side channels like encrypted messaging apps, pitching “consulting,” “bug bounty help,” or simply offering cash in exchange for specific favors. Those favors ranged from running unusual database queries, to exporting configuration data, to granting temporary access that could later be escalated. The social engineering here wasn’t about tricking someone into a mistake; it was about nudging them into a deliberate, unethical decision that they could rationalize as “just helping” or “a one-time thing.”
Why insider feints are so hard to defend
| Aspect | External Attack | Insider-Assisted Attack | Defensive Challenge |
|---|---|---|---|
| Access | Must bypass authentication and MFA | Uses existing, valid credentials | Activity looks like a normal login |
| Behavior in Logs | Unusual IPs, devices, or patterns | Known employee devices and locations | Harder to flag as “impossible travel” or anomaly |
| Controls Targeted | Perimeter defenses, phishing filters | Change approvals, data exports, admin tools | Requires deep monitoring of privileged actions |
| Human Factor | Trick users into accidental mistakes | Convince users to knowingly cross a line | Depends on ethics, culture, and pressure |
This is why insider incidents make security leaders nervous. BrightDefense’s insider-threat statistics collection notes that organizations are tracking more insider-related events and highlights the mix of financial stress, targeted outreach, and access abuse that drives them. When a trusted employee goes rogue, traditional perimeter tools see “business as usual,” because from the system’s point of view, it is.
Defensive habits (and ethics) for future security pros
Defending against this kind of play is less about distrusting everyone and more about designing systems where no single person can quietly do catastrophic damage. That means least-privilege access, just-in-time elevation for sensitive tasks, detailed logging and review of high-risk actions (like large withdrawals, key rotations, or bulk exports), and clear, well-publicized ways to report suspicious approaches from outsiders. It also means putting ethics front and center in training: making it clear what counts as authorized testing, what requires formal approval, and why “side deals” with data or access are not just policy violations but legal trouble. For career-switchers heading into SOC, security engineering, or GRC roles, being able to talk through an insider scenario like this - technically and ethically - is a strong signal that you understand the full game, not just the perimeter highlight reel.
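In practice, “detailed logging and review of high-risk actions” often begins as a routing rule: certain privileged actions always land in a human review queue. The event shapes, action names, and thresholds below are assumptions for the sketch, not Coinbase’s actual tooling:

```python
# Hypothetical audit events; a real deployment would stream these from a SIEM.
audit_log = [
    {"user": "alice", "action": "bulk_export",  "rows": 250_000},
    {"user": "alice", "action": "login",        "rows": 0},
    {"user": "bob",   "action": "key_rotation", "rows": 0},
]

HIGH_RISK_ACTIONS = {"bulk_export", "key_rotation", "permission_grant"}

def review_queue(events: list[dict], export_row_threshold: int = 10_000) -> list[dict]:
    """Route high-risk privileged actions to a second pair of eyes."""
    return [
        e for e in events
        if e["action"] in HIGH_RISK_ACTIONS or e.get("rows", 0) > export_row_threshold
    ]

for event in review_queue(audit_log):
    print("needs human review:", event)  # an analyst, not the script, decides next steps
```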
Spot-the-scam checklist
By now you’ve seen how often the game is lost on a tiny decision: a rushed click, a “sure, I’ll reset that,” a bank detail change that felt routine. A good checklist is like a coach’s playcard on the sideline - something you can glance at in real time when a message, call, or request feels a little off. You won’t memorize every scam, but you can train your eyes and ears to notice the patterns that show up in almost all of them.
Think of this as your personal “pause and rewind” tool. Any time something nudges you to move fast - especially around money or access - you can mentally run through these questions. Security teams use similar frameworks because so many incidents still start with social engineering, a point echoed in resources like IntelliSystems’ guide to social engineering red flags, which stresses how small clues in channels and content often reveal a scam long before any malware shows up.
Identity & Channel
- Is this request coming from a new number, email, or app for someone I already know?
- Am I being pushed to stay on one channel (phone, video, SMS) and discouraged from calling back on a known-good number?
- Does the display name or email address contain subtle misspellings, extra characters, or a domain that doesn’t match the real organization? (A quick look-alike check is sketched in code right after this list.)
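Here’s that look-alike check as a small sketch using only Python’s standard library; the allow-list domains and the 0.8 similarity threshold are illustrative assumptions you’d tune against your own contacts:

```python
from difflib import SequenceMatcher

KNOWN_DOMAINS = {"nucamp.co", "example-vendor.com"}  # illustrative allow-list

def near_miss_domain(sender_domain: str, threshold: float = 0.8) -> str | None:
    """Return the known domain this one nearly (but not exactly) matches."""
    if sender_domain in KNOWN_DOMAINS:
        return None                  # exact match: fine
    for known in KNOWN_DOMAINS:
        if SequenceMatcher(None, sender_domain, known).ratio() >= threshold:
            return known             # close-but-wrong: classic spoof signal
    return None

print(near_miss_domain("nucarnp.co"))  # -> "nucamp.co": "rn" masquerading as "m"
```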
Context & Content
- Is this out of character for this person or role (for example, a CFO micromanaging wiring instructions or a CEO asking me directly for gift cards)?
- Am I the logical person to receive this information or perform this action, based on my job?
- Does the message rely heavily on attachments or links with little explanation in the body, or vague language like “see attached” with no details?
Emotion & Urgency
- Do I feel rushed, scared, flattered, or guilty after reading or hearing this?
- Is someone using urgency (“right now,” “before end of day,” “this must be secret”) to shortcut normal steps or keep me from asking others?
- Is this too good to be true - a prize, job, refund, or investment I didn’t expect or didn’t apply for?
Money, Access, and Changes
- Does this involve changing bank details, wiring money, or sharing 2FA codes or passwords?
- Am I being asked to disable security controls, reset MFA, or install a “fix” or “update” outside normal IT channels?
- Is this a request to grant an app or website broad permissions (email, files, contacts) that seem excessive for the task?
Verification & Process
- Have I verified the request using a second, independent channel (phone, internal chat, ticketing system) I look up myself?
- Does this request bypass normal approval or documentation processes - and if so, is there a legitimate, documented reason?
- If I slow down for 60 seconds, does anything feel off, even if I can’t immediately explain why?
If you hit two or more of these warning signs, the next step is simple: stop, verify, and, if you’re at work, loop in your security or IT team. You’re not being paranoid; you’re doing exactly what good defenders do - treating flashy messages and urgent calls like highlight clips, then checking the full play before you act. With practice, this kind of checklist becomes automatic, and that habit is one of the key skills employers look for in entry-level cybersecurity roles.
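If you want to turn that “two or more warning signs” rule into a quick triage tool - say, for a tabletop exercise or a report-a-phish form - it only takes a few lines. The sign names below simply mirror the checklist above, and the threshold of two is this article’s rule of thumb, not an industry standard:

```python
WARNING_SIGNS = [
    "new_channel_for_known_person",
    "pressure_to_stay_on_one_channel",
    "lookalike_domain_or_display_name",
    "request_out_of_character_for_role",
    "urgency_or_secrecy",
    "money_access_or_bank_detail_change",
    "bypasses_normal_approval_process",
]

def triage(observed: set[str], threshold: int = 2) -> str:
    """Count checklist hits and recommend the next move."""
    hits = [sign for sign in WARNING_SIGNS if sign in observed]
    if len(hits) >= threshold:
        return f"STOP and verify out of band ({len(hits)} signs: {', '.join(hits)})"
    return "proceed, but stay politely paranoid"

print(triage({"urgency_or_secrecy", "money_access_or_bank_detail_change"}))
```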
Frequently Asked Questions
Which social engineering attack should organizations prioritize defending against in 2026?
Business Email Compromise (BEC) should be top of the list - DeepStrike data shows BEC drives more than half of social-engineering losses, with roughly 21,500 complaints and about $2.9 billion in reported losses in a single year. AI-enabled deepfakes and voice-cloning are accelerating these schemes, so defenses must pair process controls with tech controls.
What subtle red flags do people usually miss that let scams escalate into big losses?
Attackers exploit single-channel trust (email-only approvals), sudden bank-detail changes, authority bias, and pressure to act quickly - small process breaks that commonly go unchecked. Those quiet slips have real price tags: Grand Rapids Public Schools reportedly lost about $2.8 million from a vendor email takeover, and Toyota Boshoku lost roughly $37 million from a BEC scam.
How should help-desk and support teams change procedures to stop vishing-based attacks?
Require out-of-band verification and dual approval for sensitive actions (for example, call back on a known-good number or confirm via the ticketing system), lock down MFA and password-reset workflows, and enforce no-exception scripts for urgent requests. These controls matter because phone-based social engineering was a key vector in high-profile incidents like the help-desk-enabled ransomware at MGM and Caesars.
Are AI deepfakes and voice-cloning scams realistically fooling employees, and how can teams verify requests safely?
Yes - high-fidelity deepfakes have already fooled employees in real incidents (one deepfake all-hands led to about $25.6 million in unauthorized transfers), so treat voice/video as one signal, not proof of identity. Always perform out-of-band verification, require written documentation and dual approvals for money or access, and institutionalize the habit of 'I’ll call you back on the official number.'
If I’m switching into cybersecurity, what practical skills should I learn first to spot these social-engineering plays?
Focus on pattern recognition (red flags), verification workflows, and basic incident response plus hands-on labs that simulate real scams; those are exactly the skills entry-level analysts are hired for. A structured, part-time path like Nucamp’s 15-week cybersecurity track (about 12 hours/week, ~$2,124 tuition) teaches those practical habits along with cert-aligned prep for Security+ and other industry exams.
You May Also Be Interested In:
Prefer a focused stack? See the learn to set up an isolated VirtualBox lab long-tail guide for a 3-4 VM starter lineup.
If you want practical remediation advice, check the best defenses against ransomware explained in that roundup.
New to packet analysis? Read our how to capture and analyze network traffic guide for practical, ethical exercises.
For legal and ethical guidance, see the introduction to authorized Metasploit testing and scope checklist.
Security students can learn to run ping sweeps and TCP scans with conservative timing settings.
Irene Holden
Operations Manager
Former Microsoft Education and Learning Futures Group team member, Irene now oversees instructors at Nucamp while writing about everything tech - from careers to coding bootcamps.

