What is Social Engineering? Every Attack Type Explained with Real Examples & How to Defend Against Each One (Complete 2026 Guide)
In February 2025, a single social engineering attack against a cryptocurrency exchange resulted in the largest theft in crypto history — $1.5 billion from Bybit. The attackers did not break any encryption. They did not exploit a software vulnerability. They manipulated a small number of people at a third-party software provider into approving a fraudulent transaction. The technical systems worked perfectly. The human layer was the attack surface.
In 2026, vishing attacks — voice phishing — surged 442% year-over-year. AI-cloned voices are used in impersonation calls with increasing regularity. Deepfake video calls, once the domain of nation-state actors, are accessible to organised criminal groups. 98% of cyberattacks now use some form of social engineering. The human is the most targeted vulnerability in any security stack — and the hardest to patch.
This guide covers what social engineering actually is, the psychology that makes it work, every attack type with real 2026 examples, how AI has transformed the threat, and the specific defences that reduce risk at every level.
- What social engineering is — and why technical defences alone cannot stop it
- The six psychological triggers all social engineers exploit
- Every attack type explained with real examples and specific defences
- AI-powered social engineering — the 2026 escalation
- Real attacks — the $1.5B Bybit heist, $25M deepfake call, MGM Resorts
- Building a layered defence — technical, procedural, and human controls
What Social Engineering Is — And Why Technical Defences Cannot Stop It
Social engineering is the manipulation of people — through psychological pressure, deception, and exploitation of trust — into taking actions that benefit an attacker: revealing credentials, transferring money, granting access, or installing malware.
The critical distinction: social engineering exploits human vulnerabilities, not software vulnerabilities. A perfectly patched, perfectly configured technical environment is still vulnerable to social engineering because it relies on humans to operate it. The most sophisticated firewall in the world cannot prevent an employee from handing their password to someone they believe is from IT support.
This is why 98% of cyberattacks use social engineering in some form. It is the path of least resistance. Breaking encryption or exploiting a zero-day vulnerability requires significant technical skill. Calling someone and pretending to be their CEO — or their bank, their IT department, their supplier — requires primarily confidence, basic research, and an understanding of human psychology.
The Six Psychological Triggers All Social Engineers Exploit
Urgency. "Your account will be suspended in 24 hours." Artificial time pressure prevents careful thinking: if you don't have time to verify, you act on instinct. This is the most common trigger in phishing emails and vishing calls.
Authority. "This is your CEO / the IRS / your bank's fraud department." People naturally comply with authority figures, especially when authority is combined with urgency. Impersonating a senior executive or regulator dramatically increases compliance rates.
Reciprocity. "I've helped fix your computer problem — now I just need your login to complete the update." Offering something first creates a psychological obligation to give something back. Used heavily in quid pro quo attacks.
Social proof. "Everyone in your team has already completed this security verification." People follow the behaviour of others, especially peers. Used in targeted spear phishing that references real colleagues.
Fear. "Your computer is infected with malware — call this number immediately." Fear disables rational evaluation and drives immediate action. Used in tech support scams, fake security alerts, and law enforcement impersonation.
Familiarity. "Hi, this is John from IT — we spoke last week." People are more likely to comply with requests from people they like or recognise. Attackers research targets to build familiarity before making requests.
Every Social Engineering Attack Type — With Real Examples and Specific Defences
Pretexting — Fabricating a Scenario to Gain Trust
Pretexting is the creation of a fabricated scenario — a "pretext" — to establish credibility and manipulate a target into providing information or access. The attacker invents a role (IT support technician, auditor, new vendor, journalist) and builds a convincing backstory before making their request. Good pretexting involves research: knowing the target's name, their manager's name, their company's technology stack, and current projects to make the scenario feel authentic. Pretexting accounts for 27% of all social engineering breaches.
Business Email Compromise (BEC) — CEO Fraud and Invoice Fraud
BEC attacks impersonate senior executives or trusted vendors via email to authorise fraudulent financial transfers. In CEO fraud, the attacker sends an email appearing to come from the CEO or CFO to a finance employee requesting an urgent wire transfer to a new supplier or for a confidential acquisition. In vendor impersonation, the attacker either compromises a real supplier's email account or registers a nearly-identical domain and sends fake invoices with changed bank account details. BEC is financially devastating — $2.9 billion in US losses alone in 2023, with an average loss of $125,000 per incident and some losses exceeding $50 million.
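One practical technical control against the lookalike-domain variant of BEC is to flag sender domains that nearly match, but do not exactly match, a list of known vendor domains. The sketch below uses Python's standard-library `difflib`; the vendor domains and the 0.85 similarity threshold are illustrative assumptions, not values from any specific email gateway.

```python
# Minimal sketch: flag sender domains that closely resemble a trusted vendor
# domain without matching it exactly. TRUSTED_DOMAINS and the threshold are
# illustrative; a real deployment would pull these from mail-gateway config.
from difflib import SequenceMatcher

TRUSTED_DOMAINS = {"acme-supplies.com", "bank-example.com"}  # hypothetical vendors

def lookalike_risk(sender_domain: str, threshold: float = 0.85) -> bool:
    """True if the domain resembles a trusted domain but is not an exact match."""
    sender_domain = sender_domain.lower()
    if sender_domain in TRUSTED_DOMAINS:
        # Exact match: the real vendor (a compromised real account is a
        # separate problem this check cannot catch).
        return False
    return any(
        SequenceMatcher(None, sender_domain, trusted).ratio() >= threshold
        for trusted in TRUSTED_DOMAINS
    )

print(lookalike_risk("acme-supplies.com"))    # False: exact match
print(lookalike_risk("acme-suppiles.com"))    # True: transposed letters
print(lookalike_risk("totally-unrelated.io")) # False: not similar to any vendor
```

A check like this catches transposed-letter and near-miss registrations, but note it deliberately passes exact matches: invoice fraud sent from a genuinely compromised vendor mailbox must be caught by the out-of-band verification procedures discussed later, not by domain matching.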
Tailgating and Piggybacking — Physical Access Attacks
Tailgating is following an authorised person through a secured door without using an access card — typically by carrying something bulky ("could you hold the door?") or by timing entry immediately after an authorised person swipes in. Piggybacking is similar but with the authorised person's knowledge — they hold the door open for someone who claims to have forgotten their badge. Physical security is frequently the weakest layer in organisations with strong digital controls, because employees are naturally helpful and holding a door for someone feels like a harmless courtesy.
Baiting — Exploiting Curiosity and Greed
Baiting attacks offer something enticing to lure a victim into taking a harmful action. The classic form is the USB drop attack — leaving malware-infected USB drives in car parks, reception areas, or near a target organisation's buildings, labelled with something enticing ("Q3 Redundancy List" or "Salary Survey 2026"). Human curiosity means a significant percentage of found drives are plugged into computers. Online baiting uses fake download links for pirated software, movies, or games that install malware when executed.
Quid Pro Quo — Help in Exchange for Access
The attacker offers a service or help in exchange for information or access. The classic form is tech support fraud — the attacker calls employees claiming to be IT support, offering to resolve a common problem (slow computer, VPN issues, email sync problems). Once the employee engages, the attacker asks for credentials to "complete the fix" or requests remote access using legitimate tools like TeamViewer or AnyDesk. The reciprocity trigger makes compliance feel natural — they helped me, so I should help them.
Watering Hole Attacks — Compromising Trusted Sites
Rather than going directly to the target, the attacker compromises a website that the target organisation's employees are known to visit — an industry forum, a trade association site, a local news site, or a professional resource. The attacker injects malicious code that exploits browser vulnerabilities in visitors. Because the compromised site is trusted (employees have visited it for years without incident), browser security warnings are often ignored. Watering hole attacks are particularly effective against specific industries because attackers can predict which sites a sector's employees visit.
AI-Powered Social Engineering — The 2026 Transformation
How AI Has Changed Social Engineering in 2026
AI has industrialised social engineering in four specific ways:
- Perfect personalisation at scale. AI can scrape LinkedIn, public social media, breach databases, and company websites to generate a uniquely personalised phishing email for every person in an organisation's directory — referencing their real projects, their real colleagues, their real manager's communication style. The personalisation that previously required hours of research per target now takes milliseconds per thousand targets.
- Flawless communication quality. AI-generated text has no spelling mistakes, no unusual phrasing, no grammatical errors — the traditional tells of phishing emails. Language model quality in 2026 is indistinguishable from human writing to all but the most suspicious readers.
- Voice cloning at commodity cost. Three seconds of audio from a voice note, a company presentation, or a social media video is sufficient to clone a person's voice convincingly. Tools capable of real-time voice cloning are available for under $10/month. This means the "call and verify using a known number" instruction — long a reliable secondary defence — must now include video verification for high-value requests, and even video can be faked with sufficient resources.
- Adaptive conversational AI. AI chatbots can now conduct believable multi-turn conversations with victims via text — responding to questions, overcoming objections, and maintaining the pretext over the course of a long interaction. This eliminates the human attacker's need to be available in real time for text-based social engineering.
Real Attack Scenarios — The Most Significant Cases
The $1.5 Billion Bybit Heist — February 2025
The largest theft in cryptocurrency history. Attackers — attributed to North Korean state-sponsored group Lazarus — compromised the software supply chain of Safe{Wallet}, a third-party wallet management provider used by Bybit. They manipulated the provider's employees through social engineering to approve a fraudulent update to the smart contract code managing Bybit's Ethereum cold wallet. The signing interface showed the correct wallet address and legitimate transaction details while the actual transaction code had been modified to transfer funds to attacker-controlled addresses. Three Bybit signatories — seeing legitimate-looking confirmation screens — approved the transaction. $1.5 billion in Ethereum was transferred in a single transaction. The social engineering attack targeted a third party, demonstrating that supply chain social engineering bypasses even strong internal security controls.
CarGurus Vishing Attack — 2026
A single vishing call resulted in 12.4 million customer records being stolen from the automotive marketplace platform. The attacker called CarGurus' customer service line, impersonating a corporate account manager. Using information gathered from data brokers and prior breach databases — including real employee names and account details — the attacker convinced a support agent to update account credentials and transfer access to multiple high-value dealer accounts. The attacker then used this access to extract the customer database. The entry point: one phone call, one cooperative support agent following standard helpfulness procedures.
Scattered Spider Retail Attacks — 2024-2025
The group responsible for the MGM Resorts breach continued through 2024-2025, targeting retail organisations including major UK retailers in 2025. Their methodology remained consistent: research real employees through LinkedIn and breach databases, call IT helpdesks with vishing scripts, use pretexting to bypass MFA, gain initial access via the identity provider, then deploy ransomware or exfiltrate data. Total losses across their campaign exceeded $300 million. The consistent lesson from each incident: the attack chain started with a phone call, not a technical exploit. Every technical security control they bypassed was bypassed through a human who was manipulated into unlocking it.
Building a Layered Defence Against Social Engineering
Social Engineering Defence Checklist
- Implement phishing-resistant MFA across all accounts. FIDO2 hardware keys (YubiKey) or passkeys are the only authentication methods that cannot be bypassed by real-time phishing attacks or vishing-based MFA push fatigue. SMS and TOTP MFA reduce risk but can be bypassed by determined attackers.
- Establish strict out-of-band verification for all sensitive requests. Any request to reset credentials, change bank account details, transfer funds, or grant access must be verbally confirmed using a number from an official directory — never from the message requesting the action.
- Run regular, realistic phishing simulations. Annual security awareness training that people click through without reading does not build resilience. Quarterly simulations using realistic pretexts — and immediate, educational feedback when someone clicks — train the instinct to pause and verify. Track results and target training to employees who consistently click.
- Train specifically on the new AI-powered threats. Most employees are aware of email phishing. Few understand voice cloning, deepfake video calls, or AI-personalised spear phishing. Update training to include these techniques with concrete examples. The $25M Hong Kong deepfake case is a powerful teaching example.
- Create a culture where verification is normal and respected. The single biggest enabler of social engineering is the fear of seeming unhelpful or rude by asking for verification. Explicitly communicate that verification is not an insult — it is a security requirement. "I need to verify this through a second channel before I can help" should be a phrase every employee is comfortable saying.
- Implement four-eyes controls for financial transactions above defined thresholds. Two separate people must authorise large transfers. Neither person should be able to approve unilaterally, and approval must require independent verification of the request source.
- Monitor for anomalous access patterns that indicate a compromise is in progress. After successful social engineering, attackers access systems they would not normally access, at unusual times, from unusual locations. Behavioural analytics and SIEM tools that alert on anomalous access patterns can detect post-compromise activity even when the initial social engineering was successful.
- Establish code words for video/voice verification of sensitive requests. For high-value financial or access requests, pre-establish a shared secret code word that must be provided. An AI deepfake of your CEO cannot know the code word unless it was already compromised.
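The four-eyes rule above is easy to state but only works if the system enforces it. As a minimal sketch, the workflow below requires two distinct approvers for transfers above a threshold and rejects any approval "verified" through the same email channel that carried the request. The class, field names, and threshold are all illustrative assumptions, not any specific payment platform's API.

```python
# Sketch of a four-eyes approval rule: transfers above a threshold need two
# distinct approvers, each recording the independent channel they used to
# verify the request. Names and the threshold are illustrative.
from dataclasses import dataclass, field

APPROVAL_THRESHOLD = 10_000  # currency units; set per organisation

@dataclass
class TransferRequest:
    amount: int
    beneficiary: str
    # Keyed by approver, so the same person approving twice still counts once.
    approvals: dict = field(default_factory=dict)

    def approve(self, approver: str, verified_via: str) -> None:
        """Record an approval; verified_via is the out-of-band channel used."""
        if verified_via == "email":
            raise ValueError("verification must not use the requesting channel")
        self.approvals[approver] = verified_via

    def is_authorised(self) -> bool:
        if self.amount < APPROVAL_THRESHOLD:
            return len(self.approvals) >= 1
        return len(self.approvals) >= 2  # two distinct people for large transfers

req = TransferRequest(amount=250_000, beneficiary="New Supplier Ltd")
req.approve("alice", verified_via="phone-directory-number")
print(req.is_authorised())  # False: a second, different approver is required
req.approve("bob", verified_via="in-person")
print(req.is_authorised())  # True
```

Keying approvals by approver is the design point: it makes "two approvals" and "two people" the same check, closing the loophole where one pressured employee clicks approve twice.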
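The anomalous-access monitoring in the checklist can be reduced to a simple idea: compare each login against a per-user baseline of usual hours and locations. Real SIEM and behavioural-analytics tools are far more sophisticated, but a toy version of the rule looks like this; the baseline profile and thresholds are invented for illustration.

```python
# Toy sketch of post-compromise detection: flag logins outside an employee's
# usual working hours or usual countries. The baseline profile is illustrative;
# real tools learn these baselines from historical telemetry.
from datetime import datetime

baseline = {
    "usual_hours": range(7, 20),   # 07:00-19:59 local time
    "usual_countries": {"GB"},
}

def is_anomalous(login_time: datetime, country: str, profile: dict) -> bool:
    """True if either the hour or the source country falls outside the baseline."""
    return (login_time.hour not in profile["usual_hours"]
            or country not in profile["usual_countries"])

print(is_anomalous(datetime(2026, 3, 2, 3, 14), "RO", baseline))  # True
print(is_anomalous(datetime(2026, 3, 2, 10, 0), "GB", baseline))  # False
```

The point of the checklist item is exactly this layering: even when the social engineering itself succeeds, the attacker's subsequent behaviour rarely matches the victim's normal pattern, and that mismatch is detectable.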