What is Social Engineering? Types, Real Examples & How to Defend Against It (2026 Guide)

In February 2025, a single social engineering attack against a cryptocurrency exchange resulted in the largest theft in crypto history — $1.5 billion from Bybit. The attackers did not break any encryption. They did not exploit a software vulnerability. They manipulated a small number of people at a third-party software provider into approving a fraudulent transaction. The technical systems worked perfectly. The human layer was the attack surface.

Vishing attacks — voice phishing — surged 442% year-over-year in the second half of 2024, and the trend has continued since. AI-cloned voices are used in impersonation calls with increasing regularity. Deepfake video calls, once the domain of nation-state actors, are now accessible to organised criminal groups. An estimated 98% of cyberattacks involve some form of social engineering. The human is the most targeted vulnerability in any security stack — and the hardest to patch.

This guide covers what social engineering actually is, the psychology that makes it work, every attack type with real 2026 examples, how AI has transformed the threat, and the specific defences that reduce risk at every level.

2026 by the numbers: Social engineering caused 68% of all data breaches in 2024 (Verizon DBIR). Human error was the initial access vector in 60% of breaches. The average cost of a social engineering attack is $130,000. Business Email Compromise alone cost global organisations $2.9 billion in 2023. 63% of cybersecurity professionals cite AI-driven social engineering as their top threat.
Quick Navigation:
  1. What social engineering is — and why technical defences alone cannot stop it
  2. The six psychological triggers all social engineers exploit
  3. Every attack type explained with real examples and specific defences
  4. AI-powered social engineering — the 2026 escalation
  5. Real attacks — the $1.5B Bybit heist, $25M deepfake call, MGM Resorts
  6. Building a layered defence — technical, procedural, and human controls

What Social Engineering Is — And Why Technical Defences Cannot Stop It

Social engineering is the manipulation of people — through psychological pressure, deception, and exploitation of trust — into taking actions that benefit an attacker: revealing credentials, transferring money, granting access, or installing malware.

The critical distinction: social engineering exploits human vulnerabilities, not software vulnerabilities. A perfectly patched, perfectly configured technical environment is still vulnerable to social engineering because it relies on humans to operate it. The most sophisticated firewall in the world cannot prevent an employee from handing their password to someone they believe is from IT support.

This is why 98% of cyberattacks use social engineering in some form. It is the path of least resistance. Breaking encryption or exploiting a zero-day vulnerability requires significant technical skill. Calling someone and pretending to be their CEO — or their bank, their IT department, their supplier — requires primarily confidence, basic research, and an understanding of human psychology.

The key insight: Social engineering attacks do not fail because the victim was stupid. They succeed because they exploit normal, healthy human instincts — trust in authority, helpfulness, fear, urgency, and reciprocity. Understanding the psychology is the foundation of defence. You cannot train people to "stop trusting" — you can train them to verify before acting.

The Six Psychological Triggers All Social Engineers Exploit

⏰ Urgency and Scarcity

"Your account will be suspended in 24 hours." Artificial time pressure prevents careful thinking. If you don't have time to verify, you act on instinct. This is the most common trigger in phishing emails and vishing calls.

👔 Authority

"This is your CEO / the IRS / your bank's fraud department." People naturally comply with authority figures, especially when combined with urgency. Impersonating a senior executive or regulator dramatically increases compliance rates.

🤝 Reciprocity

"I've helped fix your computer problem — now I just need your login to complete the update." Offering something first creates a psychological obligation to give something back. Used heavily in quid pro quo attacks.

💬 Social Proof

"Everyone in your team has already completed this security verification." People follow the behaviour of others, especially peers. Used in targeted spear phishing that references real colleagues.

😨 Fear

"Your computer is infected with malware — call this number immediately." Fear disables rational evaluation and drives immediate action. Used in tech support scams, fake security alerts, and law enforcement impersonation.

😊 Liking and Familiarity

"Hi, this is John from IT — we spoke last week." People are more likely to comply with requests from people they like or recognise. Attackers research targets to build familiarity before making requests.

Every Social Engineering Attack Type — With Real Examples and Specific Defences

Most Common

Pretexting — Fabricating a Scenario to Gain Trust

Pretexting is the creation of a fabricated scenario — a "pretext" — to establish credibility and manipulate a target into providing information or access. The attacker invents a role (IT support technician, auditor, new vendor, journalist) and builds a convincing backstory before making their request. Good pretexting involves research: knowing the target's name, their manager's name, their company's technology stack, and current projects to make the scenario feel authentic. Pretexting accounts for 27% of all social engineering breaches.

Real example: In the MGM Resorts breach (2023), Scattered Spider used pretexting to call MGM's IT helpdesk. They had researched a real employee using LinkedIn, obtained personal details from dark web breach databases, and crafted a believable scenario about a lost phone requiring MFA reset. The IT helpdesk agent — acting helpfully, following normal procedures — reset the credentials. The resulting intrusion cost MGM over $100 million. The entire attack vector was a single, well-prepared phone call.
Defence: All requests to reset credentials, change account details, or grant elevated access must be verified through a second channel independently. Call the requesting person back on a number from the internal directory — not a number they provided. Verification questions ("what is your employee ID?") are insufficient alone — that information is available in breach markets. Out-of-band verification is the only effective control.
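The callback rule above can be expressed as a minimal sketch. The directory data and helper names here are illustrative stand-ins, not a real helpdesk system — the point is that the caller-supplied number is never dialled:

```python
# Minimal sketch of out-of-band verification for credential-reset requests.
# INTERNAL_DIRECTORY and confirm_via_callback are hypothetical placeholders
# for your own internal directory and human callback process.

INTERNAL_DIRECTORY = {
    # employee_id -> phone number on record (illustrative entries)
    "E1042": "+44 20 7946 0001",
}

def confirm_via_callback(number):
    """Placeholder for the human step: ring the directory number and
    confirm the request with the real employee before acting."""
    print(f"Calling back on directory number {number} to confirm the request")
    return True  # in reality, depends on what the callback establishes

def approve_reset(employee_id, caller_supplied_number):
    """Approve a reset only after a callback on the number *on record*.
    The number the caller supplies is deliberately never used."""
    on_record = INTERNAL_DIRECTORY.get(employee_id)
    if on_record is None:
        return False  # unknown requester: decline and escalate, never guess
    return confirm_via_callback(on_record)

print(approve_reset("E1042", "+1 555 0199"))  # callback placed on record number
print(approve_reset("E9999", "+1 555 0199"))  # unknown employee -> False
```

The design choice that matters is default-deny: an unknown employee ID or a failed callback ends the request, and nothing the caller says can route around the directory lookup.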
Financial Fraud

Business Email Compromise (BEC) — CEO Fraud and Invoice Fraud

BEC attacks impersonate senior executives or trusted vendors via email to authorise fraudulent financial transfers. In CEO fraud, the attacker sends an email appearing to come from the CEO or CFO to a finance employee requesting an urgent wire transfer to a new supplier or for a confidential acquisition. In vendor impersonation, the attacker either compromises a real supplier's email account or registers a nearly-identical domain and sends fake invoices with changed bank account details. BEC is financially devastating — $2.9 billion in US losses alone in 2023, with an average loss of $125,000 per incident and some losses exceeding $50 million.

Real example: A UK-based energy firm's CEO received a call from his parent company's CEO in Germany requesting an urgent €220,000 transfer to a Hungarian supplier for a time-sensitive acquisition. The caller's voice, accent, and speech patterns matched perfectly — it was an AI-cloned voice trained on publicly available audio recordings of the German CEO. The transfer was made. The money was never recovered. The AI voice cloning tool cost the attacker approximately $5.
Defence: Any financial transfer above a defined threshold must be verbally confirmed by the requester on a known, independently verified phone number — never a number provided in the requesting email. Implement a two-person authorisation policy for large transfers. Train finance staff that urgency and secrecy in a financial request are red flags, not reasons to act faster.
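Alongside the verbal-confirmation policy, mail gateways can flag the nearly-identical domains used in vendor impersonation. A rough sketch using Python's standard-library fuzzy matcher — the vendor list and threshold are illustrative assumptions:

```python
# Sketch: flag sender domains that are suspiciously similar to -- but not
# exactly -- a known vendor domain, a common BEC technique.
from difflib import SequenceMatcher

KNOWN_VENDORS = {"acme-supplies.com", "contoso.com"}  # hypothetical vendors

def lookalike_score(domain, known):
    """Similarity ratio in [0, 1] between two domain strings."""
    return SequenceMatcher(None, domain.lower(), known.lower()).ratio()

def is_suspicious(sender_domain, threshold=0.85):
    """True if the domain is near-identical to a vendor but not an exact match."""
    d = sender_domain.lower()
    if d in KNOWN_VENDORS:
        return False  # exact match: the legitimate domain itself
    return any(lookalike_score(d, v) >= threshold for v in KNOWN_VENDORS)

# Transposed letters -- the classic lookalike registration -- trip the check:
print(is_suspicious("acme-suppiles.com"))  # True
print(is_suspicious("unrelated.org"))      # False
```

A flag from a check like this should route the invoice to manual review, not silently block it — the goal is to force the verification step, not to replace it.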
Physical Security

Tailgating and Piggybacking — Physical Access Attacks

Tailgating is following an authorised person through a secured door without using an access card — typically by carrying something bulky ("could you hold the door?") or by timing entry immediately after an authorised person swipes in. Piggybacking is similar but with the authorised person's knowledge — they hold the door open for someone who claims to have forgotten their badge. Physical security is frequently the weakest layer in organisations with strong digital controls, because employees are naturally helpful and holding a door for someone feels like a harmless courtesy.

Real example: A penetration tester engaged by a financial institution carried a large box of printer paper to the building entrance at 8:50 AM — the peak arrival time when people are rushing and door-holding is frequent. In four separate attempts across two mornings, he was never challenged. Once inside, he walked to an unattended workstation on the trading floor and inserted a USB drive that could have installed malware. The report noted that the building had $2 million in card-access infrastructure that was completely bypassed by a $40 box of printer paper.
Defence: Train all staff that security doors must not be held for tailgaters — politely asking someone to use their own badge is not rude, it is a security requirement. Implement mantraps (two-door airlocks) for high-security areas. Visitor management procedures must require all visitors to be escorted at all times, not just signed in at reception.
Curiosity Attack

Baiting — Exploiting Curiosity and Greed

Baiting attacks offer something enticing to lure a victim into taking a harmful action. The most classic form is the USB drop attack — leaving malware-infected USB drives in car parks, reception areas, or near target organisation buildings, labelled with something enticing ("Q3 Redundancy List" or "Salary Survey 2026"). Human curiosity means a significant percentage of found drives are plugged into computers. Online baiting uses fake download links for pirated software, movies, or games that install malware when executed.

Real example: Researchers from the University of Illinois, working with Google, dropped 297 USB drives across a university campus. 98% of the drives were picked up, and on roughly 45% of them someone plugged the drive in and opened files. Several participants reported they plugged the drive in specifically to find the owner and return it — an instinct the attack weaponises. The first drive was connected within six minutes of being dropped.
Defence: Disable auto-run on all endpoints. Configure systems to block unknown USB storage devices through endpoint management policies. Train staff that found USB drives should be physically destroyed or handed to IT security — never plugged in. For higher-security environments, physically block USB ports on workstations.
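The "block unknown USB storage" policy boils down to a default-deny allowlist. A toy sketch of the check an endpoint agent might apply — the device IDs are invented for illustration, and real enforcement would live in endpoint-management policy rather than application code:

```python
# Sketch of a default-deny USB storage allowlist. Vendor/product IDs are
# hypothetical; a real agent would read them from the device descriptor.

ALLOWED_USB_IDS = {
    ("0x0781", "0x5583"),  # hypothetical corporate-issued encrypted drive
}

def may_mount(vendor_id, product_id):
    """Only explicitly allowlisted devices mount; everything else is denied."""
    return (vendor_id, product_id) in ALLOWED_USB_IDS

print(may_mount("0x0781", "0x5583"))  # corporate drive -> True
print(may_mount("0x1234", "0xabcd"))  # found-in-car-park drive -> False
```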
Long-Con

Quid Pro Quo — Help in Exchange for Access

The attacker offers a service or help in exchange for information or access. The classic form is tech support fraud — the attacker calls employees claiming to be IT support, offering to resolve a common problem (slow computer, VPN issues, email sync problems). Once the employee engages, the attacker asks for credentials to "complete the fix" or requests remote access using legitimate tools like TeamViewer or AnyDesk. The reciprocity trigger makes compliance feel natural — they helped me, so I should help them.

Defence: Establish a clear policy that IT support will never call employees to ask for credentials or request remote access proactively — employees should initiate support requests, not receive them. Any call claiming to be IT that asks for a password should be treated as a social engineering attempt and reported to the security team. Use a ticketing system for all support requests that employees can verify independently.
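The "verify independently via the ticketing system" step can be sketched as a simple check. The ticket store and field names here are illustrative placeholders for whatever system you actually run:

```python
# Sketch: before granting remote access, the employee checks the caller's
# claimed ticket against the ticketing system themselves. OPEN_TICKETS is
# an illustrative stand-in for a real ticketing backend.

OPEN_TICKETS = {
    "INC-20481": {"opened_by": "alice", "status": "open"},  # hypothetical
}

def caller_claim_checks_out(ticket_id, employee):
    """True only if this employee opened the ticket and it is still open.
    Unsolicited 'IT support' calls fail this check by construction,
    because the employee never opened a ticket."""
    ticket = OPEN_TICKETS.get(ticket_id)
    return (
        ticket is not None
        and ticket["opened_by"] == employee
        and ticket["status"] == "open"
    )

print(caller_claim_checks_out("INC-20481", "alice"))  # own open ticket -> True
print(caller_claim_checks_out("INC-99999", "alice"))  # no such ticket -> False
```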
Long-Term

Watering Hole Attacks — Compromising Trusted Sites

Rather than going directly to the target, the attacker compromises a website that the target organisation's employees are known to visit — an industry forum, a trade association site, a local news site, or a professional resource — and injects malicious code that exploits vulnerabilities in visitors' browsers. Because the compromised site is trusted (employees have visited it for years without incident), browser security warnings are often ignored. Watering hole attacks are particularly effective against specific industries because attackers can predict which sites a sector's employees visit.

Real example: In 2012, the Council on Foreign Relations website — visited regularly by US government employees and policy professionals — was compromised with a zero-day Internet Explorer exploit. Visitors to the site were infected automatically with malware that sent reconnaissance data back to attackers. The attack specifically targeted the CFR audience rather than general users, demonstrating the precision possible with watering hole attacks.
Defence: Keep browsers and plugins updated — watering hole attacks typically exploit known vulnerabilities in outdated browser versions. Use enterprise DNS filtering to block known malicious domains. Implement browser isolation for high-risk browsing environments. Endpoint detection systems should monitor for unusual outbound traffic patterns that indicate a drive-by compromise.

AI-Powered Social Engineering — The 2026 Transformation

How AI Has Changed Social Engineering in 2026

  • AI-generated phishing emails as share of all phishing activity: 82.6%
  • Increase in click-through rate for AI-generated vs traditional phishing: +54%
  • Increase in AI chatbot use in malicious campaigns year-over-year: +67%
  • Increase in voice-cloning tools used for impersonation: +54%
  • Vishing attack increase in 2024: +442%
  • Deepfake files in existence in 2025 (up from 500,000 in 2023): 8 million+
  • Audio needed to clone a voice with current tools: 3 seconds

AI has industrialised social engineering in four specific ways:

  • Perfect personalisation at scale. AI can scrape LinkedIn, public social media, breach databases, and company websites to generate a uniquely personalised phishing email for every person in an organisation's directory — referencing their real projects, their real colleagues, their real manager's communication style. The personalisation that previously required hours of research per target now takes milliseconds per thousand targets.
  • Flawless communication quality. AI-generated text has no spelling mistakes, no unusual phrasing, no grammatical errors — the traditional tells of phishing emails. Language model quality in 2026 is indistinguishable from human writing to all but the most suspicious readers.
  • Voice cloning at commodity cost. Three seconds of audio from a voice note, a company presentation, or a social media video is sufficient to clone a person's voice convincingly. Tools capable of real-time voice cloning are available for under $10/month. This means the "call and verify using a known number" instruction — long a reliable secondary defence — must now include video verification for high-value requests, and even video can be faked with sufficient resources.
  • Adaptive conversational AI. AI chatbots can now conduct believable multi-turn conversations with victims via text — responding to questions, overcoming objections, and maintaining the pretext over the course of a long interaction. This eliminates the human attacker's need to be available in real time for text-based social engineering.

Real Attack Scenarios — The Most Significant Cases

The $1.5 Billion Bybit Heist — February 2025

The largest theft in cryptocurrency history. Attackers — attributed to North Korean state-sponsored group Lazarus — compromised the software supply chain of Safe{Wallet}, a third-party wallet management platform used by Bybit. They manipulated the provider's staff through social engineering to push a fraudulent update to the interface managing Bybit's Ethereum cold wallet. The signing interface showed the correct wallet address and legitimate transaction details while the actual transaction code had been modified to transfer funds to attacker-controlled addresses. Three Bybit signatories — seeing legitimate-looking confirmation screens — approved the transaction. $1.5 billion in Ethereum was transferred in a single transaction. The social engineering targeted a third party, demonstrating that supply chain social engineering bypasses even strong internal security controls.

CarGurus Vishing Attack — 2026

A single vishing call resulted in 12.4 million customer records being stolen from the automotive marketplace platform. The attacker called CarGurus' customer service line, impersonating a corporate account manager. Using information gathered from data brokers and prior breach databases — including real employee names and account details — the attacker convinced a support agent to update account credentials and transfer access to multiple high-value dealer accounts. The attacker then used this access to extract the customer database. The entry point: one phone call, one cooperative support agent following standard helpfulness procedures.

Scattered Spider Retail Attacks — 2024-2025

The group responsible for the MGM Resorts breach continued through 2024-2025, targeting retail organisations including major UK retailers in 2025. Their methodology remained consistent: research real employees through LinkedIn and breach databases, call IT helpdesks with vishing scripts, use pretexting to bypass MFA, gain initial access via identity provider, deploy ransomware or exfiltrate data. Total losses across their campaign exceeded $300 million. The consistent lesson from each incident: the attack chain started with a phone call, not a technical exploit. Every technical security control they bypassed was bypassed through a human who was manipulated into unlocking it.

Building a Layered Defence Against Social Engineering

Social Engineering Defence Checklist

  1. Implement phishing-resistant MFA across all accounts. FIDO2 hardware keys (YubiKey) or passkeys are the only authentication methods that cannot be bypassed by real-time phishing attacks or vishing-based MFA push fatigue. SMS and TOTP MFA reduce risk but can be bypassed by determined attackers.
  2. Establish strict out-of-band verification for all sensitive requests. Any request to reset credentials, change bank account details, transfer funds, or grant access must be verbally confirmed using a number from an official directory — never from the message requesting the action.
  3. Run regular, realistic phishing simulations. Annual security awareness training that people click through without reading does not build resilience. Quarterly simulations using realistic pretexts — and immediate, educational feedback when someone clicks — train the instinct to pause and verify. Track results and target training to employees who consistently click.
  4. Train specifically on the new AI-powered threats. Most employees are aware of email phishing. Few understand voice cloning, deepfake video calls, or AI-personalised spear phishing. Update training to include these techniques with concrete examples. The $25M Hong Kong deepfake case is a powerful teaching example.
  5. Create a culture where verification is normal and respected. The single biggest enabler of social engineering is the fear of seeming unhelpful or rude by asking for verification. Explicitly communicate that verification is not an insult — it is a security requirement. "I need to verify this through a second channel before I can help" should be a phrase every employee is comfortable saying.
  6. Implement four-eyes controls for financial transactions above defined thresholds. Two separate people must authorise large transfers. Neither person should be able to approve unilaterally, and approval must require independent verification of the request source.
  7. Monitor for anomalous access patterns that indicate a compromise is in progress. After successful social engineering, attackers access systems they would not normally access, at unusual times, from unusual locations. Behavioural analytics and SIEM tools that alert on anomalous access patterns can detect post-compromise activity even when the initial social engineering was successful.
  8. Establish code words for video/voice verification of sensitive requests. For high-value financial or access requests, pre-establish a shared secret code word that must be provided. An AI deepfake of your CEO cannot know the code word unless it was already compromised.
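Point 7 above — behavioural analytics on access patterns — can be illustrated with a toy baseline-and-alert sketch. Real SIEM tooling models far more signals; the events, fields, and thresholds here are illustrative assumptions:

```python
# Toy sketch of anomalous-access detection: build a per-user baseline of
# login hours and countries, then flag logins outside that pattern.
from collections import defaultdict

def build_baseline(events):
    """events: iterable of (user, hour, country) tuples from past logins."""
    baseline = defaultdict(lambda: {"hours": set(), "countries": set()})
    for user, hour, country in events:
        baseline[user]["hours"].add(hour)
        baseline[user]["countries"].add(country)
    return baseline

def is_anomalous(baseline, user, hour, country):
    """Flag a never-seen country or login hour -- or a user with no history."""
    profile = baseline.get(user)
    if profile is None:
        return True  # no history at all is itself worth a look
    return country not in profile["countries"] or hour not in profile["hours"]

history = [("alice", 9, "GB"), ("alice", 10, "GB"), ("alice", 14, "GB")]
bl = build_baseline(history)
print(is_anomalous(bl, "alice", 10, "GB"))  # usual pattern -> False
print(is_anomalous(bl, "alice", 3, "RO"))   # 3am from a new country -> True
```

In practice the alert would feed a SOC queue rather than block the login outright — post-compromise activity after successful social engineering often looks exactly like this: valid credentials used at the wrong time from the wrong place.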

About the Author

Amardeep Maroli

MCA student and cybersecurity enthusiast from Kerala, India. I focus on API security, ethical hacking, and secure application development. I write practical guides built from hands-on lab experience — not just theory.

Social Engineering FAQs

What is the difference between social engineering and phishing?
Social engineering is the broad category — any technique that manipulates human psychology to achieve unauthorised access or information. Phishing is one specific delivery mechanism for social engineering, using deceptive digital messages (email, SMS, fake websites). Vishing is social engineering via voice calls. Pretexting is social engineering using a fabricated scenario. Baiting uses physical objects or enticing offers. Tailgating is physical social engineering. All phishing is social engineering, but not all social engineering is phishing. The complete phishing guide on this blog covers phishing-specific techniques in depth.
Can technical security tools stop social engineering?
Technical tools can significantly reduce the impact of successful social engineering, but they cannot prevent it entirely because social engineering exploits human trust, not system vulnerabilities. Email filters catch most phishing emails but miss sophisticated spear phishing. MFA prevents credential theft from being sufficient for account access. Endpoint detection systems catch malware installed post-compromise. Network monitoring detects anomalous access patterns. But all of these are safeguards that activate after a human has been manipulated — none of them prevent the manipulation itself. This is why human training, verification procedures, and organisational culture are as important as technical controls in a comprehensive social engineering defence.
How do attackers research their targets before a social engineering attack?
Comprehensive reconnaissance typically combines: LinkedIn (job titles, reporting relationships, team structures, technology references in posts), company website and press releases (executive names, recent deals, technology partnerships), social media (personal interests, locations, routines that make pretexts more believable), dark web breach databases (email addresses, prior passwords, phone numbers from previous breaches), and OSINT tools that aggregate public information. A well-researched social engineering attack against a specific individual can feel so personalised that it is indistinguishable from a legitimate contact — the attacker knows your manager's name, your current project, your technology stack, and your routine. The data enrichment problem covered in the dark web guide is exactly what fuels this level of research.
Is social engineering covered in penetration testing?
Yes — social engineering penetration testing is one of the six main pentesting types. It involves authorised phishing simulation campaigns (fake phishing emails sent to employees to measure click rates and credential submission), authorised vishing calls (fake calls to the helpdesk to test verification procedures), and sometimes physical testing (authorised attempts to tailgate into secured areas). Social engineering pentests reveal which employees are most susceptible, which phishing pretexts work best against the organisation, and whether helpdesk verification procedures are actually effective. The findings directly inform targeted training and procedural improvements.
How do you defend against AI-generated deepfake voice calls?
The core challenge is that AI voice cloning is now so convincing that voice recognition alone is not reliable. Defence requires layered verification that does not depend on voice quality: (1) Establish pre-arranged code words with executives and key contacts — a phrase that would be used in sensitive requests, unknown to anyone who has not been briefed. An AI clone cannot know this word. (2) For high-value financial requests, require a separate video call on an established communication platform (not one initiated in the suspicious call) followed by a separately-verified callback. (3) Any request that cannot follow the standard verification procedure — for any reason — is automatically declined and escalated. "The CEO is travelling and can't do the video call" is the pretext, not a reason to bypass verification. Speed and inconvenience are the pressure tactics. Procedure is the defence.
Tags: what is social engineering, social engineering attacks 2026, pretexting, baiting, BEC attack, AI social engineering, deepfake vishing, how to prevent social engineering, social engineering examples

Found this useful? The psychological triggers section is worth sharing with anyone who manages people or handles financial transactions. Most people are unaware of how systematically these techniques exploit normal instincts.

Has your organisation or someone you know experienced a social engineering attempt? What technique was used? Share in the comments.
