DeFi's Greatest Strength Is Also Its Biggest Security Liability
On March 13th, 2023, Euler Finance was exploited via a flash loan attack, and $197M was lost — not because Euler's code was written by amateurs, but because it was woven into a composable system where one missing liquidity check could be weaponized with borrowed capital from a competing protocol. That single oversight, inside a single function, made the entire ecosystem a loaded gun. This is composability risk. And if your protocol touches any external contract — an oracle, a DEX, a lending pool — you are exposed to it.
The Promise That Became the Attack Surface
DeFi's defining feature is composability: projects and protocols can be connected and stacked like building blocks, the "money legos" of crypto. Developers combine existing DeFi products into new services, and the speed of innovation this unlocks is real. A yield aggregator can route capital through a lending market, hedge on a DEX, and auto-compound, all in one transaction, all without any party's permission. That is genuinely powerful.
But the same openness that makes DeFi composable makes it dangerous. Most large DeFi attacks do not stem from a single weakness; they come from attackers chaining several small flaws together, which composability makes possible. A minor issue in one protocol, combined with a minor issue in another, can become a major exploit. This is the mechanic that VCs consistently underestimate when evaluating protocol risk: the threat is not just inside your code. It lives at the seams between your code and everything it touches.
In 2022, the Ronin/Axie Infinity bridge exploit cost approximately $620 million, while Wormhole lost approximately $320 million before backer Jump stepped in to recapitalize. Both are composability failures in the broad sense: they occurred at the points where systems trust external state. The Nomad bridge was hacked on August 1st, 2022, draining $190M of locked funds. A routine upgrade to the implementation of one of Nomad's proxy contracts marked the zero hash as a trusted root, which allowed unproven messages to be treated as automatically proved. The attacker leveraged this vulnerability to spoof the bridge contract and trick it into unlocking funds. This was not a sophisticated cryptographic attack: the first attacker withdrew funds, and then hundreds of copycats replicated the exploit simply by copying the transaction and replacing the recipient address. It became the first "crowd-looted" exploit in DeFi history, with nearly 300 addresses draining the bridge over roughly 150 minutes.
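To make the Nomad failure mode concrete, here is a minimal Python model of a message-proving routine in which the zero hash has been marked as a confirmed root. All names here are illustrative; this is not Nomad's actual Solidity, just a sketch of the logic flaw.

```python
# Illustrative model of the Nomad bug: an unproven message maps to the
# zero root, and the buggy upgrade made the zero root "confirmed".
ZERO = b"\x00" * 32

class Replica:
    def __init__(self):
        # The upgrade effectively gave the zero root a nonzero
        # confirmation timestamp; we model that state directly.
        self.confirmed_at = {ZERO: 1}

    def acceptable_root(self, root: bytes) -> bool:
        return self.confirmed_at.get(root, 0) != 0

    def prove(self, message: bytes, proof_root: bytes) -> bool:
        # The message content never matters: any fabricated message
        # whose root is "trusted" passes the check.
        return self.acceptable_root(proof_root)

replica = Replica()
# A forged message with no legitimate Merkle proof is accepted:
assert replica.prove(b"unlock funds to attacker", ZERO) is True
# A random unconfirmed root is still rejected:
assert replica.acceptable_root(b"\x01" * 32) is False
```

The fix is equally simple to model: never treat the zero root as confirmed, and verify that invariant after every upgrade.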
The Mechanism: How Composability Turns One Bug Into a Nine-Figure Heist
The March 13 flash loan attack against Euler Finance resulted in over $195 million in losses and triggered contagion across decentralized finance: at least 11 protocols besides Euler itself suffered losses from the attack. That contagion is composability risk made concrete. Euler did not exist in isolation; it was deeply integrated with Aave, Uniswap, and a constellation of downstream protocols that had taken positions or built yield strategies on top of it.
Here is how the Euler attack actually worked, because the mechanism matters. Euler Finance was hacked for approximately $200 million on March 13th, 2023 through a vulnerability in its EToken smart contract. The attack was made possible by a missing liquidity check on accounts donating funds to the protocol, combined with the ability to use loans as self-collateral and Euler's dynamic liquidation penalty. Together, these allowed an account to be driven insolvent on purpose, letting the attacker liquidate themselves and drain the contract balance.
Step by step:
1. The attacker received initial funding from Tornado Cash to cover gas and deploy the exploit contracts, then took a flash loan of around $30 million in DAI from Aave.
2. They deposited 20 million DAI into Euler, receiving around 19.6 million eDAI in exchange.
3. They then borrowed a staggering 195.6 million eDAI and 200 million dDAI against that position, using Euler's self-collateralized loans to mint leverage.
4. Calling the donateToReserves function, they donated 100 million eDAI to Euler's reserves, destroying collateral while leaving their debt untouched.
5. Calling the liquidate function, they exploited the resulting discrepancy between eDAI and dDAI values to liquidate their own insolvent account at a discount, acquiring 310 million dDAI and 259 million eDAI.
6. They repeated the attack on other pools, netting around $197 million in total.
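The arithmetic of those steps can be checked with a back-of-the-envelope simulation. Units are millions of DAI, the figures come from the sequence above, and the variable names are purely illustrative, not Euler's API:

```python
# Back-of-the-envelope replay of the Euler position (millions of DAI).
deposit = 20.0
e_dai = 19.6      # collateral claim received for the deposit
d_dai = 0.0       # debt

# Self-collateralized borrow mints leverage: both sides balloon.
e_dai += 195.6
d_dai += 200.0

# donateToReserves destroys collateral, leaves debt untouched,
# and (crucially) never re-checks account health.
e_dai -= 100.0

# The account is now deeply insolvent: debt far exceeds collateral.
assert d_dai > e_dai
print(f"collateral {e_dai:.1f}M eDAI vs debt {d_dai:.1f}M dDAI")
```

With debt at 200M against roughly 115M of collateral, the dynamic liquidation penalty let the attacker, acting as their own liquidator, seize the position at a discount steep enough to walk away with the difference.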
The key insight for developers: the donateToReserves function had no health check. Calling it with borrowed funds drove the caller's account into insolvency without triggering a revert. This allowed deliberate self-liquidation at a discount steep enough to generate profit. The simplified version of the vulnerable pattern looks like this:
// Vulnerable pattern (simplified): no liquidity check after donation
function donateToReserves(uint subAccountId, uint amount) external nonReentrant {
    (address underlying, AssetStorage storage assetStorage,
        AssetCache memory assetCache, address proxyAddr) = CALLER();
    address account = getSubAccount(proxyAddr, subAccountId);

    // The caller's eToken balance is moved into protocol reserves
    assetStorage.users[account].balance -= amount;
    assetStorage.reserveBalance += amount;

    emit Donation(account, amount);

    // MISSING: a post-operation health check, e.g. checkLiquidity(account).
    // Without it, a borrower can donate away collateral, drive their own
    // account into insolvency, and then liquidate it at a discount.
}
The health check that should have followed this operation was simply absent. The donateToReserves function was introduced in order to fix a much smaller 'first depositor' bug in the protocol that had been missed by all previous auditors of Euler. That bug was eventually reported by a white hat hacker as part of the Euler bug bounty program via Immunefi almost a year prior to the 2023 attack. The fix introduced the exploit. When you are composing protocols at speed, the patch surface is as dangerous as the original attack surface.
DeFi protocols integrate with numerous external systems — price oracles, other protocols, governance mechanisms. Auditors typically assess contracts in isolation, not all possible interaction scenarios. That gap — between auditing a contract and auditing a contract in its full composable context — is where most of the money gets taken.
What Good Security Looks Like in a Composable System
The first practical step is to stop treating your protocol as a standalone unit. Every external call is a trust assumption. Every oracle read is a dependency. Map them all, and for each one define: what is the worst-case value this external input can return, and what does my contract do with it? For oracle dependencies specifically, prefer TWAPs with sufficiently long windows over spot prices, and implement circuit breakers that halt operations if a price feed diverges beyond a defined threshold. Finally, assess how your contracts behave when oracles, external APIs, and market conditions turn adversarial, so that vulnerabilities in governance, price feeds, and external dependencies surface before attackers find them.
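The TWAP-plus-circuit-breaker idea can be sketched in a few lines. This is a Python model for clarity, not on-chain code; the window size and divergence threshold are example values, not recommendations:

```python
from collections import deque

# Illustrative oracle guard: compare spot against a rolling TWAP and
# halt if the divergence exceeds a configured threshold.
class OracleGuard:
    def __init__(self, window, max_divergence):
        self.max_divergence = max_divergence
        self.samples = deque(maxlen=window)  # rolling price window
        self.halted = False

    def push(self, spot_price):
        self.samples.append(spot_price)

    def twap(self):
        return sum(self.samples) / len(self.samples)

    def check(self, spot_price):
        """Return the TWAP as the usable price, or halt on divergence."""
        avg = self.twap()
        if abs(spot_price - avg) / avg > self.max_divergence:
            self.halted = True
            raise RuntimeError("circuit breaker: spot diverged from TWAP")
        return avg

guard = OracleGuard(window=10, max_divergence=0.05)
for p in [100, 101, 99, 100, 100, 101, 100, 99, 100, 100]:
    guard.push(p)

guard.check(101)      # within 5% of the TWAP: fine
try:
    guard.check(150)  # flash-loan-style spike: operations halt
except RuntimeError:
    pass
assert guard.halted
```

The point of the design is that a single-transaction price spike, the signature of a flash loan manipulation, cannot move the TWAP, and a large spot-vs-TWAP gap is itself a tripwire.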
For flash loan attack surfaces, the mitigation is not to ban flash loans — it is to ensure that no single-transaction state manipulation can produce a profitable outcome. That means health checks must run after every state-mutating operation, not just at entry and exit points. It means critical functions like reserve donations, liquidity additions, or collateral changes need invariant validation baked in — not bolted on afterward. For upgrades specifically, the Nomad lesson is unambiguous: before and after every contract upgrade, run automated tests that verify the protocol's core security invariants. For a bridge: can a message be processed without a legitimate Merkle proof? Can funds be withdrawn without a valid cross-chain deposit? The same principle applies to any stateful DeFi protocol. A deployment script that does not run a post-upgrade invariant suite is a liability.
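A post-upgrade invariant gate can be sketched like this. The `Bridge` model and both invariants are hypothetical stand-ins for illustration, not any real bridge's API:

```python
# Sketch of a post-upgrade invariant suite a deployment script would run.
ZERO_ROOT = b"\x00" * 32

class Bridge:
    def __init__(self, trusted_roots):
        self.trusted_roots = set(trusted_roots)

    def is_proven(self, root):
        return root in self.trusted_roots

def post_upgrade_invariants(bridge):
    """Return a list of violated invariants (empty means safe to proceed)."""
    failures = []
    # Invariant 1: the zero root must never be trusted (the Nomad failure mode)
    if bridge.is_proven(ZERO_ROOT):
        failures.append("zero root is trusted")
    # Invariant 2: an arbitrary unconfirmed root must be rejected
    if bridge.is_proven(b"\x01" * 32):
        failures.append("unproven root accepted")
    return failures

good = Bridge(trusted_roots=[b"\xaa" * 32])
bad = Bridge(trusted_roots=[ZERO_ROOT])   # the state the buggy upgrade created
assert post_upgrade_invariants(good) == []
assert post_upgrade_invariants(bad) == ["zero root is trusted"]
# A deploy script would abort and roll back if the failure list is non-empty.
```

Wiring a check like this into CI and the deployment pipeline is cheap; the Nomad upgrade that trusted the zero root would have failed this gate before reaching mainnet.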
Tools that should be in your stack: Echidna or Foundry's fuzzer for property-based testing of your invariants, Slither for static analysis of external call patterns, and formal verification for your highest-value state transitions if your attack surface warrants the cost. Audit costs average $40,000–$100,000. Average exploit losses are $10–30 million. Yet many protocols struggle to afford even basic audits. That asymmetry is not an argument against auditing — it is an argument for making every audit dollar count by defining the composability boundaries before the auditor starts the clock.
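In the spirit of what Echidna or Foundry's fuzzer does in Solidity, the property worth testing here is "no sequence of operations leaves an account unhealthy." A toy Python version, using a hypothetical account model that re-checks health after every state change — including donations, the check Euler was missing:

```python
import random

# Toy property-based test: random operation sequences must never
# produce an unhealthy account. The account model is illustrative.
COLLATERAL_FACTOR = 0.8

class Account:
    def __init__(self):
        self.collateral = 0.0
        self.debt = 0.0

    def deposit(self, x):
        self.collateral += x

    def borrow(self, x):
        # Health enforced on borrow, as any lending protocol does
        if self.debt + x > COLLATERAL_FACTOR * self.collateral:
            raise ValueError("borrow would violate health")
        self.debt += x

    def donate(self, x):
        if x > self.collateral:
            raise ValueError("insufficient balance")
        new_collateral = self.collateral - x
        # The Euler-style fix: re-check health BEFORE committing the donation
        if self.debt > COLLATERAL_FACTOR * new_collateral:
            raise ValueError("donation would violate health")
        self.collateral = new_collateral

def healthy(a):
    return a.debt <= COLLATERAL_FACTOR * a.collateral

random.seed(0)
for _ in range(1000):
    a = Account()
    for _ in range(10):
        op = random.choice(["deposit", "borrow", "donate"])
        try:
            getattr(a, op)(random.uniform(0, 100))
        except ValueError:
            pass  # rejected operations are fine; silent corruption is not
        assert healthy(a), "invariant violated"
```

Delete the health check inside `donate` and the assertion fails within a few hundred runs — which is exactly the kind of counterexample a fuzzer hands you before an attacker does.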
ChainShield's Take: Context-Aware Security Is Not Optional
Most audit tools and most traditional audit firms examine your contracts in isolation. They check for reentrancy, integer overflows, access control flaws — the classics. What they rarely do is model your contract's behavior when an attacker controls the price returned by your oracle, or when a flash loan hands them $200M of synthetic leverage against your liquidation logic. That is not a criticism of auditors — it is a structural limitation of scope-bound engagements.
The Euler hack passed multiple audits. Sherlock took responsibility for missing the vulnerability in their review and paid a claim of $4.5M to Euler. Insurance payouts do not restore user trust, protocol TVL, or the team's next 18 months. The question every CTO should be asking is not "did we get audited" but "did our audit model the composable attack surface — including every protocol we call, every oracle we read, and every function that changes state without a health check." If the answer is no, the audit was necessary but not sufficient. ChainShield is built to close that gap: automated, continuous, and context-aware — because the blockchain does not stop composing after your audit report is filed.
ChainShield Discovery Runs are designed to identify high-risk issues quickly, validate what matters, and give engineering teams a faster path to remediation.
Request Security Quote