Research note

Bug Bounty Programs Are Not Optional: A Protocol's Last Line of Defense

Published: 2026-04-14
Author: ChainShield

$197 million evaporated from Euler Finance in a single March 2023 morning. The exploit ran through a function called donateToReserves — code that had been sitting live on mainnet for eight months, armed with a $1 million bug bounty that nobody claimed in time. That is not a story about bad luck. It is a story about the gap between having a bounty program and running one that actually works.

If you are building a DeFi protocol, you need to understand what that gap costs — and how to close it.

Why Audits Alone Are Not Enough

The default assumption in Web3 security is: ship code, get an audit, deploy. The historical record argues strongly against this being sufficient. Halborn's analysis of the top 100 DeFi hacks between 2014 and 2024 documented $10.77 billion in total losses. Notably, 20% of the exploited protocols had undergone security audits before their incidents, and those audited protocols still accounted for 10.8% of the total value lost.

The mechanics of why audits fail are well understood. Audits are point-in-time snapshots. A firm reviews a commit hash, issues a report, and the engagement ends. Your protocol then evolves — patches land, governance proposals modify storage layouts, integrations with external protocols introduce new attack surfaces. Every change resets the threat model, but almost no team re-audits every delta. The result is a growing window between what was reviewed and what is actually running.

Consider Euler Finance's story in granular detail. The donateToReserves function was introduced to fix a much smaller 'first depositor' bug that every previous Euler auditor had missed. That bug was reported by a white hat through Euler's Immunefi bug bounty program almost a year before the 2023 attack: the original protocol allowed a front-runner to steal the deposits of first depositors into new, uninitialised pools. The team patched it. The patch introduced the fatal vulnerability. Six Web3 security companies had audited Euler Finance, yet the attack still occurred, because the function introduced by the fix was audited once and then fell out of scope on subsequent reviews. The vulnerability sat on-chain for eight months until it was exploited, despite a $1 million bug bounty being live the entire time.

For VCs evaluating protocol risk: the Euler exploit did not stop at Euler. The March 13 flash loan attack, the largest crypto hack of 2023 at the time, drained nearly $197 million and propagated losses into more than a dozen integrated DeFi protocols. Composability in DeFi is a feature that multiplies returns in good times and multiplies blast radius in bad ones. Any protocol your portfolio company integrates with is a potential attack vector against your portfolio company's users.

The Exploit Mechanism: What the Code Actually Did Wrong

The Euler vulnerability is worth dissecting because it illustrates a class of bug that audits routinely miss: emergent vulnerabilities that arise from the interaction of multiple individually-correct components.

Euler Finance was hacked for approximately $200 million on March 13, 2023 via a vulnerability in its EToken smart contract. The attack was possible because of a missing liquidity check on the account after a donation of funds to the protocol, combined with the ability to use loans as self-collateral and Euler's dynamic liquidation discount. Together these let an account become insolvent on demand, allowing the attacker to liquidate themselves and drain the contract balance.

In practice, the attack sequence looked like this: a flash loan of 30 million DAI from Aave was used to open a leveraged position in Euler, creating both eToken (deposit) and dToken (debt) balances. The attacker then called donateToReserves, transferring a portion of their eTokens directly into the reserve. This sounds harmless in isolation — it is a donation. But the function did not trigger a health-score check post-donation. The account's collateral dropped below its debt, manufacturing artificial insolvency. The attacker then liquidated their own position at a discount, collecting more collateral than the debt was worth, and walked out with the difference.
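The insolvency manufacture is easy to verify numerically. The following toy model is a sketch with illustrative numbers, not the on-chain figures; the `health_score` function and the flat 20% liquidation discount are simplified stand-ins for Euler's risk-adjusted accounting. It shows why the donation flips the position and why self-liquidation pays:

```python
# Toy model of the Euler-style self-insolvency attack. All numbers are
# illustrative (the real exploit started from a 30M DAI flash loan and ran
# through Euler's eToken/dToken accounting, which this sketch compresses).

def health_score(collateral: float, debt: float) -> float:
    """Collateral value divided by debt; below 1.0 means insolvent."""
    return collateral / debt if debt else float("inf")

# 1. Open a leveraged position: deposit flash-loaned funds, then mint
#    against them (Euler allowed loans as self-collateral).
collateral = 30_000_000.0   # eToken (deposit) balance backing the position
debt       = 20_000_000.0   # dToken (debt) balance owed to the protocol
assert health_score(collateral, debt) > 1.0    # position starts healthy

# 2. Donate part of the collateral to reserves. The vulnerable function
#    never re-checked health, so this succeeded unconditionally.
donation = 15_000_000.0
collateral -= donation
hs = health_score(collateral, debt)            # 15M / 20M = 0.75
assert hs < 1.0                                # manufactured insolvency

# 3. Self-liquidate at a discount: repay just enough debt to seize the
#    remaining collateral, pocketing the discount as profit.
discount = 0.20                                # illustrative liquidation bonus
debt_repaid = collateral / (1 + discount)      # debt retired to claim it all
profit = collateral - debt_repaid
print(f"health after donation: {hs:.2f}, liquidation profit: {profit:,.0f}")
```

The pattern generalizes: any state-mutating function that can reduce collateral without re-running the health check is a candidate for the same manufactured-insolvency trick.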

The critical flaw in pseudocode:

// VULNERABLE: donateToReserves moves the caller's eTokens into the reserve
// without re-checking the donor's position afterwards
function donateToReserves(uint subAccountId, uint amount) external nonReentrant {
    // ... burn `amount` of the caller's eTokens and credit the reserve ...
    // MISSING: a checkLiquidity(account) call here. Without a post-state
    // health check, a caller can deliberately push their own account
    // into insolvency.
}

This attack could have been avoided if the health score had been checked post-donation; the core invariant is that an account's health score never drops below 1 unless the value of the underlying collateral changes. New logic and functions added to an existing codebase, such as the donateToReserves() function, should be thoroughly tested in the context of the entire protocol, not just in isolation.

This is the deeper problem that bug bounty programs are designed to address: protocol-level invariant violations that emerge from function interactions no single auditor fully traced. A healthy bounty program pits hundreds of researchers with different mental models against your codebase simultaneously and continuously — not just during a scoped, time-limited engagement. In Web3, attack surfaces are entirely public: all smart contract code and transaction data are visible to adversaries with unlimited time to study their targets. Perhaps most critically, failures are largely irreversible, creating an environment in which a single vulnerability can cause immediate, catastrophic losses. The attacker always has more time than the auditor. A bug bounty program lets your white hats operate on the same timeline.

What a Real Bug Bounty Program Looks Like

A bug bounty program is not a disclaimer you post on your docs site. It is structured infrastructure with four required components: a defined scope, a severity classification framework, credible payout amounts, and a fast response process. Skip any one of these and you will attract neither serious researchers nor serious findings.

Scope: Define exactly which contracts and which versions are in scope. Out-of-scope submissions waste everyone's time and poison researcher goodwill. Be explicit about what counts as a valid attack path — economic attacks using flash loans, access control issues, reentrancy, oracle manipulation — versus theoretical risks you have already accepted.

Severity and payout calibration: Some programs tie rewards directly to funds at risk, with critical payouts typically set at 10% of what a vulnerability could drain, and projects commonly allocate 5-10% of TVL to bug-bounty budgets. If your protocol has $50 million TVL, a critical bounty ceiling of $25,000 signals that you either do not understand the economics or do not take this seriously. Researchers who can find a $10 million critical will simply exploit it rather than report it. Wormhole paid $10 million for a critical bug. Aurora paid $6 million. Polygon paid $2.2 million. Optimism paid $2 million. These are not charity — they are the cost of not getting drained.
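The calibration arithmetic is simple enough to sanity-check in a few lines. This sketch applies the 10%-of-funds-at-risk and 5-10%-of-TVL rules of thumb from the text to the $50 million example; `critical_bounty_ceiling` is a hypothetical helper, and the sketch assumes a worst-case critical could drain the entire TVL:

```python
# Back-of-envelope bounty calibration. Rules of thumb: critical rewards at
# ~10% of funds at risk, and a 5-10%-of-TVL overall program budget.

def critical_bounty_ceiling(funds_at_risk: float, share: float = 0.10) -> float:
    """Reward ceiling as a fixed share of the funds a critical bug could drain."""
    return funds_at_risk * share

tvl = 50_000_000.0                        # the $50M TVL example above
ceiling = critical_bounty_ceiling(tvl)    # assumes a critical drains full TVL
budget_low, budget_high = 0.05 * tvl, 0.10 * tvl

print(f"critical bounty ceiling: ${ceiling:,.0f}")     # $5,000,000
print(f"program budget range:    ${budget_low:,.0f} to ${budget_high:,.0f}")
print(f"a $25,000 ceiling is {25_000 / tvl:.3%} of TVL")   # 0.050%
```

Run against the numbers in the text, a $25,000 ceiling sits two hundred times below the 10% norm, which is exactly the signal a rational researcher reads before deciding whether to report or exploit.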

Platform choice: Bug bounty programs invite developers and security researchers to examine a project's code, identify vulnerabilities, and receive payment for their discoveries. The two dominant platforms in Web3 are Immunefi and HackenProof. Immunefi operates as a Web3-native security coordination platform connecting protocol teams with independent security researchers who are incentivized to disclose vulnerabilities responsibly rather than exploit them. As of December 2025, the platform coordinates security efforts across more than 650 protocols and infrastructure providers, working with a global community of over 60,000 security researchers. The assets under protection through these programs exceed $180 billion. Launching on an established platform gives you researcher access, triage infrastructure, and proof of credibility that a self-hosted form on your website cannot replicate.

Triage and response: Launch-to-first-report time averages about 12 days. You need someone designated to review submissions within 24 hours. A researcher who waits two weeks for an acknowledgment on a critical finding has plenty of time to reconsider their white-hat commitment. Define your SLAs before you go live, not after your first critical report lands.

The sequencing also matters. A bug bounty program complements an audit — it does not replace one. The canonical security stack for a protocol launching with significant TVL is: internal review, formal audit from a reputable firm, invariant testing suite (Foundry's fuzzer and formal verification tools like Certora or Halmos), followed by a continuous bounty program that runs for the life of the protocol.
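What the invariant-testing step buys you can be sketched in miniature. This property test mimics the style of Foundry's invariant fuzzer: random action sequences against a toy `Account` model (both the model and the action set are hypothetical simplifications), with the health-score invariant asserted after every step:

```python
import random

# Minimal invariant-fuzz sketch in the spirit of Foundry's invariant tests:
# throw random action sequences at a toy Account model and assert the core
# invariant (health never below 1 unless prices move) after every action.
# The Account model and the action set are hypothetical simplifications.

class Account:
    def __init__(self) -> None:
        self.collateral = 1_000.0
        self.debt = 0.0

    def health(self) -> float:
        return self.collateral / self.debt if self.debt else float("inf")

    def borrow(self, amount: float) -> None:
        # a correct protocol rejects borrows that would break the invariant
        if amount > 0 and self.collateral / (self.debt + amount) >= 1.0:
            self.debt += amount

    def donate(self, amount: float) -> None:
        # the fix Euler needed: refuse donations that break the invariant
        new_collateral = self.collateral - amount
        if new_collateral >= 0 and (
            self.debt == 0 or new_collateral / self.debt >= 1.0
        ):
            self.collateral = new_collateral

random.seed(0)
acct = Account()
for _ in range(10_000):
    action = random.choice([acct.borrow, acct.donate])
    action(random.uniform(0.0, 500.0))
    assert acct.health() >= 1.0, "invariant violated"
print("10,000 random actions, invariant held")
```

Foundry's invariant fuzzer applies the same idea directly to the contracts themselves, running randomized call sequences and checking `invariant_`-prefixed assertions between calls, which is how a donateToReserves-style health-check omission gets caught before mainnet.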

Should Your Protocol Have One? Here Is the Honest Answer.

If your protocol is not yet deployed and has less than $1 million in anticipated TVL at launch, a formal bug bounty program is probably premature. Focus on audit quality and a robust invariant test suite first. Launching a bounty before your code is reasonably hardened attracts noise submissions that consume engineering time.

If you have deployed contracts with real user funds — any real user funds — you need a program running. The threshold is not $100 million TVL. It is the moment someone other than your team has money at risk.

At ChainShield, the protocols we see get hurt most often are not the ones that skipped the audit. They are the ones that treated the audit as a security certificate and stopped there. Smart contract bugs represent the highest proportion of Immunefi's total payouts, accounting for $77.97 million of the bounties paid. The threats are almost entirely at the contract logic layer — the exact layer that keeps evolving after the auditors close their report. A bug bounty program is how you keep eyes on that layer continuously.

The Euler story has a postscript worth noting. After the $197 million exploit and a remarkable recovery effort, Euler now maintains one of the largest bug bounty programs in DeFi, with up to $7.5 million available for critical vulnerability discoveries, a payout sized to pull the world's best security researchers toward defense rather than attack. Learning this lesson after the fact cost Euler nearly $200 million in user funds, a 70% token price collapse, and months of remediation work. You do not need to learn it the same way.

Need this level of scrutiny on your protocol?

ChainShield Discovery Runs are designed to identify high-risk issues quickly, validate what matters, and give engineering teams a faster path to remediation.

Request Security Quote