Research note

If Your Security Firm Only Hands You a PDF, Keep Shopping

Most teams choose a security firm by logo density, badge count, and price. That is how you buy an audit artifact instead of an adversarial security partner.

Published: 2026-04-27
Author: ChainShield

That distinction matters because the stakes are asymmetrical. Founders use security firms to unlock capital, exchange listings, integrations, and launch confidence. CTOs and Solidity engineers need something less cosmetic and more useful: reviewers who can reason about protocol invariants, integration risk, upgrade safety, and the weird edge cases that only show up when a hostile counterparty gets a turn. If the firm cannot do both, the engagement may still produce a polished report, but it will not materially change the protocol's odds of surviving mainnet.

The problem, in technical depth

If you want a brutal case study, look at Euler. By the protocol's own account, it was exploited in March 2023 for roughly $197 million, and the root cause was a single missing health check in an obscure donateToReserves path. That function had been introduced to fix a smaller, earlier bug that prior auditors had missed, and the patch itself had gone through audit review.
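
For intuition only, here is a deliberately simplified sketch of that failure shape in Solidity. It is not Euler's code; the contract and its names are invented for this note. The point is the pattern: every balance-reducing path enforces an account health check except the one that was patched in later as a convenience.

    // Illustrative sketch only: a hypothetical, heavily simplified lending
    // module invented for this note, not Euler's actual implementation.
    pragma solidity ^0.8.20;

    contract ToyLendingModule {
        mapping(address => uint256) public collateral;
        mapping(address => uint256) public debt;
        uint256 public reserves;

        // Every path that reduces collateral is supposed to end with this check.
        function _checkHealth(address account) internal view {
            require(collateral[account] >= debt[account], "account unhealthy");
        }

        function withdraw(uint256 amount) external {
            collateral[msg.sender] -= amount;
            _checkHealth(msg.sender); // guarded path: cannot leave the account underwater
        }

        // A later convenience path that moves collateral into reserves but
        // skips the health check, so a borrower can deliberately push their
        // own account underwater and then abuse the liquidation flow.
        function donateToReserves(uint256 amount) external {
            collateral[msg.sender] -= amount;
            reserves += amount;
            // missing: _checkHealth(msg.sender);
        }
    }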

That is the operating reality teams keep underestimating. The hardest failures are often not in the obvious places. They show up in patches, exception paths, upgrade deltas, integration assumptions, and state transitions that nobody expected a rational user to take. A firm that is good at spotting textbook bugs but weak at challenging protocol assumptions can still leave you exposed exactly where the blast radius is highest.

For founders and investors, the lesson is commercial before it is technical. Buying a respected logo is not the same thing as buying coverage quality. Your treasury, token, and counterparties do not care whether the report looked credible in a diligence folder. They care whether the review process actually interrogated the parts of the system that could fail under adversarial execution.

For engineers, the lesson is harsher. The right security firm is not there to admire clean syntax. It is there to break your mental model. It should pressure-test privileged roles, oracle dependencies, liquidation logic, accounting invariants, and upgrade procedures with the same seriousness it applies to reentrancy or access control. If the reviewers are not trying to understand how the whole system becomes wrong, they are mostly doing expensive linting.

This is also why timing matters. OpenZeppelin's audit readiness guide is explicit that audits are most productive on code that is already tested, documented, and mature enough to deploy. That is not a process footnote. It is a warning. A firm that happily rushes into a half-formed codebase without pushing you on test coverage, documentation, or architecture clarity is optimizing for calendar utilization, not for your security outcome.

The mechanism, the mistake, the misunderstanding

Most teams run a broken procurement process for security work.

They ask who audited the last hot protocol. They ask how quickly the firm can start. They ask how many reviewers will be assigned. They ask whether the report can be ready before the token launch, partner announcement, or exchange conversation. Those questions are not irrational, but they optimize for signaling and schedule rather than adversarial depth.

The better question is simpler: how does this firm actually find the failure modes that matter for our protocol? That answer should be technical, specific, and uncomfortable.

A credible firm should be able to explain where automated analysis fits and where it stops. Slither is excellent at quickly detecting known bug classes, surfacing risky patterns, and slotting into CI. That is valuable. But a detector suite does not tell you whether your liquidation path, governance delay, vault exchange rate, or bridge accounting model still holds after three interacting state transitions and a malicious callback.
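
To make that boundary concrete, here is a hypothetical contract written only to illustrate detector scope. The reentrancy in withdraw is exactly the kind of known bug class a slither . run over the project is built to flag; the accounting drift noted in the comments is a protocol-level invariant no generic detector knows exists until someone states it.

    // Hypothetical example contract, written only to illustrate detector scope.
    pragma solidity ^0.8.20;

    contract ToyVault {
        mapping(address => uint256) public balances;
        uint256 public totalDeposits;

        function deposit() external payable {
            balances[msg.sender] += msg.value;
            totalDeposits += msg.value;
        }

        // Classic pattern a static detector flags: external call before the
        // state update, i.e. the known reentrancy bug class Slither targets.
        function withdraw(uint256 amount) external {
            require(balances[msg.sender] >= amount, "insufficient");
            (bool ok, ) = msg.sender.call{value: amount}("");
            require(ok, "transfer failed");
            balances[msg.sender] -= amount;
            // totalDeposits is never decremented, so the protocol-level
            // invariant "totalDeposits equals the sum of balances" silently
            // drifts. No detector knows that invariant exists; a reviewer or
            // an invariant test has to state it before anything can check it.
        }
    }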

The same is true in the other direction. Foundry's invariant testing is powerful because it runs random sequences of function calls and checks whether properties still hold after each call. Echidna goes further with property-based fuzzing that tries to falsify user-defined predicates and assertions. These are the kinds of tools serious teams should expect to hear about. But they are not magic either. Invariants only protect what someone had the discipline to define, and fuzzing only explores the behaviors the harness makes reachable.
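
A minimal sketch of what that discipline looks like in a Foundry test, assuming a hypothetical ERC-4626-style Vault with the usual totalAssets, totalSupply, and convertToAssets views:

    // Minimal Foundry invariant sketch. The Vault contract and its interface
    // are assumptions for illustration, not a specific protocol's code.
    pragma solidity ^0.8.20;

    import {Test} from "forge-std/Test.sol";
    import {Vault} from "../src/Vault.sol";

    contract VaultInvariantTest is Test {
        Vault internal vault;

        function setUp() public {
            vault = new Vault();
            // The fuzzer calls functions on this target in random sequences.
            targetContract(address(vault));
        }

        // Checked after every randomized call sequence: the redeemable value
        // of all shares must never exceed the assets the vault actually holds.
        function invariant_sharesFullyBacked() public {
            assertLe(vault.convertToAssets(vault.totalSupply()), vault.totalAssets());
        }
    }

The Echidna equivalent expresses the same kind of property as an echidna_-prefixed boolean function or an assertion, and the fuzzer hunts for a call sequence that falsifies it.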

That is the misunderstanding at the center of this market. Many buyers think they are choosing between firms. In reality they are choosing between security models.

One model treats the audit as a branded artifact. The output is a PDF, a severity table, and a social post. The review may still catch obvious issues, but the engagement is optimized to signal reassurance.

The other model treats the audit as an adversarial investigation. The reviewers start by understanding architecture, privileged roles, trust assumptions, and business logic. They use automated tooling aggressively, but they do not confuse tooling coverage with system understanding. They review patches, ask why certain invariants should hold, and force the team to confront the ugly edges of its own design.

OpenZeppelin's audit documentation is helpful here because it describes what a mature audit artifact actually tracks: scope, timeline, executive summary, privileged roles, trust assumptions, issue status, recommendations, and the fix-review trail. That tells you what a serious engagement looks like. If a firm's process does not obviously produce those kinds of artifacts and those kinds of conversations, you are probably buying surface credibility rather than deep coverage.

What good looks like

Start with methodology, not marketing. Ask the firm how it moves from system understanding to findings. You want to hear about architecture review, threat modeling, protocol-specific invariants, manual line-by-line review, static analysis, stateful testing, and post-fix verification. If the answer sounds like a polished sales deck with vague references to "best practices," keep going.

Ask for sample deliverables. A serious report should make scope obvious, tie conclusions to a specific code state, spell out privileged roles and trust assumptions, and track issue status through remediation. If the sample artifact makes it hard to tell what was actually reviewed or what remained unresolved, that is not a documentation problem. It is a process problem.

Ask how the firm handles change after the first review. This is where weak security engagements quietly fail. Protocols ship patches, governance updates, parameter changes, adapter integrations, and emergency fixes. If the firm does not have a crisp answer for diff audits, fix reviews, and post-audit support, you are being sold a point-in-time ceremony for a system that will keep changing underneath it.

Ask what testing discipline the firm expects from your team before they begin. Strong reviewers do not just inspect code; they inspect whether your engineering process deserves a serious review yet. If you have no invariant tests, poor coverage around edge cases, or no clear upgrade process, the right firm should tell you that directly. You do not want a vendor that says yes too easily.

Ask who will actually do the work. Brand is not irrelevant, but named logos do not read code. You need to know whether the people assigned to your protocol have real experience with lending systems, AMM math, bridge assumptions, account abstraction, governance surfaces, or whatever your design actually depends on. A protocol with unusual architecture needs reviewers who can think beyond generic Solidity mistakes.

Then ask the uncomfortable closing question: what kind of failure does this firm most often catch, and what kind does it work hardest not to miss? If they cannot answer that clearly, they probably do not know their own edge.

For technical teams, one practical rule matters more than the rest: choose the firm that talks most precisely about your protocol's invariants and upgrade surface, not the firm that promises the cleanest badge outcome. The better reviewer is usually the one who makes the scoping call a little more awkward.

ChainShield's angle

ChainShield's view is blunt: the right smart contract security firm behaves less like a vendor and more like a stress test for your engineering organization.

It should leave the team with clearer invariants, sharper tests, tighter privilege boundaries, cleaner upgrade discipline, and a better sense of what must be monitored after deploy. A report still matters, but only as one artifact inside a broader security process. The real output is whether the protocol became harder to break.

That is the standard sophisticated buyers should use. Founders should buy evidence, not comfort. CTOs should buy pressure on assumptions, not just prose on vulnerabilities. And if a firm mostly wants to hand you a PDF and move on, it is telling you exactly what it thinks the job is.

Need this level of scrutiny on your protocol?

ChainShield Discovery Runs are designed to identify high-risk issues quickly, validate what matters, and give engineering teams a faster path to remediation.

Request Security Quote