Interlace vs. CodeQL vs. Semgrep vs. Snyk Code
Honest comparison of JavaScript / TypeScript static-analysis tools — when to pick each, where they overlap, where they diverge.
Verifying the numbers. All comparison metrics are reproducible —
`git clone https://github.com/ofri-peretz/eslint && npm install && npm run ilb:diff` regenerates every cell in the tables below. Once `publish-packages.yml` runs in non-dry-run mode, the `npx @interlace/*` recipes will additionally work without cloning.
A buyer's guide. The honest answer is "you usually want two of these." They cover different threat models. This page tells you which combinations make sense and why.
Quick verdict
| If you're… | Pick |
|---|---|
| A small/mid JS/TS project, want low-friction security CI | Interlace standalone |
| A large enterprise with cross-language polyglot codebases (JS + Go + C++ + …) | CodeQL standalone — its multi-language story is unmatched |
| Open-source maintainer, want one tool that "works on anything" out of the box | Semgrep standalone — broadest rule registry, lowest config burden |
| Already paying for vulnerability management (deps + container + IaC) | Snyk Code as part of your existing Snyk bundle |
| You want the deepest JS/TS coverage + AI-agent integration | Interlace + CodeQL — Interlace for breadth, CodeQL for the data-flow-heavy CWEs that need its database build |
| You want low-overhead PR-time + cross-language CI | Interlace + Semgrep — Interlace for JS/TS depth, Semgrep for non-JS files |
Honest one-liners
Interlace — JS/TS-only by design. 207 rules across 11 security plugins. MCP-native: every plugin doubles as an agent-callable tool. Verifiable benchmark numbers (F1 98.8% on its own arena). Best on agent-axis dimensions (determinism, autofix coverage, NL discoverability) — nobody else even publishes these. Weakest on whole-program data-flow analysis (no database build).
CodeQL (GitHub) — Multi-language, database-build SAST. Best-in-class for whole-program / inter-procedural taint analysis. Free for open-source, paid for private repos at the GitHub Advanced Security tier. Weakness for JS-shop teams: a heavy database build per analysis run (minutes to hours), opaque rule authoring (QL is its own language), and no agent-axis features.
Semgrep — Multi-language pattern matcher. Largest public rule registry. Easy custom rules (YAML pattern syntax). Free OSS tier plus paid commercial tiers. Weakness: pattern-only, so structural/dataflow rules underperform Interlace's typed AST checks; no MCP integration; rules are largely community-contributed, so quality varies.
Snyk Code — Commercial SAST baked into the Snyk vulnerability-management platform. Strongest at "buy a security tool, install it, get a dashboard." Weakness: closed-source, opaque accuracy claims, no public benchmark numbers comparable to Interlace's, no agent-axis features.
The detailed comparison matrix
| Dimension | Interlace | CodeQL | Semgrep | Snyk Code |
|---|---|---|---|---|
| Language coverage | JS/TS only | 12+ languages | 25+ languages | 10+ languages |
| JS/TS rule count | 207 | ~150 (CWE-mapped) | ~600 (incl. community) | ~400 |
| License (free tier) | MIT, fully open | OSS-free, paid commercial | OSS-free, paid commercial | Free dev-tier, paid teams |
| Setup time | ≤ 1 min (npx @interlace/cli init) | 5-15 min (database + workflow) | ≤ 1 min | 5 min (account + token) |
| CI integration | One-line GitHub Action | Multi-step database+analyze | One-line CLI | One-line CLI |
| SARIF emission | ✅ via @interlace/eslint-formatter-sarif | ✅ native | ✅ native | ✅ native |
| MCP / agent tool integration | ✅ 11 plugin MCP servers | ❌ | ❌ | ❌ |
| Whole-program data-flow analysis | ❌ AST-pattern only | ✅ deep | ⚠️ pattern-based, limited | ✅ proprietary |
| Public benchmark numbers | ✅ ILB (live, reproducible) | Self-reported | Self-reported | Self-reported |
| Public corpus + scoring methodology | ✅ open-source | ❌ | Partial (rule registry) | ❌ |
| Auto-fix support | ✅ per-rule, deterministic | Limited (QuickFix subset) | ✅ for many rules | Limited |
| Confidence / reliability calibration | ✅ ILB-Confidence bench | ❌ | ❌ | ❌ |
| Adversarial-rewrite resilience bench | ✅ ILB-Evade | ❌ | ❌ | ❌ |
| Toolchain matrix coverage (Node × TS × ESLint × parser × cache) | ✅ all 5 axes | n/a | n/a | n/a |
| Compliance crosswalk (SSDF / ASVS / CAPEC / ISO 25010) | ✅ per-rule | ⚠️ informal | ⚠️ informal | ⚠️ commercial dashboard |
| CWE Compatibility (MITRE certified) | 8/8 criteria met (submission pending) | ✅ certified | ✅ certified | ✅ certified |
When to pick which combination
Interlace + CodeQL — breadth plus depth
Interlace covers the breadth of JS/TS-AST-detectable patterns (the 207 rules across 11 security verticals). CodeQL covers the depth of inter-procedural taint analysis that can only run on a built database. They're complementary, not redundant.
Concrete recipe:

```yaml
# .github/workflows/security.yml
- uses: ofri-peretz/eslint/.github/actions/audit@main   # Interlace: fast, breadth
  with: { plugins: all, fail-on: error }
- uses: github/codeql-action/init@v3                    # CodeQL: slower, depth
  with: { languages: javascript-typescript }
- uses: github/codeql-action/analyze@v3
```

When to use this: enterprise codebases where you have the CI budget for both, especially auth/payments/crypto-heavy services.
Interlace + Semgrep — JS/TS depth + multi-language coverage
If your repo has JS/TS plus other languages (Python, Go, Rust), Interlace handles the JS/TS rigorously and Semgrep covers the rest. Both are pattern-based, so you're not paying the database-build tax twice.
When to use this: polyglot startups, OSS projects with mixed-language servers.
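One way to wire the pair up, sketched as a GitHub Actions fragment. The Interlace step mirrors the recipe above; the Semgrep steps (`--config auto`, SARIF output, the upload step) are illustrative assumptions — check Semgrep's current CLI docs before copying:

```yaml
# .github/workflows/security.yml — a minimal sketch, not a drop-in config
jobs:
  security:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: ofri-peretz/eslint/.github/actions/audit@main   # JS/TS depth
        with: { plugins: all, fail-on: error }
      - run: pipx install semgrep                             # non-JS breadth
      - run: semgrep scan --config auto --sarif --output semgrep.sarif
      - uses: github/codeql-action/upload-sarif@v3            # both land in GHAS
        with: { sarif_file: semgrep.sarif }
```

Both scans finish in pattern-matching time, so this pair usually fits inside a normal PR check budget.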
Interlace standalone
Most JS/TS-only projects. The full security fleet at one-line install. SARIF→GHAS upload via the GitHub Action means findings still land in the same Code Scanning Alerts dashboard a CodeQL user sees.
When to use this: ~80% of JS/TS shops.
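As a sketch, the standalone setup reduces to a single step; the input names are taken from the Interlace + CodeQL recipe above and should be treated as illustrative:

```yaml
# .github/workflows/security.yml
- uses: ofri-peretz/eslint/.github/actions/audit@main
  with: { plugins: all, fail-on: error }   # SARIF lands in Code Scanning Alerts
```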
Where Interlace genuinely loses
Be honest about it:
- Inter-procedural taint analysis — Interlace doesn't do whole-program data-flow; CodeQL does. If your threat model is "find the path from `req.body` to `child_process.exec` across 5 files of intermediate function calls," Interlace will miss it; CodeQL will catch it.
- Non-JS/TS coverage — zero. Interlace has no Python, Go, Rust, or Java rules, and won't add them; that's by design.
- Vulnerability-management workflow — Interlace finds bugs; it doesn't track them across releases, integrate with Jira, or generate executive dashboards. Snyk does that.
- Established enterprise procurement — Snyk and CodeQL have years of FedRAMP / SOC 2 / HIPAA paperwork behind them. Interlace is on the path (MITRE submission ready, OWASP pitch drafted) but not there yet.
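To make the first limitation concrete, here is a hypothetical three-hop flow of the kind a single-file AST pattern cannot connect. The function and module split is invented for illustration; in a real codebase each hop would live in a separate file:

```javascript
// Hop 1 (e.g. parsers/input.js): tainted value leaves the request body.
function parseInput(body) {
  return body.target;
}

// Hop 2 (e.g. backup/command.js): taint is interpolated into a shell string.
function buildCommand(target) {
  return `tar -czf backup.tgz ${target}`;
}

// Hop 3 (e.g. routes/backup.js): the tainted string reaches the sink.
// In the vulnerable version, child_process.exec(cmd) would run here.
function runBackup(body) {
  const cmd = buildCommand(parseInput(body));
  return cmd; // returned instead of executed, to keep the sketch safe
}
```

A per-file pattern rule sees three innocent-looking functions; only a whole-program taint engine like CodeQL links `body.target` to the eventual `exec` call.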
Where Interlace genuinely wins
- Agent-axis features — MCP servers, deterministic findings, calibrated confidence, NL→rule retrieval. Nobody else does this for JS/TS. If your dev workflow includes Claude Code / Cursor / GitHub Copilot Workspace, this is the differentiator.
- Verifiable benchmark numbers — every claim recomputable from `benchmarks/results/`, compared to vendor self-reports.
- JS/TS depth — 11 security plugins specialized per concern (express, lambda, mongo, pg, jwt, crypto, vercel-ai…). Beats general SAST tools on JS-ecosystem-specific patterns.
- One-line CI integration — `uses: ofri-peretz/eslint/.github/actions/audit@main`, including SARIF upload to GHAS in the same step.
- Open-source MIT — no per-developer pricing.
How to verify these claims
```bash
# Run our differential bench yourself — Interlace × CodeQL × Semgrep × Snyk
git clone https://github.com/ofri-peretz/eslint && cd eslint && npm install
brew install codeql && pipx install semgrep && npm i -g snyk && snyk auth
npm run ilb:diff -- --tools interlace,codeql,semgrep,snyk
npm run ilb:diff:publish
# → benchmark-results/differential.md with the agreement matrix per fixture
```

If our published numbers don't reproduce, open an issue — we treat that as a bug.
What we'd recommend reading next
- Launch page — what Interlace is + headline numbers
- Submission protocol — how a tool vendor lands on the public leaderboard
- Differential publication template — the live data behind the table above