Nethermind is building an AI-driven security product line that helps protocols and developers find vulnerabilities earlier, faster, and at lower cost:
AuditAgent: AI-assisted smart contract vulnerability detection and insight generation for pre-audits and security workflows.
AgentArena: a platform where multiple independent audit agents can run in parallel, with an arbiter/triage layer to deduplicate findings and score severity fairly.
This role owns product strategy and execution for these products and the next wave of features/products in the same direction (e.g., CI integrations, remediation workflows, benchmarks/evals, agent marketplace mechanics, enterprise offerings).
We're looking for a proactive, hands-on Product Manager who can lead AI + security + developer platform products end-to-end: from discovery → roadmap → shipping → adoption → iteration.
You’ll work closely with:
External users: protocol teams, security leads, CTOs, developers
Security researchers/auditors, AI/agent engineers, and platform engineers
BD/partnerships, marketing, and ops
1) Own vision, roadmap, and outcomes
Own the mission and long-term vision for AuditAgent and AgentArena.
Create and maintain a prioritized roadmap balancing user value, model/agent quality, and engineering constraints.
Define and track product success metrics (see “Success looks like…”).
2) Product discovery & positioning (users + market)
Build a deep understanding of user needs by interviewing:
Protocol teams shipping to mainnet
Audit customers and Nethermind Security auditors
Agent builders/security researchers
Clarify product positioning: “pre-audit copilot” (AuditAgent) vs “multi-agent auditing platform” (AgentArena).
3) Ship features that increase trust + utility in security workflows
For AuditAgent:
Improve the developer workflow for vulnerability detection + findings quality (clarity, repro guidance, attack scenarios, etc.).
Drive integrations (CI/CD, repo scanning, reporting formats) and “fix-verify-rerun” loops.
For AgentArena:
Own the product mechanics for multi-agent parallel audits and fair evaluation (including arbiter/triage workflows).
Build the “two-sided platform” experience: agent builders (supply) + protocols (demand).
Partner with engineering to evolve scoring/severity, deduplication, and dispute handling.
4) Create the evaluation + data flywheel
Define benchmarks/evals for vulnerability detection quality (precision/recall proxies, severity accuracy, duplicate rates, time-to-signal).
Set up feedback loops from real audits and user outcomes into product improvements (without compromising confidentiality).
5) Monetization and go-to-market
Define packaging and pricing (self-serve, team, enterprise; usage-based credits; platform fees; bounties/reward splits).
Drive GTM with the BD team: how these tools complement audits and expand the customer funnel.
6) Execution excellence
Write clear product requirements and coordinate delivery with engineering.
Run weekly execution cadence (milestones, risks, tradeoffs).
Maintain a high bar on security, privacy, and reliability to earn developer trust.
Success looks like (example KPIs)
Pick a small set and own them:
Revenue growth: new MRR/ARR and conversion to paid
Retention & expansion: NRR/GRR, seat/usage expansion, enterprise renewals
Time-to-value: time to first scan → first actionable finding → verified fix
Adoption at scale: weekly active teams/repos, CI integration rate, cohort retention
Unit economics: compute cost per $ revenue (gross margin) and support cost per account
Trust as a growth lever: accepted/validated finding rate and low false positives (quality that drives renewals)
(AgentArena) Platform health: paid demand + active competitive agents, fast time-to-results, low dispute rate
What you'll need
3+ years in Product Management (or equivalent) shipping developer-facing software (B2B SaaS / tooling / platforms)
Strong familiarity with Ethereum smart contracts and a security mindset (Solidity, common vulnerability classes, the audit process)
Ability to work cross-functionally with researchers/engineers and translate ambiguity into shipped product
Strong written communication (PRDs, specs, launch notes)
Comfort with AI/agent products: evaluation thinking, prompt/agent iteration cycles, quality measurement
Nice to have
Hands-on experience with smart contract auditing tools/workflows
Familiarity with multi-agent systems, LLM evals, or building marketplaces/two-sided platforms
Experience with security triage/severity frameworks and report standardization
Remote-first, globally distributed team.