Are you passionate about applying the latest advances in offensive security and DevSecOps to real-world challenges? At Plain Security Studios, we’re reshaping how organizations test, secure, and evolve their cyber defense strategies using AI. As Lead Defensive AI Security, you’ll lead the transformation of red teaming, adversarial testing, and secure development practices for next-generation environments — from LLM applications to CI/CD pipelines.
You will report directly to the VP of Plain Security and lead a growing team of specialists focused on Offensive AI and DevSecAIOps. Your role will be both strategic and hands-on: helping clients build safer AI systems, improving their ability to prevent and detect threats, and embedding security directly into their engineering workflows. Whether you've worked on the red team, the blue team, or built both sides into one, this is your chance to define what modern defensive security looks like in the age of AI.
Key Responsibilities
- Offensive AI Strategy & Delivery: Lead the design and execution of AI-augmented red and purple team operations. Simulate real-world attacks to uncover risks in LLM-based applications, AI pipelines, and modern cloud environments.
- Adversarial Testing of AI Systems: Develop and apply methodologies to assess prompt injection, data leakage, insecure agents, and other vulnerabilities in AI-driven software. Bring an attacker’s mindset to our clients’ most advanced systems.
- DevSecOps & Automation: Help clients integrate security into the development lifecycle. Lead secure-by-design initiatives and threat modeling, and build automated security checks into CI/CD environments using modern tooling (Azure, GitHub, etc.).
- Client Advisory & Innovation Enablement: Partner with product, AI, and engineering teams to improve their security posture without slowing down innovation. Translate red team insights into actionable improvements for developers, architects, and CISOs.
- Tooling & Reusability: Build automation, scripts, and AI-based security tools to accelerate assessments. Leverage LLMs or security copilots to scale red team insights across environments and clients.
- Evangelism & Enablement: Create internal frameworks, methodologies, and reusable templates for offensive assessments. Share best practices across the studio and help less experienced teams apply offensive techniques.
- Knowledge Sharing: Keep your team and clients up to date on threat actor tactics, techniques, and procedures (TTPs), novel attack chains, and defense strategies. Identify opportunities to evolve legacy assessments into modern, AI-aware techniques.
- Strategic Mentorship: Grow and guide a cross-disciplinary team of consultants. Foster a culture of curiosity, excellence, and continuous learning.