Are you a visionary in cybersecurity strategy and policy? As the Lead of AI Security Governance, you will own the AI Security Strategy domain within Plain Security Studios. This pivotal role focuses on the governance and people aspects of cybersecurity in the age of AI. You will develop and enforce frameworks that ensure our AI solutions, and those of our clients, are secure, compliant, and ethically sound. From shaping internal policies and best practices to advising clients on regulatory compliance and risk management, you’ll be at the forefront of defining how organizations can safely adopt AI technologies. Reporting directly to the VP of Plain Security, you will collaborate with other security leaders to maintain a holistic and responsible security program covering prevention, detection, response, governance, and user awareness.
Key Responsibilities
- Develop AI Security Strategy: Create and continuously refine the overall AI security governance framework for Plain Concepts and its clients. This includes policies for secure AI development, deployment, and maintenance, ensuring alignment with industry standards and legal requirements.
- Governance and Compliance: Establish guidelines and procedures to comply with emerging AI regulations and cybersecurity standards (e.g., the EU AI Act, GDPR, the NIST AI Risk Management Framework, ISO 27001). Oversee compliance initiatives and risk assessments related to AI and machine learning systems.
- Risk Management: Identify and assess security risks unique to AI solutions, such as data privacy issues, model vulnerabilities, and adversarial threats. Implement risk mitigation strategies and incident response plans specific to AI/ML systems.
- Security Awareness and Training: Lead the “People” vertical by developing training programs and awareness initiatives on AI security. Ensure that employees and clients understand secure practices when building or using AI-driven tools. This may include creating workshops on topics like secure AI coding, data handling, and recognizing AI-driven social engineering threats.
- Collaboration and Advisory: Work closely with technical teams (Defensive and Agentic Security leads) to embed governance requirements into product and service development. Act as an internal advisor for projects involving AI, guiding teams on best practices for security and compliance from project inception through deployment.
- Client Consulting: Serve as a strategic advisor to our clients and partners. Provide high-level guidance on establishing their own AI security governance — from drafting AI security policies to implementing governance structures and audit programs. Help clients navigate the challenges of adopting AI in a secure and compliant manner.
- Thought Leadership & Representation: Represent Plain Concepts in external forums, standards bodies, and industry events on AI security governance. Contribute to white papers, speak at conferences, and publish insights to solidify our reputation as leaders in secure AI strategy.
- Continuous Monitoring: Keep abreast of developments in cybersecurity, AI ethics, and data protection. Adjust strategies and policies proactively in response to new threats or regulatory changes. Advocate for continuous improvement in how the company and its clients govern and secure AI technologies.