This role will be based in Dublin, Ireland.
At LinkedIn, our approach to flexible work is centered on trust and optimized for culture, connection, clarity, and the evolving needs of our business. The work location of this role is hybrid, meaning it will be performed both from home and from a LinkedIn office on select days, as determined by the business needs of the team.
LinkedIn was built to help professionals achieve more in their careers, and every day millions of people use our products to make connections, discover opportunities and gain insights. Our global reach means we get to make a direct impact on the world’s workforce in ways no other company can. We’re much more than a digital resume: we transform lives through innovative products and technology.
Searching for your dream job? At LinkedIn, we strive to help our employees find passion and purpose. Join us in changing the way the world works.
The Senior AI Prompt Engineer plays a critical role in how LinkedIn leverages AI to enhance safety, trust, and operational excellence across our platform. You will design, evaluate, and optimize prompts and AI‑driven workflows to improve moderation quality, operational efficiency, and classifier performance.
You will lead complex prompt experimentation, conduct investigations into model behavior, partner cross‑functionally to drive system improvements, and ensure alignment with policy, safety standards, and global regulatory frameworks.
This role is ideal for someone who operates at the intersection of AI systems, content safety, operational rigor, and policy alignment, and wants to influence the future of AI‑assisted Trust Review Operations at scale.
What You’ll Do
Lead initiatives with Product, Engineering, Data Science, Policy, and Research to advance AI‑enabled Trust & Safety workflows.
Drive alignment on prompt engineering strategies, classifier improvements, and model‑driven risk mitigation.
Communicate effectively across technical and non‑technical audiences to guide adoption of AI tooling.
Mentor peers involved in AI experimentation, orchestration, or operational testing.
Build team capabilities around prompt safety, AI agent best practices, and quality assurance.
Partner with senior staff to develop learning programs in AI literacy, human‑in‑the‑loop design, and prompt engineering.
Contribute to a high‑performance, experimental, data‑driven culture within Trust Review Operations.
Core Responsibilities:
Prompt Design, Testing & Optimization
Design, evaluate, and iterate on prompts for generative and classification models supporting moderation, risk detection, case summarization, and reviewer assistance.
Conduct complex prompt‑based investigations to diagnose model behavior, failure modes, and quality issues.
Establish frameworks for evaluating AI outputs across accuracy, bias, safety, and consistency.
AI‑Driven Case Resolution & Risk Mitigation
Integrate AI tooling into Trust Operations workflows to support scalable and consistent resolution of high‑risk cases.
Develop mitigation plans for model errors and propose long‑term improvements that reduce operational and platform risk.
Escalation & Incident Management for AI Behavior
Serve as the Trust Operations escalation point for issues involving AI output quality, hallucinations, override rates, and misclassifications.
Lead initiatives to prevent AI‑driven escalations and strengthen model governance.
Create and enforce root‑cause and intervention frameworks specific to model performance.
Policy & Regulatory Alignment
Ensure AI‑generated outputs comply with platform policies, MDSS, safety standards, and international regulations (e.g., DSA, synthetic media rules).
Collaborate with Policy and Engineering teams to identify and mitigate emerging compliance risks.
Feedback Integration & Model Improvement
Lead feedback‑collection programs focused on AI output quality, partnering with Policy Operations, Data Science, Engineering, and human reviewers.
Translate reviewer insights into actionable model refinement requirements and product changes.
Data Analysis & Experimentation
Analyze model and prompt performance data to generate insights and influence improvements.
Execute experiments, measure impact, validate results, and present findings to leadership.
Partner with Data Science to build recurring model performance dashboards and reports.
Trend & Behavior Analysis
Monitor trends in user behavior, harmful content types, and classifier drift to anticipate new risk patterns.
Influence roadmap decisions by identifying where AI or automation can enhance detection, routing, or review efficiency.
Basic Qualifications
Bachelor’s degree or equivalent experience in Data Science, Policy, AI Engineering, or a related field.
1+ years of experience designing, debugging, and optimizing prompts for LLMs or content moderation models.
5+ years of experience in Trust & Safety, content moderation, quality engineering, policy, or related domains.
2+ years of experience using data tools (e.g., SQL and Python) to evaluate model and prompt performance.
Preferred Qualifications
Prior experience working with classification models, generative AI systems, or human‑in‑the‑loop workflows.
Understanding of Trust & Safety policies, global regulations (e.g., DSA), and safety standards.
Experience partnering directly with Product, Engineering, or Data Science teams on AI feature development.
Familiarity with evaluation metrics such as precision, recall, FPR, FDR, and adversarial testing (standard definitions are sketched after this list).
Ability to influence strategy through experimentation, data‑driven insights, and cross‑functional leadership.
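For reference, the metrics named above have standard definitions over a binary confusion matrix. The minimal Python sketch below is illustrative only; the function name and counts are hypothetical examples, not part of this role's actual tooling:

def confusion_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute precision, recall, FPR, and FDR from confusion-matrix counts."""
    return {
        "precision": tp / (tp + fp),  # share of flagged items that truly violate policy
        "recall": tp / (tp + fn),     # share of violating items the classifier catches
        "fpr": fp / (fp + tn),        # share of benign items wrongly flagged
        "fdr": fp / (fp + tp),        # share of flags that are wrong (equals 1 - precision)
    }

# Hypothetical example: 90 correct flags, 10 false flags, and 30 misses
# among 1,000 reviewed items.
print(confusion_metrics(tp=90, fp=10, tn=870, fn=30))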
Suggested Skills:
Analytical Excellence
Creativity & Adaptability
Data Interpretation
Leadership & Coaching
Problem-Solving
Written Communication
Global Data Privacy Notice for Job Candidates
Please follow this link to access the document that provides transparency around the way in which LinkedIn handles personal data of employees and job applicants: https://legal.linkedin.com/candidate-portal.