Senior AI Product Security Researcher

Skills

AI Frameworks, Distributed Systems, Penetration Testing, Python, Security Research, Software Engineering

Join GitLab as a Senior AI Product Security Researcher and become a pivotal member of our Security Platforms & Architecture Team. You will lead cutting-edge security research into AI-powered DevSecOps capabilities, safeguarding the future of collaborative development between humans and AI. Work remotely and asynchronously with global teams to proactively identify vulnerabilities, influence platform security, and help shape industry-leading practices that protect millions of developers worldwide.

Job Overview
  • Drive security research for GitLab’s AI-powered DevSecOps tools.
  • Lead hands-on penetration testing of multi-agent orchestration systems.
  • Develop innovative methodologies for AI agent security.
  • Collaborate with engineering to remediate and validate AI security vulnerabilities.
  • Contribute to the evolution of secure AI practices in a fully remote environment.
Key Responsibilities
  • Identify and validate vulnerabilities in GitLab's AI systems through hands-on testing and proof-of-concept exploits.
  • Conduct comprehensive penetration tests targeting AI agent platforms, including prompt injection and workflow manipulation.
  • Research emerging AI security threats and translate findings into actionable improvements.
  • Design and implement tools and frameworks for AI agent and multi-agent security evaluation.
  • Create detailed technical reports and advisories to inform risk mitigation strategies.
  • Collaborate with engineering teams to verify and test security fixes.
  • Mentor team members and share expertise in AI security testing.
Required Skills & Qualifications
  • 5+ years of experience in security research, penetration testing, or offensive security roles.
  • Proven expertise in AI/ML security and vulnerability discovery.
  • Strong understanding of AI attack vectors such as prompt injection and agent manipulation.
  • Proficiency in Python, AI frameworks, and security testing tools.
  • Experience analyzing code across multiple languages and codebases.
  • Excellent analytical, problem-solving, and written communication skills.
  • Ability to translate technical findings into clear risk assessments and remediation strategies.
  • Preferred: Experience with AI agent platforms, published AI security research, distributed systems background, and security certifications (OSCP, OSCE, GPEN, etc.).

Job Type: Remote

Salary: Not Disclosed

Experience: Senior (5+ years)

Duration: 12 Months
