Summary of the Role:
As a Security Research Engineer at Maze, you'll be at the forefront of defining what constitutes real security risk in the age of AI-powered vulnerability detection. This is a unique opportunity to join as one of the early team members of a well-funded startup building at the intersection of generative AI and cybersecurity, where your security expertise directly shapes how our AI models understand and prioritize cloud security threats.
You'll spend the majority of your time as the expert human-in-the-loop, analyzing cloud vulnerability findings from our AI systems, conducting deep research to validate and contextualize threats, and creating the authoritative labels that train our models to distinguish critical risks from noise. Your security judgment will become embedded in our AI platform, scaling your expertise to protect thousands of organizations while establishing new standards for AI-powered vulnerability assessment.
This role is perfect for a security researcher who wants to pioneer the future of AI-assisted threat detection, loves diving deep into cloud security vulnerabilities, and is eager to see their security insights amplified through cutting-edge technology.
Your Contributions to Our Journey:
Expert Data Labeling and Validation: Serve as the authoritative voice on vulnerability severity and impact, reviewing and categorizing cloud security findings from our AI models to create high-quality training data that improves detection accuracy across our platform
Deep Vulnerability Research: Conduct comprehensive research into cloud vulnerabilities affecting EC2 images, Docker containers, and cloud infrastructure, separating true positives from false positives, analyzing business impact, and building proof-of-concepts to validate threat scenarios
AI Model Improvement: Work directly with our labeling tools and platform to provide expert feedback that enhances our AI models' understanding of vulnerability context, helping them learn to prioritize threats like a seasoned security researcher
Technical Investigation and Analysis: Create detailed technical writeups about exploitation techniques, attack vectors, and remediation strategies for cloud vulnerabilities, turning complex security research into actionable intelligence
External Security Intelligence: Leverage CVE databases, security advisory feeds, and threat intelligence sources to enrich vulnerability findings with broader context and emerging threat patterns (see the enrichment sketch after this list)
Security Content Creation: Contribute to our thought leadership through technical blog posts, security videos/podcasts, and conference presentations, sharing insights from your research to establish Maze as a leader in AI-powered security
Cross-Team Collaboration: Work closely with engineering and product teams to translate security research insights into product improvements and new detection capabilities
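To make the enrichment work above concrete, here is a minimal sketch of the kind of tooling it involves: looking up CVSS severity for a CVE via NVD's public CVE API 2.0. This is an illustration, not Maze's actual pipeline; the endpoint and response fields follow NVD's published schema, but verify the field names against the current documentation before relying on them.

    # Minimal sketch (not Maze's pipeline): enrich a finding with CVSS
    # severity from NVD's public CVE API 2.0. Requires `pip install requests`.
    # Response field names follow NVD's published schema; verify them against
    # the current docs before relying on them.
    import requests

    NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

    def fetch_cve_severity(cve_id: str) -> dict:
        """Return the CVSS v3.1 base score/severity for a CVE, if published."""
        resp = requests.get(NVD_URL, params={"cveId": cve_id}, timeout=30)
        resp.raise_for_status()
        vulns = resp.json().get("vulnerabilities", [])
        if not vulns:
            return {}
        for entry in vulns[0]["cve"].get("metrics", {}).get("cvssMetricV31", []):
            data = entry["cvssData"]
            return {"score": data["baseScore"], "severity": data["baseSeverity"]}
        return {}

    # e.g. fetch_cve_severity("CVE-2021-44228") -> {"score": 10.0, "severity": "CRITICAL"}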
What You Need to Be Successful:
Security Research Expertise: 5+ years of hands-on security experience with a proven vulnerability research background, comfortable investigating complex security issues and building proof-of-concepts to validate findings
Cloud Security Mastery: Deep knowledge of AWS security, cloud infrastructure vulnerabilities, container security, and cloud-native attack vectors, with hands-on experience securing cloud environments at scale
Technical Investigation Skills: Strong coding and scripting abilities (Python, Go, or similar) for automating research tasks, building validation tools, and creating proof-of-concept exploits (see the triage sketch after this list)
Analytical Excellence: Proven ability to analyze complex security data, distinguish between critical threats and false positives, and communicate technical findings to both technical and business audiences
External Intelligence Integration: Experience working with vulnerability databases, security advisory feeds, and threat intelligence sources to contextualize and prioritize security findings
Collaborative Mindset: Strong communication skills and ability to work effectively with AI/ML teams, translating security domain knowledge into actionable model improvements
Startup Agility: Comfort working in a fast-paced environment where your research directly impacts product development and customer security outcomes
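As a small illustration of the validation tooling mentioned above (a hypothetical helper, not part of Maze's stack): one routine triage check is whether the installed package version already includes the fix, which turns many raw scanner findings into false positives.

    # Hypothetical triage helper: a finding is a likely false positive when
    # the installed package version is at or beyond the patched release.
    # Uses the `packaging` library (pip install packaging) for PEP 440-style
    # version comparison; real ecosystems (deb/rpm/npm) need their own rules.
    from packaging.version import Version

    def is_likely_false_positive(installed: str, fixed_in: str) -> bool:
        return Version(installed) >= Version(fixed_in)

    # e.g. is_likely_false_positive("2.17.1", "2.17.0") -> True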
Nice-to-haves:
Experience with AI/ML security or working with AI-generated security findings
Background at security tooling companies or building security products
Expertise in specific vulnerability research methodologies and frameworks
Open source contributions to security tools or research projects
Previous content creation experience in security (blogs, talks, research papers)
Industry certifications (CISSP, OSCP, AWS Security, etc.)
Why Join Us:
Ambitious Challenge: We're using generative AI (LLMs and agents) to solve some of the most pressing challenges in cloud security today. You'll be defining how AI understands and prioritizes vulnerabilities, working at the cutting edge of AI-powered threat detection.
Expert Team: We are a team of hands-on leaders with experience at Big Tech companies and scale-ups. Our members have been part of the leadership teams behind multiple acquisitions and an IPO.
Impactful Work: Your security research and labeling work will directly improve how thousands of organizations understand and respond to cloud security threats, scaling expert security knowledge through AI to protect the entire ecosystem.
Pioneer AI-Native Security: You'll help establish the gold standard for AI-assisted vulnerability research, defining how human security expertise enhances machine learning models in the cybersecurity domain.
Technical Leadership: Shape the future of security research methodologies while working with cutting-edge AI technology, with opportunities to present your work at major security conferences and establish yourself as a thought leader in AI-powered security.