Please reference you found the job post on jobsfordevelopers.com to help us get more companies to post here.
We are looking for a collaborative and forward‑thinking AI Cybersecurity Engineer to help lead the design and implementation of our Cybersecurity Program. In this role, you will work closely with teams across the company to ensure our use of AI—large language models (LLMs), ML pipelines, commercial AI platforms, and AI‑enabled applications—is secure, responsible, and aligned with our organizational values.
You will also contribute broadly to cloud, application, and platform security initiatives.
You’ll partner with Data Security, Engineering, Architecture, Legal/Compliance, and business stakeholders to ensure our AI adoption is responsible, resilient, and secure by design.
This is an opportunity to define foundational controls for a rapidly evolving domain. We are looking for you to bring curiosity, a security engineering foundation, and the ability to work with diverse stakeholders.
You will be responsible for detecting, analyzing, and neutralizing sophisticated cyber threats while proactively gathering intelligence to predict future attacks. This is a leadership role requiring a balance of deep technical expertise in defensive operations and the ability to communicate risk to senior leadership and stakeholders.
This role requires more than technical proficiency. We are looking for a leader who models GRAIL’s core values, embodies our LEAD leadership attributes, and delivers results with integrity, inclusivity, and strategic insight.
This role is based in Menlo Park, California, and will move to Sunnyvale, California in Fall 2026. It offers a flexible work arrangement, with the ability to work from GRAIL's office or from home. Our current flexible work arrangement policy requires that a minimum of 60%, or 24 hours, of your total work week be on-site. Your specific schedule, determined in collaboration with your manager, will align with team and business needs and could exceed the 60% requirement for the site. At our Menlo Park campus, Tuesdays and Thursdays are the key days where we encourage on-site presence to engage in events and on-site activities.
Agentic Security Development: Build and maintain a secure reasoning layer for GRAIL data strategy, moving security from a concept to a functional necessity within business workflows.
Domain-Specific Model Engineering: Develop and refine healthcare-specific security detection models (e.g., Content Safety Classifiers, Behavioral / Alignment Monitoring Models) that outperform generic models by minimizing domain-specific blind spots.
Privacy-Preserving and Data Leakage Computation: Implement and manage cryptographic Private Information Retrieval (PIR) systems (such as SealPIR, XPIR, or CPIR) to protect access patterns over large-scale patient record datasets. Detects and prevents exposure of sensitive data (PII, secrets, enterprise data).
Integrity & Tamper Detection: Design data-layer protections, including bilinear pairing checks and cryptographic receipts, to ensure any server-side tampering is detected instantly.
Cloud Infrastructure Security: Deploy and maintain Terraform IaC across AWS multi-cloud environments, ensuring VPC isolation and continuous threat exposure monitoring.
Security Observability: Utilize XAI tools like LIME and SHAP to analyze model failure modes, ensuring that security controls do not inadvertently cause HIPAA availability violations or disrupt care coordination.
Key responsibilities include:
Design, build, and support AI/ML solutions and integrations across the enterprise
Evaluate and secure AI platforms, LLMs, Claude, Gemini, ChatGPT and AI-powered development tools (e.g., GitHub, OKTA, PaloAlto) in AWS Bedrock.
Lead development of AI security controls, guardrails, and governance frameworks
Perform threat modeling and risk assessments for AI/ML systems and integrations
Partner with engineering teams to enable secure AI development practices, including prompt engineering, API security, and data protection
Assess and mitigate risks related to LLMs, including prompt injection, model leakage, and data exposure
Contribute to secure architecture patterns for AI-enabled applications and services
Support security reviews, testing, and validation of AI use cases and implementations
Collaborate with cloud, data, and application teams to ensure secure deployment of AI capabilities
Evaluate and onboard AI vendors and tools, ensuring alignment with security, privacy, and compliance requirements
Promote awareness and adoption of secure AI usage practices across the organization
Remain current on emerging AI and security risks, trends, and technologies
Ensure alignment and compliance with industry standards (NIST AI-RMF, ISO 42001, OWASP Top 10 for LLMs) and advanced security architectures (Agentic, MCP).
GRAIL Core Values & Expected Behaviors
Demonstrate GRAIL’s values in every engagement:
Be Courageous
Challenge the status quo, step up to address difficult issues, and support others who do the same.
Solve Problems Together
Collaborate across boundaries, bring in diverse skillsets, and work with rigor, speed, and a data-driven mindset.
Think BIG!
Pursue ambitious goals with focused execution and bring in external perspectives to shape future solutions.
Embrace Change
Navigate ambiguity, anticipate the future, and turn complexity into opportunity.
Bring an Open Mind
Cultivate curiosity, listen actively to diverse voices, and challenge assumptions to unlock innovation.
Strong hands-on experience with AI/ML technologies, LLMs, or AI development tools
3–5+ years of experience in security engineering, application security, or cloud security
Experience performing threat modeling, security architecture design, and secure code review or testing
Experience developing AI solutions within IDEs, utilizing AI code assistants
Experience working with LLM APIs (OpenAI, Anthropic, etc.)
Familiarity with AI frameworks such as LangChain, LlamaIndex, or similar
Understanding of AI/ML lifecycle and prompt engineering
Familiarity with AI security risks such as prompt injection, data leakage, and model misuse
Experience working in cloud environments (AWS, Azure, or GCP)
Familiarity with secure development practices (DevSecOps)
Working knowledge of OWASP Top 10 and application security principles
Strong collaboration and communication skills
Experience with agentic and Model Context Protocol (MCP) architectures.
Expertise in Python, R, Java, or similar programming languages.
Experience in GCP or AWS cloud-native services, architectures, and tools.
Advanced knowledge of security and governance frameworks (NIST AI-RMF, ISO 42001, OWASP Top 10 for LLM).
The expected, full-time, annual base pay scale for this position is $119K-$140K.
Share