Staff Security Engineer

Box · Full-time · $110k - $270k* · Warsaw, Poland · May 8, 2025
What is Box?

Box (NYSE:BOX) is the leader in Intelligent Content Management. Our platform enables organizations to fuel collaboration, manage the entire content lifecycle, secure critical content, and transform business workflows with enterprise AI. We help companies thrive in the new AI-first era of business. Founded in 2005, Box simplifies work for leading global organizations, including AstraZeneca, JLL, Morgan Stanley, and Nationwide. Box is headquartered in Redwood City, CA, with offices across the United States, Europe, and Asia.

By joining Box, you will have the unique opportunity to continue driving our platform forward. Content powers how we work. It’s the billions of files and information flowing across teams, departments, and key business processes every single day: contracts, invoices, employee records, financials, product specs, marketing assets, and more. Our mission is to bring intelligence to the world of content management and empower our customers to completely transform workflows across their organizations. With the combination of AI and enterprise content, the opportunity has never been greater to transform how the world works together and at Box you will be on the front lines of this massive shift.

Why Box needs you:

We are seeking a highly skilled and visionary Staff Security Engineer to lead the security strategy and implementation for Generative AI and Agentic AI technologies within Box's platform. You will be instrumental in designing, developing, and operationalizing security controls that address the novel risks introduced by autonomous AI agents and generative models. Additionally, you will drive strategic initiatives to leverage LLMs to enhance our secure development lifecycle. Your work will ensure that Box remains a trusted leader in AI-powered content management by embedding security-by-design principles into all AI features and tooling.

What you'll do:

  • Lead the design and implementation of security architectures specifically tailored for Generative AI and Agentic AI systems, including agentic identity models, least privilege access, runtime guardrails, and audit logging.
  • Develop threat modeling approaches adapted for dynamic, non-deterministic AI agent behaviors, identifying autonomy-related risks such as prompt injection, tool misuse, agent impersonation, and multi-agent system attacks.
  • Build and integrate advanced security tooling and automation to detect, prevent, and respond to AI-specific vulnerabilities across the development lifecycle, including adversarial testing frameworks for AI agents.
  • Spearhead the strategy for integrating LLMs into the secure development lifecycle, including code review automation, vulnerability detection, and security documentation generation.
  • Design and implement AI-powered security tools that can analyze code, identify potential vulnerabilities, and recommend secure coding patterns at scale.
  • Lead proof-of-concept initiatives to demonstrate how generative AI can improve security posture through automated threat modeling, security testing, and developer education.
  • Collaborate closely with product, engineering, and compliance teams to embed secure-by-default configurations and user consent checkpoints for sensitive AI actions involving PII, PHI, or critical business decisions.
  • Drive continuous improvement of AI security posture by researching emerging attack vectors like model poisoning, untrusted code execution, and supply chain risks related to open-source AI frameworks.
  • Mentor and guide other engineers on secure AI development practices and contribute to organizational knowledge sharing around AI risk mitigation strategies.

Who you are:

  • Experienced security engineer with 5+ years in application security, DevSecOps, or security tooling, ideally with exposure to AI/ML security challenges.
  • Deep understanding of AI agent architectures, generative AI models, and associated security risks such as prompt injection, adversarial attacks, and autonomous decision-making vulnerabilities.
  • Proven track record implementing security tools and automation (SAST, DAST, SCA, API security scanning) integrated into CI/CD pipelines at scale.
  • Experience with or strong interest in applying LLMs to security use cases, such as code analysis, vulnerability detection, or security documentation.
  • Demonstrated ability to translate security requirements into practical AI applications that enhance the secure development lifecycle.
  • Skilled in threat modeling methodologies and able to adapt traditional frameworks to dynamic AI systems.
  • Proficient in at least one scripting language (e.g. Python) and familiar with multiple programming languages, cloud-native environments, and container security.
  • Strong communicator capable of articulating complex AI security concepts to both technical and non-technical stakeholders.
  • Passionate about cybersecurity innovation, with active participation in security communities, conferences, CTFs, bug bounty programs, or CVE submissions preferred.
  • Growth mindset with a proactive approach to learning and problem-solving in fast-evolving technology landscapes.
  • Preferred Skills:
    • Experience working with Security Architecture patterns and context-aware access control mechanisms.
    • Background in adversarial machine learning or AI robustness testing.
    • Contributions to open source AI security projects or research publications in AI safety/security.
    • Experience building or working with LLM-powered developer tools or security automation.
    • Knowledge of prompt engineering techniques to optimize LLM outputs for security applications.
    • Understanding of the limitations of current LLM technologies and strategies to mitigate false positives/negatives in security contexts.

Percentage of Time Spent:

  • 40% building the AI Security program
  • 30-40% leading the strategy for building generative AI capabilities
  • 20-30% partnering with Engineering teams

BENEFITS
Check out the overview of the benefits and additional perks offered at Box.

Box lives its values, with community and in-person collaboration being a core part of our culture. Boxers are expected to work from their assigned office a minimum of 2 days per week, with a focus on Tuesdays and Thursdays. Your Recruiter will share more about how we work and company culture during the hiring process.

EQUAL OPPORTUNITY

We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, disability, or any other protected ground of discrimination under applicable human rights legislation.

For details on how we protect your information when you apply, please see our Personnel Privacy Notice.

For more details on how Box Poland protects your information, please see our Supplemental Personnel and Candidate Privacy Notice.
