
    AI Security Architect

    Tel Aviv, Israel · Cyber

    Description

    We are seeking an experienced and highly skilled AI Security Architect to join our AI Security team in Israel. This is a hands-on, highly technical role responsible for defining security architecture and implementing robust security controls for our AI/ML systems and their underlying platforms.

    You will serve as the team’s technical mentor and architecture authority, driving secure-by-design patterns across the AI/ML lifecycle (data, training, evaluation, deployment, and production monitoring) and proactively mitigating AI-specific threats such as model integrity risks, data poisoning, adversarial attacks, prompt injection, model extraction, and inference-time abuse. While you won’t manage people, you will lead technically, set standards, and guide engineers day-to-day through architecture, reviews, and delivery.

    Key Responsibilities:

    Architecture & Secure-by-Design Leadership

    • Define and maintain AI security reference architectures for multiple AI deployment patterns, including MCP / Agentic AI and LLM application stacks (RAG, tools/plugins, agents, orchestration).
    • Establish and evolve security requirements, patterns, and guardrails across the AI/ML SDLC (design → build → run), including secure pipelines and platform controls.
    • Own AI security architecture decisions across critical domains: identity, secrets, data protection, network controls, tenancy boundaries, logging/telemetry, and isolation for training/inference.

    Control Design & Implementation (Hands-on)

    • Design and deploy controls to ensure model integrity and governance, including RBAC/ABAC for models, feature stores, data sets, registries, and evaluation artifacts.
    • Build/enable technical mechanisms for provenance, attestation, signing, and approval workflows (where applicable) across datasets, models, prompts, and deployments.
    • Drive implementation of runtime protections for AI services (abuse prevention, rate limiting, input/output validation, prompt-injection mitigations, model endpoint hardening, and monitoring).

    Threat Modeling, Assurance, and Risk Reduction

    • Conduct and lead AI/ML-specific threat modeling (data poisoning, model evasion, extraction, inversion, supply-chain, prompt attacks), translate findings into actionable backlogs, and drive remediation.
    • Define and run security design reviews for AI initiatives; provide clear, pragmatic architecture guidance and document exceptions with risk acceptance paths.
    • Establish AI security testing approaches (adversarial testing, red-teaming enablement, evaluation security, misuse/abuse cases) and integrate into delivery pipelines.

    Tooling, Automation, and Operational Enablement

    • Design and deliver AI security tooling to improve and automate cybersecurity posture (e.g., controls coverage, policy-as-code, detection engineering, vulnerability management integration, incident response playbooks for AI-specific events).
    • Define logging/monitoring standards and detection use-cases for AI platforms and LLM apps (drift signals, anomalous access, suspicious prompt patterns, exfiltration indicators, policy violations).
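To illustrate the kind of detection use-case described above, the sketch below runs a single pass over hypothetical AI-platform audit events and raises two assumed signals: repeated model-artifact downloads (a possible extraction/exfiltration indicator) and repeated flagged prompts (possible injection probing). The `LogEvent` shape, thresholds, and signal names are illustrative, not any specific platform's schema.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class LogEvent:
    client_id: str
    action: str           # e.g. "inference", "model_download" (assumed vocabulary)
    prompt_flagged: bool  # set by an assumed upstream prompt classifier

def detect(events, download_threshold: int = 3, flagged_threshold: int = 2):
    """Illustrative detection pass over AI-platform audit events.

    Returns (client_id, signal) pairs when a client crosses either
    threshold within the batch of events examined.
    """
    downloads = Counter(e.client_id for e in events if e.action == "model_download")
    flagged = Counter(e.client_id for e in events if e.prompt_flagged)
    alerts = []
    for cid, n in downloads.items():
        if n >= download_threshold:
            alerts.append((cid, "excessive_model_downloads"))
    for cid, n in flagged.items():
        if n >= flagged_threshold:
            alerts.append((cid, "repeated_flagged_prompts"))
    return alerts
```

In practice such rules would run as streaming detections over real telemetry with per-window state, but a batch pass like this is enough to show how the use-cases translate into code.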

    Technical Mentorship & Influence (No Line Management)

    • Act as the team’s technical mentor: coach engineers through designs, implementations, and trade-offs; raise engineering quality via reviews, pairing, and knowledge sharing.
    • Lead by influence across Data Science, Engineering, Product, Platform, and Cybersecurity—driving alignment without formal authority.
    • Create internal enablement materials: runbooks, architecture standards, reusable patterns, and reference implementations.

    Requirements

    Required Qualifications

    Experience

    • 6+ years in Information Security, Cloud Security, or Application Security.
    • 2+ years securing AI/ML systems or LLM applications in production (or equivalent depth in architecture and threat modeling for AI-enabled systems).
    • Proven track record designing security architectures and driving adoption across multiple teams.

    Technical Expertise

    • Deep understanding of the ML/AI lifecycle and associated security risks (training/inference threats, data governance, evaluation integrity, model/prompt supply chain).
    • Strong expertise in cloud security (AWS/Azure/GCP) and AI/ML services (e.g., SageMaker, Vertex AI, Azure ML) plus container platforms/orchestration.
    • Strong knowledge of data security (classification, encryption, masking/tokenization, key management, lineage/provenance).
    • Strong knowledge of application security architecture and secure design patterns (API security, authz/authn, secrets, CI/CD, policy-as-code).
    • Deep understanding of AI-specific threats/defenses: adversarial ML, data poisoning, prompt injection, model inversion, model extraction, inference-time attacks.
    • Strong coding ability in Python and/or Go (building security tooling, automation, integrations, prototypes).

    Soft Skills

    • Excellent communication—able to translate complex AI security risks into clear engineering requirements and decision-ready trade-offs.
    • Strong stakeholder management and ability to drive alignment and delivery across diverse teams.
    • Practical, proactive mindset with strong problem-solving in ambiguous, fast-moving AI environments.

    Preferred Qualifications

    • BA/BS degree; an advanced degree (MS/PhD) in Computer Science, Data Science, Cybersecurity, or a related field is a plus.
    • Certifications such as CISSP or CSSLP, and/or relevant cloud/security certifications; AI security-focused training is a plus.
    • Familiarity with AI security frameworks/standards and enterprise governance expectations.


    We at Deloitte believe that diversity and inclusion among our people are critical components of our success, which is why we cultivate an organizational culture that welcomes and embraces diversity in all its forms.

