
SnykSec for Snyk

Posted on • Originally published at snyk.io

Secure at Inception: Introducing New Tools for Securing AI-Native Development

At Snyk, we believe you should never have to choose between speed and security. As the age of AI transforms software development, our goal is to extend our developer-first security approach to this new era, providing the essential tools your teams need to build with confidence. 

Today at Black Hat, we are delivering on that vision with three tangible innovations that offer a comprehensive solution to secure the entire code lifecycle with AI.

This solution delivers dual value. First, for securing AI-driven development, it empowers developers to make "Secure at Inception" a reality by embedding Snyk’s security testing directly into agentic workflows and tools such as Cursor, Windsurf, and GitHub Copilot. Second, for securing the AI-native software you build, it provides comprehensive visibility and governance through our AI-BOM and new MCP-scanning capabilities.

We are opening up these three capabilities for anyone to use, free of charge, and we invite the community to join us in building the future of secure AI development together.

The AI security challenge: A new attack surface emerges

The statistics paint a stark picture: 90% of enterprise software engineers are projected to use AI code assistants by 2028, and nearly half of all AI-generated code currently contains security vulnerabilities. The result is an entirely new attack surface, exploitable at unprecedented speed.

Traditional security approaches, designed for developers who write code themselves, are failing to keep pace with the sheer volume of code generated by agentic workflows, AI-centric IDEs, and background agents.

"We're not just seeing more code being written faster; we're seeing fundamentally different types of applications being built," said Manoj Nair, Chief Innovation Officer at Snyk. "AI-native software, autonomous agents, and agentic workflows require security paradigms that simply didn't exist two years ago. The question isn't whether these new attack vectors will be exploited, it's when."

The challenge extends beyond volume. AI-native applications introduce novel security risks such as prompt injection, model poisoning, and what our research teams have identified as "MCP rug pulls": attacks that exploit the Model Context Protocol (MCP) infrastructure powering many AI integrations.

Agentic security with Snyk’s MCP Server

Snyk’s new Model Context Protocol (MCP) Server transforms how AI development tools integrate security context and aligns with modern agentic development workflows.

Rather than bolting security onto AI workflows as an afterthought, Snyk’s MCP Server embeds rich security intelligence directly into the development process.

“By standardizing how AI models access and incorporate security context, we're making 'secure at inception’ a reality for AI-native development," explained Ezra Tanzer, Director of Product responsible for Security Extensibility for AI.

Snyk’s MCP Server unlocks several key capabilities for software developers and security engineering teams alike:

  • Real-time security context injection into AI coding assistants and development workflows. Did Cursor or GitHub Copilot generate code that introduces a security vulnerability? Snyk’s MCP Server exposes Snyk’s testing engines as tools these agents can invoke to scan the code they just generated. If Snyk finds the code insecure, the model can generate a fix and repeat the process until the code passes.
  • Automated vulnerability detection specifically tuned for AI-generated code patterns. Agentic workflows such as the Claude Code command-line application may autonomously pull dependencies from registries, but what stops them from installing vulnerable or malicious packages? Snyk’s MCP Server can be invoked via tool calling to scan the package an agent wants to install and report vulnerabilities, license issues, and other security risks before installation.
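The generate-scan-regenerate loop described above can be sketched in a few lines. This is a conceptual illustration, not Snyk’s implementation: `generate` and `scan` are hypothetical callables standing in for the coding agent and a Snyk scan (for example, one triggered through the MCP Server’s tools).

```python
def secure_generate(generate, scan, max_attempts=5):
    """Ask an agent for code, scan it, and retry until the scan is clean.

    `generate(feedback)` returns candidate code; `scan(code)` returns a
    list of findings (empty when the code is considered secure). Both are
    hypothetical stand-ins for the coding agent and a security scanner.
    """
    feedback = None
    for _ in range(max_attempts):
        code = generate(feedback)
        findings = scan(code)
        if not findings:
            return code  # clean scan: accept the generated code
        feedback = findings  # feed the findings back to the agent
    raise RuntimeError("no secure candidate within the attempt budget")
```

In a real agentic setup, the loop is driven by the agent itself: the MCP client surfaces Snyk’s scan tools, and the model decides to re-invoke them after each fix.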

MCP Server usage reporting within Snyk’s user interface lets development and security leaders monitor the agentic security testing their teams perform through Snyk’s MCP Server.

(Video: Secure at Inception: Introducing New Tools for Securing AI-Native Development)

Tanzer further distilled the benefits of integrating security into agentic workflows and added, “'Shift left' was a massively transformative approach to security that moved the burden of security to earlier, less costly stages of the SDLC. The opportunities created by the adoption of AI in the development process, in conjunction with Snyk’s MCP Server, represent an even more significant paradigm shift where repetitive remediation tasks can be delegated to AI models, minimizing and even eliminating developer involvement in most scenarios.”

Getting started with Snyk’s MCP Server is as easy as running a CLI command. Add the following to your MCP Server configuration:

{
  "mcpServers": {
    "Snyk": {
      "command": "snyk",
      "args": ["mcp", "-t", "stdio"],
      "env": {}
    }
  }
}

Snyk’s MCP Server is available in Early Access for all Snyk customers and is free to use for the duration of the Early Access period.

Find more quickstart guides for MCP in the Snyk docs, including one-click installs for Cursor, Windsurf, Qodo, and other agentic tools.

Achieve visibility and governance with the Snyk AI-BOM

The speed and ease of integrating AI components have created a significant blind spot for security and governance teams known as "shadow AI". When developers can freely pull in various MCP servers, models, libraries, and AI services, organizations lose track of their AI "ingredients," making it impossible to manage risk or ensure compliance. Without a comprehensive inventory, you can't secure what you can't see.

"Visibility is the critical first step to effective AI governance," said Rudy Lai, Director of Technology Incubation at Snyk. "The Snyk AI-BOM provides that foundational 'list of ingredients' for your AI applications. It’s about moving from the unknown risks of 'shadow AI' to a managed, compliant program where you can innovate with confidence."

Key capabilities that the Snyk AI-BOM unlocks for security leaders and platform engineering teams:

  • Comprehensive AI asset inventory. The Snyk AI-BOM discovers, catalogs, and visualizes all the AI components used in your applications, including agents, tools, datasets, models, and MCP servers. Now available via CLI, API, and MCP, Snyk’s AI-BOM gives you a single source of truth for your entire AI stack, identifying everything from a new open-source model your data science team is experimenting with to the specific AI services being used in production.

  • Unified risk and compliance management. With a complete AI bill of materials, you can govern your AI supply chain. The AI-BOM allows you to track component licenses, identify dependencies with known risks, and enforce security policies across your organization. This provides assurance to leadership and regulators that you have a fully governed program in place for AI development.
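To make the "list of ingredients" idea concrete, here is a minimal sketch of consuming a bill of materials. The JSON below is illustrative only, loosely following the CycloneDX shape common to BOM formats; the exact fields and component types Snyk’s AI-BOM emits may differ, so consult the product documentation.

```python
import json

# Illustrative AI-BOM snippet in a CycloneDX-style shape; the exact
# fields emitted by Snyk's AI-BOM may differ (see the product docs).
AIBOM_JSON = """
{
  "components": [
    {"type": "machine-learning-model", "name": "llama-3-8b"},
    {"type": "library", "name": "langchain"},
    {"type": "application", "name": "mcp-server"}
  ]
}
"""

def inventory_by_type(bom_text):
    """Group AI 'ingredients' by component type for a quick inventory view."""
    bom = json.loads(bom_text)
    inventory = {}
    for component in bom.get("components", []):
        inventory.setdefault(component["type"], []).append(component["name"])
    return inventory

print(inventory_by_type(AIBOM_JSON))
```

Even this toy grouping shows the governance value: once every model, library, and MCP server is enumerated in one document, policy checks (license allow-lists, banned models) become simple queries over the inventory.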

(Video: Secure at Inception for AI: Introducing New Tools for Governance and Development)

A free experimental preview of Snyk’s AI-BOM is available via CLI, MCP, and API for all Snyk customers.

To learn more, see Snyk’s product documentation.

Defend against novel AI threats with Toxic Flow Analysis (TFA)

The rise of agentic systems has created a new class of threats that traditional security tools were never designed to see. Vulnerabilities are no longer just in the code itself, but in the dynamic, unpredictable interactions between AI models, tools, and data sources. This creates the potential for a "lethal trifecta," where an agent's access to untrusted inputs, sensitive data, and external tools can be combined by attackers to exfiltrate data, as seen in exploits like the GitHub MCP attack.

To meet this challenge, our Invariant Labs research team is introducing Toxic Flow Analysis (TFA). This cutting-edge security framework is the first principled approach to reducing the attack surface of modern AI applications.

Key capabilities that Toxic Flow Analysis unlocks for security teams and researchers:

  • Proactive attack vector detection. TFA models and scores "toxic flows": sequences of tool calls that could lead to a security breach at runtime. This allows it to automatically uncover and warn about complex attack vectors, including lethal trifecta scenarios and indirect prompt injections, before they can be exploited.
  • Holistic system analysis. Rather than analyzing components in isolation, TFA instantiates and analyzes the entire flow graph of an agent system and the tools at its disposal. It models all potential sequences of tool usage, incorporating both static information about the system and dynamic data captured during runtime monitoring, to accurately profile security risks.
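As a toy illustration of the idea behind toxic flows (not Invariant Labs’ actual algorithm), one can tag each tool with the capabilities it grants an agent and enumerate short tool sequences whose combined capabilities cover the lethal trifecta. The tool names and tags here are hypothetical.

```python
from itertools import permutations

# Hypothetical tool catalogue: each tool is tagged with the capability it
# grants an agent. Names and tags are illustrative, not Snyk's model.
TOOLS = {
    "read_issue":   {"untrusted_input"},
    "read_secrets": {"sensitive_data"},
    "post_webhook": {"external_send"},
    "format_code":  set(),
}

TRIFECTA = {"untrusted_input", "sensitive_data", "external_send"}

def toxic_flows(tools, max_len=3):
    """Enumerate tool-call sequences whose combined capabilities cover the
    lethal trifecta (untrusted input + sensitive data + external send)."""
    flows = []
    for n in range(1, max_len + 1):
        for seq in permutations(tools, n):
            capabilities = set().union(*(tools[t] for t in seq))
            if TRIFECTA <= capabilities:
                flows.append(seq)
    return flows
```

With this catalogue, a flow like `("read_issue", "read_secrets", "post_webhook")` is flagged: an agent that reads attacker-controlled issue text, then secrets, then posts externally can be steered into exfiltration, while sequences involving only `format_code` are benign.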

(Video: Secure at Inception for AI: Introducing New Tools for Governance and Development)

We are releasing Toxic Flow Analysis as an experimental preview within our MCP-Scan tool. We invite you to explore this new frontier in AI security.

To learn more, read the research blog from Invariant Labs. 

Building the future of AI security, together

This announcement is more than just a new set of tools; it’s the tangible proof of our vision to empower developers in the age of AI. By providing the capabilities to secure both how you build with AI and what you build with it, we are laying the foundation for a future where you don't have to sacrifice speed for security. We are committed to partnering with the community to solve these complex challenges, and we invite you to join us on this journey.

Visit our website to learn more about how to get started with “Secure at Inception”.

And to learn more about Snyk’s next step in delivering its AI security vision, be sure to sign up for DevSecCon on October 22!
