AI coding assistants like Claude, GitHub Copilot, and Cursor have transformed how developers work. But with great power comes a new attack surface: executable skills that can turn your trusted AI assistant into a threat actor.
Recent security research has uncovered a concerning pattern. Skills—the plugins and extensions that give AI agents their capabilities—can harbor malicious code that executes with your permissions, accesses your credentials, and spreads across your infrastructure. This isn’t theoretical: researchers have demonstrated skill worms that propagate through SSH configurations, exfiltrate secrets via base64-encoded curl commands, and persist across sessions.
This article explains the threat, provides an actionable checklist for individual developers, and outlines enterprise-grade controls for security teams.
Understanding the Threat: What Makes Skills Dangerous
AI agent skills are essentially executable code with ambient authority. When you install a skill, you’re giving it access to everything your AI assistant can touch: your filesystem, network, SSH keys, API tokens, and more.
The attack patterns identified by security researchers include:
1. Pre-execution side effects. A skill can execute network calls or file operations before the AI model even reasons about the output. Even if the model rejects a suspicious result, the damage is already done.
2. Worm-like propagation. A malicious skill can parse your ~/.ssh/config, extract hostnames, and copy itself to every known host via your existing SSH trust relationships. Each hop leverages the AI agent’s normal operation pattern.
3. Supply chain infiltration. Popular skill repositories can be compromised, injecting malicious code into widely used extensions. A November 2025 report identified 43 agent framework components with embedded vulnerabilities from supply chain attacks.
4. Prompt injection via skills. Skills can inject persistent instructions that manipulate AI behavior across sessions, effectively hijacking your assistant’s decision-making.
Security Checklist for Individual Developers
If you’re using AI coding assistants, these controls will significantly reduce your risk:
| Control | Implementation |
|---|---|
| Vet skills before installing | Review skill source code. Look for obfuscated payloads (base64, eval), network calls (curl, wget, requests), and file access outside the project directory. |
| Limit tool permissions | Use allowed-tools configuration to restrict which tools skills can invoke. Treat Bash access as equivalent to full system access. A sample configuration covering this and project scoping appears under Practical Configuration Examples below. |
| Isolate SSH keys | Use a unique SSH key per host and never reuse keys across servers (per-host config example under Practical Configuration Examples). Don’t load keys into ssh-agent by default; use ssh-add -t 300 for time-limited access. Consider hardware-backed keys (YubiKey, TPM). |
| Enable network monitoring | Install a host-based firewall with notifications. opensnitch (Linux) and LuLu (macOS) alert on every new outbound connection. We also like Little Snitch for macOS; it isn’t free, but it’s worth it. |
| Use project-scoped access | Configure your AI assistant to only access the current project directory. Prevent access to ~/.ssh, ~/.aws, ~/.config, and other sensitive locations. |
| Review before execution | Don’t auto-approve commands. Take a moment to review what the AI is about to execute, especially for network or file operations. |
Practical Configuration Examples
SSH key isolation (add to ~/.bashrc or ~/.zshrc):

```bash
# Don't auto-load SSH keys: stop new shells from reaching a long-lived agent
unset SSH_AUTH_SOCK
# Function to temporarily add a key: start a throwaway agent and load the key
# for five minutes only
ssh-temp() {
  eval "$(ssh-agent -s)" > /dev/null   # fresh agent for this shell session
  ssh-add -t 300 "$1"                  # key expires after 300 seconds
}
```
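Per-host SSH keys (add to ~/.ssh/config): a minimal layout where each host gets its own key and only that key is offered. Host names, addresses, and key paths below are placeholders:

```
# One key per host, and offer only that key
Host prod-web
    HostName 203.0.113.10
    User deploy
    IdentityFile ~/.ssh/id_ed25519_prod_web
    IdentitiesOnly yes

Host build-ci
    HostName 198.51.100.22
    User ci
    IdentityFile ~/.ssh/id_ed25519_build_ci
    IdentitiesOnly yes
```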
Network monitoring with opensnitch (Linux):

```bash
# Install opensnitch
sudo apt install opensnitch
sudo systemctl enable --now opensnitchd
# Every outbound connection now requires approval
```

Network monitoring with LuLu (macOS):

```bash
# Install via Homebrew
brew install --cask lulu
# Or download from objective-see.org/products/lulu.html
```
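Project-scoped tool permissions: a sketch of a project-level policy for Claude Code. The permissions block and rule syntax are assumptions based on .claude/settings.json, so verify against your assistant’s current documentation; other assistants have analogous settings.

```bash
# Hypothetical project-level policy: allow read-only tools and a narrow Bash
# command, deny access to credential directories. Rule syntax may differ by version.
mkdir -p .claude
cat > .claude/settings.json <<'EOF'
{
  "permissions": {
    "allow": [
      "Read",
      "Grep",
      "Bash(npm test:*)"
    ],
    "deny": [
      "Read(~/.ssh/**)",
      "Read(~/.aws/**)",
      "Read(~/.config/**)",
      "Bash(curl:*)",
      "Bash(wget:*)"
    ]
  }
}
EOF
```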
Enterprise Security Controls

For security teams managing AI assistant usage across an organization, the individual controls above are necessary but not sufficient. Here’s a defense-in-depth approach:
Layer 1: Skill Supply Chain Security
Curated skill repositories. Maintain an internal registry of vetted skills. Require security review before any new skill is approved for use.
Static analysis gates. Scan skills for red flags: obfuscated code, network calls, file operations outside expected paths, credential access patterns.
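As a rough first pass (not a substitute for a real static analyzer), a scan along these lines can gate skill installs in CI; the patterns and script name are just a starting point:

```bash
#!/usr/bin/env bash
# Rough first-pass scan of a skill directory for common red flags.
# A hit means "needs human review", not "definitely malicious".
SKILL_DIR="${1:?usage: scan-skill.sh <skill-directory>}"

# Fixed strings covering obfuscation, dynamic execution, network calls, credential paths
PATTERNS=("base64 -d" "base64 --decode" "eval(" "exec(" "curl " "wget " "requests." ".ssh/" ".aws/" "id_rsa")

status=0
for p in "${PATTERNS[@]}"; do
  if grep -RnF -- "$p" "$SKILL_DIR"; then
    echo ">>> flagged pattern: $p" >&2
    status=1
  fi
done
exit "$status"
```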
Version pinning. Don’t auto-update skills. Review changes before upgrading, just as you would for any other dependency.
Provenance tracking. Log which skills are installed on which machines, when, and by whom. This is essential for incident response.
Layer 2: Runtime Isolation
Container isolation. Run AI assistants inside Docker containers or gVisor sandboxes. This limits the blast radius of a compromised skill.
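A minimal sketch of what this can look like with plain Docker; the image name is a placeholder and the flags should be tuned to what the agent actually needs:

```bash
# Run the assistant in a throwaway container scoped to the current project.
# "my-ai-agent-image" is a placeholder; swap --network none for a restricted
# egress network if the agent genuinely needs API access.
docker run --rm -it \
  --network none \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  --read-only --tmpfs /tmp \
  -v "$PWD":/workspace -w /workspace \
  my-ai-agent-image
```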
Network segmentation. AI assistant processes should only reach approved destinations. Use egress filtering to block unexpected outbound connections.
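On Linux hosts, per-user egress rules are one simple way to enforce this. The sketch below assumes the assistant runs as a dedicated "aiagent" account, with a placeholder CIDR standing in for your approved endpoints:

```bash
# Egress allow-list for a hypothetical "aiagent" user the assistant runs as.
# 203.0.113.0/24 stands in for approved API endpoints; also allow DNS if the
# agent resolves names itself.
iptables -A OUTPUT -m owner --uid-owner aiagent -o lo -j ACCEPT
iptables -A OUTPUT -m owner --uid-owner aiagent -p tcp --dport 443 -d 203.0.113.0/24 -j ACCEPT
iptables -A OUTPUT -m owner --uid-owner aiagent -j REJECT
```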
Filesystem restrictions. Use OS-level sandboxing (Bubblewrap on Linux, Seatbelt on macOS) to enforce directory-scoped access.
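For example, a Bubblewrap wrapper (a sketch; the agent command and bind list are assumptions) that exposes only read-only system directories and the current project:

```bash
# Confine an agent command to the current project with Bubblewrap: read-only
# system directories, a writable bind of the project, and nothing from $HOME.
# "your-agent-command" is a placeholder.
bwrap \
  --ro-bind /usr /usr \
  --symlink usr/lib /lib --symlink usr/lib64 /lib64 --symlink usr/bin /bin \
  --ro-bind /etc/resolv.conf /etc/resolv.conf \
  --proc /proc --dev /dev --tmpfs /tmp \
  --bind "$PWD" /workspace --chdir /workspace \
  --unshare-all --share-net \
  --die-with-parent \
  your-agent-command
```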
Secrets isolation. Never store credentials in plaintext on developer machines. Use a secrets manager with explicit checkout/checkin and short TTLs.
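With HashiCorp Vault, for instance, a dynamic database credential can be checked out for a single task and revoked afterwards (the role name is a placeholder):

```bash
# Check out a short-lived database credential instead of keeping one on disk.
# "database/creds/readonly-role" is a placeholder for a configured dynamic-secrets role.
vault read database/creds/readonly-role

# Check it back in early once the task is done; the lease ID comes from the read output
vault lease revoke "$LEASE_ID"
```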
Layer 3: Detection and Response
Behavioral monitoring. Use eBPF-based tools (Falco, Tetragon) to detect anomalous syscalls: unexpected file access, network connections, process spawning.
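A Falco drop-in rule along these lines (a sketch that assumes Falco’s default macros are loaded; tune the process allow-list to your environment) flags unexpected reads of SSH material:

```bash
# Drop-in rule: flag reads of files under .ssh by anything that isn't a known SSH client.
# The rule name and process allow-list are assumptions to adapt.
sudo tee /etc/falco/rules.d/ai-agent.yaml > /dev/null <<'EOF'
- rule: Unexpected read of SSH files
  desc: A process other than common SSH clients read a file under .ssh
  condition: >
    open_read and fd.name contains "/.ssh/" and
    not proc.name in (ssh, scp, sftp, sshd, ssh-agent, git)
  output: "SSH file read by %proc.name (file=%fd.name user=%user.name)"
  priority: WARNING
EOF
```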
Audit logging. Log all AI assistant tool invocations with full context. This enables forensics if something goes wrong.
Kill switch. Maintain the ability to immediately terminate all AI assistant processes across your fleet. Test this capability regularly.
Incident response playbook. Document what to do when a skill is compromised: isolation steps, forensic collection, credential rotation, and communication templates.
Key Takeaways
AI coding assistants are powerful tools, but their skill systems introduce real security risks. The attack patterns—worm-like propagation via SSH, pre-execution side effects, supply chain poisoning—are not hypothetical. Security researchers have demonstrated working exploits.
The good news: these risks are manageable with the right controls. For individual developers, the checklist above takes 30 minutes to implement and dramatically reduces your exposure. For enterprises, layered defenses—supply chain controls, runtime isolation, detection—provide defense in depth.
The worst approach is to ignore the problem. AI assistants are here to stay, and so are the attackers targeting them. Proactive security now prevents painful incidents later.
Need Help Securing Your AI Development Environment?
At BSG, we specialize in DevSecOps and application security—and we can help with AI security too. Whether you need to assess your current exposure, design appropriate controls, or integrate AI assistant security into your existing workflows, get in touch to discuss your situation.