What I Wish More People Knew About AI in Cybersecurity

12 September 2025

by Supriti Vijay

A recent conversation with a friend reminded me just how unfamiliar AI still feels to many people working in or around cybersecurity. Not in theory (we’ve all heard the buzzwords) but in practice. In actual workflows. In detection pipelines. In red teaming. Even in CTFs.

The tools exist. The threats are evolving. But the awareness? That’s still catching up.

This post is an effort to bridge that gap by laying out what’s changing, why it matters, and how models like Foundation-Sec-8B-Instruct are starting to make AI in security not just possible, but useful.

The Stakes Are Real - and Immediate

Back in late 2024, security researchers flagged a worrying trend:

“By 2025, malicious use of multimodal AI will be used to craft an entire attack chain.” [1]

That includes everything from scanning social profiles and generating tailored phishing emails to writing malware that dodges detection and even setting up supporting infrastructure - all powered by AI.

This isn’t some speculative future. It’s happening now. Attackers aren’t waiting for the AI ethics debate to resolve itself - they’re using these tools because they work. Meanwhile, many defenders are still unsure if AI belongs in security at all.

The gap is growing. And that’s a problem.

The AI Hesitation in Security Circles

Let’s be honest - security people are trained to be skeptical. We don’t install things blindly, we verify, we question assumptions, we test for edge cases. So when a new tool like AI shows up promising to “revolutionize” everything, hesitation is reasonable.

In fact, recent data shows that skepticism is rising:

  • 69% of security leaders now cite AI privacy concerns (up from 43% just a quarter before) [2]
  • 55% are concerned about regulations and compliance [2]

That’s not surprising. Security has always been about minimizing risk, and AI introduces a lot of unknowns - how it behaves, what data it touches, where models live.

But at some point, skepticism can become resistance. And resistance, in this case, could mean falling behind.

The Threat Landscape Isn’t Waiting

Let’s break down why AI matters. Not in theory, but in real day-to-day security work.

  1. Scale
    By 2025, cybercrime is projected to cost $10.5 trillion annually [3]. Traditional methods can’t keep up with that volume. A human analyst might triage hundreds of alerts daily. AI can sift through millions and surface the 10 that actually matter.
  2. Speed
    Threats now operate at machine speed. Phishing kits and malware variants are generated and deployed in seconds. A user downloading sensitive files at 2 a.m. from an unexpected location? AI can flag it, score the risk, and trigger a session kill instantly (a toy version of that scoring logic is sketched after this list).
  3. Skills Gap
    We don’t have enough trained cybersecurity professionals to meet current demand, let alone what’s coming. AI doesn’t replace humans; it gives them breathing room. It helps teams do more with less.
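
To make the 2 a.m. scenario concrete, here is a toy risk scorer over a single download event. Every field name and threshold here is invented for illustration - a real pipeline would learn per-user baselines from telemetry rather than hard-code them:

```python
from datetime import datetime, timezone

def score_download_event(event: dict) -> float:
    """Toy risk score for a file-download event; fields and thresholds are illustrative."""
    score = 0.0
    hour = datetime.fromtimestamp(event["timestamp"], tz=timezone.utc).hour
    if hour < 6 or hour >= 22:                   # off-hours activity
        score += 0.4
    if event["geo"] not in event["usual_geos"]:  # unexpected location
        score += 0.3
    if event["bytes"] > 500_000_000:             # unusually large transfer
        score += 0.3
    return min(score, 1.0)

# A 2 a.m. UTC download of ~750 MB from a country this user never logs in from.
event = {"timestamp": 1757469600, "geo": "RO", "usual_geos": {"US"}, "bytes": 750_000_000}
risk = score_download_event(event)
if risk >= 0.7:
    print(f"risk={risk:.2f} -> flag for analyst, recommend session kill")
```

The point isn’t the rules themselves (real systems learn these from data); it’s that scoring and flagging millions of such events per day is exactly the kind of volume humans can’t triage alone.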

What AI in Cybersecurity Actually Looks Like

When we talk about AI in cybersecurity, it’s not about handing over control to a black-box model. It’s about integrating tools that can reduce noise, improve context, and help security teams move faster - especially under pressure.

Here are a few examples of what that looks like in practice.

Foundation Models for Security

One example I’ve used myself is Foundation-Sec-8B-Instruct, a model developed by the team I work with at Cisco. It’s an 8-billion-parameter instruction-tuned language model trained specifically for cybersecurity tasks.

It’s designed to work in security settings - not just general-purpose chat:

  • It’s trained on incident reports, threat intel, red-team planning, and security documentation
  • It understands frameworks like MITRE ATT&CK and can help map out threats or simulate attacker behavior
  • It can generate threat models, enrich scan outputs, and assist in triaging incidents

What I find useful is that it’s built for local or restricted environments, so you’re not dependent on cloud APIs for everything - especially important if you’re working in sensitive settings or handling regulated data.

It’s a good place to start experimenting if you want something purpose-built instead of repurposing general LLMs.
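
If you want to try it, here’s a minimal local-inference sketch using Hugging Face transformers. The model ID, prompt, and generation settings below are my assumptions - check the official model card for the published ID, chat template, and hardware requirements before relying on this:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fdtn-ai/Foundation-Sec-8B-Instruct"  # assumed Hugging Face ID; verify on the model card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"  # 8B model: needs a sizable GPU or lots of RAM
)

messages = [{
    "role": "user",
    "content": "Map this behavior to likely MITRE ATT&CK techniques: "
               "a scheduled task launches PowerShell, which downloads and runs a remote script.",
}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=300)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Because everything runs locally, the prompt (and whatever incident data you put in it) never leaves your machine - which is the whole appeal in regulated or air-gapped environments.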

Other Tools Already in Use

There’s also a broader trend of AI being quietly integrated into existing tools:

  • SIEM and EDR platforms are using ML to correlate events and prioritize alerts
  • Behavioral analytics systems establish baselines and flag unusual activity at scale
  • Identity systems apply AI to evaluate risk during authentication, adjusting access dynamically

These aren’t emerging tools - they’re already in the ecosystem. The opportunity now is to start using them more deliberately, with awareness of where they help and where they need oversight.
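
To show the baseline-and-flag idea in miniature, here is a hedged sketch using scikit-learn’s IsolationForest on made-up per-session features. Production behavioral analytics uses far richer features and streaming model updates; this only illustrates the shape of the approach:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Invented per-session features: [login_hour, MB_downloaded, distinct_hosts_touched]
baseline = np.column_stack([
    rng.normal(10, 2, 500),   # daytime logins
    rng.normal(50, 20, 500),  # modest transfer volumes
    rng.normal(3, 1, 500),    # a handful of hosts
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

new_sessions = np.array([
    [11, 60, 3],   # looks like the baseline
    [2, 900, 40],  # 2 a.m., huge transfer, many hosts
])
print(detector.predict(new_sessions))  # 1 = inlier, -1 = flagged for human review
```

Note the output is a flag, not a verdict - the `contamination` knob controls how aggressively the model surfaces sessions, and a human still decides what the anomaly means.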

Getting It Right: Augmentation, Not Autonomy

The goal isn’t to let AI make decisions in isolation. It’s to build systems where AI acts as a fast, intelligent assistant - surfacing patterns, connecting the dots, and reducing manual effort - while humans remain in control.

Effective AI in security should:

  • Accelerate timeline reconstruction during investigations
  • Highlight anomalies for review, not judgment
  • Recommend responses, not enforce them blindly

This hybrid approach respects the stakes of security work while acknowledging the scale and speed we now need to operate at.
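
One way to encode “recommend, don’t enforce” in tooling is a simple approval gate: the model proposes, a human disposes. Everything below (names, fields, the approval flow) is illustrative, not a real product API:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str        # e.g. "kill_session", "quarantine_host"
    rationale: str     # explanation surfaced to the analyst
    confidence: float  # model score: shown to the human, never trusted alone

def execute_if_approved(rec: Recommendation, approved_by: str | None) -> str:
    """Nothing runs without a named human approver."""
    if approved_by is None:
        return f"PENDING: {rec.action} (confidence {rec.confidence:.2f}) awaiting review"
    # A real system would call the EDR/IAM API here; we just record the decision.
    return f"EXECUTED: {rec.action}, approved by {approved_by}"

rec = Recommendation("kill_session", "off-hours bulk download from new geo", 0.91)
print(execute_if_approved(rec, approved_by=None))         # stays pending
print(execute_if_approved(rec, approved_by="analyst_7"))  # human in the loop
```

The structure matters more than the code: the model’s confidence and rationale are inputs to a human decision, and the audit trail records who approved what.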

The Competitive Shift Is Already Happening

53% of security leaders already believe that AI-driven identity and authentication will be the most transformative tech in the next few years - more than passwords, more than quantum-proof encryption [4].

Organizations that adopt AI early and thoughtfully will have a head start in both capability and resilience. Others will spend the next few years catching up.

Where This Is Going

We’re at a pivot point.

AI will reshape security. The only real choice is whether you want to shape how it fits into your workflows - or wait until the market (or your attackers) do it for you.

Start small. Use the models in safe environments. Add guardrails. Keep a human in the loop.

But start. Because the window for proactive adoption is narrowing!

References

  1. https://www.scworld.com/feature/cybersecurity-threats-continue-to-evolve-in-2025-driven-by-ai
  2. https://kpmg.com/us/en/media/news/q2-ai-pulse-2025-agents-move-beyond-experimentation.html
  3. https://www.csbs.org/newsroom/regulatory-burden-top-community-bank-concern-annual-survey
  4. https://fox5sandiego.com/business/press-releases/ein-presswire/828352502/identity-based-attacks-lead-cybersecurity-concerns-as-ai-threats-rise-and-zero-trust-adoption-lags/