16 Jun

Artificial intelligence (AI), especially Large Language Models (LLMs) and General Language Models (GLMs), is reshaping both the defensive and offensive cybersecurity landscapes. While AI-based tools enhance detection, orchestration, and threat hunting, they also empower threat actors, particularly ransomware operators, with automation and sophistication. This blog explores how AI is transforming security strategy, why it poses new risks, and which technically robust defenses are now essential.

1. AI as a Defensive Force

a. Automation in SOC Operations

AI-powered agents can ingest and correlate huge volumes of telemetry (endpoint, network, identity) and detect advanced threat indicators. Platforms like Delinea and Trend Micro report that over 60% of organisations now use AI to accelerate detection of IoCs at scale (theregister.com). Automated playbooks allow actions such as isolating endpoints, revoking credentials, or initiating forensic snapshots, dramatically reducing containment latency.
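
To make the playbook idea concrete, here is a minimal sketch of an automated containment decision. The alert fields and action names are illustrative assumptions, not the API of any real SOAR platform; a production playbook would call the platform's own response endpoints.

```python
# Hedged sketch: map an alert's severity and indicators to an ordered
# list of containment actions. Action names are hypothetical stand-ins
# for a real SOAR platform's API calls.
from dataclasses import dataclass, field

@dataclass
class Alert:
    host: str
    user: str
    severity: str            # "low" | "medium" | "high"
    indicators: list = field(default_factory=list)

def contain(alert: Alert) -> list[str]:
    """Return the ordered containment actions for an alert."""
    actions = []
    if alert.severity in ("medium", "high"):
        actions.append(f"isolate_endpoint:{alert.host}")
    if "credential_theft" in alert.indicators:
        actions.append(f"revoke_credentials:{alert.user}")
    if alert.severity == "high":
        actions.append(f"forensic_snapshot:{alert.host}")
    return actions
```

The point is the latency win: the decision logic runs in milliseconds, so containment is no longer gated on a human reading the queue.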

b. Behavioural Anomaly Detection

LLMs and GLMs aid behavioural profiling by learning typical communication patterns, file-access sequences, and authentication events. This allows the system to detect subtle deviations indicative of ransomware or data exfiltration before encryption begins.
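
A toy version of this idea, assuming login hours as the only feature: build a per-user baseline and flag events far outside it. Real systems model many features jointly; this only shows the baseline-and-deviation shape of the approach.

```python
# Illustrative behavioural baseline: flag logins far outside a user's
# usual hours using a simple z-score style threshold.
import statistics

def build_baseline(login_hours: list[int]) -> tuple[float, float]:
    """Mean and sample standard deviation of historical login hours."""
    return statistics.mean(login_hours), statistics.stdev(login_hours)

def is_anomalous(hour: int, baseline: tuple[float, float],
                 z: float = 3.0) -> bool:
    """True if the login hour deviates more than z standard deviations."""
    mean, std = baseline
    return abs(hour - mean) > z * std
```

A 3 a.m. login from an account that always authenticates mid-morning would trip this check before any encryption activity starts.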

c. Dynamic Access and Identity Governance

AI-driven IAM solutions continuously adapt policies based on user context, device posture, and location. Autonomous systems can assess privilege elevation requests in real time, granting or denying just-in-time access, enhancing zero trust and reducing the risk of compromised sessions.
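
A hedged sketch of such a just-in-time decision: combine context signals into a risk score and grant short-lived access only below a threshold. The signal names, weights, and threshold are illustrative assumptions, not any vendor's policy model.

```python
# Illustrative JIT elevation decision. Weights and threshold are
# hypothetical; a real IAM engine would learn or configure these.
RISK_WEIGHTS = {
    "unmanaged_device": 40,
    "unusual_location": 30,
    "off_hours": 15,
    "recent_failed_mfa": 25,
}

def decide_elevation(signals: set[str], threshold: int = 50) -> dict:
    """Score the request context; grant a short-lived session if low risk."""
    score = sum(RISK_WEIGHTS.get(s, 0) for s in signals)
    granted = score < threshold
    return {"granted": granted, "score": score,
            "ttl_minutes": 30 if granted else 0}
```

Note the time-to-live: even a granted elevation expires automatically, which pairs with the session-monitoring controls discussed later.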

2. AI Used by Attackers: From Phishing to Ransomware

a. Highly Convincing Phishing Campaigns

LLMs can generate targeted phishing content that mimics the writing styles of internal personnel or leadership, making malicious messages indistinguishable from genuine organisational communications. The UK National Cyber Security Centre warns that AI-generated scams are becoming harder to detect (theguardian.com).

b. "Agentic" AI Assistants in Attack Chains

Emerging agentic AI systems, self-directed multi-tool agents, can execute reconnaissance, exploit discovery, privilege escalation, lateral movement, and data encryption autonomously (itpro.com, malwarebytes.com, cybersecuritytribe.com). These agents may even negotiate with ransomware victims and manage Bitcoin transactions.

c. LLM-Driven Ransomware Innovation

Academic proofs of concept like RansomAI demonstrate how reinforcement learning can enable ransomware to adapt its encryption behaviour to avoid detection, using packet timing and compression to minimise signature exposure (perception-point.io, arxiv.org, malwarebytes.com). Moreover, a recent study shows LLMs can assist both in identifying zero-day vulnerabilities and in customising ransomware payloads that exploit those gaps, enabling tailored, high-value extortion based on contextual data analysis.

3. Technical Strategies: How Ransomware Uses AI

Custom cryptographic routines: Ransomware like Akira uses hybrid encryption with ChaCha20 and RSA-4096 to lock both legacy and modern storage assets.

EDR evasion tools: Payloads like EDRKillshifter disable monitoring agents while remaining stealthy.

Multi-vector delivery: Attackers use AI to tailor payloads for Windows, Linux, and ESXi hypervisors, increasing cross-platform reach.

Adaptive payload tuning: Reinforcement-trained ransomware changes encryption rates and file targeting based on runtime detection feedback.


4. Defensive Controls and Best Practices

Given these accelerated threats, organisations must evolve from static defenses to AI-aware, layered strategies.

a. Hardened IAM and Access Policies

  • Enforce context-aware authentication with multifactor and device compliance
  • Implement just-in-time privilege elevations using behavior analytics
  • Monitor elevated sessions and automatically revoke after completion

b. EDR/XDR with AI-Empowered Detection

  • Deploy systems that detect cryptographic anomalies (e.g. ChaCha20 usage), abnormal process hierarchies, and certificate misuse
  • Tune detection rules against MITRE ATT&CK TTPs such as credential theft, lateral movement, and persistence
  • Use sandboxing and memory inspection to detect AI-mutated ransomware behaviour
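
One concrete cryptographic-anomaly signal behind the first bullet: ChaCha20 (like any modern cipher) produces near-random output, close to 8 bits of entropy per byte, while ordinary documents sit well below that. A minimal sketch of the entropy check an EDR rule might apply to a process's file writes, with an illustrative threshold:

```python
# Shannon-entropy check on written data. Encrypted output approaches
# 8 bits/byte; plaintext documents are typically far lower. The 7.5
# threshold is an assumption for illustration, not a vendor default.
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte of the input buffer."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_encrypted(data: bytes, threshold: float = 7.5) -> bool:
    return shannon_entropy(data) >= threshold
```

A process rewriting thousands of files whose new contents all score above the threshold is a strong ransomware signal, even when the binary itself evades signature matching.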

c. Network and Segmentation Controls

  • Microsegmentation of critical assets such as identity services, hypervisors, and backup repositories
  • Deep packet inspection at chokepoints with AI-enhanced flow analytics to detect encrypted data tunnels

d. AI-Aware Backup and Integrity Architecture

  • Use immutable backups and robust air-gapped storage with anomaly detection for unauthorized deletion or encryption activity
  • Deploy AI to monitor backup integrity and identify suspect file modifications prior to payment events
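
The integrity-monitoring bullet can be reduced to a simple invariant: digests recorded at backup time must still match at verification time. A minimal sketch, with in-memory file maps standing in for a real backup repository:

```python
# Minimal backup-integrity check: record SHA-256 digests in a manifest
# at backup time, then report files that went missing or changed
# (e.g. were encrypted in place). Inputs are illustrative.
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def snapshot(files: dict[str, bytes]) -> dict[str, str]:
    """Manifest of path -> digest taken at backup time."""
    return {path: digest(blob) for path, blob in files.items()}

def verify(files: dict[str, bytes], manifest: dict[str, str]) -> list[str]:
    """Return paths that are missing or whose contents changed."""
    suspect = [p for p in manifest if p not in files]
    suspect += [p for p, blob in files.items()
                if p in manifest and digest(blob) != manifest[p]]
    return sorted(suspect)
```

Running this continuously against immutable copies is what turns "the backups were encrypted too" from a payment-time discovery into an early alert.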

e. Supply Chain Defense and LLM Input Protection

  • Vet AI-integrated applications and LLMs against prompt injection and supply chain risk (businessinsider.com)
  • Limit model access, apply input sanitization, and monitor LLM outputs to prevent misuse
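
To show where an input gate sits in the pipeline, here is a deliberately naive screening sketch. Pattern matching alone is nowhere near a complete prompt-injection defense (attackers paraphrase trivially); the patterns below are illustrative assumptions, and real deployments layer this with model-side guardrails and output monitoring.

```python
# Naive illustration of LLM input screening: flag user text containing
# common injection phrasings before it reaches the model. The pattern
# list is a toy example, not a recommended blocklist.
import re

INJECTION_PATTERNS = [
    r"ignore (all|previous|prior) instructions",
    r"system prompt",
    r"you are now",
    r"disregard .* rules",
]

def screen_input(text: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_patterns) for a candidate model input."""
    hits = [p for p in INJECTION_PATTERNS
            if re.search(p, text, re.IGNORECASE)]
    return (len(hits) == 0, hits)
```

The matched patterns are worth logging even when the request is blocked: they feed the output-monitoring side of the same control.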

f. Incident Response Augmented by AI

  • Integrate AI in runbooks to suggest adaptive containment strategies based on evolving attack patterns
  • Simulate agentic AI attacks by red teams to validate real response readiness

5. Real-World Alignment with RSAC 2025 and Industry Insights

At the recent RSAC 2025 conference, experts emphasised the dual use of AI in cybersecurity, both as a defence and as a weapon.

Data governance and visibility are now core to countering the data-theft layers of modern ransomware. Fortinet identified explosive growth in AI-accelerated scanning, up to 36,000 scans per second, and a 42 percent surge in credential-based attacks, largely fuelled by stolen data circulating in underground markets.

RaaS operators like RansomHub, LockBit 3.0, Play, and Medusa have rapidly expanded their operations by integrating credential theft, AI-optimized attacks, and automated negotiation models. 

Moreover, Business Insider noted that prompt injection and data exfiltration from LLM-based systems represent a sophisticated new threat vector.

DeepSeek and Gemini vulnerabilities are early examples of model manipulation that can bypass safeguards and assist attackers. 

6. The Road Ahead: Balancing AI With Security

AI and LLMs will continue to elevate both threats and defenses. The security posture of the future depends on three core principles:

  1. Adaptation: Defenders must integrate AI capabilities to match the speed and scale of attacker automation, sequencing detection across multiple layers including identity, host, and network.
  2. Anticipation: Organisations must invest in red-team tactics simulating AI-assisted ransomware, testing automated encryption, data exfiltration, and negotiation flows.
  3. Governance: AI models must be treated as critical infrastructure, subject to access control, prompt auditing, adversarial testing, and behavior monitoring.


In summary: The evolution of ransomware through AI and LLM technology marks a turning point in cybercrime. Emerging threat actors can build adaptable, stealthy, tailor-made malware. But defenders are not powerless. With AI-enhanced detection, behavior analytics, dynamic access controls, and supply chain scrutiny, organisations can counter these intelligent threats. The future of cyber resilience requires pairing technological innovation with disciplined governance: preparing to fight fire with fire, while keeping identity, context, and trust at the center of every defense.