0.2 How AI Changes Cybersecurity Work

45 minutes · Module 0 · Free

Introduction

Before diving into specific AI techniques in Module 1, you need to understand the fundamental shift that AI creates in cybersecurity work. This is not about AI replacing analysts or automating everything. It is about AI changing the economics of expertise — making experienced professionals faster and enabling smaller teams to operate at a scale that previously required larger headcount.

What AI actually changes

AI in cybersecurity operates along four dimensions. Each dimension changes a different aspect of how security professionals work:

1. Speed of analysis. A security analyst investigating an account compromise manually queries sign-in logs, checks MFA status, reviews recent email activity, examines file access patterns, and cross-references with threat intelligence. Each step requires navigating a portal, writing a query, waiting for results, and interpreting output. With AI assistance, the analyst describes the scenario and the AI generates the investigation queries, structures the analysis, and identifies the anomalies — the same investigation in a fraction of the time. The analyst’s expertise is still required for judgment calls: is this anomalous pattern actually malicious, or is there a benign explanation? AI accelerates the mechanics; the analyst provides the judgment.
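The "identifies the anomalies" step can be sketched in a few lines: flag sign-ins from countries the user has never signed in from before. This is a minimal illustration only, assuming hypothetical record fields ("user", "country", "ip") rather than any real SIEM schema:

```python
# Minimal sketch of a new-country sign-in check. Field names are
# illustrative assumptions, not a real log schema.

def flag_new_country_signins(history, recent):
    """Return recent sign-ins whose country is absent from the user's history."""
    known = {(rec["user"], rec["country"]) for rec in history}
    return [rec for rec in recent if (rec["user"], rec["country"]) not in known]

history = [
    {"user": "j.morrison", "country": "GB", "ip": "203.0.113.10"},
    {"user": "j.morrison", "country": "GB", "ip": "203.0.113.11"},
]
recent = [
    {"user": "j.morrison", "country": "GB", "ip": "203.0.113.12"},
    {"user": "j.morrison", "country": "RO", "ip": "198.51.100.7"},
]
print(flag_new_country_signins(history, recent))
# → [{'user': 'j.morrison', 'country': 'RO', 'ip': '198.51.100.7'}]
```

In practice the AI drafts the equivalent query against your log platform; the sketch shows how little of the work is mechanical lookup and how much of the remaining question (is the new country malicious or a VPN?) is judgment.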

2. Scale of coverage. A detection engineer maintaining 30 detection rules spends most of their time on maintenance — tuning false positives, updating rules when log schemas change, documenting rules, and testing after platform updates. AI assistance lets the same engineer maintain those 30 rules and develop 10 new ones in the same time window. The AI drafts a rule from a threat intelligence report, generates the documentation, identifies likely false positive patterns, and produces the test cases. The engineer reviews, validates, and deploys. Coverage expands without a headcount increase.

3. Quality of documentation. Incident response documentation is universally acknowledged as important and universally neglected under operational pressure. Writing a formal executive summary, a technical timeline, a regulatory notification assessment, and stakeholder communications takes hours that analysts do not have during active incidents. AI transforms this: the analyst provides investigation notes (the data they already collected), and AI generates the structured documentation. A report that took 3 hours to write manually takes 30 minutes with AI assistance — and the quality is more consistent because the AI applies the template structure reliably.
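The notes-to-report step is easy to see in miniature. In the sketch below a fixed string template stands in for the AI call, and the note keys and template fields are illustrative assumptions, not a real IR tool's format:

```python
# Sketch of structured report generation from investigation notes.
# A plain template stands in for the AI; field names are illustrative.

REPORT_TEMPLATE = """INCIDENT REPORT: {incident_id}
Summary: {summary}
Timeline:
{timeline}
Recommended actions:
{actions}"""

def render_report(notes):
    timeline = "\n".join(f"  {t} - {event}" for t, event in notes["timeline"])
    actions = "\n".join(f"  - {a}" for a in notes["actions"])
    return REPORT_TEMPLATE.format(
        incident_id=notes["incident_id"],
        summary=notes["summary"],
        timeline=timeline,
        actions=actions,
    )

notes = {
    "incident_id": "IR-2024-017",
    "summary": "MFA fatigue attack; one push approved from an unfamiliar country.",
    "timeline": [("09:02", "5 MFA pushes sent"), ("09:05", "One push approved")],
    "actions": ["Revoke sessions", "Reset credentials", "Review mailbox rules"],
}
print(render_report(notes))
```

The analyst already holds everything in `notes`; what AI adds over a rigid template is tolerance for messy, unstructured input, while the consistent output structure is what makes the quality reliable.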

4. Accessibility of specialized knowledge. Compliance frameworks, regulatory requirements, and governance standards are specialized knowledge domains. A security analyst who is expert in investigation may have limited knowledge of GDPR Article 33 notification requirements or NIST AI RMF implementation. AI bridges this gap — the analyst describes the situation, and AI applies the relevant framework. This does not replace compliance expertise (you still need someone to validate the output), but it makes specialized knowledge accessible to professionals who would otherwise need to research from scratch.

What AI does NOT change

Understanding the limitations is as important as understanding the capabilities:

AI does not replace judgment. An AI can tell you that a sign-in from a hosting provider IP is anomalous. It cannot tell you whether the anomaly matters in your specific context — whether the user is travelling, using a VPN, or genuinely compromised. That judgment requires understanding of your environment, your users, and your threat landscape. AI provides analysis; humans provide judgment.

AI does not guarantee accuracy. AI models hallucinate — they generate plausible-sounding but incorrect information. A KQL query that looks syntactically correct may reference a table that does not exist, use a function with wrong parameters, or apply logic that does not match the intended detection. Every AI output requires expert review before deployment. This course teaches you to be a critical reviewer of AI output, not a passive consumer.

AI does not eliminate the need for domain expertise. An analyst who does not understand investigation methodology will not produce better investigations with AI — they will produce faster, equally flawed investigations. AI amplifies existing competence. If you understand detection engineering, AI makes you a faster, more productive detection engineer. If you do not understand detection engineering, AI gives you a tool you cannot effectively evaluate.

AI does not reduce the need for operational discipline. Change management, testing procedures, documentation standards, and review processes are more important with AI, not less. When AI makes it easy to generate a detection rule in 5 minutes instead of 5 hours, the temptation is to skip the testing, documentation, and peer review. This course explicitly addresses this risk — every module includes the review and validation steps that maintain quality alongside speed.

The AI-assisted workflow model

Every module in this course follows the same workflow model:

  1. Define the task clearly — what you want AI to produce, with context and constraints
  2. Generate the output — AI produces a draft (query, document, script, analysis)
  3. Review critically — you evaluate the output against your domain expertise
  4. Refine and iterate — correct errors, adjust for your environment, improve quality
  5. Deploy and validate — put the output into production and verify it works

This five-step model applies whether you are generating an investigation query (Module 2), a detection rule (Module 3), an IR report (Module 4), an automation script (Module 5), or a governance framework (Module 7). The task changes; the workflow is constant.
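The five steps can be sketched as a gated pipeline in which nothing reaches deployment without an explicit review decision. This is a toy illustration; the function names and the callback shape are assumptions, not a real API:

```python
# Sketch of the five-step workflow as a review-gated pipeline.
# Names are illustrative; the point is that deploy is unreachable
# until a human review passes.

def run_workflow(task, generate, review, refine, deploy, max_rounds=3):
    draft = generate(task)                   # step 2: AI produces a draft
    for _ in range(max_rounds):
        ok, feedback = review(draft)         # step 3: human evaluates critically
        if ok:
            return deploy(draft)             # step 5: ship and validate
        draft = refine(draft, feedback)      # step 4: correct and iterate
    raise RuntimeError("Draft never passed review; do not deploy")

# Toy usage: the first draft carries a flaw, review rejects it,
# refine fixes it, and only then does deploy run.
result = run_workflow(
    task="detection rule for MFA fatigue",
    generate=lambda t: f"DRAFT({t}) [untested]",
    review=lambda d: (("[untested]" not in d), "remove untested logic"),
    refine=lambda d, fb: d.replace(" [untested]", ""),
    deploy=lambda d: f"DEPLOYED: {d}",
)
print(result)
# → DEPLOYED: DRAFT(detection rule for MFA fatigue)
```

The structure makes the course's point concrete: generation and refinement can be arbitrarily fast, but the review gate is fixed in place, and skipping it is a code change, not an oversight.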

The critical insight: step 3 (Review) is where your value as a cybersecurity professional is concentrated. AI can generate; only you can evaluate whether the generated output is correct, safe, appropriate, and effective for your environment. This course trains you to be an excellent reviewer — fast enough to keep up with AI generation speed, rigorous enough to catch errors that would cause operational harm.

Try it yourself

Before proceeding to the next subsection, try a simple AI-assisted security task. Open your AI assistant and paste this prompt:

"I need to investigate a potential account compromise. The user j.morrison received 5 MFA push notifications they did not initiate, then one was approved. The approval came from an IP address in a country the user has never signed in from. What are my investigation steps?"

Evaluate the response:

  • Are the investigation steps logical and in the right order?
  • Does the AI miss any steps you would include?
  • Does the AI suggest any steps that are wrong or inappropriate?
  • How would you modify the prompt to get a better response?

This exercise demonstrates the five-step model: you defined the task (the prompt), the AI generated the output (investigation steps), and you reviewed it critically. In Module 2, you will learn to do this systematically for every investigation type.

Check your understanding

1. A junior analyst uses AI to generate a KQL detection rule from a threat intelligence report. The rule looks syntactically correct and the analyst deploys it to production without testing. Within 24 hours, the rule generates 200 false positive alerts. What went wrong in the AI-assisted workflow?

  • Correct: The analyst skipped steps 3 (Review) and 5 (Deploy and validate). AI generates plausible output that may be syntactically correct but operationally flawed — the query may match legitimate patterns that the AI could not anticipate because it does not know your environment's baseline. The correct workflow: generate the rule (step 2), review the logic against your environment's known patterns (step 3), refine it and test against historical data to check the false positive rate (steps 4 and 5), and only then deploy. The AI accelerated rule creation from hours to minutes, but the analyst treated speed as a reason to skip validation — the opposite of the correct approach.
  • Incorrect: The AI generated a bad rule — the analyst should use a different AI model
  • Incorrect: The threat intelligence report was inaccurate

2. A security manager asks: "Will AI replace our analysts?" Based on the AI-assisted workflow model, what is the accurate answer?

  • Correct: No — AI changes what analysts spend their time on, not whether you need them. Before AI: analysts spend 70% of their time on mechanics (writing queries, formatting reports, searching documentation) and 30% on judgment (deciding if something is malicious, choosing containment actions, assessing risk). With AI: the mechanics are accelerated, so analysts spend more time on judgment, review, and decision-making — the work that requires human expertise. The team does not shrink; it becomes more capable. Each analyst handles more investigations, maintains more detection rules, and produces better documentation in the same time. The risk is not job replacement — it is the gap between teams that adopt AI effectively and teams that do not.
  • Incorrect: Yes — AI will automate most analyst tasks within 2-3 years
  • Incorrect: No — AI is not reliable enough for security work

You're reading the free modules of this course

The full course continues with advanced topics, production detection rules, worked investigation scenarios, and deployable artifacts. Premium subscribers get access to all courses.
