OD1.8 Operational Timing — Why Attacks Happen When They Do

6-8 hours · Module 1 · Free
What you already know

You've responded to incidents at inconvenient times. You've seen ransomware deployed on Friday evenings and suspicious activity during shift handovers. This sub explains why — the attacker's timing is a deliberate operational decision, not coincidence, and reading the timing is a diagnostic that tells you the objective, the sophistication, and the response urgency.

Operational Objective

When a ransomware operator deploys encryption at 22:00 on a Friday, that's not because they happened to be working late. Every hour of uncontested encryption is more data encrypted, more leverage for the ransom demand. Friday deployment maximises the uncontested window. But Friday ransomware is the obvious pattern. The sophisticated timing decisions — business-hours blending, event-aligned collection, shift-boundary exploitation — reveal deeper operational thinking.

Learning Objectives

By the end of this sub you will be able to:

  • Classify the attacker's timing strategy (maximum-impact window, business-hours blending, response-gap exploitation, or event-aligned) from the timestamps in your investigation telemetry. LockBit affiliates consistently cluster deployment around Friday evenings; Volt Typhoon operated strictly during target business hours for years. This matters because the timing strategy is a first-hour diagnostic that tells you the attacker's objective and sophistication before you've analyzed any technique-level evidence.
  • Map your organisation's high-risk timing windows and pre-position containment capability for each. This matters because response capability is not uniform across the week — the gap between coverage and capability during off-hours is the window attackers exploit.
  • Detect timing anomalies in high-value account authentication using AI-assisted pattern analysis over 30-day baselines. This matters because espionage operators who time their access to coincide with the target's meeting schedule produce authentication events that look normal individually but form a suspicious cadence when analyzed over weeks.

Maximum-impact window: Friday evening, holiday weekends, quarter-end.
Objective: financial/destructive. Maximise uncontested time.

Business-hours blending: 10:30 AM Wednesday, during normal admin activity.
Objective: intelligence/access. Blend with legitimate traffic.

Response-gap exploitation: SOC shift handover, IR lead at conference.
Indicates deep reconnaissance. Knows your team structure and availability.

Event-aligned: during Patch Tuesday, board meeting prep, change freeze.
Highest capability. Times activity to org-specific noise.

Figure OD1.8 — Four timing strategies. Each reveals the attacker's objective, sophistication, and knowledge of your environment.


Timing is not accidental

Noise is a choice (OD1.4). Timing is the same — a deliberate operational decision based on the attacker's objectives and knowledge of your response capability.

The ransomware affiliate deploys at 22:00 Friday because the encryption runs 36-48 hours before Monday-morning staffing returns. The espionage operator accesses the CFO's mailbox at 10:30 AM Wednesday because that's when the CFO is in meetings and the assistant is actively managing the inbox — the attacker's access blends with expected behavior.

Both are rational timing decisions. Both reveal the attacker's operational logic.

The maximum-impact window

Financial and destructive operations choose timing that maximises the gap between execution and effective response.

Friday evening is the most common. Documented campaigns show statistically significant clustering around Friday evenings and Saturday mornings. The Kaseya VSA ransomware attack on July 2, 2021 (Friday before the US Independence Day long weekend) exploited precisely this window — the attackers timed deployment for the start of a holiday weekend when MSP staffing was at its lowest. The encryption ran across hundreds of MSP customers' environments for nearly three days before effective response began.

The logic is consistent across campaigns: the attacker has been inside the environment for days. Discovery, credential access, and lateral movement happened earlier in the week — often during business hours to blend with legitimate traffic. The deployment trigger is held for Friday evening because every hour of uncontested encryption is more data encrypted and more leverage for the ransom demand.

Here's what this looks like in your telemetry. You run a retrospective query on Monday morning after discovering the ransomware:

// Timeline reconstruction: what happened before Friday night deployment?
let EncryptionStart = datetime(2026-04-24T22:15:00Z); // Friday 22:15
DeviceProcessEvents
| where TimeGenerated between ((EncryptionStart - 5d) .. EncryptionStart)
| where AccountName == "compromised_admin"
| summarize EventCount = count() by bin(TimeGenerated, 1h)
| render timechart

The chart reveals the campaign's operational rhythm: scattered events during business hours Monday–Thursday (discovery, credential access, lateral movement blending with legitimate admin work), silence Thursday evening as the attacker waited, then a burst of activity starting Friday 21:30 (backup deletion, shadow copy removal, ransomware staging) followed by mass encryption at 22:15.
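
To isolate the pre-deployment burst, pivot the same table on known backup-destruction commands. A minimal sketch reusing EncryptionStart from the query above; the command list is illustrative, not exhaustive:

// Hunt the final two hours before encryption for backup destruction
let EncryptionStart = datetime(2026-04-24T22:15:00Z); // Friday 22:15
DeviceProcessEvents
| where TimeGenerated between ((EncryptionStart - 2h) .. EncryptionStart)
| where ProcessCommandLine has_any ("vssadmin delete shadows",
    "wmic shadowcopy delete", "wbadmin delete catalog",
    "bcdedit /set {default} recoveryenabled no")
| project TimeGenerated, DeviceName, AccountName, ProcessCommandLine
| order by TimeGenerated asc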

Holiday weekends extend the window dramatically. The Christmas-to-New-Year period is historically the highest-risk window. The PYSA ransomware group's documented preference for holiday timing exploited exactly this — security teams at minimum staffing, change freezes preventing rapid infrastructure modifications, key decision-makers unreachable.

Quarter-end and financial close create different leverage. A ransomware attack during financial close doesn't just disrupt operations — it threatens regulatory filings, audit deadlines, and financial reporting obligations. The cost of downtime during financial close is disproportionately higher, which increases ransom pressure. An attacker who knows your fiscal year-end (public information for publicly traded companies, often discoverable for private companies via industry norms) can time deployment to maximise financial pressure.

Before planned IT change freezes is a subtler timing choice. If the attacker knows your organisation enters a change freeze on a specific date (sometimes communicated in job postings, IT staff LinkedIn activity, or public calendar references), deploying ransomware just before the freeze creates a nightmare: the IT team can't make the infrastructure changes needed for recovery because they're in a freeze, but they're in crisis and need to break it, which requires management approval that adds hours to response.

Defensive translation: map your high-risk timing windows and increase readiness before each one. This isn't just "add SOC analysts." It's operational pre-positioning:

Pre-holiday weekend checklist:
□ Verify backup integrity — test restore of one critical system
□ Confirm IR team on-call — names, phone numbers, response SLA
□ Pre-authorise emergency containment actions so skeleton staff
  can isolate systems without waiting for management approval
□ Run a quick threat hunt for dormant persistence (one hunt sketched below):
  - Scheduled tasks created in the last 30 days that haven't executed
  - Accounts that authenticated once and went dormant
  - Web shells (aspx/php files in web-accessible directories)
□ Verify critical service monitoring is active (backup agents,
  domain controllers, file server shares)
□ Document the weekend escalation path: who to call, in what order,
  for what severity levels
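
The dormant-account hunt is the easiest of those to script. A minimal sketch against Sentinel's SigninLogs; the thresholds are illustrative and should be tuned to your tenant:

// Accounts that authenticated successfully once or twice, then went
// quiet: possible staged access waiting for the deployment window
SigninLogs
| where TimeGenerated > ago(30d)
| where ResultType == "0" // successful sign-ins only
| summarize Signins = count(),
    FirstSeen = min(TimeGenerated), LastSeen = max(TimeGenerated)
    by UserPrincipalName
| where Signins <= 2 and LastSeen < ago(7d)
| order by FirstSeen asc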

The business-hours blending strategy

Espionage operators work during business hours because that's when their activity blends with legitimate work.

Consider what your environment looks like at 10:30 AM on a Wednesday. IT admins are running PowerShell scripts, authenticating to servers, accessing SharePoint sites, querying Active Directory. Users are logging into M365 from various devices and locations. Scheduled tasks are executing. Backup jobs are running. API calls are flowing between applications. The volume of legitimate activity is at its daily peak.

Now consider 3:00 AM on a Sunday. Almost nothing is happening. A PowerShell execution stands out. An authentication from an unfamiliar device stands out. A SharePoint access event stands out. The baseline is quiet, and anomalies are visible.

The sophisticated attacker knows this and operates during the noisy window. Here's what that looks like in practice — this is a real investigation pattern from an espionage-style compromise at Northgate Engineering:

Investigation finding — NE incident #IR-2026-047:

The compromised account (exec assistant with delegated CFO mailbox access)
showed the following authentication pattern over 60 days:

Normal pattern (the user):
  Mon-Fri, 08:15-17:45, from 10.0.0.67 (office workstation)
  Occasional evenings from 203.0.113.x (known home ISP address)
  Device: DESKTOP-NGE067 (Entra-joined, compliant)

Anomalous pattern (the attacker):
  Tue + Thu only, 09:15-10:20, from 185.220.101.x (residential proxy)
  Device: unregistered, non-compliant
  Actions: MailItemsAccessed on CFO mailbox (inbox + "Board Materials" folder)
  Duration: consistently 45-65 minutes per session

  The attacker accessed the CFO mailbox ONLY during the Tue/Thu 09:00-10:30
  window — the same window when the CFO's recurring leadership meeting runs
  and the assistant legitimately accesses the mailbox for meeting prep.

Each individual access event — an executive assistant accessing the CFO's mailbox on a Tuesday morning — is completely unremarkable. The investigation only identified the compromise because a separate unrelated alert led to a deep audit of the account's OAuth grants, which revealed a consent grant for an unrecognised application with Mail.Read permissions created 4 months earlier.
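
That audit step can be scripted rather than left to a lucky alert. A minimal sketch, assuming Entra ID audit logs flow into Sentinel's AuditLogs table; extend the lookback to match your retention:

// List application consent grants; review any app you don't recognise
AuditLogs
| where TimeGenerated > ago(180d)
| where OperationName == "Consent to application"
| extend Actor = tostring(InitiatedBy.user.userPrincipalName),
    AppName = tostring(TargetResources[0].displayName)
| project TimeGenerated, Actor, AppName, Result
| order by TimeGenerated asc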

Event-aligned timing goes further. Running discovery commands during Patch Tuesday — when IT admins run similar queries to verify updates — makes the attacker's activity indistinguishable from legitimate work. Accessing executive mailboxes during a known board meeting cycle matches expected assistant behavior.

You can detect this with a KQL query that builds each user's device baseline from an earlier window and flags recent business-hours sessions from devices that never appear in that baseline:

// Detect business-hours access from anomalous devices
let TargetUsers = dynamic(["cfo-assistant@company.com",
    "ceo-assistant@company.com", "finance-lead@company.com"]);
// Baseline window ends 30 days ago so a recently introduced attacker
// device can't mask itself by appearing in its own baseline
let NormalDevices = SigninLogs
| where TimeGenerated between (ago(60d) .. ago(30d))
| where UserPrincipalName in (TargetUsers)
| where ResultType == "0"
| summarize DeviceList = make_set(tostring(DeviceDetail)) by UserPrincipalName;
SigninLogs
| where TimeGenerated > ago(30d)
| where UserPrincipalName in (TargetUsers)
| where ResultType == "0"
| join kind=leftouter NormalDevices on UserPrincipalName
| where isnull(DeviceList)
    or not(set_has_element(DeviceList, tostring(DeviceDetail)))
| project TimeGenerated, UserPrincipalName, IPAddress,
    DeviceDetail, AppDisplayName, Location
| order by UserPrincipalName, TimeGenerated asc

If this returns sessions during business hours from devices that never appeared in the user's baseline window, investigate. The timing will look normal. The device won't.

Defensive translation: business-hours detection needs to focus on what and who, not just when. Anomaly detection during business hours must be contextual: is this user normally doing this action, on this system, from this device? The timing is deliberately chosen to look normal — the anomaly is in the device, the IP, or the access pattern, not in the timestamp.
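
The same telemetry also supports cadence detection, the pattern from the Northgate finding. A sketch under the same assumptions as the device query above: a weekday-and-hour slot hit four or more times in 30 days recurs almost every week, which is the collection-schedule signature.

// Surface a regular weekly cadence in successful sign-ins
let TargetUsers = dynamic(["cfo-assistant@company.com",
    "ceo-assistant@company.com", "finance-lead@company.com"]);
SigninLogs
| where TimeGenerated > ago(30d)
| where UserPrincipalName in (TargetUsers)
| where ResultType == "0"
| extend Weekday = dayofweek(TimeGenerated), Hour = hourofday(TimeGenerated)
| summarize Sessions = count(), IPs = make_set(IPAddress)
    by UserPrincipalName, Weekday, Hour
| where Sessions >= 4 // hit nearly every week in a 30-day window
| order by UserPrincipalName asc, Weekday asc, Hour asc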

Response-gap exploitation

Some attackers time operations to exploit known gaps in your incident response capability. This requires intelligence about your specific team structure — which means the attacker has done deeper reconnaissance than average.

SOC shift handovers create brief attention gaps. The transition period between shifts — typically 15-30 minutes — is when outgoing analysts are wrapping up and incoming analysts are getting oriented. Alerts that fire during the handover have a higher chance of being deprioritised or missed. An attacker who knows your shift schedule can time their noisiest actions to coincide with the handover.

Many organisations publicly state their SOC operates "24/7" — which implies shifts. Some even publish shift times in job postings ("rotating 12-hour shifts, 07:00-19:00/19:00-07:00"). That's enough for the attacker to calculate the handover windows.

IR team availability gaps are exploitable if the attacker monitors your team's public presence. Conference agendas list speaker names. LinkedIn shows "attending RSA Conference." Out-of-office auto-replies (which some organisations configure to respond to external senders) confirm absences. An attacker who monitors these signals can time their critical operations to coincide with reduced response capability.

Managed SOC provider scheduling follows predictable patterns. Most managed SOC providers run shifts with variable staffing. The overnight shift (02:00-06:00 local time) typically has fewer analysts and less senior coverage. The attacker doesn't need to know the exact schedule — they make a reasonable assumption about when coverage is weakest.

Here's a practical example of what response-gap exploitation looks like in an investigation timeline:

Investigation finding — NE incident #IR-2026-062:

Timeline (all UTC):
  Fri 16:45 — IR lead posts "OOO until Monday" on Teams
  Fri 17:30 — SOC shift handover (day→evening shift)
  Fri 17:42 — First lateral movement (PsExec to file server)
              Alert fires: "Suspicious PsExec usage" — Medium severity
              Evening shift analyst triages at 18:15, notes:
              "Admin account, PsExec is authorised tool, likely maintenance"
              Disposition: benign — admin maintenance
  Fri 18:30 — Lateral movement to backup server (WinRM)
  Fri 19:15 — Shadow copy deletion on backup server
  Fri 19:20 — Backup agent service stopped
  Fri 22:00 — Ransomware deployment via GPO

  The attacker began lateral movement 12 minutes after the shift
  handover. The PsExec alert was triaged by a less-experienced
  evening shift analyst who made a defensible-but-wrong disposition.
  The IR lead was offline. The backup destruction went undetected
  because the specific detection rule wasn't deployed.

The attacker may not have known the exact shift handover time. But they knew Friday evening = reduced capability. They knew the IR lead's absence (Teams status is visible to anyone in the tenant, including compromised accounts). They timed the noisiest phase — lateral movement — to coincide with the weakest coverage.

Defensive translation: ensure shift handover procedures include explicit review of any alerts that fired in the preceding 30 minutes. Pre-authorise the evening/overnight team to execute containment actions (endpoint isolation, account disable) without waiting for senior approval. If the IR lead is absent, designate a named deputy with the same authority — and don't announce absences in channels visible to compromised accounts.
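
The handover review is a one-query habit. A minimal sketch, assuming alerts land in Sentinel's SecurityAlert table; the handover timestamp is illustrative:

// Alerts that fired in the 30 minutes before shift handover
let HandoverTime = datetime(2026-04-24T17:30:00Z); // assumed handover time
SecurityAlert
| where TimeGenerated between ((HandoverTime - 30m) .. HandoverTime)
| project TimeGenerated, AlertName, AlertSeverity, Status, CompromisedEntity
| order by TimeGenerated asc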

Timing as investigation diagnostic

Within the first hour of analysis, timing tells you:

Off-hours + fast pace = Financial or destructive objective. Contain immediately.

Business-hours + slow pace = Intelligence or access objective. Operation has been running a while. Covert scoping before containment.

Activity aligned with specific organisational events = Deep reconnaissance, high capability. Assume the attacker has anticipated your response.

Timing inconsistent with your timezone = The attacker is in a different timezone and may not have researched your business hours. Lower capability indicator.
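
The first two diagnostics can be roughed out in one query. A sketch reusing the compromised-admin example from earlier; the account name, lookback, and hour boundaries are illustrative:

// Bucket a suspect account's activity into business hours vs off-hours;
// event count and time span per bucket approximate objective and pace
let SuspectAccount = "compromised_admin";
DeviceProcessEvents
| where TimeGenerated > ago(7d)
| where AccountName == SuspectAccount
| extend BusinessHours = hourofday(TimeGenerated) between (8 .. 18)
    and dayofweek(TimeGenerated) between (1d .. 5d)
| summarize Events = count(),
    FirstSeen = min(TimeGenerated), LastSeen = max(TimeGenerated)
    by BusinessHours

A large off-hours bucket with a tight FirstSeen-to-LastSeen span reads as financial or destructive; a business-hours bucket spread across days reads as intelligence or access.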

Hands-on Exercise — Timing Analysis

Objective: Map your high-risk timing windows, detect timing anomalies in high-value accounts, and review historical incidents for timing-related response delays.

Prerequisites: Access to your SIEM with 30 days of sign-in logs. Access to your incident history.

STEP 1 — Map your high-risk windows (15 minutes)
   List your organisation's five highest-risk timing windows:
   a. Holiday weekends (list the specific dates for the next 6 months)
   b. Quarter-end / financial close dates
   c. SOC shift handover times
   d. Known periods of reduced IR staffing
   e. Planned change freeze windows

   For each, document:
   - What reduces your response capability during this window?
   - What pre-positioning action could you take before it?

STEP 2 — AI-assisted timing anomaly detection (20 minutes)
   Export 30 days of sign-in logs for 3 high-value accounts
   (one executive, one IT admin, one finance lead).
// Sentinel KQL to export per-user sign-in data:
SigninLogs
| where TimeGenerated > ago(30d)
| where UserPrincipalName in
    ("cfo@company.com", "itadmin@company.com", "finlead@company.com")
| project TimeGenerated, UserPrincipalName, IPAddress,
    DeviceDetail, AppDisplayName, ResultType
| order by UserPrincipalName, TimeGenerated asc
   Feed each account's logs into Claude with this prompt:

   "Analyse this user's authentication patterns over 30 days:
    [paste the exported data]

    1. What are their normal working hours?
    2. Any authentications outside their normal pattern?
    3. Do off-pattern events come from different devices/locations?
    4. Is there a regular cadence to off-pattern events that might
       indicate a collection schedule?
    5. Which events would you flag as suspicious and why?"

STEP 3 — Historical timing review (10 minutes)
   Review your last 3 security incidents. For each:
   - When did the attack begin? (day of week, time)
   - When was it detected?
   - When was it contained?
   - Was detection or response delayed by timing factors?
     (weekend, holiday, shift handover, key personnel unavailable)

Success criteria: You've mapped 5 high-risk windows with pre-positioning actions, run timing anomaly analysis on 3 accounts, and identified timing-related response delays in historical incidents.

Challenge: For the executive account you analyzed, check whether their authentication timing aligns with their calendar pattern. If the account shows regular Tuesday/Thursday morning access from an unfamiliar device coinciding with the executive's recurring meetings, that's the espionage timing pattern from this sub — the attacker is reading the calendar and timing collection to blend with expected behavior.


Next
OD1.9 — Team Structures and Attacker Roles. IABs, RaaS operators, state-sponsored teams. How team structure affects campaign patterns — single-operator vs multi-role campaigns produce different telemetry.
Checkpoint — before moving on

You should be able to do the following without referring back to this sub. If you can't, the sections to re-read are noted.

1. Classify an attacker's timing strategy from investigation timestamps and explain what it reveals about their objective and sophistication. (§ Timing as investigation diagnostic)
2. Map your organisation's five highest-risk timing windows and describe the pre-positioning action for each. (§ Hands-on Exercise Step 1)
3. Explain why business-hours espionage activity is harder to detect than off-hours ransomware deployment and what detection approach is required. (§ The business-hours blending strategy)

You're reading the free modules of offensive-security-for-defenders

The full course continues with advanced topics, production detection rules, worked investigation scenarios, and deployable artifacts.
