Oct 18, 2025·Penetration Testing, Career, Mistakes

Top 10 Penetration Testing Mistakes (And How To Avoid Them)

Ten mistakes I've watched junior pentesters make on real engagements — what each one looks like, the operational damage it causes, and the specific fix that makes it stop happening.

By Umair Sabir

I've reviewed a lot of pentest reports — mine, my peers', interviewees', students'. Roughly the same ten mistakes show up. None of them are about not knowing enough exploits. All of them are about process.

If you fix these ten things, your reports get sharper, your time-on-target shrinks, and you stop creating bad work-products that haunt the customer's inbox three weeks later.

[Fig. 1 — scatter plot: time wasted on the engagement (x-axis) vs. damage to the deliverable (y-axis). Points plotted: I skipped recon, II no notes, III out-of-scope, IV over-exploitation, V tool-only mindset, VI no cleanup, VII unvalidated finds, VIII weak reporting, IX same playbook, X no reflection.]
Fig. 1. Where each mistake costs the operator. Time wasted on the engagement, plotted against damage to the deliverable.

The red dots are the ones that get a junior fired. The amber ones cost the engagement. The green ones cost only the tester's growth.

1. Skipping Reconnaissance

Every hour you save by skipping recon, you pay back tenfold in exploitation. Recon turns a "scan and pray" engagement into an "I know exactly which endpoint is interesting" engagement.

Fix: spend at least 10% of total engagement hours on phase 1 OSINT. For a 5-day test that's 4 hours. Build a stack-fingerprint, subdomain list, user-format guess, and breach-cred list before you touch a single packet at the target.
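The stack-fingerprint piece can be done entirely passively, from headers you've already collected during OSINT. A minimal sketch, assuming you have response headers on hand as a dict (the signature table here is my own toy example, not an exhaustive database):

```python
# Toy passive fingerprinter: map response headers already gathered
# during OSINT to best-guess stack components. No packets are sent.
SIGNATURES = {
    "Server": {"nginx": "nginx", "Apache": "Apache httpd", "IIS": "Microsoft IIS"},
    "X-Powered-By": {"PHP": "PHP", "Express": "Node.js/Express", "ASP.NET": "ASP.NET"},
}

def fingerprint(headers: dict) -> list[str]:
    """Return best-guess stack components from a header dict."""
    hits = []
    for header, patterns in SIGNATURES.items():
        value = headers.get(header, "")
        for needle, label in patterns.items():
            if needle.lower() in value.lower():
                hits.append(label)
    return hits
```

Feed it headers from archived responses or third-party scan data and you walk into day one already knowing what's running where.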

2. Not Taking Notes

You'll forget 80% of what you tried in 48 hours. Then the customer asks "did you check X?" and you don't know.

Fix: Obsidian + a per-engagement vault. One markdown file per host, one per finding. Screenshots paste straight into Obsidian on Windows and macOS — use it.

~/engagements/acme-2025-q4/
├── 00-scope.md             the SoW, copy-pasted, marked up
├── 01-recon/
├── 02-targets/10.20.0.50.md
├── 02-targets/10.20.0.51.md
├── 03-findings/
│   ├── F-001-rce-jenkins.md
│   └── F-002-broken-access.md
└── 99-report.md            assembled at the end
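You can scaffold that layout in seconds so there's never an excuse to skip it. A minimal sketch (the directory names mirror the tree above; everything else is just stdlib):

```python
# Scaffold the per-engagement vault layout shown above.
from pathlib import Path

def scaffold(root: str, targets: list[str]) -> None:
    """Create the standard engagement vault under `root`."""
    base = Path(root)
    for d in ("01-recon", "02-targets", "03-findings"):
        (base / d).mkdir(parents=True, exist_ok=True)
    (base / "00-scope.md").touch()       # paste the SoW here first
    (base / "99-report.md").touch()      # assembled at the end
    for ip in targets:                   # one markdown file per host
        (base / "02-targets" / f"{ip}.md").touch()
```

Run it as the very first command of every engagement and your notes always land in the same place.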

3. Ignoring Scope

Out-of-scope testing is the single fastest way to end your career. There is no "but I found something cool" exception.

Fix: print the scope. Tape it to your monitor. Use Burp's Target → Scope feature so the proxy refuses to send out-of-scope traffic by accident. If you find something interesting on an out-of-scope domain, write it down and ask. Don't poke.
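For your own scripts and one-off tooling, you can mirror Burp's scope gate with a pre-flight check. A minimal sketch, with invented scope values standing in for whatever your SoW actually says:

```python
# Pre-flight scope check for custom tooling. The CIDR and domain
# values below are placeholders; copy yours from the signed SoW.
import ipaddress

SCOPE_CIDRS = [ipaddress.ip_network("10.20.0.0/24")]
SCOPE_DOMAINS = {"acme.example"}

def in_scope(target: str) -> bool:
    """True if `target` (IP or hostname) is inside the agreed scope."""
    try:
        addr = ipaddress.ip_address(target)
        return any(addr in net for net in SCOPE_CIDRS)
    except ValueError:
        # Not an IP: treat as hostname, match domain or any subdomain.
        return any(target == d or target.endswith("." + d) for d in SCOPE_DOMAINS)
```

Call `in_scope()` before every request your script sends, and the "by accident" class of scope violations disappears.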

4. Over-Exploitation

Crashing a service to prove RCE works is the move of someone who's never had to apologise to a CIO. You don't need to drop a binary to prove command execution; you need to prove command execution, full stop.

Fix: the minimum PoC is the right PoC. id, whoami, or a unique DNS callback is plenty. Save the destructive PoCs for your home lab.
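A unique callback per finding also makes attribution trivial when you're reviewing logs. A minimal sketch for generating one, assuming `oob.example` stands in for an out-of-band domain you control:

```python
# Generate a unique, per-finding DNS callback hostname so a single
# lookup proves execution without dropping anything to disk.
# "oob.example" is a placeholder for your own OOB domain.
import secrets

def callback_host(finding_id: str, domain: str = "oob.example") -> str:
    """Return a one-off hostname tied to a specific finding."""
    token = secrets.token_hex(4)  # 8 hex chars, fresh per PoC
    return f"{finding_id.lower()}-{token}.{domain}"
```

One `nslookup` of that hostname from the target is a complete, non-destructive proof of command execution, and the finding ID in the label ties the log line back to your report.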

5. Tools-Only Mindset

A scanner is a junior who has read the manual but never thought about anything. Useful, fast, missing context.

Fix: before each scan, ask "what would I do if this tool didn't exist?" If your answer is "I have no idea", you're not ready for the scan. Read the protocol. Then run the tool.

6. Forgetting Cleanup

You added a user. You uploaded a webshell. You created a scheduled task for persistence testing. What happens to those at end-of-engagement?

Fix: every artifact you drop on a target gets a row in a tracking table. End-of-engagement, you walk that table and remove every row. The customer's report includes a cleanup attestation signed by you.
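The tracking table can be as simple as a CSV in the engagement vault. A minimal sketch of the end-of-engagement walk, with column names that are my own convention:

```python
# Walk the artifact tracking table and surface cleanup debt:
# any row without a "removed" timestamp is still on the target.
import csv
import io

def open_rows(table_csv: str) -> list[dict]:
    """Return rows whose 'removed' column is still empty."""
    rows = csv.DictReader(io.StringIO(table_csv))
    return [r for r in rows if not r["removed"].strip()]
```

If `open_rows()` returns anything non-empty, you are not done, and the cleanup attestation doesn't get signed.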

7. Not Validating Findings

Auto-scanners produce noise. If your report says "the host is vulnerable to MS17-010" and the customer's IR team patched it 3 years ago, you've burned credibility. Validate.

Fix: every reported finding has a manually-reproduced screenshot. No exceptions. If you couldn't reproduce by hand, downgrade or drop the finding.

8. Weak Reporting

A great test with a weak report becomes invisible. Customers re-read reports six months later, when the CISO asks why a budget line item exists.

Fix: every finding needs Title / Severity / Affects / Repro / Impact / Remediation. Every section is one paragraph or fewer. If you can't compress it that small, you don't understand the finding yet.
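That six-section structure is easy to enforce in code so a finding literally cannot reach the report half-written. A minimal sketch (the dataclass fields mirror the sections above; the markdown layout is just one possible rendering):

```python
# Enforce the six-section finding structure before anything
# lands in 99-report.md: a Finding cannot exist incomplete.
from dataclasses import dataclass

@dataclass
class Finding:
    title: str
    severity: str
    affects: str
    repro: str
    impact: str
    remediation: str

    def to_markdown(self) -> str:
        """Render one section per line, in template order."""
        return "\n".join(
            f"**{name.capitalize()}:** {value}"
            for name, value in vars(self).items()
        )
```

Because every field is required, a missing Impact or Remediation is a construction error at write-up time, not a gap the customer finds later.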

9. Repeating the Same Playbook

If you find SQLi in every report, two things are true: SQLi is everywhere, and you only know how to find SQLi.

Fix: at the end of every engagement, add one new technique to your repertoire. SSRF chains. Cache deception. JWT key confusion. Workflow attacks on CI/CD. Variety is what separates a 3rd-year tester from a 1st-year tester.

10. No Post-Engagement Reflection

You finished. You shipped the report. Now what?

Fix: 30-minute solo retro. Three columns: what worked, what wasted time, what I want to learn next. Save it. Re-read it before your next engagement. This single habit compounds harder than any tool, course, or cert.

Summary

Most of these I learned the hard way: by making the mistake myself and having a senior tester pull me aside. Pass it along.

Want the methodology this list assumes? Read How I Approach Real-World Pentests. Want the offensive specifics for an internal engagement? Active Directory Attack Paths is the next stop.

For more posts like this, visit the blog or subscribe to the RSS feed.