DDoS Protection Basics: Rate Limits, ACLs, and Scrubbing

You don’t control when a DDoS hits, but you can control how prepared you are. The strongest posture blends simple controls you already own—rate limits and ACLs—with burst capacity from scrubbing networks and tight coordination with your upstreams. The goal is to keep good users online while shedding junk, fast.
Think in layers. Volumetric floods saturate links (Gbps, Mpps), transport floods exhaust state on servers and load balancers (SYN, ACK, UDP), and application floods target expensive code paths (search, login, inventory). Each layer has different signals and different brakes. If you try to solve everything at layer 7, you’ll miss cheap wins at the edge; if you only push to transit scrubbing, you may pay more and throttle late.
We’ll walk through practical guardrails you can set before an incident, how to tune rate limits without breaking real users, ACL patterns that stop common floods, where scrubbing centers fit, what to ask from your providers, and fast checks that tell you within minutes whether you’re under attack or just going viral.
Core Attack Types and Impact
Volumetric attacks often exploit amplification over UDP—commonly DNS, NTP, CLDAP, SSDP, or memcached—aiming to saturate the smallest pipe in your path; they’re measured in bits per second or packets per second and usually require filtering upstream of your edge.
Transport-layer floods (SYN, ACK, RST, malformed options) try to exhaust connection tracking, SYN backlogs, or firewall CPU; application-layer floods send valid-looking HTTP or gRPC requests to resource-heavy endpoints, often with rotating IPs, user agents, or TLS JA3 fingerprints to evade single-key throttles.
Know your chokepoints: the last-mile circuit to your data center, the edge firewall state table, the load balancer worker threads, and the application’s slow queries. You defend each with different controls, in this order: block or discard as early as possible, shed load cheaply, keep critical paths fast, and only then scale up capacity.
Rate Limits: How to Set and Monitor
Rate limits are your precision throttle for application- and session-level abuse. The art is choosing the key, the window, and the action. Useful keys include IP, /24 or /48 prefix, user account, API key, cookie, JA3/TLS fingerprint, and path tuple (method + route). Start with soft limits that log and sample block decisions for a week before enforcing.
Use short windows (1–10 seconds) to rein in bursts and longer windows (1–5 minutes) to cap sustained flows. Set different budgets per route: search/autocomplete and login should have tighter limits than static assets. Add concurrency limits (in-flight requests per client) and queue time caps (drop if queued beyond a small threshold). Combine limits with circuit breakers that open when upstream latency or error rates jump.
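As a concrete sketch of the two-window idea, here is a minimal limiter that pairs a short burst window with a longer sustained window per key. The class name, parameters, and defaults are illustrative assumptions, not values from any particular product:

```python
import time
from collections import defaultdict, deque

class DualWindowLimiter:
    """Illustrative sketch: a short window reins in bursts while a
    longer window caps sustained flows, tracked per client key.
    All defaults are placeholders, not tuning recommendations."""

    def __init__(self, burst_limit=20, burst_window=5.0,
                 sustained_limit=120, sustained_window=60.0):
        self.burst_limit = burst_limit
        self.burst_window = burst_window
        self.sustained_limit = sustained_limit
        self.sustained_window = sustained_window
        self.hits = defaultdict(deque)  # key -> request timestamps

    def allow(self, key, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[key]
        # Evict timestamps older than the longest window we track.
        while q and now - q[0] > self.sustained_window:
            q.popleft()
        recent = sum(1 for t in q if now - t <= self.burst_window)
        if recent >= self.burst_limit or len(q) >= self.sustained_limit:
            return False  # over budget in either window
        q.append(now)
        return True
```

In practice you would key this per IP, account, or JA3 as described above, and pair it with separate concurrency and queue-time checks.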
Algorithms: Token Bucket and Leaky Bucket
Token bucket allows bursts up to bucket size while keeping an average rate; leaky bucket smooths but can be too strict for spiky mobile traffic. For web APIs, a token bucket per key with a small burst (for example, 10–50 requests) and a modest refill (about 1–5 rps) balances tolerance and control. For login, pair per-IP limits with per-account limits to avoid collateral damage on shared NATs.
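A per-key token bucket is small enough to sketch in full. The burst and refill values below mirror the illustrative ranges in the text; they are starting points, not recommendations:

```python
import time

class TokenBucket:
    """Token bucket per key: allows bursts up to `burst` tokens while
    refilling at `rate` tokens/second. Defaults follow the illustrative
    ranges above (burst 10-50, refill ~1-5 rps)."""

    def __init__(self, rate=2.0, burst=20):
        self.rate = rate
        self.burst = burst
        self.buckets = {}  # key -> (tokens, last_seen_timestamp)

    def allow(self, key, now=None):
        now = time.monotonic() if now is None else now
        tokens, last = self.buckets.get(key, (float(self.burst), now))
        # Refill proportionally to elapsed time, capped at bucket size.
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens >= 1.0:
            self.buckets[key] = (tokens - 1.0, now)
            return True
        self.buckets[key] = (tokens, now)
        return False
```

For the login case, you would run two instances of this: one keyed per IP and one keyed per account, and require both to pass.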
Practical Limits by Layer
At layer 7, limit by path categories: authentication (very tight), search (tight), checkout (moderate), catalog/listing (looser), static files (bypass). At layer 4, enable SYN cookies and set per-source SYN thresholds; for UDP services, apply per-source packet rate limits and drop malformed or clearly oversized payloads before they reach the app.
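The layer-7 path categories can be expressed as a simple budget table plus a routing function. Every path prefix, route name, and number below is a hypothetical placeholder for your own routes:

```python
# Illustrative per-category budgets (requests/sec and burst). These
# numbers are placeholders, not recommendations for a real service.
ROUTE_BUDGETS = {
    "auth":     {"rps": 1,  "burst": 5},    # very tight
    "search":   {"rps": 5,  "burst": 15},   # tight
    "checkout": {"rps": 10, "burst": 30},   # moderate
    "catalog":  {"rps": 50, "burst": 150},  # looser
    "static":   None,                       # bypass rate limiting
}

def budget_for(path: str):
    """Map a request path to its budget category (hypothetical routes)."""
    if path.startswith("/login") or path.startswith("/oauth"):
        return ROUTE_BUDGETS["auth"]
    if path.startswith("/search") or path.startswith("/autocomplete"):
        return ROUTE_BUDGETS["search"]
    if path.startswith("/checkout"):
        return ROUTE_BUDGETS["checkout"]
    if path.startswith("/static/") or path.endswith((".css", ".js", ".png")):
        return ROUTE_BUDGETS["static"]
    return ROUTE_BUDGETS["catalog"]  # default: listing/catalog tier
```

Keeping the table in one place makes incident-time clampdowns a one-line change per category.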
Access Control Lists (ACLs) and Filtering
ACLs are your first cheap filter. Put them as close to the edge as possible—router, switch TCAM, DDoS appliance, or cloud edge. Keep a clean base ACL: explicitly permit expected protocols/ports and drop everything else by default, and block traffic toward your VIPs on ports for well-known abused services you don’t offer (for example, NTP, SSDP, or memcached). For TCP, deny impossible flag combinations (such as SYN+FIN), suspicious option stacks, or invalid MSS values.
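The TCP flag sanity check translates into a few bitwise tests. This is a sketch of the decision logic only—real deployments implement it in hardware or kernel filters, not in application code:

```python
# TCP flag bits as defined in RFC 793 (and carried forward in RFC 9293).
FIN, SYN, RST, PSH, ACK, URG = 0x01, 0x02, 0x04, 0x08, 0x10, 0x20

def tcp_flags_plausible(flags: int) -> bool:
    """Reject impossible or abusive flag combinations commonly seen in
    floods and scans; a sketch of the edge ACL logic described above."""
    if flags == 0:                       # null "flags" packet: junk/scan
        return False
    if flags & SYN and flags & FIN:      # SYN+FIN is never legitimate
        return False
    if flags & SYN and flags & RST:      # SYN+RST is contradictory
        return False
    if flags & FIN and not flags & ACK:  # bare FIN: classic FIN scan
        return False
    return True
```

Each deny rule derived from a check like this should carry a counter, per the shadow-testing advice below.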
Edge ACL Patterns That Work
Common rules: allow TCP 80/443 to your VIPs; if you serve HTTP/3 over QUIC, allow UDP 443 only for those VIPs; otherwise, drop unsolicited UDP to them. Filter bogons and private/ULA source ranges at the edge. For authoritative DNS, enable response rate limiting (RRL) and size UDP answers to avoid IP fragmentation (EDNS UDP size around 1232 bytes); ensure TCP/53 works for fallback. For recursive resolvers, restrict recursion to trusted clients and consider per-client QPS thresholds.
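Response rate limiting for authoritative DNS can be sketched as a per-prefix budget with a "slip": most over-budget responses are dropped, but every Nth is sent truncated (TC=1) so legitimate resolvers retry over TCP. The class and its parameters are illustrative, not any nameserver's actual implementation:

```python
from collections import defaultdict

class ResponseRateLimiter:
    """Sketch of DNS RRL: per-client-prefix response budgets with a
    'slip' of truncated replies so real resolvers can fall back to TCP.
    The defaults are placeholders, not tuning advice."""

    def __init__(self, responses_per_second=5, slip=2):
        self.limit = responses_per_second
        self.slip = slip
        self.counts = defaultdict(int)   # (prefix, second) -> responses sent
        self.dropped = defaultdict(int)  # prefix -> drops since last slip

    def decide(self, client_ip: str, now: float) -> str:
        # Aggregate by /24 so one bucket covers a spoofed neighborhood.
        prefix = ".".join(client_ip.split(".")[:3])
        bucket = (prefix, int(now))
        if self.counts[bucket] < self.limit:
            self.counts[bucket] += 1
            return "respond"
        self.dropped[prefix] += 1
        if self.dropped[prefix] % self.slip == 0:
            return "truncate"            # TC=1: client may retry via TCP
        return "drop"
```

The slip is why working TCP/53 fallback, mentioned above, is non-negotiable when RRL is enabled.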
Pitfalls and False Positives
ACLs can block legitimate new features (for example, enabling HTTP/3/QUIC without permitting UDP 443). Shadow-test rules first and attach counters to every deny to watch impact. Keep an emergency rollback; TCAM fills faster than you think, so aggregate prefixes and prioritize the most selective L3/L4 filters.
Scrubbing Centers and Anycast CDNs
Scrubbing networks absorb volumetric attacks and return clean traffic. They operate in two modes: always-on, where traffic is permanently proxied through the scrubbing network, or on-demand, where BGP diversion or GRE tunnels activate when thresholds trip. Anycast CDNs spread traffic across many PoPs to dilute floods and apply layer 7 filters close to sources. Scrubbing is essential when the attack can saturate your smallest link or when you need advanced signatures beyond your edge capacity.
When to Use On-Demand Scrubbing
On-demand is cost-effective if most of your year is quiet. Automate triggers: if inbound pps exceeds a threshold for N seconds, announce the relevant prefix via your scrubbing provider, validate health, then gradually withdraw when stable. Share telemetry (recent top talkers, JA3 hashes, bad paths, known-good test IPs) to reduce collateral damage.
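The trigger logic above amounts to a small state machine with hysteresis: divert after the pps threshold holds for N samples, withdraw only after traffic stays low for longer. The thresholds and hook comments below are placeholders for your own BGP automation:

```python
class DiversionTrigger:
    """Sketch of an on-demand scrubbing trigger: divert a prefix when
    inbound pps exceeds a threshold for N consecutive samples, and
    withdraw only after a longer quiet period (hysteresis prevents
    flapping). Announce/withdraw are placeholders for BGP automation."""

    def __init__(self, pps_threshold=1_000_000, hold_samples=10,
                 clear_threshold=200_000, clear_samples=30):
        self.pps_threshold = pps_threshold
        self.hold_samples = hold_samples
        self.clear_threshold = clear_threshold
        self.clear_samples = clear_samples
        self.over = 0
        self.under = 0
        self.diverted = False

    def sample(self, inbound_pps: int) -> bool:
        """Feed one telemetry sample; returns current diversion state."""
        if not self.diverted:
            self.over = self.over + 1 if inbound_pps >= self.pps_threshold else 0
            if self.over >= self.hold_samples:
                self.diverted = True   # e.g. announce prefix via scrubbing provider
                self.under = 0
        else:
            self.under = self.under + 1 if inbound_pps <= self.clear_threshold else 0
            if self.under >= self.clear_samples:
                self.diverted = False  # e.g. withdraw announcement gradually
                self.over = 0
        return self.diverted
```

Note the asymmetry: it takes less evidence to divert than to withdraw, matching the "validate health, then gradually withdraw when stable" guidance.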
Telemetry and Feedback Loops
Feed scrubbing decisions back to your WAF and rate limiter: if a signature blocks a botnet family upstream, relax local limits to improve user experience; if scrubbing lets through app floods, tighten route-specific budgets. Ask for per-rule counters, sample PCAPs, and time-bound allowlists for critical partners and monitoring probes.
Upstream Controls: RTBH, Flowspec, and Community Tags
Remotely triggered black hole (RTBH) lets you signal your transit to drop traffic to an attacked IP at their edge, freeing your links. Use it as a last resort for sacrificial services or when you can fail over to alternate VIPs. BGP Flowspec lets you push stateless filters upstream—match on source/destination, ports, protocol, TCP flags, and fragments—and is powerful for fast, temporary drops. Coordinate BGP communities and scope with your ISP to control where rules propagate and for how long; mis-scoped Flowspec can cause wide outages.
Quick Checks to Confirm an Active Attack
Within five minutes you can answer “attack or spike?” Check these: error ratios (5xx and 499/408 surges alongside traffic usually mean stress, not popularity), request-shape concentration on a few expensive endpoints, SYN backlog growth and retransmits, sudden UDP predominance or tiny TCP packets at high pps, geographic/ASN skew with inconsistent user agents, falling cache hit ratio with rising origin CPU, DNS spikes in NXDOMAIN or ANY queries that you verify aren’t misconfigurations, and whether WAN utilization is flat (an application-layer problem) or circuits are pegged (you need upstream help).
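These checks can be wired into a toy triage score so responders compare notes consistently. The signal names, weights, and reading bands below are illustrative assumptions, not calibrated thresholds:

```python
def attack_likelihood(signals: dict) -> int:
    """Toy triage score for 'attack or spike?'. Each boolean mirrors
    one of the quick checks above; equal weights are an illustrative
    simplification, not a calibrated model."""
    checks = [
        signals.get("error_ratio_surge", False),       # 5xx/499/408 rising with traffic
        signals.get("endpoint_concentration", False),  # few expensive paths dominate
        signals.get("syn_backlog_growth", False),      # retransmits, half-open pileup
        signals.get("udp_or_tiny_tcp_pps", False),     # sudden UDP/tiny-packet skew
        signals.get("asn_geo_skew", False),            # odd source mix, mixed user agents
        signals.get("cache_hit_drop", False),          # origin CPU up, hit ratio down
        signals.get("dns_anomaly", False),             # NXDOMAIN/ANY spikes
    ]
    return sum(checks)

# Rough reading: 0-1 likely organic, 2-3 investigate, 4+ treat as attack.
```

Even a crude score like this beats ad-hoc gut calls at 3 a.m., because it forces every responder to look at the same signals.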
Tuning and Testing Rate Limits Safely
Dry-run first: emit headers (X-RateLimit-*) and logs when a request would be throttled. Compare against business KPIs—cart adds, login success—to ensure no drop. Ship dashboards with p50/p90/p99 latency per route, per-key hit ratios, and the count of limited requests by reason. During an incident, prefer gradual clampdowns: cut burst by 50%, then reduce average, then add stricter keys (for example, broaden from per-IP to per-/24 if IP rotation is heavy).
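A dry-run wrapper is mostly bookkeeping: always emit the headers, only block when enforcement is on, and tag would-be blocks so dashboards can count them. The function and the X-RateLimit-DryRun tag are hypothetical names; only the X-RateLimit-* prefix is the common convention referenced above:

```python
def rate_limit_headers(limit: int, remaining: int, reset_epoch: int,
                       would_block: bool, enforce: bool) -> tuple:
    """Dry-run sketch: always emit X-RateLimit-* headers; block only
    when enforcement is on. Returns (headers, blocked). The DryRun
    header name is a hypothetical tag for log-only mode."""
    headers = {
        "X-RateLimit-Limit": str(limit),
        "X-RateLimit-Remaining": str(max(0, remaining)),
        "X-RateLimit-Reset": str(reset_epoch),
    }
    if would_block and not enforce:
        # Log-only mode: tag the response so dashboards can compare
        # would-be blocks against business KPIs before enforcing.
        headers["X-RateLimit-DryRun"] = "would-block"
    return headers, (would_block and enforce)
```

Flipping `enforce` from False to True is then the only change between the observation week and live enforcement.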
Playbooks and Runbooks That Hold Under Stress
Write per-attack playbooks: SYN flood (enable SYN cookies, tune SYN-ACK retries/timeouts, raise backlogs, add per-source SYN rate limits), HTTP GET flood (tighten path budgets, add lightweight challenges on non-critical paths, cache aggressively, collapse duplicate requests), UDP amplification (drop unexpected UDP at edge, engage scrubbing, request Flowspec on destination port, update allowlists). Tie each step to a metric change so responders know when to move to the next action.
Logging, Evidence, and Postmortem
Keep short rolling packet samples during the event for signature building. Preserve top talkers by source prefix and ASN, top JA3s, user agents, and attacked paths. In the postmortem, trace the “first effective block” and make it automatic next time: persistent ACL entries, pre-approved Flowspec templates, and auto-diversion thresholds. Rehearse failover: health-check sensitivity, TTLs for DNS failover, warm standbys on CDNs, and database read replicas.
Minimal Hardening That Pays Off
Before the next incident, do these simple things: enforce least-privilege ports at the edge, close unused UDP, enable SYN cookies, set conservative timeouts, raise backlogs, cache static and semi-static API responses, ship rate-limit dry runs, and subscribe to upstream blackhole/Flowspec capabilities with documented contacts and SLAs. Your time-to-stable will drop from hours to minutes.