In recent weeks, several incidents have surfaced in which content providers blocked traffic coming from multi-tenant proxies to stop automated attacks or illegal rebroadcasting. The countermeasure reduced the attack surface, but it also denied access to legitimate users travelling through the same channel. This illustrates a common issue: upstream security — security applied at proxies, CDNs or scrubbing centers before traffic reaches the application — does not always retain the context required to make good decisions.
The relevant point is not the individual incident, but what it exposes: when security runs upstream and multi-tenant, the backend loses semantics, session state and part of the operational timeline. This alters how attacks are detected, how they are mitigated, and how user continuity is preserved.
The issue is not that these proxies “fail”, but that their efficiency relies on sharing the same channel, capacity and enforcement across thousands of customers. The model optimizes cost and scale, but erodes signals that were historically essential for security and operations: origin, semantics, persistence and temporal correlation. Once those signals disappear, security stops being a purely defensive problem and becomes an operational decision problem.
Shared-proxy architectures and their operational trade-offs
Multi-tenant proxies — Cloudflare being the most visible reference — terminate TLS, filter bots, apply WAF rules, absorb DDoS and optimize latency before forwarding requests to the backend. Operationally, the model offers:
- shared scale
- economic amortization
- simplified management
The problem emerges in the least visible layer: traffic identity. When thousands of customers share the same defensive channel, the IP address no longer represents a user; it represents the proxy. For the backend, origin stops being an identity signal and becomes a collective one. Attackers, legitimate users and corporate SSO traffic exit through the same door.
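As a minimal sketch of that collapse, the snippet below resolves a “client IP” behind a shared proxy: the TCP peer identifies the proxy, and anything more specific depends on a forwarded header that is only as trustworthy as the chain that set it. The TRUSTED_PROXIES range and the header handling are illustrative assumptions, not any particular vendor's behavior.

```python
# Minimal sketch: resolving a "client IP" behind a shared proxy.
# TRUSTED_PROXIES and the header handling are illustrative assumptions.
import ipaddress

TRUSTED_PROXIES = {ipaddress.ip_network("203.0.113.0/24")}  # hypothetical proxy range

def effective_client_ip(remote_addr: str, x_forwarded_for: str | None) -> str:
    """Best guess at the real client IP.

    When the TCP peer is a shared proxy, remote_addr identifies the proxy,
    not the user; the forwarded header is only as reliable as the proxy
    that appended it.
    """
    peer = ipaddress.ip_address(remote_addr)
    if x_forwarded_for and any(peer in net for net in TRUSTED_PROXIES):
        # Left-most entry is the client claimed by the forwarding chain.
        return x_forwarded_for.split(",")[0].strip()
    return remote_addr

# Behind a multi-tenant proxy, thousands of different users share the same
# remote_addr, so any block or rate limit keyed on it hits all of them.
print(effective_client_ip("203.0.113.10", "198.51.100.7, 203.0.113.10"))
```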
Traditional web security largely assumed origin was enough to make decisions. In a multi-tenant model, that signal degrades and the system no longer separates legitimate from abusive behavior with the same clarity.
At that point the decision collapses to two choices:
- block the channel → stops the attack but penalizes legitimate users
- allow the channel → preserves continuity but lets part of the attack through
The difficulty is not having two options, but having to choose with incomplete information. That is where the multi-tenant model shows its real cost: it gains efficiency but loses context.
How upstream filtering fragments application context
Context loss is not just about hiding origin or masking IP. In production it appears across multiple planes, and — importantly — not in the same place nor at the same time. This fragments the operational timeline, weakens signals and complicates defensive decision-making.
TLS plane
When TLS negotiation and establishment happen before reaching the application, the backend stops seeing signals that do not indicate an attack but do indicate degradation of legitimate clients, such as:
- renegotiation attempts
- handshake failures
- client-side timeouts
- cipher downgrades
- inconsistent SNI
During brownouts or incident response, these signals matter because they describe the real client, not the attacker. In a multi-tenant proxy, that degradation disappears and the application only sees “apparently normal” HTTP. For continuity and SLO compliance, that information is lost in the wrong plane.
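For contrast, here is a minimal sketch of what a backend can observe when it terminates TLS itself, assuming a plain blocking server loop and placeholder certificate paths. When termination happens upstream, these handshake exceptions never occur in the application's process, so the degradation they describe is invisible there.

```python
# Minimal sketch: TLS signals visible when the backend terminates TLS itself.
# Certificate paths and port are placeholder assumptions.
import logging
import socket
import ssl

logging.basicConfig(level=logging.INFO)
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("server.crt", "server.key")  # hypothetical files

with socket.create_server(("0.0.0.0", 8443)) as srv:
    while True:
        conn, addr = srv.accept()
        try:
            tls_conn = ctx.wrap_socket(conn, server_side=True)
        except ssl.SSLError as exc:
            # Handshake failures, protocol or cipher mismatches surface here;
            # behind an upstream terminator this code path never runs.
            logging.warning("TLS handshake failed from %s: %s", addr, exc)
            conn.close()
            continue
        logging.info("TLS established with %s using %s", addr, tls_conn.cipher())
        tls_conn.close()
```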
WAF plane
When filtering occurs before the application — at a proxy or intermediary — another effect appears: the backend sees the symptom but not the cause.
The real circuit is:
Request → WAF/Proxy → Block → END
but for the backend it becomes simply: less traffic
Without correlation between planes, root-cause analysis becomes unreliable. A drop in requests may look like failure, user abandonment or load pressure when it is in fact defensive blocking.
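A rough sketch of the missing correlation: if per-minute block counts from the proxy can be placed next to backend hit counts, a traffic drop can be attributed to defensive blocking instead of being misread as abandonment or failure. The counters and the 90% threshold below are purely illustrative.

```python
# Minimal sketch: attributing a drop in backend traffic to upstream blocking.
# All counters below are illustrative; real values come from proxy and backend logs.
proxy_requests = {"12:00": 1000, "12:01": 1000, "12:02": 1000}
proxy_blocks   = {"12:00": 20,   "12:01": 5,    "12:02": 600}
backend_hits   = {"12:00": 980,  "12:01": 970,  "12:02": 400}

for minute in proxy_requests:
    missing = proxy_requests[minute] - backend_hits[minute]
    blocked = proxy_blocks[minute]
    if missing > 0 and blocked >= 0.9 * missing:
        verdict = "drop explained by upstream blocking"
    else:
        verdict = "drop not explained by blocking; look at users or the backend"
    print(minute, verdict)
```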
Session plane
In modern architectures, user state does not live in the connection but in the session: identity, role, flow position and transactional continuity. When session lives in a proxy or intermediary layer, the backend loses persistence and affinity. In applications driven by login, payment or transactional actions, this is critical.
The symptoms do not resemble an attack; they resemble broken UX:
- unexpected logouts
- interrupted payments
- inconsistent login flows
- failover that is correct from the infrastructure perspective but wrong from the user's perspective
A typical case: the infrastructure “works”, but the user churns because the flow cannot complete.
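To make the affinity point concrete, here is a minimal sketch that assumes session state lives only in each backend's memory and that a cookie hash picks the node. When the pool changes during failover, the same cookie maps to a different node whose memory holds no session, which is exactly the “infrastructure works, flow breaks” symptom described above.

```python
# Minimal sketch: cookie-based session affinity with in-memory session state.
# Backend names are hypothetical; the fragility, not the naming, is the point.
import hashlib

BACKENDS = ["app-1", "app-2", "app-3"]

def pick_backend(session_cookie: str, pool: list[str]) -> str:
    """Consistently map a session cookie to one backend while the pool is stable."""
    digest = hashlib.sha256(session_cookie.encode()).hexdigest()
    return pool[int(digest, 16) % len(pool)]

cookie = "sess-42"
before = pick_backend(cookie, BACKENDS)
after = pick_backend(cookie, [b for b in BACKENDS if b != before])  # node fails over
print(before, "->", after)
# The new node holds no copy of the in-memory session, so the user is logged
# out or loses the transaction even though failover "worked" for the infrastructure.
```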
Observability plane
The quietest plane concerns who sees what and when. If logs, metrics and traces stay at the proxy or upstream service, the downstream side — the one closer to the application and backend — is left with a partial view, or no view at all.
Without temporal continuity across planes, the following increase:
- time-to-detect
- time-to-mitigate
- internal noise
- post-mortem cost
And, more importantly, real-time defensive decisions degrade — precisely where continuity matters.
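One common way to restore that temporal continuity is to propagate a single request ID across planes and join events on it. The header name in this sketch is an assumption; any identifier that both planes log consistently serves the same purpose.

```python
# Minimal sketch: joining edge-side and backend-side events on a shared request ID.
# The header name is an assumption; any consistently propagated ID works.
import json
import uuid

REQUEST_ID_HEADER = "X-Request-ID"

def edge_event(headers: dict) -> dict:
    rid = headers.setdefault(REQUEST_ID_HEADER, str(uuid.uuid4()))
    return {"plane": "edge", "request_id": rid, "action": "waf_pass"}

def backend_event(headers: dict) -> dict:
    return {"plane": "backend", "request_id": headers.get(REQUEST_ID_HEADER), "action": "login_failed"}

headers = {}
timeline = [edge_event(headers), backend_event(headers)]
# With the shared ID, the cross-plane sequence can be reconstructed;
# without it, each side only sees its own half of the story.
print(json.dumps(timeline, indent=2))
```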
From origin-based filtering to behavior-based decisions
In recent years, defensive analysis has shifted toward behavior. Where the client comes from matters less than what the client is trying to do. Regular timings, repeated attempts, invalid sequences, actions that violate flow logic, or discrepancies between what the client requests and what the application expects are more stable signals than an aggregated IP.
In short, interpreting intent requires three planes that upstream proxies lose by design:
- session (who and where in the flow)
- semantics (what action is being attempted)
- timeline (in what order things occur)
Without those planes, defensive decisions simplify. With them, they can be made precise.
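A deliberately small sketch of what “behavior over origin” can look like: an expected-transition map plus a failure counter, evaluated per session. The flow, event names and thresholds are illustrative assumptions, not the rule set of any specific product.

```python
# Minimal sketch: scoring a session by flow violations and repeated failures.
# EXPECTED_NEXT and the thresholds are illustrative assumptions.
EXPECTED_NEXT = {
    "login": {"2fa", "login_failed"},
    "2fa": {"checkout"},
    "checkout": {"pay"},
}

def behavior_score(events: list[str]) -> int:
    """Count invalid transitions and excess failures within one session."""
    score = 0
    for prev, curr in zip(events, events[1:]):
        if curr not in EXPECTED_NEXT.get(prev, set()):
            score += 1                                   # out-of-order or invalid step
    score += max(0, events.count("login_failed") - 2)    # tolerate a couple of typos
    return score

print(behavior_score(["login", "login_failed", "login_failed", "login_failed"]))  # abusive pattern
print(behavior_score(["login", "2fa", "checkout", "pay"]))                        # legitimate flow, score 0
```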
The application-side plane where context actually exists
If context disappears upstream, the question is not whether to remove the proxy, but where the information that distinguishes abuse from legitimate use actually lives. That information only exists where three things converge:
- what the user does
- what the application expects
- what the system allows
That point is usually the application or the component immediately before it (typically an ADC or integrated WAF), where session, semantics, protocol, results and transactional continuity coexist.
A practical example:
login() → login_failed() → login_failed() → login_failed()
vs:
login() → 2FA() → checkout() → pay()
For the upstream proxy, both are valid HTTP. For the application, they are different intentions: abuse vs legitimate flow.
What matters here is not “blocking more”, but blocking with context — which in operations becomes the difference between:
- blocking the channel
- blocking the behavior
and, in service terms, between losing legitimate users or preserving continuity.
Where SKUDONET fits
SKUDONET operates in that plane closer to the application, without the constraints of the multi-tenant model. The approach is single-tenant and unified: TLS, session, WAF, load balancing and observability coexist in the same plane, without fragmenting across layers or externalizing identity and semantics.
This has three operational consequences:
1. Origin retains meaning
No aggregation or masking. IP becomes useful again when combined with behavior.
2. Transactional flows maintain continuity
Login, payment, checkout, reservation or any stateful action survives even during active/passive failover.
3. Timeline and semantics correlate
Errors, attempts and results occur in the same place, enabling precise decisions instead of global blocking.
From this plane, security stops being a binary “block the proxy: yes or no” decision and focuses on blocking abuse while preserving legitimate users.
Conclusion
Multi-tenant proxies solve scale, cost and distribution. But continuity, semantics and intent still live near the application — because it is the only plane where full context exists.
If continuity and application-level context matter to your stack, you can evaluate SKUDONET Enterprise Edition with a 30-day trial.


