IP Transparency
Your Traffic Manager functions as a full application proxy, terminating network connections from remote clients and making new connections to the selected back-end servers. It does not use the packet-oriented, NAT-based load balancing methods employed by simple layer-4 load balancers.
With this architecture, the back-end server sees each request as originating from the Traffic Manager, not from the remote client. This can be a disadvantage if the back-end server performs access control based on the client’s IP address, or if it needs to log the remote IP address. You can often work around this by performing the access control or logging on the Traffic Manager itself, or by using the “X-Forwarded-For” or “X-Cluster-Client-Ip” headers that the Traffic Manager can insert into every HTTP request to identify the client’s IP address.
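For example, a TrafficScript request rule attached to the virtual server could perform the access control and logging on the Traffic Manager itself. The following is a minimal sketch only: it assumes the TrafficScript functions request.getRemoteIP(), string.ipmaskmatch(), http.sendResponse() and log.info() behave as described in the TrafficScript reference for your release, and the 10.0.0.0/8 range is purely illustrative.

    # TrafficScript request rule (sketch): restrict access by client IP
    # and log the real client address on the Traffic Manager.
    $client = request.getRemoteIP();

    # Illustrative range; substitute the networks you wish to allow.
    if( !string.ipmaskmatch( $client, "10.0.0.0/8" ) ) {
       http.sendResponse( "403 Forbidden", "text/plain", "Access denied", "" );
    }

    log.info( "Request from client " . $client );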
In situations where these workarounds are not appropriate, the Traffic Manager can spoof the source IP address of the server-side connection so that it appears to originate from the client’s remote IP address. This capability is known as IP transparency.
IP Transparency can be used selectively. For example, if the Traffic Manager were balancing traffic for a Web farm and a mail farm, you might want the SMTP traffic to be IP transparent without requiring transparency for the Web traffic.
Transparency is enabled on a per-pool basis; you can configure a Web pool that is not transparent, and an SMTP pool that is transparent. The Web pool and the SMTP pool can balance traffic onto the same back-end nodes, or different nodes.
IP Transparency is available by default on all versions of the Traffic Manager appliance image, virtual appliance, or cloud service. Traffic Manager software variants can use native IP transparency functionality on Linux or UNIX hosts under the following conditions:
•The Traffic Manager software is installed and running as the root user.
•The host operating system uses a kernel at version 2.6.24 or later.
•The host operating system uses iptables at version 1.4.11 or later (earlier versions are also supported, provided the “--transparent” option is available). A quick way to check the kernel and iptables versions is shown after this list.
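On a Linux host running the software variant, these standard commands run from a root shell confirm both versions:

    # Check the kernel version (2.6.24 or later is required).
    uname -r

    # Check the iptables version (1.4.11 or later, or an earlier version
    # that provides the "--transparent" option; consult your distribution's
    # iptables documentation to confirm this).
    iptables --version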
Routing Configuration
Each server node that receives transparent traffic must route its responses back through the Traffic Manager that sent the traffic. This is achieved by configuring the node’s default route to be one of the back-end IP addresses of the Traffic Manager. See your operating system documentation for details about configuring the default route.
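As an illustration, on a Linux server node the default route could be set with the iproute2 tools as shown below. The address 10.100.1.1 and the interface eth0 are placeholders only; substitute one of your Traffic Manager’s back-end IP addresses and the appropriate interface. A route set this way does not persist across reboots, so use your distribution’s network configuration to make it permanent.

    # On the back-end server node (Linux example, run as root).
    # 10.100.1.1 is a placeholder for a back-end IP address of the Traffic Manager.
    ip route replace default via 10.100.1.1 dev eth0

    # Verify the new default route.
    ip route show default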
When the server node replies to a request, it will address its response to the remote client IP. Provided the routing is configured correctly, the originating Traffic Manager will intercept this traffic, terminate the connection and process it as normal. The Traffic Manager will then forward the (possibly modified) response back to the client.
It is normally appropriate to configure the Traffic Manager to forward any other packets that are not addressed to it. This allows the system to function as a local router, so that servers using it as their default route can still reach other systems on nearby or remote networks. The Traffic Manager will need to NAT any packets that it forwards from back-end nodes on private networks.
Refer to your operating system documentation to configure "IP Forwarding" on a host system running the Traffic Manager software variant. See Configuring System Level Settings to configure this behavior on a Traffic Manager virtual appliance or cloud instance.
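As a sketch of what this might look like on a Linux host running the Traffic Manager software variant, the commands below enable IP forwarding and apply source NAT to traffic forwarded from a private back-end network. The interface name eth0 and the 192.168.0.0/24 range are illustrative placeholders; appliance and cloud instances should instead be configured through the Admin UI as described in Configuring System Level Settings.

    # Enable IP forwarding (run as root on the software host).
    sysctl -w net.ipv4.ip_forward=1
    # To make this persistent, add "net.ipv4.ip_forward = 1" to /etc/sysctl.conf.

    # NAT packets forwarded from back-end nodes on a private network.
    # 192.168.0.0/24 and eth0 are placeholders for your back-end network
    # and outward-facing interface.
    iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o eth0 -j MASQUERADE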
Local Routing Problems
If you use IP transparency, clients on the same network as the back-end nodes will not be able to balance traffic through the Traffic Manager to those nodes.
This is because the back-end server nodes always attempt to reply to the source IP address of the connection. If the source IP address (the client’s IP) resides on a local network, the server nodes will attempt to contact the client directly rather than routing via the Traffic Manager system that originated the connection. The connection will appear to hang.
In this case, it might be appropriate to segment the back-end network so that, from the server nodes’ perspective, the clients appear to reside on a separate subnet, which must be reached via the default gateway (the Traffic Manager system). Alternatively, a TrafficScript rule could selectively set the source IP address of the server connection to an IP on the Traffic Manager system if the client lies on the same network as the server nodes.
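The TrafficScript approach might look something like the following sketch. It assumes the functions request.getRemoteIP(), string.ipmaskmatch() and request.setRemoteIP() behave as described in the TrafficScript reference for your release; the 192.168.0.0/24 subnet and the 192.168.0.5 address are placeholders for your back-end network and for an IP address owned by the Traffic Manager.

    # TrafficScript request rule (sketch): if the client is on the same
    # subnet as the back-end nodes, override the transparent source IP
    # with an address on the Traffic Manager so replies route back correctly.
    $client = request.getRemoteIP();

    if( string.ipmaskmatch( $client, "192.168.0.0/24" ) ) {
       # Placeholder address owned by this Traffic Manager.
       request.setRemoteIP( "192.168.0.5" );
    }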
IP Transparency and Traffic Manager Clusters
IP routing is more complex when a cluster of Traffic Managers is used, because each server node can route its responses back through only one IP address.
See Traffic IP Addresses and Traffic IP Groups for recommendations in this situation.