Frequently Asked Questions
This section covers frequently asked questions around clustering configuration.
• What are the ports used for ICS clustering?
• Do the active nodes monitor the state of their own interface?
• What happens when a link goes down in an Active/Active cluster?
• Why does a cluster node that is up and running appear in the WebUI as unreachable?
• What is the Synchronization Packet Size?
• How are the local nodes informed if the passive node becomes leader?
• In an Active/Passive cluster, both nodes act as active. What is happening?
• Is the traffic among the members of the cluster encrypted?
• What happens when the current leader node goes down?
• In an A/P cluster, what is the relationship between the cluster leader and the VIP owner?
• Must admins make all configuration updates at the leader node?
What are the ports used for ICS clustering?
UDP: 4803, 4804
TCP: 4808, 4809, 4900 – 4910, 8009, 8010
Protocol | Port | When | Purpose
---|---|---|---
TCP | 4808 | Clustering on, always | P2P encrypted communication
TCP | 4809 | Clustering on, always | P2P clear-text communication
TCP | 4900-4910 | For a short period during handshake | Key exchange for group communication; state sync where applicable
TCP | 8009 | - | Incremental session sync across the cluster
TCP | 8010 | - | Initial/bulk session sync across the cluster (for example, when a node reboots or a new node joins the cluster)
UDP | 4803 | Clustering on, always | Group communication
UDP | 4804 | Clustering on, always | Token heartbeat
• Verify that any firewalls between the cluster nodes allow the ports listed above for communication between the cluster nodes.
• The communication can be verified by using the cluster troubleshooting tool. For more information on the cluster troubleshooting tool, see KB9746 - How to use the Cluster Troubleshooter tool.
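The supported way to verify connectivity is the Cluster Troubleshooter tool above. For a quick manual spot-check, the TCP ports from the table can also be probed with a short script like the sketch below (illustrative, not an ICS tool; `tcp_port_open` and `check_cluster_ports` are names invented here, and a plain connect test cannot probe the UDP ports):

```python
import socket

# TCP ports ICS clustering uses, taken from the table above.
CLUSTER_TCP_PORTS = [4808, 4809, *range(4900, 4911), 8009, 8010]

def tcp_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_cluster_ports(host: str) -> dict:
    """Probe every clustering TCP port on the given peer node."""
    return {port: tcp_port_open(host, port) for port in CLUSTER_TCP_PORTS}
```

A port that shows closed here but open on the node itself usually points at an intermediate firewall.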
Do the active nodes monitor the state of their own interface?
Each node monitors both of its interfaces by sending an ARP who-has request (ARPing) to the default gateway. This ARP message is sent every 5 seconds, and the ICS waits up to 5 seconds for a response. If no response is received, the ICS begins a wait period of 45 seconds. If there is still no response, the ICS marks the interface as down.
The ARP timeout value is configurable from the network settings page for each interface. Additionally, you can configure how many ARP ping timeouts must occur before the interface is marked as down. This applies to both interfaces and all nodes in the cluster. On the cluster properties page, there is an option to have each ICS disable its external interface in the event its internal interface goes down. This is a cluster-wide setting.
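The failure-detection logic above can be sketched as a small state machine: a probe with no reply counts as a miss, and a configurable number of consecutive misses marks the interface down. The class and parameter names below are illustrative, not the actual ICS implementation:

```python
class ArpInterfaceMonitor:
    """Sketch of the interface-health logic described above (not ICS code).

    An ARP who-has probe goes to the default gateway on a fixed interval;
    a probe with no reply counts as a miss, and max_misses consecutive
    misses mark the interface as down. A single reply resets the count.
    """

    def __init__(self, max_misses: int = 3):
        self.max_misses = max_misses   # like the configurable ARP timeout count
        self.misses = 0
        self.up = True

    def record_probe(self, got_reply: bool) -> None:
        if got_reply:
            self.misses = 0
            self.up = True
        else:
            self.misses += 1
            if self.misses >= self.max_misses:
                self.up = False
```

Raising `max_misses` trades slower failure detection for fewer false positives on a lossy link.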
What happens when a link goes down in an Active/Active cluster?
When the link goes down between two nodes in an Active/Active cluster, each member thinks it is the only node in the cluster and does not sync data to the other node. User sessions are stored in the synchronized DSCache, so each node still believes it has all user sessions. When the nodes rejoin, the master node's cache overwrites the slave's cache.
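The consequence of this overwrite-on-rejoin is worth spelling out: sessions created only on the losing side during the split are dropped. A minimal illustration with plain dictionaries (hypothetical; the DSCache is of course not a Python dict):

```python
# Hypothetical illustration of the rejoin behavior described above: the
# master node's cache replaces the other node's cache wholesale, so a
# session created only on the losing side during the split disappears.
master_cache = {"sess-a": "alice", "sess-b": "bob"}
slave_cache = {"sess-a": "alice", "sess-c": "carol"}  # sess-c created during the split

slave_cache = dict(master_cache)  # rejoin: master's cache overwrites the slave's

assert "sess-c" not in slave_cache  # carol's session is gone after the merge
```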
Why does a cluster node that is up and running appear in the WebUI as unreachable?
A cluster member may appear as unreachable even when it is online and can be pinged. Here are reasons why a node can show as unreachable:
• Its password is incorrect. If the node has never joined the cluster, or if the password has changed in the meantime, this is a possibility.
• It does not know about all the nodes of the cluster. Two nodes must have the same membership list to belong to the same cluster.
• It has a different group communication mode.
• It has a different version of the software.
• A firewall or other improperly configured network device between the two nodes is preventing communication. See the TCP/UDP ports (listed above) that must be open for the machines in a cluster to communicate. The Maintenance > Troubleshooting > Clustering > Network Connectivity tab in the admin UI can be used to test connectivity between ICS nodes.
What is the Synchronization Packet Size?
Synchronization Packet size depends on the synchronized data. Generally, it is approximately 1MB for 1000 users when a node is added to the cluster and synchronized. After synchronization, data updates are generally very small, only a few KB.
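The figure above works out to roughly 1 KB of bulk sync data per user session. A back-of-envelope estimate under that assumption (the ratio is an approximation from this FAQ, not a guaranteed sizing formula, and the function name is invented here):

```python
# ~1 MB per 1000 users during bulk sync, per the figure above,
# i.e. roughly 1 KB per user session.
BYTES_PER_USER = 1_000_000 / 1000

def estimated_bulk_sync_bytes(users: int) -> float:
    """Rough estimate of the initial/bulk sync payload for a user count."""
    return users * BYTES_PER_USER
```

For example, a node joining a cluster carrying 5000 user sessions would pull roughly 5 MB; subsequent incremental updates stay in the few-KB range.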
How are the local nodes informed if the passive node becomes leader?
When the active node fails, the passive node becomes active and takes over the cluster VIP.
After I join the machine to a running cluster, my session times out and I have to log in again. What is the reason?
This is the expected behavior. The state (including the active sessions) of the member that joins the cluster is overwritten by the state in the cluster. Therefore, the session you used to join the cluster is closed.
I created a cluster and added a node, but it appears as Unreachable. When I log in to the other machine, it does not appear to be a member of the cluster. Why?
It is not enough to just add a node to a machine already in the cluster. You also need to go to the machine that is being added to the cluster and use the "join cluster" UI to add the machine to the cluster. ICS provides such a UI in two places:
a) part of the WebUI and
b) part of the console UI when a machine boots.
In an Active/Passive cluster, both nodes act as active. What is happening?
If the two machines lose connectivity (which can happen in high-latency environments), the heartbeat between the machines is lost and both machines become active. The number of seconds the heartbeat can be lost before a member takes over is adjustable from the WebUI.
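The adjustable takeover threshold described above amounts to a watchdog on the last heartbeat seen. A sketch under that assumption (class name and default value are illustrative, not the actual ICS setting):

```python
import time

class HeartbeatWatcher:
    """Sketch of the heartbeat-loss threshold described above (not ICS code).

    If no heartbeat has been seen for more than `takeover_after` seconds,
    the surviving member assumes its peer is gone and takes over.
    """

    def __init__(self, takeover_after: float = 10.0):
        self.takeover_after = takeover_after  # the WebUI-adjustable threshold
        self.last_heartbeat = time.monotonic()

    def heartbeat(self) -> None:
        """Record a heartbeat received from the peer."""
        self.last_heartbeat = time.monotonic()

    def should_take_over(self, now=None) -> bool:
        """True once the peer has been silent longer than the threshold."""
        if now is None:
            now = time.monotonic()
        return now - self.last_heartbeat > self.takeover_after
```

Setting the threshold too low on a high-latency link is exactly what produces the split-brain situation this question describes.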
Is the traffic among the members of the cluster encrypted?
Yes, all the data traffic is secured using AES/128-bit encryption and MD5. The intent has been to provide adequate confidentiality and integrity for the messages inside a protected network/VPN. We discourage clustering the devices over a public WAN infrastructure unless additional protection between the subnets exists (e.g., through site-to-site ScreenOS VPNs).
What is a cluster leader?
Whenever a node (re)joins a cluster, it receives configuration and runtime state updates from the nodes already in the cluster. One of the nodes acts as the representative of the cluster and sends the cluster state update to the joining node. This node is referred to as the leader of the cluster.
How is the leader elected?
The leader election algorithm is rather complex and is mostly, but not always, deterministic. The algorithm favors the current leader during simple membership changes. When two or more cluster partitions merge, each partition comes with its own leader, and all but one of them must relinquish leadership. The leader from the partition that contains the node with the highest "sync rank" (configurable from the cluster status page) retains the leadership. If the tie cannot be broken based on sync ranks, node names are used in a way similar to sync rank.
The non-determinism in leader election comes into play when multiple nodes attempt to rejoin the cluster simultaneously; in this case, the node with the higher node name (based on ASCII code) is elected as the leader.
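The merge tie-break described above can be sketched as a comparison of (sync rank, node name) pairs: the partition containing the best such pair keeps its leader. This is an illustrative model of the stated rules, not the actual election code:

```python
def winning_partition(partitions):
    """Pick which partition keeps its leader after a merge (sketch, not ICS code).

    Each partition is a list of (node_name, sync_rank) tuples. The partition
    containing the node with the highest sync rank wins; a sync-rank tie is
    broken by the higher node name, mirroring the rules described above.
    """
    def rank_key(node):
        name, sync_rank = node
        # Sync rank dominates; node name (ASCII order) breaks ties.
        return (sync_rank, name)

    return max(partitions, key=lambda partition: max(rank_key(n) for n in partition))
```

For example, a partition whose best node has sync rank 7 beats one whose best node has rank 5, regardless of how many nodes each partition holds.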
What happens when the current leader node goes down?
When the current leader goes down, a new leader is elected. If the old leader was involved in state synchronization with a joining node at the time it went down, the new leader takes over that responsibility. Other than that, a leader node going down is a very mundane event; it does not trigger anything special in the system.
What is the relationship between the license primary (the node that has the feature licenses) and the cluster leader?
There is no relationship. The license primary and cluster leader are orthogonal concepts.
How is a leader in an Active/Passive (A/P) cluster different from the leader in an Active/Active (A/A) cluster?
An A/P leader is exactly the same as an A/A leader.
In an A/P cluster what is the relationship between the cluster leader and the VIP owner?
Cluster leadership and VIP ownership are orthogonal concepts. There is no relationship. It is perfectly legitimate to have one node as the cluster leader and the other node as the VIP owner.
Must admins make all configuration updates at the leader node?
There is no such requirement. While a cluster is up and running with no membership changes happening, the leader has *no* special role to play. Changes can be made at any cluster node and they will get replicated across the entire cluster almost instantaneously. Changes initiated at the non-leader nodes do *not* incur *any* additional overhead compared with changes initiated at the leader node.