# Clustered Distributed Sessions

Resin's cluster protocol for distributed sessions is an alternative to JDBC-based distributed sessions. In some configurations, cluster-stored sessions will be more efficient than JDBC-based sessions. Because sessions are always duplicated on separate servers, cluster sessions have no single point of failure. As the number of servers increases, JDBC-based sessions can start overloading the backing database; with clustered sessions, each additional server shares the backup load, so the main scalability issue reduces to network bandwidth. Like the JDBC-based sessions, the cluster store uses sticky-session caching to avoid unnecessary network traffic.

## Configuration

The cluster configuration must tell each host which servers are in the cluster, and it must enable the persistent store in the session configuration with `use-persistent-store`. Because session configuration is specific to a virtual host and a web-application, each web-app needs to be enabled individually. The `web-app-default` tag can be used to enable distributed sessions across an entire site.

Most sites using Resin's load balancing will already have the cluster configured. Each `<server>` block corresponds to a host, including the current host. Since cluster sessions use Resin's srun protocol, each host must listen for srun requests.

```xml
<resin xmlns="http://caucho.com/ns/resin">
  <cluster id="app-tier">
    <server id="app-a" host="192.168.0.1"/>
    <server id="app-b" host="192.168.0.2"/>
    <server id="app-c" host="192.168.0.3"/>
    <server id="app-d" host="192.168.0.4"/>

    <persistent-store type="cluster">
      <init path="cluster"/>
    </persistent-store>
    ...
    <host id="">
      <web-app id='myapp'>
        ...
        <session-config>
          <use-persistent-store/>
        </session-config>
      </web-app>
    </host>
  </cluster>
</resin>
```

Usually, hosts will share the same resin.conf. Each host is started with a different `-server` argument to select the correct `<server>` block. On Unix, startup will look like:

```
resin-3.0.x> bin/httpd.sh -conf conf/resin.conf -server c start
```

On Windows, Resin will generally be configured as a service:

```
resin-3.0.x> bin/httpd -conf conf/resin.conf -server c -install-as ResinC
```

## always-save-session

Resin's distributed sessions need to know when a session has changed in order to save the new session value. Although Resin can detect when an application calls `setAttribute`, it can't tell if an internal session value has changed. The following Counter class shows the issue:

```java
package test;

public class Counter implements java.io.Serializable {
  private int _count;

  public int nextCount() { return _count++; }
}
```

Assuming a copy of the Counter is saved as a session attribute, Resin doesn't know if the application has called `nextCount()`. If it can't detect a change, Resin will not back up the new session unless `always-save-session` is set. When `always-save-session` is true, Resin will back up the session on every request.

```xml
...
<web-app id="/foo">
  ...
  <session-config>
    <use-persistent-store/>
    <always-save-session/>
  </session-config>
  ...
</web-app>
```

Like the JDBC-based sessions, Resin will ignore the `always-load-session` flag for cluster sessions. Because the cluster protocol notifies servers of changes, `always-load-session` is not needed.

## Serialization

Resin's distributed sessions rely on Java serialization to save and restore sessions. Application objects must implement `java.io.Serializable` for distributed sessions to work.
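Since Resin only detects changes made through `setAttribute`, a common workaround, short of enabling `always-save-session`, is to re-set the attribute after mutating the object so the container sees the change. The sketch below illustrates the idiom with the Counter class above; the `CounterServlet` name and the `"counter"` attribute key are hypothetical:

```java
import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class CounterServlet extends HttpServlet {
  protected void doGet(HttpServletRequest req, HttpServletResponse res)
    throws ServletException, IOException
  {
    HttpSession session = req.getSession();

    test.Counter counter = (test.Counter) session.getAttribute("counter");
    if (counter == null)
      counter = new test.Counter();

    int count = counter.nextCount(); // internal change Resin can't detect

    // Re-setting the attribute marks the session as changed, so this
    // request's update is backed up without always-save-session.
    session.setAttribute("counter", counter);

    res.getWriter().println("count: " + count);
  }
}
```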
## Protocol Examples

### Session Request

To see how cluster sessions work, consider a case where the load balancer sends the request to a random host: Host C owns the session, but the load balancer gives the request to Host A. The request modifies the session, so it must be saved as well as loaded.

The session id encodes the owning host. In the example, the session id decodes to an srun-index of 3, mapping to Host C. Resin determines the backup host from the cookie as well. Host A must know the owning host for every cookie so it can communicate with the owning srun; the example configuration defines all the sruns Host A needs to know about. If Host C is unavailable, Host A can use its configuration knowledge to use Host D as a backup instead.

When the request first accesses the session, Host A asks Host C for the serialized session data. Since Host A doesn't cache the session data, it must ask Host C for an update on each request. For requests that only read the session, this TCP load is the only extra overhead, since they can skip the store step. The `always-save-session` flag, in contrast, will always force a write.

At the end of the request, Host A writes any session updates to Host C. If `always-save-session` is false and the session doesn't change, this step can be skipped. Host A sends the new serialized session contents to Host C; Host C saves the session on its local disk and saves a backup to Host D.

### Sticky Session Request

Smart load balancers that implement sticky sessions can improve cluster performance. In the previous request, Resin's cluster sessions maintain consistency for dumb load balancers or twisted clients like the AOL browsers; the cost is the additional network traffic of the load and the store. Smart load balancers can avoid that network traffic.

Host C decodes the session id. Since it owns the session, Host C gives the session to the servlet with no work and no network traffic. For a read-only request, there is zero overhead for cluster sessions, so even a semi-intelligent load balancer will gain a performance advantage. Normal browsers will have zero overhead, and bogus AOL browsers will have the non-sticky-session overhead.

A session write saves the new serialized session to disk and to Host D. `always-save-session` determines whether Resin can take advantage of read-only sessions or must save the session on each request.

### Disk copy

Resin stores a disk copy of the session information in the location specified by the `path` of the `persistent-store` configuration. The disk copy serves two purposes. First, it allows Resin to keep session information for a large number of sessions: an efficient memory cache keeps the most active sessions in memory, while the disk holds all of the sessions without requiring large amounts of memory. Second, the sessions are recovered from disk when the server is restarted.

### Failover

Since the session always has a current copy on two servers, the load balancer can direct requests to the next server in the ring; the backup server is always ready to take control. The failover will succeed even for dumb load balancers, as in the non-sticky-session case, because the srun hosts will use the backup as the new owning server. In the example, either Host C or Host D can stop and the sessions will use the backup. Of course, failover works for scheduled downtime as well as for server crashes: a site could upgrade one server at a time with no observable downtime.

### Recovery

When Host C restarts, possibly with an upgraded version of Resin, it needs the most up-to-date version of the session; its file-saved copy will probably be obsolete. When a "new" session arrives, Host C loads the saved session from both its own file and from Host D, and uses the newer of the two as the current value. Once it has loaded the "new" session, it remains consistent as if the server had never stopped.
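The ownership and failover behavior above can be modeled in a few lines. The sketch below is not Resin's implementation; it's a hypothetical model (class and method names invented) that assumes the backup for each owner is simply the next server in the ring, as in the Host C/Host D example:

```java
import java.util.Arrays;

// Hypothetical model of the ownership ring described above.
public class SessionRing {
  private final String[] servers; // e.g. app-a, app-b, app-c, app-d
  private final boolean[] up;

  public SessionRing(String... servers) {
    this.servers = servers;
    this.up = new boolean[servers.length];
    Arrays.fill(up, true);
  }

  public void setUp(int index, boolean isUp) {
    up[index] = isUp;
  }

  // The next server in the ring backs up the owner's sessions.
  public int backupOf(int owner) {
    return (owner + 1) % servers.length;
  }

  // Route a request: prefer the owning server, fail over to its backup.
  public String route(int owner) {
    return up[owner] ? servers[owner] : servers[backupOf(owner)];
  }

  public static void main(String[] args) {
    SessionRing ring = new SessionRing("app-a", "app-b", "app-c", "app-d");

    System.out.println(ring.route(2)); // app-c owns the session
    ring.setUp(2, false);              // Host C goes down
    System.out.println(ring.route(2)); // app-d, the backup, takes over
  }
}
```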
## No Distributed Locking

Resin's cluster sessions do not lock sessions. For browser-based sessions, only one request will execute at a time. Since browser sessions have no concurrency, there's no need for distributed locking. However, it's a good idea to be aware of the lack of distributed locking.
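An application that does need mutual exclusion for concurrent requests within a single JVM (for example, frames or AJAX requests hitting the same session) can synchronize on an object stored in the session. The sketch below is a common servlet idiom, not a Resin API; the `SessionGuard` class and attribute key are invented, and the mutex only works within one JVM, so it is no substitute for distributed locking:

```java
import java.io.Serializable;

import javax.servlet.http.HttpSession;

public class SessionGuard {
  // Hypothetical attribute key for the per-session mutex.
  private static final String LOCK_KEY = "app.sessionLock";

  // Serializable so the session store can persist the attribute.
  public static class Lock implements Serializable {}

  // Returns the session's mutex, creating it on first use.
  public static Object lockFor(HttpSession session) {
    synchronized (SessionGuard.class) {
      Lock lock = (Lock) session.getAttribute(LOCK_KEY);

      if (lock == null) {
        lock = new Lock();
        session.setAttribute(LOCK_KEY, lock);
      }

      return lock;
    }
  }
}
```

A request would then wrap its read-modify-write in `synchronized (SessionGuard.lockFor(session)) { ... }`. Note that after a failover the backup server deserializes its own copy of the lock object, so the mutex never extends across servers.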
## Conclusion

Although reliability generally costs some performance, the trick for a good implementation is to increase reliability at minimal cost. In some environments, JDBC-based distributed sessions or simple file-based persistent sessions will improve a site's robustness at a low enough cost. Because of its scalability, however, Resin's cluster distributed sessions are likely to be the better choice for most deployment configurations.