Credit: Tom Alcott
A: Performance does not differ significantly between database persistence and memory-to-memory replication. This is because 95% of the cost of replicating or persisting sessions is incurred in the serialization/deserialization of the session object -- which must occur regardless of how the session is distributed. Also, as the size of the session object increases, performance degrades -- again, about equally for both session distribution options.
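To make the point concrete, here is a minimal sketch of the serialize/deserialize roundtrip that every session distribution mechanism must perform, whether the bytes then go to a database or to another JVM. The class name and the stand-in attribute map are illustrative, not part of any WebSphere API:

```java
import java.io.*;
import java.util.*;

public class SessionSerializationDemo {

    // Serialize an object to bytes -- the work that is paid regardless
    // of whether the bytes are written to a database or replicated
    // memory-to-memory.
    static byte[] serialize(Serializable obj) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(obj);
        }
        return bos.toByteArray();
    }

    // The matching cost on the receiving side.
    static Object deserialize(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois =
                 new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        // A stand-in for session state: attribute name -> value.
        HashMap<String, Serializable> session = new HashMap<>();
        session.put("userId", "jdoe");
        session.put("cart", new ArrayList<>(Arrays.asList("sku-1", "sku-2")));

        byte[] bytes = serialize(session);
        Object copy = deserialize(bytes);
        System.out.println("serialized size: " + bytes.length + " bytes");
        System.out.println("roundtrip ok: " + session.equals(copy));
    }
}
```

Note that the size of `bytes` (and the time spent producing it) grows with the session object, which is why performance degrades about equally for both distribution options as sessions get larger.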
Instead, the decision will be based partially on how the two technologies differ: with a database, you actually persist the data (to disk), so a highly available database server can survive a cascading failure, while application servers acting as session stores and replicators may not. In the case of a "gold standard" topology (two identical cells/domains), a highly available database can all but assure session failover between domains, while with memory-to-memory replication there can only be a single replicator common to the two cells; that replicator then becomes a single point of failure (SPOF).
Thus, for configurations where cross-cell session failover is a requirement, a highly available database is the only option for eliminating a SPOF. Note that while sharing sessions across cells is supported, it is not generally recommended: sharing state between cells makes it significantly more difficult to independently upgrade components (application and WAS) in the two cells. In the end, the decision comes down to which technology you are most comfortable with and which delivers the required quality of service for your availability requirements.
With memory-to-memory replication, the amount of session information you can store is bounded by the JVM heap size of your application server(s). Even with the advent of 64-bit JVM support in WebSphere Application Server V6.01, the maximum application server heap size is going to be significantly smaller than the amount of disk space you have available on a database server that is serving as a session store. Therefore, I am still of the opinion that database persistence remains the best option, although I know that in many organizations it is more expedient to use memory-to-memory replication to avoid conflicts over roles and responsibilities between system and database administrators.
A common cause of serious problems is the use/abuse of the HTTP session for all kinds of garbage. It all starts when one programmer decides to throw a little something in there, then another, and another, until your poor session looks like one of those polluted ponds filled with old tires and washing machines.
Stuffing something into the session is a very common shortcut taken under the duress of an impending deadline. The effect on JVM heap sizes and performance (particularly when using persistent sessions) is dramatic. Code reviews (see #3) can help prevent this. Session sizes should be in the 2-4K range, and there are several ways to measure them; the Tivoli® Performance Viewer that comes with WebSphere Application Server, for example, will show you session sizes.
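Beyond the tooling built into WebSphere Application Server, you can get a rough per-attribute size estimate in plain Java by serializing each attribute the same way session persistence would. The sketch below is an assumption-laden approximation (class and attribute names are invented, and the byte counts will not exactly match what the application server writes), but it is a quick way to find the "old tires" in the pond:

```java
import java.io.*;
import java.util.*;

public class SessionSizeEstimator {

    // Rough serialized size of one attribute, in bytes. This mirrors
    // the cost session persistence pays per attribute; it is an
    // estimate, not the exact on-the-wire size the server produces.
    static int sizeOf(Serializable value) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(value);
        }
        return bos.size();
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical session attributes; in real code you would walk
        // the HttpSession attribute names instead.
        Map<String, Serializable> attributes = new LinkedHashMap<>();
        attributes.put("userId", "jdoe");
        attributes.put("prefs", new HashMap<>(Collections.singletonMap("theme", "dark")));
        attributes.put("bigBlob", new byte[64 * 1024]); // the "old tire" in the pond

        int total = 0;
        for (Map.Entry<String, Serializable> e : attributes.entrySet()) {
            int n = sizeOf(e.getValue());
            total += n;
            System.out.printf("%-8s %,9d bytes%n", e.getKey(), n);
        }
        System.out.printf("total    %,9d bytes (target: 2-4K)%n", total);
    }
}
```

Running a check like this during a code review makes the 2-4K target measurable rather than aspirational, and immediately flags attributes (like `bigBlob` above) that dominate the total.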
Also read
1. Comment lines: Erik Burckart: What you want to know about HTTP session persistence
2. Chapter 12 of WebSphere Application Server V7 Administration and Configuration Guide