All Things WebSphere

Concerns and issues relating to all versions of WebSphere Application Server

Tuesday, May 10, 2011

 

Dynacache Questions and Answers on Data Replication Service (DRS) replication best practices

Questions on Dynacache best practices vis-à-vis replication ...

Q We plan to use one or more replication domains that span a large number of servers, with a number of object cache instances, and to use the PUSH sharing policy.
Unfortunately, PUSH does not scale: every cache entry is pushed to every server in the replication domain, so network traffic and memory use grow with the size of the domain. Go with NOT_SHARED.
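A minimal sketch of switching a cache instance to NOT_SHARED programmatically (the policy can equally be set in the admin console); the JNDI name services/cache/instance_one is a placeholder for your own object cache instance:

    import javax.naming.InitialContext;
    import com.ibm.websphere.cache.DistributedMap;
    import com.ibm.websphere.cache.EntryInfo;

    public class SharingPolicyExample {
        public static DistributedMap lookupNotShared() throws Exception {
            InitialContext ctx = new InitialContext();
            // Placeholder JNDI name; substitute your own cache instance.
            DistributedMap map =
                    (DistributedMap) ctx.lookup("services/cache/instance_one");
            // NOT_SHARED: entries stay local to this JVM; only
            // invalidations are replicated across the domain.
            map.setSharingPolicy(EntryInfo.NOT_SHARED);
            return map;
        }
    }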

Q We have observed that it takes a little more than one second for replication to begin after a cache instance is looked up for the first time. We want to look up all cache instances early during server startup, so that the caches are filled with the data that already exists in the cluster and it is available to the application as soon as it is requested.
In WAS v7, Dynacache has a generic JVM custom property, "com.ibm.ws.cache.CacheConfig.createCacheAtServerStartup", which when set to "true" creates cache instances automatically at server startup instead of the default on-demand behavior. This should definitely be set if deploying on WAS v7. If you are on an earlier version of WAS, see the technote http://www-01.ibm.com/support/docview.wss?rs=180&uid=swg21313480 .
See http://wasdynacache.blogspot.com/2010/04/dynacache-and-drs-replication-faq.html .
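On versions before v7, one hedged workaround is a ServletContextListener that simply looks up every cache instance when the application starts, so the instances are created (and DRS bootstrap begins) before the first real request. The JNDI names below are placeholders; register the listener in web.xml as usual:

    import javax.naming.InitialContext;
    import javax.servlet.ServletContextEvent;
    import javax.servlet.ServletContextListener;

    public class CacheWarmupListener implements ServletContextListener {
        // Placeholder JNDI names; substitute your own cache instances.
        private static final String[] CACHE_JNDI_NAMES = {
            "services/cache/instance_one",
            "services/cache/instance_two"
        };

        public void contextInitialized(ServletContextEvent sce) {
            try {
                InitialContext ctx = new InitialContext();
                for (String name : CACHE_JNDI_NAMES) {
                    // The first lookup creates the instance, which lets
                    // replication start before the first application request.
                    ctx.lookup(name);
                }
            } catch (Exception e) {
                sce.getServletContext().log("Cache warm-up failed", e);
            }
        }

        public void contextDestroyed(ServletContextEvent sce) { }
    }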

Q We are concerned about the push mechanism, and whether the initial push of, say, 100-150 MB of existing data distributed over perhaps 30-40 cache instances into a newly started JVM could have (temporary) performance implications on one or more JVMs that are up and running.
Your concern is warranted, and that is why I said PUSH does not scale. When a new JVM starts up, *ALL* of the other JVMs send it their cached data, so the same cached data is sent by every running JVM to the new one. If this is a lot of cache data, I have often seen the older JVMs that are pushing the data die with OutOfMemoryErrors. If you so desire, you can disable bootstrap completely by calling DistributedMap.setDRSBootstrap(false); this prevents any JVM from sending data to the newly started JVM.
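A sketch of disabling bootstrap on one instance; the JNDI name is again a placeholder, and the call applies to the cache instance as seen from the JVM in which it runs:

    import javax.naming.InitialContext;
    import com.ibm.websphere.cache.DistributedMap;

    public class DisableBootstrapExample {
        public static void disableBootstrap() throws Exception {
            // Placeholder JNDI name; substitute your own cache instance.
            DistributedMap map = (DistributedMap)
                    new InitialContext().lookup("services/cache/instance_one");
            // Disable DRS bootstrap for this cache instance, so no data
            // is exchanged with a newly starting cluster member.
            map.setDRSBootstrap(false);
        }
    }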

Q When data is pushed into a newborn cache, will all cached elements come from the same JVM, or could some elements be pushed from JVM 1 and others from JVM 2?
Unfortunately, there is no workload distribution when the existing JVMs send data to the new one. We do not even optimize by having a single JVM send the data; instead, ALL of the JVMs send all of the cache data in that object cache instance to the new JVM.
Some customers write cache population scripts that fill the cache in each JVM after it comes up, and configure the cache in NOT_SHARED mode, which replicates only invalidations to keep the caches consistent (see the sketch below). The other option is to use PUSH with DRS bootstrap disabled.
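A sketch of that population approach, under the same placeholder JNDI name: each JVM loads its own data locally after startup, putting entries with the NOT_SHARED policy so that only invalidations cross the wire. The loadFromBackend() helper and the priority/TTL values are illustrative assumptions:

    import java.util.HashMap;
    import java.util.Map;
    import javax.naming.InitialContext;
    import com.ibm.websphere.cache.DistributedMap;
    import com.ibm.websphere.cache.EntryInfo;

    public class CachePopulator {
        public void populate() throws Exception {
            // Placeholder JNDI name; substitute your own cache instance.
            DistributedMap cache = (DistributedMap)
                    new InitialContext().lookup("services/cache/instance_one");

            for (Map.Entry<String, Object> e : loadFromBackend().entrySet()) {
                // priority 1, TTL 3600 seconds, NOT_SHARED, no dependency IDs
                cache.put(e.getKey(), e.getValue(), 1, 3600,
                          EntryInfo.NOT_SHARED, null);
            }
        }

        private Map<String, Object> loadFromBackend() {
            // Hypothetical helper: fetch whatever this member should cache locally.
            return new HashMap<String, Object>();
        }
    }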
