In the hazelcast module, there is a custom implementation of a
jdbcconfig cache provider (HzCacheProvider). The idea is to have a shared
cache for jdbcconfig across all your nodes. I have found it to be a huge
drain on resources, and it seems to provide no benefit:
- Hazelcast distributes catalog events to all nodes, and jdbcconfig
invalidates catalog objects based on those events; so even if each node has
its own individual cache, that cache is still always kept up to date and
catalog changes are propagated immediately to all nodes.
- the Hazelcast cache uses XStreamPersister to serialize all the catalog
data. That means each time you take an object out of the cache (so,
each time you request a catalog object) it is loaded from XML and all
the work that is normally done only at start-up is repeated (creating the
data store, trying to connect to it, etc.). This is extremely expensive
and, on top of that, causes massive traffic between nodes.
We have a configuration with about 800 layers and a health check, run
every 5 minutes, that issues a GetCapabilities request; this alone was
causing 5 Mbps of traffic between two nodes!
So I found it really worthwhile to comment the provider out in the
Spring application context: that stopped all of the traffic, and
everything remained just as functional. Which makes me wonder: why was
it ever created, and what was the idea behind it?
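For reference, this is roughly what I mean by commenting it out. This is a sketch only: the exact bean id, class package, and constructor arguments in the module's applicationContext.xml may differ from what is shown here, so treat the names as assumptions and adapt to what the file actually contains.

```xml
<!-- Hypothetical excerpt from the hazelcast module's Spring application
     context; bean id/class/args are illustrative, not verbatim. -->

<!-- Disabling the shared jdbcconfig cache: each node falls back to its own
     local cache, which is still invalidated by the catalog events that
     Hazelcast already distributes to every node. -->
<!--
<bean id="hzCacheProvider" class="org.geoserver.cluster.hazelcast.HzCacheProvider">
    <constructor-arg ref="hzCluster"/>
</bean>
-->
```

With the bean commented out, the cache provider is simply never registered, so no serialization or inter-node cache traffic occurs for jdbcconfig lookups.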