We have configured a metadata catalogue, which is now functioning. However, editing metadata is sometimes very slow; the Tomcat service was using 100% CPU. On one occasion exiting edit mode was enough to return CPU usage to normal; another time the Tomcat service had to be restarted. It was also impossible to delete some saved metadata records: when the delete icon is clicked, GeoNetwork pops up "Self-suppression not permitted" or "Java heap space" messages.
Increasing Tomcat's Initial memory pool to 256 MB and Maximum memory pool to 2048 MB seems to solve the problem; the catalogue runs more smoothly (our system is Windows Server 2012 Standard 64-bit, 5.45 GB of RAM, JDK 8, 64-bit JVM, Tomcat 9.0).
The values 256 and 2048 are fairly arbitrary; I did not find any recommendations on memory pool values in the documentation.
What is other users' experience? What Java heap size values do you use to run GeoNetwork smoothly?
Do you have any other recommendations for configuring Java and Tomcat to run GeoNetwork with the best performance?
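For reference, the "Initial memory pool" and "Maximum memory pool" fields in the Tomcat Windows service configuration dialog (tomcat9w.exe, Java tab) correspond to the standard JVM flags `-Xms` and `-Xmx`. If Tomcat is instead started from the command line rather than as a Windows service, the equivalent settings can be placed in a `setenv.bat` file in Tomcat's `bin` directory. A minimal sketch, assuming the values from the question (the exact numbers are just the ones we happened to use, not an official recommendation):

```shell
rem %CATALINA_HOME%\bin\setenv.bat
rem Picked up automatically by catalina.bat when Tomcat starts.
rem -Xms = initial heap (like "Initial memory pool" in tomcat9w.exe)
rem -Xmx = maximum heap (like "Maximum memory pool" in tomcat9w.exe)
set "CATALINA_OPTS=%CATALINA_OPTS% -Xms256m -Xmx2048m"
```

Note that `setenv.bat` has no effect when Tomcat runs as a Windows service; in that case the flags must be set through the service configuration dialog.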
We have a very similar setup: Windows Server 2012, JDK 8, Tomcat 8.5. We had to readjust our memory pool as well, since we were experiencing those same issues. Currently our values are 300 MB initial and 1500 MB maximum. We are still in the testing phase, but I've been slowly raising the maximum as we add more records and functionality. I'm not sure there is a recommended value; it's more a matter of what works best for each environment and tinkering to find the optimal values.
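When tinkering with the maximum, it can help to watch actual heap usage instead of guessing. The JDK ships with `jstat`, which reports garbage-collection and heap statistics for a running JVM. A rough sketch (the PID lookup assumes the standard `jps` tool from the same JDK `bin` directory):

```shell
# Find the Tomcat JVM's process id (Bootstrap is Tomcat's main class).
jps -l

# Print heap occupancy and GC activity every 5 seconds for that PID.
# Columns of interest: O = old-generation % used, FGC = full-GC count.
# If O stays near 100 and FGC keeps climbing, the heap is too small.
jstat -gcutil <tomcat-pid> 5000
```

If the old generation rarely exceeds, say, 60-70% after the catalogue has been under load for a while, the maximum heap is probably generous enough and raising it further mostly just delays GC pauses.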