Hi All,
In response to the comment http://middlewaremagic.com/weblogic/?p=4456#comment-2026 we have developed a post on the Java Heap. Thanks to “Swordfish” for asking us about this very important topic.
.
In most environments we see some kind of memory-related issue, like OutOfMemory errors or server slowness. Most of the time it is caused by inaccurate tuning of the JVM; sometimes it happens due to a bug or inaccurate configuration in an application framework, and it may also be caused by the application code itself, for example an object leak in the application code.
.
NOTE: The prerequisite for this post is that you are already aware of the different memory spaces available as part of a Java process. If not, then please quickly review: http://middlewaremagic.com/weblogic/?p=4456
.
Here we are going to see what causes OutOfMemory issues and why they happen, along with some basic first-aid steps to debug this kind of issue.

What is OutOfMemory?

An OutOfMemory is a condition in which there is not enough space left to allocate the memory required for new objects, libraries, or native code. OutOfMemory errors can be divided into three main categories:

1). OutOfMemory in Java Heap:

This happens when the JVM is not able to allocate the required memory space for a Java object. There may be many reasons behind this, like:
Point-1). A very small heap size allocation, i.e. setting the MaxHeapSize (-Xmx) parameter to a very low value.
.
Point-2). Leaking of objects. Either the application is not dereferencing unused objects, or third-party frameworks (Hibernate/Spring/Seam…etc) might not be releasing object references due to some inaccurate configuration.
.
Point-3). In many cases the reason is that JDBC connection objects which the application code obtains from the DataSource are not being released back to the connection pool.
.
Point-4). The garbage collection strategy may be incorrect for the environmental/application requirements.
.
Point-5). Inaccurate settings for application/framework caches.
Example:
Exception in thread "Thread-10" java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:2882)
at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:100)
at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:390)
at java.lang.StringBuilder.append(StringBuilder.java:119)
at java.lang.Throwable.toString(Throwable.java:344)
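The object-leak pattern from Point-2 and Point-3 above can be sketched in a few lines. This is an illustrative example, not code from any real application: the class name LeakDemo, the handleRequest method, and the 1 MB allocation size are all made up for the demo.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the object-leak pattern: a static collection keeps
// strong references to every allocation, so nothing ever becomes
// eligible for garbage collection and the heap eventually fills up.
public class LeakDemo {
    // The leak: entries are added but never removed.
    static final List<byte[]> CACHE = new ArrayList<>();

    static void handleRequest() {
        // Each "request" pins 1 MB in the static cache forever.
        CACHE.add(new byte[1024 * 1024]);
    }

    public static void main(String[] args) {
        // Kept to 10 iterations here; a real leak runs unbounded, and with
        // a small heap (e.g. -Xmx64m) such a loop ends in
        // java.lang.OutOfMemoryError: Java heap space.
        for (int i = 0; i < 10; i++) {
            handleRequest();
        }
        System.out.println("cached MB: " + CACHE.size());
    }
}
```

The same shape appears when JDBC connections are obtained but never closed: the pool (or a wrapper) keeps references alive, so the objects survive every collection.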

2). Native OutOfMemory:

Native OutOfMemory is a scenario in which the JVM is not able to allocate the required memory for native libraries and JNI code.
Native memory is an area which is usually used by the JVM for its internal operations and to execute JNI code. The JVM uses native memory for code optimization and for loading classes and libraries, along with intermediate code generation.
The size of the native memory depends on the architecture of the operating system and the amount of memory which is already committed to the Java heap. Native memory is a process area where JNI code, JVM libraries, native performance packs, and proxy modules get loaded.
Native OutOfMemory can happen due to the following main reasons:
.
Point-1). Setting a very small StackSize (-Xss). The stack is a memory area which is allocated to each individual thread, where it can place its thread-local objects/variables.
.
Point-2). It may be seen because of incorrect WebLogic Tuxedo Connector settings. The WebLogic Tuxedo Connector allows interoperability between Java applications deployed on WebLogic Server and native services deployed on Tuxedo servers, and Tuxedo uses JNI code intensively.
.
Point-3). Insufficient RAM or swap space.
Example: For details on this kind of error Please refer to: http://middlewaremagic.com/weblogic/?p=422
.
Point-4). It may occur if our application uses a very large number of JSPs. Each JSP needs to be converted into Java code and then compiled, which requires DTD and custom tag library resolution as well, and this usually consumes more native memory.
Exception in thread "main" java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:574)
at TestXss.main(TestXss.java:18)
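The "unable to create new native thread" error in the trace above comes from the fact that every Java thread reserves a native stack (sized by -Xss) outside the Java heap. A small illustrative sketch (the class name ThreadDemo and the thread counts are made up; the idle workers stand in for real work):

```java
// Sketch of the pattern behind "unable to create new native thread":
// each Thread.start() asks the OS for a new native stack, so spawning
// threads without bound exhausts native memory, not the Java heap.
public class ThreadDemo {
    public static int spawn(int count) {
        Thread[] threads = new Thread[count];
        for (int i = 0; i < count; i++) {
            threads[i] = new Thread(() -> { /* idle worker */ });
            threads[i].start();   // reserves one native stack per thread
        }
        for (Thread t : threads) {
            try {
                t.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        return count;
    }

    public static void main(String[] args) {
        // With a large -Xss and many thousands of threads this throws
        // java.lang.OutOfMemoryError: unable to create new native thread.
        System.out.println("started: " + spawn(10));
    }
}
```

This is also why a larger -Xss makes the problem appear sooner: fewer threads fit in the same native area.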

3).  OutOfMemory in PermGen Space:

Permanent Generation is a non-heap memory area inside the JVM space, and many times we see OutOfMemory in this area. The PermGen area is NOT present in the JRockit JVM. For more details on this area please refer to: http://middlewaremagic.com/weblogic/?p=4456.
.
The PermGen area is measured independently from the other generations because this is the place where the JVM allocates classes, class structures, methods, and reflection objects. PermGen is a non-heap area, meaning we DO NOT count the PermGen area as part of the Java heap.
An OutOfMemory in the PermGen area can be seen because of the following main reasons:
Point-1). Deploying and redeploying a very large application which has many classes inside it.
.
Point-2). An application is being deployed/updated/redeployed repeatedly using the auto-deployment feature of the container. In that case the classes belonging to the application stay uncollected and remain in the PermGen area without class garbage collection.
.
Point-3). The “-Xnoclassgc” Java option is added while starting the server. In that case class instances which are no longer required will not be garbage collected.
.
Point-4). Very little space allocated via “-XX:MaxPermSize”.
Example: you can see following kind of Trace in the Server/Stdout Logs:
<Notice> <Security> <BEA-090171> <Loading the identity certificate and private key stored under the alias DemoIdentity from the jks keystore file D:\ORACLE\MIDDLE~1\WLSERV~1.3\server\lib\DemoIdentity.jks.>
Exception in thread "[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'" java.lang.OutOfMemoryError: PermGen space
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:621)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:124)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:260)
at java.net.URLClassLoader.access$000(URLClassLoader.java:56)
at java.net.URLClassLoader$1.run(URLClassLoader.java:195)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:252)
at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320)
at weblogic.work.ExecuteThread.execute(ExecuteThread.java:209)
at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
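The redeployment scenario from Point-2 above can be sketched as a classloader leak. This is a toy illustration, not container code: the class name PermGenLeakDemo, the redeploy method, and the STALE_LOADERS list are all hypothetical stand-ins for what a container does on each redeploy.

```java
import java.net.URL;
import java.net.URLClassLoader;
import java.util.ArrayList;
import java.util.List;

// Sketch of the classloader-leak pattern: each redeploy creates a fresh
// classloader, and if anything still references the old loader, the
// classes it defined can never be unloaded from the PermGen area.
public class PermGenLeakDemo {
    // The leak: old loaders stay reachable, so their class metadata stays.
    static final List<ClassLoader> STALE_LOADERS = new ArrayList<>();

    static void redeploy() {
        // In a container this loader would define the application's classes.
        URLClassLoader appLoader = new URLClassLoader(new URL[0]);
        STALE_LOADERS.add(appLoader);   // reference kept => classes pinned
    }

    public static void main(String[] args) {
        // Each iteration stands for one auto-deployment cycle; repeated
        // enough times with real application classes, this fills PermGen.
        for (int i = 0; i < 5; i++) {
            redeploy();
        }
        System.out.println("pinned loaders: " + STALE_LOADERS.size());
    }
}
```

In real applications the lingering reference is usually indirect: a static field, a registered listener, or a ThreadLocal that still points into the old application's classes.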

What to do in case of OutOfMemory In JavaHeap?

Whenever we see an OutOfMemory in the server log or in the stdout of the server, we should try the following first-aid steps:
.
Point-1). If possible, enable the following JAVA_OPTIONS in the server start scripts to get information about the garbage collection status:
-verbose:gc -XX:+PrintGCTimeStamps -XX:+PrintGCDetails  -Xloggc:/opt/app/GCLogsDirectory/gc.log
.
Point-2). It is always necessary to see which objects were present when the OutOfMemory error occurred, to identify whether those objects belong to the application code, application framework code, or the application server APIs, so that we can isolate the issue. In order to get the details of the heap objects, collect a “HeapDump” using either JHat (not the better tool) or JMap (much better compared to JHat). Please refer to the following post to learn how to do it: http://middlewaremagic.com/weblogic/?p=2241
.
Point-3). Once we have collected the heap dump we can easily analyze the heap details using good GUI tools like the “JHat Web Browser” or the “Eclipse Memory Analyzer”.

OutOfMemoryError: GC overhead limit exceeded?

The “GC overhead limit exceeded” error indicates that more than 98% of the total time is being spent doing garbage collection while less than 2% of the heap is recovered.

The “GC overhead limit exceeded” error generally has the following causes:
Point-1). The heap is too small, or the current size may not be suitable for your application. Try increasing the -Xmx value while starting your process.

Point-2). There might be a memory leak, meaning a particular kind of object is being created again and again but not garbage collected due to a leak in the code (application code, third-party code, an application server code leak, or even a JVM memory leak).

Point-3). The old generation of the heap might be very small compared to the new generation, so objects might be promoted to the old generation prematurely. And we know that GC happens less frequently in the old generation compared to the young generation.

Point-4). If increasing the heap size (-Xmx) or tuning the old generation size does not help, then it might be a memory leak in the application code or container code.

It is better to take a heap dump and see what kind of objects are filling up the heap; that will indicate what might be leaking and whether the heap size is sufficient.
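Besides GC logs, the "time spent in GC" figure behind this error can also be watched from inside the process via the platform MXBeans. A small sketch (the class name GcOverheadCheck is made up; comparing cumulative GC time against uptime is an illustrative check, not a standard recipe):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Reads the cumulative time the JVM has spent in garbage collection,
// summed across all registered collectors (young and old generation).
public class GcOverheadCheck {
    public static long totalGcMillis() {
        long total = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long t = gc.getCollectionTime();   // -1 if undefined for this collector
            if (t > 0) {
                total += t;
            }
        }
        return total;
    }

    public static void main(String[] args) {
        long gcMillis = totalGcMillis();
        long upMillis = ManagementFactory.getRuntimeMXBean().getUptime();
        // If gcMillis approaches upMillis, the process is spending almost
        // all of its time collecting -- the situation this error reports.
        System.out.println("GC time so far: " + gcMillis + " ms of " + upMillis + " ms uptime");
    }
}
```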

What to do in case of Native OutOfMemory?

Point-1). Usually a native OutOfMemory causes a server/JVM crash. So it is always recommended to apply the following JAVA_OPTIONS flag in the server start script to instruct the JVM to generate a heap dump: “-XX:+HeapDumpOnOutOfMemoryError”
By default the heap dump is created in a file called java_pid<pid>.hprof in the working directory of the VM. You can specify an alternative file name or directory with “-XX:HeapDumpPath=C:/someLocation/”.
.
Note: The above flags are also suitable for collecting a heap dump in case of a Java heap OutOfMemory. But these flags never guarantee that the JVM will always generate the heap dump in every OutOfMemory situation.
.
Point-2). Usually in case of a native OutOfMemory an “hs_err_pid<pid>.log” file is created by the Sun JDK and an “xxxx.dump” file is created by the JRockit JDK. These log files are usually text files and tell us about the libraries which caused the crash. These files need to be collected and analyzed to find the root cause.
.
Point-3). Make sure that -XX:MaxHeapSize is not set to a very large value, because as soon as we increase the heap size, the native area decreases, leaving very little native space. Please see the post: http://middlewaremagic.com/weblogic/?p=4456
.
Point-4). Keep monitoring the process memory using the Unix utility ‘ps’, like the following:
ps -p <PID> -o vsz
Here you need to pass the WebLogic Server’s PID (process ID) to get its virtual memory size (VSZ).
.
Point-5). If the heap usage is low, or if you see that your application uses less heap memory, then it is always better to reduce the MaxHeapSize so that the native area automatically gets increased.
.
Point-6). Sometimes the JVM’s code optimization causes a native OutOfMemory or a crash. In this case we can disable the code optimization feature of the JVM.
(Note: disabling the JVM’s code optimization will decrease JVM performance.)
For the JRockit JVM, code optimization can be disabled using the JAVA_OPTION -Xnoopt
For the Sun JDK, code optimization can be disabled using the JAVA_OPTION -Xint

What to do in case of OutOfMemory In PermGen?

Point-1). Make sure that the PermGen area is not set to too small a value.
.
Point-2). If an application has many JSP pages, every JSP will be converted to a *.class file before the JSP request is processed. So a large number of JSPs causes generation of a large number of *.class files, and all these classes get loaded in the PermGen area.
.
Point-3). There is no standard formula to say which value of MaxPermSize will suit your requirement, because it completely depends on the kind of frameworks, APIs, number of JSPs, etc. you are using in your application; the number of classes which have to be loaded will vary based on that. But if you really want to tune MaxPermSize, you should first start with some base value like 256M or 512M, and if you still get the OutOfMemory then please follow the instructions below to troubleshoot it.
Point-4). If you are repeatedly getting the OutOfMemory in PermGen space then it could be a classloader leak.
It may be that some of the classes are not being unloaded from the PermGen area of the JVM. So please try increasing -XX:MaxPermSize=512M, or a little more, and see if the error goes away.
If not, then add the following JAVA_OPTIONS to trace class loading and unloading and find the root cause:
-XX:+TraceClassLoading and -XX:+TraceClassUnloading
Point-5). If you want to investigate which kinds of classes are consuming more PermGen space, you can use the “$JAVA_HOME/bin/jmap” utility as follows:

    $JAVA_HOME/bin/jmap -permstat $PID  >& permstat.out

The above utility dumps the classloader statistics for that JVM process (we pass the process ID to this command as $PID). This helps us understand whether there is a classloader leak or whether particular classes are consuming more memory in PermGen. Collecting a heap dump also gives a good idea of this.
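As a lightweight complement to jmap, the class-loading counts are also available from inside the process via the ClassLoadingMXBean; counts that only ever grow across redeployments are a quick first hint of a classloader leak. A small sketch (the class name ClassCountCheck and the loadedNow helper are illustrative):

```java
import java.lang.management.ClassLoadingMXBean;
import java.lang.management.ManagementFactory;

// Prints how many classes the JVM has loaded and unloaded so far.
// A "total loaded" count that keeps climbing after each redeploy, with
// an "unloaded" count that stays flat, points toward a classloader leak.
public class ClassCountCheck {
    public static long loadedNow() {
        return ManagementFactory.getClassLoadingMXBean().getLoadedClassCount();
    }

    public static void main(String[] args) {
        ClassLoadingMXBean cl = ManagementFactory.getClassLoadingMXBean();
        System.out.println("currently loaded: " + cl.getLoadedClassCount());
        System.out.println("total loaded:     " + cl.getTotalLoadedClassCount());
        System.out.println("unloaded:         " + cl.getUnloadedClassCount());
    }
}
```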

.
.
Thanks
Jay SenSharma