How to address out of memory errors (OOM)

Article ID: 159679


Products

Data Loss Prevention Enforce, Data Loss Prevention

Issue/Introduction

If I run into Out Of Memory (OOM) errors within the Java heap, what do they mean and how can they be addressed?

Resolution

Background information: Java Memory handling

 

In general, the process size (java.exe) consists of the Java heap, the C/C++ (native) heap, the Java executable itself, and the libraries loaded by the JVM.

 

The Java heap and the C/C++ heap are two different things (running in the same process space). The more you increase the Java heap, the less memory is available to your C/C++ JNI code. The -Xmx, -Xms and -Xmn options only affect the Java object heap, not the overall process heap (= java.exe).

The entire process space is what you see when you observe the memory that java.exe consumes. This does not reflect the used Java heap size. The process space includes the JVM itself, the loaded system libraries, native allocations (such as JNI), code generation, buffers, direct memory space, garbage collection structures, and so forth.
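
If you want to compare the Java heap usage against the overall process size on a running server, the standard JDK tools can show the heap side of the picture. This is only a sketch, assuming a JDK is available on the host and <pid> is the process ID of the java.exe in question:

jmap -heap <pid>
jcmd <pid> GC.heap_info

(jmap -heap applies to JDK 8; jcmd GC.heap_info to JDK 9 and later.) The difference between the heap figures reported here and the process size shown by the operating system is the native portion described above (JVM code, loaded libraries, JNI allocations, buffers, and so on).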

 

The old 32-bit OS limitation of 2 GB applies to the entire Java process size, not just the Java heap. Although the overall process limit is less relevant on a 64-bit OS, the topic still matters because Java process sizes are assigned by DLP as part of its configuration.

In our case, our framework relies heavily on JNI and native calls. Original DLP 32-bit implementations capped the min/max size of the Java heap at approximately 1.2 GB. Also, since the Java heap requires contiguous memory, the min and max settings should ideally be the same, so that the process allocates the required memory space from the start. Currently, however, the JVM settings have a max that exceeds the minimum or "init" setting; for example, the Advanced Server setting BoxMonitor.FileReaderMemory sets 1200M for the "init" (-Xms1200M) and 4 GB for the max (-Xmx4G).
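
For illustration only, the value of such an Advanced Server setting is simply a string of JVM arguments; with the example above, BoxMonitor.FileReaderMemory would contain something along the lines of (the exact shipped default and any additional flags may differ by version):

-Xms1200M -Xmx4G

Keeping -Xms equal to -Xmx would pre-allocate the heap up front, as in the original 32-bit recommendation, whereas the current defaults let the heap grow from the 1200M initial size up to the 4 GB maximum.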

 

Q: If we are observing an OutOfMemoryError, does this mean that the Java heap is exhausted?

Not necessarily. An OutOfMemoryError can occur even when the Java heap still has free space. The error could occur because of:

 

* A shortage of memory for other operations of the JVM.

* Some other memory allocation failing; the JVM throws an OutOfMemoryError in such situations.

* Excessive memory allocation in other parts of the application, unrelated to the JVM, if the JVM is just a part of the process rather than the entire process (for instance, a JVM embedded through JNI).

In order to determine in more detail what is causing the OOM, we would need more data such as a JVM heap dump and stack trace. See the section 'Additional changes for further investigation' below.
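
If a stack trace is needed while the affected process is still running, it can be captured with the standard JDK jstack utility. This is only a sketch, assuming a JDK is present on the server and <pid> is the process ID of the java.exe in question; the output file name is arbitrary:

jstack -l <pid> > oom_threads.txt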

 

 

General recommendations

 

You might try to increase the Java heap size of the JVM for the service in question.

Keep in mind that, because of the native libraries loaded into the same process, this should in general only be done for testing. By default, the FileReader and Incident Persister JVM heaps are already sized appropriately and should not require tweaking.
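
As a purely illustrative sketch of what such a test change can look like: the FileReader heap is normally adjusted through the BoxMonitor.FileReaderMemory advanced setting mentioned above, while wrapper-driven services such as the Incident Persister read their JVM arguments from a configuration file (for example Vontu/Protect/config/VontuIncidentPersister.conf). Depending on the release, heap flags may be supplied there as numbered 'Java Additional Parameters' entries along these lines (the numbers and values below are placeholders, not shipped defaults; some versions use dedicated wrapper memory properties instead, so confirm against the file actually deployed):

wrapper.java.additional.xx=-Xms1200M
wrapper.java.additional.xx=-Xmx4G

Replace xx with the next free consecutive numbers in that section, and restart the service afterwards so the JVM is re-initialized with the new settings.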

 

 

Minimizing memory consumption

 

1)      Lookup API

 

If a CSV plug-in is used, the entire CSV file and additional buffers are created in memory and can consume a large amount of memory.

In order to minimize the memory usage caused by the CSV plug-in, it is recommended to migrate the data to a separate data source such as an LDAP server or another database environment.

 

Advantages:

Using an LDAP lookup or a script lookup allows the data to be changed dynamically without affecting the uptime of the Incident Persister. This becomes more important as the data set grows.

From a memory perspective, the data is handled within a separate process (LDAP server, script, or the script's data source) and thus will not use any memory within the Java process.

 

From a performance perspective, the latency added by the LDAP or script lookup is negligible if properly set up.

Also, updates to the queried data can happen in real time and could be part of an ETL process. As a result, any change takes effect immediately and does not require reloading the plug-in, which currently happens only once a day.
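
To make the LDAP option more concrete, below is a minimal, purely illustrative Java sketch of the kind of per-incident lookup that replaces an in-memory CSV: one attribute is fetched from the directory server for each key. The host, search base, filter attribute (sAMAccountName) and returned attribute (department) are assumptions for the example only, and the actual DLP Lookup API plug-in wiring is not shown; consult the product documentation for that part.

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.directory.*;

public class LdapLookupSketch {

    // Hypothetical connection details; replace with the real directory server.
    private static final String LDAP_URL = "ldap://ldap.example.com:389";
    private static final String BASE_DN  = "ou=people,dc=example,dc=com";

    // Returns the department for one user, or null if not found.
    public static String lookupDepartment(String userName) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, LDAP_URL);
        // Anonymous bind for brevity; a real deployment would authenticate.

        DirContext ctx = new InitialDirContext(env);
        try {
            SearchControls controls = new SearchControls();
            controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
            controls.setReturningAttributes(new String[] { "department" });

            // One small query per incident attribute instead of holding the whole CSV in memory.
            NamingEnumeration<SearchResult> results =
                ctx.search(BASE_DN, "(sAMAccountName={0})", new Object[] { userName }, controls);

            if (results.hasMore()) {
                Attribute attr = results.next().getAttributes().get("department");
                return attr != null ? (String) attr.get() : null;
            }
            return null;
        } finally {
            ctx.close();
        }
    }
}

The point of the sketch is that the reference data stays in the directory server's own process; the Java side only ever holds the handful of attribute values returned for the incident currently being processed.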

 

2)      Incident memory consumption

 

Make sure that the max incident size is set to the default of 30 MB.

Verify that FileReader.MaxFileSize in the Detection Server advanced tab is set to the default of 30 MB or lower. Otherwise, files would be generated that would have to be handled within the memory of the Incident Persister.

 

 

On a side note:

There was a bug in 10.0 where the cross-component count was not enforced, which caused the generated incident to grow. This means the setting MAX_INCIDENT_FILE_SIZE had no effect on the endpoint, and as a result huge files could be generated, potentially causing an OOM crash on the Incident Persister. This is very unlikely to be the case here. The root issue has been addressed in v11 (ETrack 2161443).

3)      Incident memory consumption – II

 

Confirm the value of the appropriate setting below and adjust to a lower setting if the value noted is higher than the default of 100:

 

* For Data Identifiers: DI.MaxViolations

* For Regular Expressions: IncidentDetection.patternConditionMaxViolations

* For EDMs: EDM.MaximumNumberOfMatchesToReturn

 

The maximum number of matches can be changed through the Advanced Server Details page. If the values are larger, this results in larger data sets for each individual incident, which have to be processed in memory.

 

 

Additional changes for further investigation

 

1. Add the following to the JVM initialization parameters; it will dump the full heap into an .hprof file within the bin directory when an OOM crash occurs.

Open, for example, the file Vontu/Protect/config/VontuIncidentPersister.conf (or the configuration file of whichever JVM is running into the OOM crash) and add the following line under the section 'Java Additional Parameters':

 

wrapper.java.additional.xx=-XX:+HeapDumpOnOutOfMemoryError
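
Here xx is a placeholder for the next unused consecutive number in the 'Java Additional Parameters' section of that file. Optionally, a target directory for the dump can be given on an additional line; the path below is only an example, and without this option the dump is written to the JVM's working directory (the bin directory mentioned above):

wrapper.java.additional.xx=-XX:HeapDumpPath=/var/tmp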

 

Then restart the Service to ensure the JVM gets fully re-initialized.

 

 

2. If the OutOfMemoryError occurs, please collect the following:

 

* Most recent hs_err files under Vontu/Protect/bin

* Most recent hprof files under Vontu/Protect/bin

* All current log files from Vontu/Protect/log, including the most recent Tomcat general logs and access logs for the last week

* The current Vontu/Protect/config directory
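
For convenience, the items above can be bundled into a single archive before sending them to support. A minimal sketch for a Linux server (adjust <install path> to the actual installation directory, adjust the file list to what is actually present, and on Windows zip the same directories instead):

cd <install path>/Vontu/Protect
tar czf /tmp/oom_data.tar.gz bin/hs_err_pid*.log bin/*.hprof log config

Note that .hprof files can be several gigabytes in size, so check the available disk space first.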