Handling out-of-memory exception

I’m trying to generate performance plots for code that manipulates data structures of ever-increasing size. The way I currently do this is to run the program with limits I’m pretty sure will finish. But I’d love to set limits that are too large, have the program run until it fails, let the huge structure allocations go out of scope, and continue with smaller structures, then finally compute and plot the results.

Can this be done? Can I really catch and handle out-of-memory exceptions? I remember in the old days in C that when malloc returned 0, I could handle that in my program.

In Scala/JVM, if I get an out-of-memory exception, does that mean something has been corrupted? Or did the allocation simply fail without corrupting anything?

Handling OOM on the Java VM is very hard in the general case. Once such an error occurs, you know you’re running right at the limit of available memory. Even so much as concatenating two Strings in the error handler might fail.

To guarantee you won’t cascade into further OOM errors, you’d have to pre-allocate everything your error handler needs. Not impossible, and it might make sense in your specific use case.

The way to handle OOMs is to catch OutOfMemoryError in a try/catch block, BTW. Not sure if you knew that from the text of your post. Here’s a link to the Oracle VM’s notes on the subject, which might be as good a place as any to start learning about how the VM behaves in OOM conditions: https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/memleaks002.html
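
For your scenario, something along these lines might work: pre-allocate the storage for your measurements, let the oversized structure go out of scope when its allocation fails, and carry on with what you already have. This is just an untested sketch; `measure`, the doubling size schedule, and the array standing in for your real data structure are all placeholders:

```scala
import scala.collection.mutable.ArrayBuffer

object OomProbe {
  // Pre-allocate the buffer that will hold the results, so nothing new
  // needs to be allocated for bookkeeping once memory gets tight.
  private val results = ArrayBuffer.empty[(Int, Long)]

  // Build and exercise a structure of the given size, returning elapsed nanos.
  // The structure becomes unreachable as soon as this method returns.
  def measure(size: Int): Long = {
    val start = System.nanoTime()
    val data  = new Array[Long](size) // stand-in for your real data structure
    // ... manipulate `data` here ...
    System.nanoTime() - start
  }

  def main(args: Array[String]): Unit = {
    val sizes = Iterator.iterate(1 << 10)(_ * 2) // keep doubling until we fail
    var keepGoing = true
    while (keepGoing && sizes.hasNext) {
      val size = sizes.next()
      try results += ((size, measure(size)))
      catch {
        case _: OutOfMemoryError =>
          // The oversized allocation is already out of scope here, so the
          // GC can reclaim it; we just stop growing and report what we have.
          keepGoing = false
      }
    }
    results.foreach { case (size, nanos) => println(s"$size\t$nanos") }
  }
}
```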

Best regards,

Brian Maso


In Java you don’t get 0 returned from malloc (there’s no malloc in the language, unless you’re using sun.misc.Unsafe or something like it), nor do you get null returned from new; new always returns a non-null reference. If you’re running out of memory, then most of the time the GC will first increase the frequency of GC cycles, and if too much GC overhead is detected, a random thread that was stalled on allocation will get an OutOfMemoryError. If that doesn’t fix the problem, the GC keeps running cycles, interrupts another randomly picked thread with an OoME if it can’t free enough memory, and so on. OTOH, if you’re allocating very big objects (usually a big array), you’ll get OoMEs sooner, without waiting forever for the GC to detect excessive GC overhead.
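
To illustrate that last point, a single oversized allocation fails more or less immediately. A tiny sketch (run it with a small heap, e.g. -Xmx64m, to see the effect; the array size is arbitrary):

```scala
object BigAllocationDemo {
  def main(args: Array[String]): Unit = {
    try {
      // A single ~800 MB request fails fast if it exceeds the heap;
      // the JVM doesn't grind through extra GC cycles first.
      val huge = new Array[Long](100000000) // 100M longs
      println(huge.length)
    } catch {
      case e: OutOfMemoryError =>
        println(s"Allocation failed straight away: ${e.getMessage}")
    }
  }
}
```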

In Java 11+ there’s Epsilon GC, which guarantees quick failure on memory exhaustion, without any GC cycles. Depending on your allocation patterns it could be a solution to your problem, e.g. you could spawn JVMs with Epsilon GC turned on and try successively bigger dataset sizes until the point of failure. I’m not sure it’s the proper solution, though, as Epsilon GC doesn’t collect any garbage, so if your algorithm generates lots of quickly dying garbage (search for “generational hypothesis” in the context of garbage collection to see why that’s the common case), Epsilon GC failures can make the maximum problem size look smaller than it really is.
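
A rough sketch of what that could look like, using scala.sys.process to spawn the child JVMs. The `Bench` main class, the classpath, and the heap size are hypothetical placeholders you’d adjust to your project; the Epsilon flags themselves are the standard -XX:+UnlockExperimentalVMOptions -XX:+UseEpsilonGC pair:

```scala
import scala.sys.process._

object EpsilonProbe {
  def main(args: Array[String]): Unit = {
    // Run each size in a fresh JVM with Epsilon GC and a fixed heap;
    // a non-zero exit code is taken to mean the heap was exhausted.
    val sizes = Iterator.iterate(1 << 10)(_ * 2)
    val firstFailure = sizes.find { size =>
      val cmd = Seq(
        "java",
        "-XX:+UnlockExperimentalVMOptions",
        "-XX:+UseEpsilonGC",
        "-Xmx4g",                // same heap budget for every run
        "-cp", "target/classes", // adjust to your build layout
        "Bench",                 // hypothetical benchmark main class
        size.toString
      )
      cmd.! != 0                 // scala.sys.process: run and return exit code
    }
    firstFailure.foreach(size => println(s"First failing size: $size"))
  }
}
```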