Showing posts with label performance.

Saturday, March 13, 2010

Don’t be smart. Never implement a resource bundle cache!

by Eduardo Rodrigues

Well, first of all, I’d like to apologize for almost 1 year of complete silence. Since I transferred from Oracle Consulting in Brazil to product development at the HQ in California, things have been a little bit crazy here. It took a while for me to adjust and adapt to this completely new environment. But I certainly can’t complain, because it’s been AWESOME! Besides, I'm of the opinion that, if there’s nothing really interesting to say, it’s better to keep quiet :)

Having said that, today I want to share an interesting experience I had recently here at work. First I’ll try to summarize the story to provide some context.

The Problem

A few months ago, a huge transition happened in Oracle’s internal IT production environment when 100% of its employees (something around 80K users) were migrated from the old OCS (Oracle Collaboration Suite) 10g to Oracle’s next generation enterprise collaboration solution: Oracle Beehive. Needless to say, expectations were high and we were all naturally tense, waiting to see how the system would behave.

Within a week of the system being up and running, some issues related to the component I work on (open source Zimbra x Oracle Beehive integration) started to pop up. Among those, the most serious was a mysterious memory leak, which had never been detected during any stress test, or even after more than a year in production, but was now causing containers in the mid-tier cluster to crash after a certain period.

After a couple of days of heap dump and log file analysis, we discovered that the culprits were two different resource caches maintained by two different components in Zimbra’s servlet layer, both related to its internationalization capabilities. In summary, one was a skin cache and the other was a resource bundle cache.

Once we dove into Zimbra’s source code, we quickly realized we were not really facing a memory leak per se, but an implementation that clearly underestimated how explosively memory consumption can grow in a worldwide deployment like ours.

Both caches were simply HashMap objects and, ironically, their keys were actually the key to our problem. The map keys were defined as a combination of the client’s locale, the user agent and, in the case of the skin cache, the skin name as well. Well… you can probably imagine how many different combinations of these elements are possible in a worldwide deployment, right? Absolutely! In our case, each HashMap would quickly reach 200MB. Of course, consuming 400MB out of 1GB of configured heap space with only 2 objects is not very economical (to say the least).
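
To make the cardinality problem concrete, here’s a minimal sketch of that kind of key composition. The names are hypothetical; this is not Zimbra’s actual code:

    import java.util.HashMap;
    import java.util.Map;

    // Every distinct (locale, user agent, skin) triple becomes a new entry, so
    // worldwide traffic multiplies the key space and the map never stops growing.
    public class SkinCacheSketch {
       private final Map<String, byte[]> cache = new HashMap<String, byte[]>();

       public byte[] getSkin(String locale, String userAgent, String skinName) {
          String key = locale + '|' + userAgent + '|' + skinName;
          byte[] content = cache.get(key);
          if (content == null) {
             content = loadAndProcessSkin(locale, skinName); // expensive reload
             cache.put(key, content);                        // grows without bound
          }
          return content;
       }

       private byte[] loadAndProcessSkin(String locale, String skinName) {
          return new byte[0]; // stands in for the real loading/processing logic
       }
    }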

So, OK. Great! We had found our root cause (which is an achievement in itself with this kind of hard-to-analyze, production-only bug). But then came the harder part: how do we fix it?!

The Solution

First of all, it’s important to keep one crucial aspect in mind: we were dealing with source code that wasn’t ours, so keeping changes as minimal as possible was paramount.

One thing we noticed right away was that we were most likely creating multiple entries in both maps that ended up containing identical copies of the same skin or resource bundle content. That’s because our system only supported 15 distinct locales, meaning every unsupported client locale would fall back to one of the supported ones, ultimately the default English locale. However, the map key would still be composed with the client’s locale, thus creating a new map entry and, even worse, mapping it to a new copy of the fallback locale's content. Yes, locales and skins that had already been loaded and processed were constantly being reloaded, reprocessed and added to the caches.

So, our first approach was to perform a small intervention with the sole intention of preventing any unsupported client locale from originating a new entry in those maps. Ideally, we would have changed the maps’ key composition, but we were not very comfortable with that idea, mainly because we were not sure we fully understood all its consequences, and fixing one problem by causing another was not an option.
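
In essence, the patch normalized the client locale before it could ever reach the caches, along these lines (a sketch assuming a fixed supported set; not the actual diff):

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Locale;
    import java.util.Set;

    // Maps any unsupported client locale to a supported one up front, so it
    // can never originate a brand-new cache entry.
    public class LocaleNormalizer {
       // Hypothetical subset; the real system supported 15 locales.
       private static final Set<String> SUPPORTED = new HashSet<String>(
             Arrays.asList("en_US", "pt_BR", "ja_JP"));

       public static Locale normalize(Locale client) {
          if (SUPPORTED.contains(client.toString())) {
             return client;
          }
          return Locale.US; // ultimate fallback: the default English locale
       }
    }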

Unfortunately, days after patching the system, our containers were crashing with OutOfMemory errors again. As we discovered, the hard way, merely containing the variation of the locale component in the maps’ key composition was enough to slow down heap consumption, but not enough to avoid the OOM crashes.

Now it was time to put our “fears” aside and dig deeper, on two fronts simultaneously: the skin cache and the resource bundle cache. In this post I’ll only cover the resource bundle front, leaving the skin cache for a future post.

When I say “resource bundle”, I’m actually referring to Java’s java.util.ResourceBundle, more specifically its subclass java.util.PropertyResourceBundle. With that in mind, two strange things caught my attention while looking carefully into the heap dumps:

  1. Each ResourceBundle instance had a “parent” attribute pointing to its next fallback locale, and so on, until the ultimate fallback, the default locale. This means each loaded resource bundle could actually encapsulate two other bundles.
  2. There were multiple ResourceBundle instances (each with a different memory address) for one and the same locale.

So, number 1 made me realize the memory consumption issue was even worse than I thought. But number 2 made no sense at all. Why have a cache that only stockpiles objects but can’t reuse the existing ones? So I decided to take a look at the source code of the java.util.ResourceBundle class in JDK 5.0. Its Javadoc says:

Implementations of getBundle may cache instantiated resource bundles and return the same resource bundle instance multiple times.

Well, it turns out Sun’s implementation (the one we use) DOES CACHE instantiated resource bundles. Even better, it uses a soft cache: all content is stored as soft references, granting the garbage collector permission to discard one or more entries whenever it needs to free up heap space. Problem solved! – I thought. I just needed to completely remove the unnecessary resource bundle cache from Zimbra’s code and let it take advantage of the JVM’s internal soft cache. And that’s exactly what I tried. But, of course, it wouldn’t be that easy…
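
In other words, the plan was to delegate caching entirely to the JDK, roughly like this (a sketch; the bundle basename here is an assumption for illustration):

    import java.util.Locale;
    import java.util.ResourceBundle;

    // Sun's ResourceBundle.getBundle keeps its own soft-reference cache keyed
    // by (basename, locale, class loader), so repeated lookups are cheap.
    public class BundleLookup {
       public static String getMessage(String key, Locale locale, ClassLoader loader) {
          ResourceBundle bundle =
                ResourceBundle.getBundle("zimbra.msg.ZmMsg", locale, loader);
          return bundle.getString(key);
       }
    }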

Since at this point I already knew exactly how to reproduce the root cause of our problem, I started debugging my modified code, and I was amazed to see that the JVM’s internal cache was also stockpiling multiple copies of bundles for identical locales. The good news was that now I could understand what was causing #2 above. But why?! The only logical conclusion was, again, to blame the cache’s key composition.

The JVM’s resource bundle cache also uses a key, composed of the bundle’s name + the corresponding java.util.Locale instance + a weak reference to the class loader used to load the bundle. But then, how come a second attempt to load a resource bundle named “/zimbra/msg/ZmMsg_en_us.properties”, for the en_us locale and using the very same class loader, was not hitting the cache?

After a couple of hours thinking I was losing my mind, I finally noticed that, in fact, each time a new load attempt was made, the class loader instance, although of the same type, was never the same. I also noticed that its type was actually an extended class loader implemented by the inner class com.zimbra.webClient.servlet.JspServlet$ResourceLoader. When I checked that code, I immediately realized that com.zimbra.webClient.servlet.JspServlet, itself an extension of the real JspServlet used by the container, was overriding the service() method, creating a new private instance of the custom ResourceLoader class loader on each call, and forcefully replacing the current thread’s context class loader with it. That custom loader was then used to load the resource bundles.

My first attempt to solve this mess was to make the custom class loader also override hashCode() and equals(Object) so they would proxy the parent class loader (which was always the original one replaced in service()). Since the web application’s class loader instance remains the same during the application’s entire life cycle, both hashCode() and equals() for the custom loader would consistently return the same results at all times, causing the composed keys to match and cached bundles to be reused instead of reloaded and re-cached. And I was wrong once again.
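
In sketch form, the idea was the following (hypothetical class name; the real ResourceLoader wraps a lot more logic):

    // The (failed) first attempt: make the custom loader's identity methods
    // delegate to the stable web-application class loader it wraps.
    public class ProxyingLoaderSketch extends ClassLoader {
       public ProxyingLoaderSketch(ClassLoader parent) {
          super(parent);
       }

       public int hashCode() {
          return getParent().hashCode(); // same result for every instance
       }

       public boolean equals(Object other) {
          return other instanceof ProxyingLoaderSketch
                && getParent().equals(((ProxyingLoaderSketch) other).getParent());
       }
    }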

It turns out that, as strange as it may look at first sight, when the JVM’s resource bundle cache tries to match keys in its soft cache, instead of calling the traditional equals() to compare the class loader instances, it simply uses the “==” operator, which compares their memory addresses. Actually, if we think about it, we can understand why it was implemented this way: class loaders are not expected to be instantiated over and over again during the life cycle of an application, so why pay the overhead of a call to equals()?

Finally, I knew the definitive solution for sure: I just needed to turn the private instances of ResourceLoader into a singleton, keeping all the original logic. Bingo! Now I could see the internal bundle cache being hit as it should be. Problem solved, at last!
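
A minimal sketch of the shape of the fix (hypothetical names, with the original loading logic elided):

    // One shared loader instance for the whole application, so the JVM
    // cache's "==" comparison on class loaders can finally match.
    public class JspServletSketch {
       private static final ClassLoader RESOURCE_LOADER =
             new ClassLoader(JspServletSketch.class.getClassLoader()) {
                // ...the original ResourceLoader logic would live here...
             };

       public void service() {
          // Reuse the singleton instead of creating a new loader per request.
          Thread.currentThread().setContextClassLoader(RESOURCE_LOADER);
       }
    }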

In the end, I completely removed the custom resource bundle cache from Zimbra’s servlet layer and made the changes necessary for Zimbra to take full and proper advantage of the JVM’s built-in bundle cache. Instead of wasting a lot of time and memory reloading and storing hundreds of resource bundle instances, mostly identical copies, the JVM’s bundle cache now held no more than the bundles for the 15 supported locales, no matter how many different client locales came in with requests. With that, we had finally fixed the memory-burning issue for good.

Conclusion

As this article’s title suggests, don’t try to be smarter than the JVM itself without first checking whether it’s already doing its job well enough. Always read the Javadocs carefully and, if needed, check your JVM’s source code to be sure about its behavior.

And remember… never implement a resource bundle cache in your application (at least if you’re using Sun’s JVM), and be very careful when implementing and using your own custom class loaders too.

That’s all for now and…
Keep reading!

Sunday, April 20, 2008

A comprehensive XML processing benchmark

by Eduardo Rodrigues


Introduction


I think I've already mentioned it here, but anyway: I'm currently leading a very interesting and challenging project for a big telecom company here in Brazil. The project is basically a complete reconstruction of the current data loading system used to process, validate and load all cellphone statements, which are stored as XML files, into an Oracle CMSDK 9.0.4.2.2 repository. For those who aren't familiar with it, Oracle CMSDK is an old content management product, successor of the even older Oracle iFS (Internet File System). Because it's not an open repository, we are obliged to use its Java API if we want to programmatically load or retrieve data into or from the repository. That obviously prevents us from taking advantage of some of the newest tools available, like Oracle's XML DB or even the recent Oracle Data Integrator.

Motivation


One of our biggest concerns in this project is the performance the new system must deliver; the SLA is really aggressive. So we decided to do some research on the newest XML processing technologies available, then try and compare them to determine which ones would really help us in the most efficient way. The only constraints: we must not consider any non-industry-standard solution, nor any non-production (or non-stable) release.

Test Scenarios


That said, based on research and also on previous experience, these were the technologies I chose to test and compare: Apache Digester, Sun JAXB 2, the Apache Xerces2 SAX2 parser, the Oracle SAX2 parser and the Woodstox StAX 1 parser (combined as listed further below).

I initially discarded DOM parsers given the large average size of the XML files we'll be dealing with; we most certainly can't afford the excessive memory consumption involved. I also discarded the Oracle StAX Pull Parser, because it was still a preview release, and the J2SE 5.0 built-in XML parsers, since I know they're a proprietary implementation of Apache Xerces based on a version certainly older than 2.9.1.

The test scenario designed was very simple, intended only to measure and compare performance and memory consumption. The test job was simply to parse a real-world XML file containing one phone statement, retrieving and counting a predefined set of elements and attributes. In summary, the rules were (for privacy's sake, the real XML structure won't be revealed):
  1. Parse all occurrences of "/root/child1/StatementPage" element
  2. For each <StatementPage> do:
    1. Store and print out value of attribute "/root/child1/StatementPage/PageInfo/@pageNumber"
    2. Store and print out value of attribute "/root/child1/StatementPage/PageInfo/@customerCode"
    3. Store any occurrence of element <ValueRecord>, along with all its attributes, within page's subtree
    4. Print out the number of <ValueRecord> elements stored
  3. Print out the total number of <StatementPage> elements parsed
  4. Print out the total number of <ValueRecord> elements parsed

Also, every test was performed against two different XML files: a small one (6.5MB), containing a total of 420 statement pages and 19,133 value records, and a large one (143MB), with 7,104 pages and 464,357 value records.

Based on the rules above, I then tested and compared the following technology sets:
  1. Apache Digester using Apache Xerces2 SAX2 parser
  2. Apache Digester using Oracle SAX2 parser
  3. Sun JAXB2 using Xerces2 SAX2 parser
  4. Sun JAXB2 using Oracle SAX2 parser
  5. Sun JAXB2 using Woodstox StAX1 parser
  6. Pure Xerces2 SAX2 parser
  7. Pure Oracle SAX2 parser
  8. Pure Woodstox StAX1 parser

Based on this tutorial fragment from Sun (http://java.sun.com/webservices/docs/1.6/tutorial/doc/SJSXP3.html), and considering that performance is our primary goal, I chose StAX's cursor API (XMLStreamReader) over the iterator API. Still aiming for performance, all tested parsers were configured as non-validating.
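
For reference, here's roughly what the StAX cursor-based test looks like. This is a reconstruction using the element and attribute names from the rules above; everything else (class name, file handling) is my own scaffolding, not the actual benchmark code:

    import java.io.FileInputStream;
    import javax.xml.stream.XMLInputFactory;
    import javax.xml.stream.XMLStreamConstants;
    import javax.xml.stream.XMLStreamReader;

    // Streams the document with the cursor API, counting pages and value
    // records without ever building a tree in memory.
    public class StaxBenchmarkSketch {
       public static void main(String[] args) throws Exception {
          XMLInputFactory factory = XMLInputFactory.newInstance();
          factory.setProperty(XMLInputFactory.IS_VALIDATING, Boolean.FALSE);
          XMLStreamReader reader =
                factory.createXMLStreamReader(new FileInputStream(args[0]));

          int pages = 0;
          int valueRecords = 0;
          while (reader.hasNext()) {
             if (reader.next() == XMLStreamConstants.START_ELEMENT) {
                String name = reader.getLocalName();
                if ("StatementPage".equals(name)) {
                   pages++;
                } else if ("PageInfo".equals(name)) {
                   System.out.println("page=" + reader.getAttributeValue(null, "pageNumber")
                         + " customer=" + reader.getAttributeValue(null, "customerCode"));
                } else if ("ValueRecord".equals(name)) {
                   valueRecords++; // attributes are available via getAttributeValue(...)
                }
             }
          }
          reader.close();
          System.out.println(pages + " pages, " + valueRecords + " value records");
       }
    }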

For the record: all tests were executed on a Dell Latitude D620 notebook, with an Intel Centrino DUO T2400 CPU @ 1.83GHz, running Windows XP Professional SP2 and Sun's Java VM 1.5.0_15 in client mode.

Results


These were the performance results obtained after parsing the small XML file (for obvious reasons, I decided to measure heap usage only when the large file was parsed):

Performance results for small XML file
As you can see, Apache Digester's performance was surprisingly poor despite all my efforts to improve it. So I had no choice but to discard it for the next tests with the large XML file, whose results are presented below:

Performance results for large XML file
Notice that the tendency toward better performance when the <!DOCTYPE> tag is removed from the XML document was clearly confirmed here.

As for the memory allocation comparison, I once again narrowed the tests down to the worst case from the performance tests above: the large XML file including the <!DOCTYPE> tag. The results obtained from JDeveloper's memory profiler were:

Memory allocation for large XML file
Another interesting piece of information we can extract from these tests is how much overhead XML binding represents when compared to a straight parser:

Overhead charts

Conclusion


After a careful and thorough review of all the results obtained from the tests described here, I tend to recommend a mixed solution. Considering the nearly 12MB/s throughput verified here, I'd certainly choose the pure Woodstox StAX parser whenever I have to deal with medium to large XML sources; for convenience, though, I'd choose JAXB 2 whenever an XML schema is available to compile its classes from and the size of the source XML is not a concern.
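
For the JAXB 2 side of that recommendation, the usage pattern is essentially the one below. This is only a sketch: "com.example.statement" stands in for the package of classes that xjc would generate from the real (confidential) schema:

    import java.io.File;
    import javax.xml.bind.JAXBContext;
    import javax.xml.bind.Unmarshaller;

    // Binds the whole document to generated objects. Convenient when a schema
    // exists, but the entire tree lives in memory, so file size matters.
    public class JaxbSketch {
       public static void main(String[] args) throws Exception {
          JAXBContext context = JAXBContext.newInstance("com.example.statement");
          Unmarshaller unmarshaller = context.createUnmarshaller();
          Object statement = unmarshaller.unmarshal(new File(args[0]));
          System.out.println("Unmarshalled: " + statement.getClass().getName());
       }
    }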

As for complexity, I really can't say that any of the tested technologies was considerably more complex to implement than the others. In fact, I don't think this would be an issue for anybody with average experience in XML processing.

Important Note


Just out of curiosity, I also tested Codehaus StaxMate 1.1 along with the Woodstox StAX parser. It's a helper library built on top of StAX to provide an easier-to-use abstraction layer over the StAX cursor API. I can confirm the implementors' claim that StaxMate doesn't add any significant performance overhead; in fact, the results were identical to pure Woodstox StAX when parsing the large XML file. I can also say that it really made my job much easier. The only reason I won't consider StaxMate is that it depends on a non-standard extension of the StAX 1.0 API which the folks at Codehaus call "StAX2".

That's all for now.

Enjoy and... keep reading!

Sunday, September 2, 2007

JavaOne 2007 - Performance Tips 2 - Finish the finalizers!

by Eduardo Rodrigues

Continuing from my last post about lessons learned at JavaOne'07 on Java performance since JDK 1.5, there's something we usually don't pay much attention to but which can get us into trouble: object finalizers.

Every time we override the protected void finalize() throws Throwable method, we implicitly create a postmortem hook to be called by the garbage collector after it finds the object unreachable and before it actually reclaims the object's memory space. In general, we override finalize() with the best of intentions: to ensure that all necessary disposal of system resources and any other cleanup is performed before the object is permanently discarded. So why is that an issue?

Well, we should all know that finalize() is an empty method declared in the java.lang.Object class and therefore inherited by every Java class. When it's overridden, the JVM can no longer assume the default trivial finalization for the object, which means "fast allocation" won't happen here. In fact, "finalizable" objects have much slower allocation simply because the VM must keep track of all the finalize() hooks. Besides, those objects also give the GC much more work. It takes at least 2 GC cycles (which are also slower) to reclaim a "finalizable" object. The first is the usual one, in which the GC identifies the object as garbage; the difference is that now it has to enqueue the object on the finalization queue. Only during a later cycle does the GC dequeue the object and call its finalize() method and, if we're lucky, discard the object and reclaim its space; otherwise, it may take yet another cycle to finally get rid of the object.

If we look closer, we'll notice that putting more pressure on the GC and slowing down both allocation and finalization are not the only problems here. Let's take a quick look at the J2SE 5.0 API Javadoc for the Object.finalize() method:

"(...) After the finalize method has been invoked for an object, no further action is taken until the Java virtual machine has again determined that there is no longer any means by which this object can be accessed by any thread that has not yet died, including possible actions by other objects or classes which are ready to be finalized, at which point the object may be discarded. The finalize method is never invoked more than once by a Java virtual machine for any given object. Any exception thrown by the finalize method causes the finalization of this object to be halted (...)"

It is quite clear to me that there's a potential temporary (or even permanent) "memory leak" hidden in that piece of Javadoc. Since the JVM is obligated to execute the finalize() method before discarding any object that overrides it, due to the additional GC cycles described above, not only will that specific object be retained longer in the heap, but so will any other objects still reachable from it. On the other hand, even after executing finalize(), the VM will not reclaim an object's space if, by any means, it may still be accessed by any object or class in any living thread, even ones that are themselves ready to be finalized. As if that weren't enough, if any exception is thrown and left uncaught during finalize() execution, the finalization of the object is halted and there's a good chance the object will be retained forever as garbage.

Finally, the fact that the finalize() method is never invoked more than once for any given object certainly implies the use of synchronization, which is one more performance-threatening element.

So, next time you consider writing a finalizer for a class, please take a second look at it. And if you really have to do it, be really careful with the code you write and try to follow these tips:
  • Use finalizers only as a last resort!

  • Even if you do not explicitly override the finalize() method, library classes you extend may have done it. Look at the example below:

    class MyFrame extends JFrame {
       private byte[] buffer = new byte[16*1024*1024];
       (...)
    }

    In JDK 1.5 and earlier, the 16MB buffer will survive at least 2 GC cycles before any MyFrame instance is discarded. That's because the JFrame library class does declare a finalizer. So, try to split objects in cases like this:

    class MyFrame {
       private JFrame frame;
       private byte[] buffer = new byte[16*1024*1024];
       (...)
    }

  • Even if you're considering using a finalizer to dispose of expensive and scarce resources, keep in mind that, being scarce, they will very likely be exhausted long before memory is (memory is usually plentiful). So, in these cases, prefer to pool scarce resources and release them deterministically, as in the sketch below.
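
Whichever strategy you choose, the deterministic alternative to a finalizer is an explicit release guarded by try/finally. Here's a generic sketch of the pattern (my example, not tied to any particular library):

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    // Deterministic cleanup with try/finally instead of a finalizer that may
    // run late or, if an exception halts finalization, never.
    public class ExplicitCleanup {
       public static long countBytes(String path) throws IOException {
          InputStream in = new FileInputStream(path);
          try {
             long count = 0;
             while (in.read() != -1) {
                count++;
             }
             return count;
          } finally {
             in.close(); // released as soon as we're done, no GC involved
          }
       }
    }
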
To be continued...

Sunday, June 24, 2007

JavaOne 2007 - Performance tips

by Eduardo Rodrigues
Hello everybody!

I know I've promised more posts with my impressions on JavaOne 2007. So, here it goes...

Some of the most interesting technical sessions I attended were on J2SE performance and monitoring. In fact, I would highlight TS-2906, "Garbage Collection-Friendly Programming", by John Coomes, Peter Kessler and Tony Printezis from the Java SE Garbage Collection Group at Sun Microsystems. They certainly gave me a new vision of the newest GCs available.

And what does GC-friendly programming have to do with performance? Well, if you manage to write code that doesn't needlessly waste GC processing, you'll implicitly be avoiding major performance impacts to your application.

Today there are different kinds of GCs and a variety of approaches to them too. We have generational GCs, which keep young and old objects separately in the heap and use specific algorithms for each generation. We also have incremental GCs, which try to minimize GC disruption by working in parallel with the application. There's also the possibility of mixing both: a generational GC with the incremental approach applied only to the old generation space. Besides these, we have compacting and non-compacting GCs; copying, mark-sweep and mark-compact algorithms; linear and free-list allocation; and so on. Yeah... I know... another alphabet soup. If you want to know more about them, there are some interesting resources available online.


The first and most basic question should be "how do I create work for the GC?", and the most common answers are: allocating new memory (a higher allocation rate implies more frequent GCs), "live data" size (more work to determine what's live) and reference field updates (more overhead for the application and more work for the GC, especially for generational or incremental collectors). With that in mind, here are some helpful tips for writing GC-friendly code:
  • Object Allocation

    In recent JVMs, object allocation is usually very cheap: just 10 native instructions in the fast common cases. As a matter of fact, if you think C/C++ has faster allocation, you're wrong. Reclaiming new objects is very cheap too (especially for young generation spaces in generational GCs). So, do not be afraid to allocate small objects for intermediate results, and remember the following:

  • GCs, in general, love small immutable objects, and generational GCs love small, short-lived ones;

  • Always prefer short-lived immutable objects to long-lived mutable ones;

  • Avoid needless allocation, but prefer clearer, simpler code with more allocations over more obscure code with fewer allocations.

  • As a simple and great example of how the tiniest details may jeopardize performance, take a look at the code below:

    public void printVector(Vector v) {
       for (int i=0; v != null && i < v.size(); i++) {
          String s = (String) v.elementAt(i);
          System.out.println(s.trim());
       }
    }


     This may look like very innocent code, but almost every part of it can be optimized for performance. Let's see... First of all, using the expression "v != null && i < v.size()" as the loop condition generates totally unnecessary overhead. Also, declaring the String s inside the loop implies needless allocation and, last but not least, using System.out.println is always an efficient way of making your code really slow (and it's inside the loop!). So, we could rewrite the code like this:

    public void printVector(Vector v) {
       if (v != null) {
          StringBuffer sb = new StringBuffer();
          int size = v.size();

          for (int i=0; i < size; i++) {
             sb.append(((String)v.elementAt(i)).trim());
             sb.append("\n");
          }

          System.out.print(sb);
       }
    }


    And if we're using J2SE 1.5, we could do even better:

    public void printVector(Vector<String> v) {
    //using Generics to define the vector's content type

       if (v != null) {
          StringBuilder sb = new StringBuilder();
          //faster than StringBuffer since
          //it's not synchronized and thread-safety
          //is not a concern here

          for (String s : v) { //enhanced for loop
             sb.append( s.trim() );
             //we're using Generics, so
             //there's no need for casting
             sb.append( "\n" );
          }

          System.out.print(sb);
       }
    }


  • Large Objects

    Very large objects are obviously more expensive to allocate and to initialize (zeroing). Also, large objects of different sizes can cause memory fragmentation (especially if you're using a non-compacting GC). So, the message here is: always try to avoid large objects if you can.


  • Reference Field Nulling

    Contrary to what many may think, nulling reference fields rarely helps the GC. The exception is when you're implementing array-based data structures, as the sketch below shows.
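
    The classic illustration is a stack backed by an array. A minimal sketch (my example, not from the talk):

    // An array-based stack must null out popped slots; otherwise the array
    // keeps popped objects reachable and the GC can never reclaim them.
    public class ArrayStack {
       private Object[] elements = new Object[16];
       private int size = 0;

       public void push(Object e) {
          if (size == elements.length) {
             Object[] bigger = new Object[2 * size];
             System.arraycopy(elements, 0, bigger, 0, size);
             elements = bigger;
          }
          elements[size++] = e;
       }

       public Object pop() {
          Object result = elements[--size]; // empty-stack check omitted for brevity
          elements[size] = null; // here nulling really does help the GC
          return result;
       }
    }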


  • Local Variable Nulling

    This is totally unnecessary since the JIT (Just-In-Time compiler) is able to do liveness analysis by itself. For example:

    void foo() {
       int[] array = new int[1024];
       populate(array);
       print(array);
       //last use of array in method foo()
       array = null;
       //unnecessary! array is no
       //longer considered live by the GC
       ...
    }


  • Explicit GCs

    Avoid them at all costs! The application does not have all the information needed to decide when a garbage collection should take place; besides, a call to System.gc() at the wrong time can hurt performance with no benefit. That's because, at least in HotSpot™, System.gc() does a "stop-the-world" full GC. A good way of preventing this is to use the -XX:+DisableExplicitGC option when starting the JVM, which makes it ignore System.gc() calls.

    Libraries can also make explicit System.gc() calls. An easy way to find out is to run FindBugs and check for them.

    If you're using Java RMI, keep in mind that it relies on System.gc() for its distributed GC algorithm; try to decrease its frequency, and use the -XX:+ExplicitGCInvokesConcurrent option when starting the JVM, as in the example below.
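
    In practice, the startup line would look something like this (the gcInterval values are in milliseconds and just an example; MyRmiApplication is a placeholder):

    java -XX:+ExplicitGCInvokesConcurrent \
         -Dsun.rmi.dgc.client.gcInterval=3600000 \
         -Dsun.rmi.dgc.server.gcInterval=3600000 \
         MyRmiApplication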


  • Data Structure Sizing

    Avoid frequent resizing and try to size data structures as realistically as possible. For example, the code below will allocate the backing array twice:

    ArrayList list = new ArrayList();
    list.ensureCapacity(1024);


    So, the correct version would be:

    ArrayList list = new ArrayList(1024);


  • And remember... array copying operations, even when using direct memory-copying methods (like System.arraycopy() or Arrays.copyOf() in J2SE 6), should always be used carefully.

  • Object Pooling

    This is another old paradigm that must be broken, since it brings terrible allocation performance. As you may remember from the first item above, the GC loves short-lived immutable objects, not long-lived and highly mutable ones. Unused objects in pools are like a bad tax: they are alive, so the GC must process them, yet they provide no benefit because the application is not using them.

    If pools are too small, you get allocations anyway. If they are too large, you get too much footprint overhead and more pressure on the GC.

    Because any object pool must be thread-safe by default, synchronized methods and/or code blocks are implicit, and that defeats the JVM's fast allocation mechanism.

    Of course, there are some exceptions, like pools of objects that are expensive to allocate and/or initialize, or that represent scarce resources like threads and database connections. But even in these cases, always prefer existing, well-known libraries.
to be continued...

Wednesday, June 6, 2007

JDeveloper Tips #2: Fine-tuning the configuration

by Eduardo Rodrigues
Yet another great tip, this one especially directed at those using JDeveloper on Windows.

It may seem strange, but the number of programmers aware of the possibility of customizing JDev's initialization settings isn't as big as you might expect. Many don't even know a configuration file exists. Well, there is one, and it's located at %JDEV_HOME%\jdev\bin\jdev.conf (%JDEV_HOME% being the directory where you installed JDeveloper). If you open this file you'll see a great number of options, properties, etc. The folks at Oracle did their job and commented every one of them, so it won't be difficult to figure out their purpose.

Having said that, I'd like to share with you some lessons learned through my own experience that have certainly made my work with JDeveloper much smoother:

#
# This is optional but it's always
# interesting to keep your JDK up to date
# as long as you stay in version 1.5
#
SetJavaHome C:\Program Files\Java\jdk1.5.0_12

#
# Always a good idea to set your User Home
# appropriately. To do so, you must
# configure an environment variable in
# the operating system and set its value
# with the desired path
# (i.e. JDEV_USER_HOME=D:\myWork\myJDevProjs).
# Then you must set the option below with
# the variable's name.
#
# You'll notice that when you change
# the user home directory, JDev will ask
# you if you want to migrate from a
# previous version. That's because it
# expects to find a "system" subdirectory.
# If you don't want to lose all your config
# I recommend that you copy the "system"
# folder from its previous location
# (%JDEV_HOME%\jdev\system is the default) to
# your new JDEV_USER_HOME before restarting
# JDev.
#
SetUserHomeVariable JDEV_USER_HOME

#
# Set VFS_ENABLE to true if your
# projects contain a large number of files.
# You should use this especially if
# you're using a versioning system.
#
AddVMOption -DVFS_ENABLE=true

#
# Try to make JDev always fit in your available
# physical memory.
# I really don't recommend setting the maximum
# heap size to less than 512M but sometimes it's
# better doing this than having to get along with
# unpleasant Windows memory swapping.
#
# Just a reminder: this option does not establish
# an upper limit for the total memory allocated
# by the JVM. It limits only the heap area.
#
AddVMOption -Xmx512M

#
# Use these options below ONLY IF you're
# running JDeveloper on a multi-processor or
# multi-core machine.
#
# These options are designed to optimize the pause
# time for the hotspot VM.
# These options are ignored by ojvm with an
# information message.
#
AddVMOption -XX:+UseConcMarkSweepGC
AddVMOption -XX:+UseParNewGC
AddVMOption -XX:+CMSIncrementalMode
AddVMOption -XX:+CMSIncrementalPacing
AddVMOption -XX:CMSIncrementalDutyCycleMin=0
AddVMOption -XX:CMSIncrementalDutyCycle=10

#
# On a multi-processor or multi-core machine you
# may uncomment this option in order to
# limit CPU consumption by Oracle JVM client.
#
# AddVMOption -Xsinglecpu

#
# This option isn't really documented but
# it's really cool!
# Use this to prevent Windows from paging JDev's memory
# when you minimize it.
# This option should have the same effect as
# the KeepResident plug-in with the advantage
# of being a built-in feature in Sun's JVM 5.
#
AddVMOption -Dsun.awt.keepWorkingSetOnMinimize=true