
Monday, May 24, 2010

How to upgrade your Dell’s BIOS directly from Ubuntu

I know this post is totally off topic, but I faced this very issue last week and I’m pretty sure it will be handy for a lot of people out there. So why not share it, right?!

Many people worldwide are migrating from Microsoft Windows to Linux nowadays, especially to Ubuntu, which is probably the friendliest and most stable distribution currently available. I’m sort of one of those people. I’ve always used Unix, but mainly in my work environment. I recently decided to switch one of my desktop PCs, a Dell Optiplex 755, from a 32-bit Windows XP Professional to a brand new 64-bit Ubuntu 10.04. And so far I’m an extremely happy user, I must say (finally making full and intelligent use of my 4GB of RAM and certainly much more efficient use of my Core 2 Duo CPU).

Problem was, as I was eager to get rid of my old Windows XP, I didn’t pay attention to details such as the PC’s BIOS version. By the time I realized it was still A14 while the most recent was A17, I had already installed Ubuntu without any dual-boot option and spent hours installing cool apps and tweaking everything to my own personal taste. As you probably know, flashing the newest BIOS release from Dell without any DOS or Windows partition on the PC would now be quite a hassle, probably involving some freeware tool to create a bootable DOS or Windows recovery CD and then executing Dell’s flash utility from a USB drive or something like that.

As usual, I sought help from our good friend Google and found some promising blogs and forums on the subject. However, none gave me a complete solution. After some deeper research, I was able to put all the necessary pieces together and compile the easiest steps to flash virtually any Dell BIOS directly from Ubuntu’s terminal prompt, using the most reliable source: Dell’s upgrade utility itself.

So, these are the magical steps, compiled from several different forums and articles (they might work on other Linux distributions as well, but I only tried them on Ubuntu):

  1. sudo apt-get update (not always needed, but won’t hurt)
  2. sudo apt-get install libsmbios-bin
  3. sudo getSystemId (displays info about your Dell, including BIOS version, System ID and Service Tag)
  4. Download the most recent BIOS upgrade for your system from http://support.dell.com as you would if using Windows
  5. Execute the downloaded utility on a command line to extract the BIOS image in .hdr format. For example: O755-A17.exe -writehdrfile
    This step can be executed on any Windows machine or directly on Ubuntu using Wine (sudo apt-get install wine).
  6. (you can skip this step if Wine was used in step 5) FTP or copy the extracted .hdr file to your Ubuntu machine
  7. sudo modprobe dell_rbu (loads the dell_rbu kernel driver)
  8. Trigger the BIOS upgrade: sudo dellBiosUpdate -u -f <the .hdr file from step 5>
  9. Reboot the machine (the BIOS will be automatically upgraded during boot)

Voilà! You have safely upgraded your Dell’s BIOS directly from Ubuntu, without having to create any external boot CD or USB drive.

Hope this helps. Enjoy!

Friday, May 7, 2010

The X (Path) File

by Eduardo Rodrigues

This week I came across one of those mysterious problems: I had some test cases that needed to verify the content of some DOM trees to guarantee that the tests went fine. So, of course, the best way to achieve this is using XPath queries. And because the DOM trees involved were all quite simple, I figured writing the XPath queries to verify them would be a walk in the park. But it wasn’t.

I spent hours and hours trying to figure out what I was doing wrong, googling around, but nothing seemed to make any sense at all. Then, just when I was almost giving up and throwing myself out the window, I finally noticed the tiny little detail that explained everything and pointed me to the right solution. The culprit was the default namespace specified in my root element!

Turns out, whenever a namespace URI is specified without any prefix (like xmlns="http://foo.com/mynamespace"), it is considered to be the document’s default namespace, and it usually doesn’t affect parsers. But, as I found out, it does affect XPath big time. XPath, by definition, will always consider namespaces, even the default one. The problem is that, because a default namespace doesn’t have any specific prefix, we completely lose the ability to use the simple and common path-like approach when writing queries to locate nodes in the DOM tree.

Here’s a very simple example that illustrates the issue very well. Consider the following well-formed XML:

<?xml version="1.0" encoding="iso-8859-1"?>
<HR Company="Foo Inc.">
    <Dept id="1" name="Board">
        <Emp id="1">
            <Name>James King</Name>
            <Salary>150000</Salary>
        </Emp>
        <Emp id="10">
            <Name>Jon Doe</Name>
            <Salary>100000</Salary>
            <ManagerId>1</ManagerId>
        </Emp>
        <Emp id="20">
            <Name>Jane Smith</Name>
            <Salary>100000</Salary>
            <ManagerId>1</ManagerId>
        </Emp>
    </Dept>
</HR> 

If I want to check if there’s really an employee named “Jane Smith” earning a 100K salary in the “Board” department, a very simple XPath query such as “//Dept[@name='Board']/Emp[string(Name)='Jane Smith' and number(Salary)=100000]” would easily do the job.

Now just add an innocent default namespace to the root element:

<HR xmlns="http://foo.com/HR" Company="Foo Inc.">

and try that very same XPath query that worked so well before. In fact, even a trivial query like “/HR” won’t work as expected anymore. That’s because XPath considers the default namespace context and therefore requires it to be referenced in the query. But we have no way of referring to that namespace in the query, since it doesn’t have any prefix associated with it. My personal opinion is that this represents a huge design flaw in the XPath spec, but that’s a completely different (and now pointless) discussion.
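The effect is easy to reproduce outside a test suite. Here’s a minimal sketch using Python’s standard xml.etree.ElementTree as a stand-in for whichever XPath engine you use (the element and namespace names come from the example above): once the default namespace appears, the plain path query stops matching, and only a namespace-qualified query finds the node.

```python
import xml.etree.ElementTree as ET

# A trimmed version of the HR document, with and without the default namespace
no_ns = ('<HR Company="Foo Inc."><Dept id="1" name="Board">'
         '<Emp id="20"><Name>Jane Smith</Name><Salary>100000</Salary></Emp>'
         '</Dept></HR>')
with_ns = no_ns.replace('<HR ', '<HR xmlns="http://foo.com/HR" ', 1)

root = ET.fromstring(no_ns)
print(root.find('Dept/Emp/Name').text)   # Jane Smith

root_ns = ET.fromstring(with_ns)
print(root_ns.find('Dept/Emp/Name'))     # None -- same query, no match anymore

# Qualify every step with the namespace and the query matches again
ns = {'hr': 'http://foo.com/HR'}
print(root_ns.find('hr:Dept/hr:Emp/hr:Name', ns).text)  # Jane Smith
```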

Unfortunately, there’s no magic in this case. To keep using XPath queries in this kind of situation, we need a more generic (and less compact) syntax that lets us be explicit about whether we care about fully qualified (expanded) names, taking namespaces into consideration, or only about local names, ignoring namespaces. Below is the very same query, using this more generic syntax in these 2 different naming flavors, both producing the exact same outcome:
  1. If you need (or want) to consider the namespace:
    //*[namespace-uri()='http://foo.com/HR' and local-name()='Dept' and @name='Board']/*[namespace-uri()='http://foo.com/HR'
    and local-name()='Emp' and string(Name)='Jane Smith' and number(Salary)=100000]
  2. If you just care about the elements’ names, then just remove the "namespace-uri" conditions:
    //*[local-name()='Dept' and @name='Board']/*[local-name()='Emp' and string(Name)='Jane Smith' and number(Salary)=100000]
The reason I prefer the local-name() function over name() is simply that, together with namespace-uri(), it is the most generic way of selecting nodes: local-name() doesn’t include the prefix, even if there is one. In other words, even if you had a node such as <hr:Dept>, local-name() would return simply “Dept”, while name() would return “hr:Dept”. It’s much more likely that the prefix for a particular namespace will vary amongst different XML files than its actual URI. Therefore, using predicates that combine the namespace-uri() and local-name() functions should work in any case, regardless of which prefixes are being used at the moment.
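For the record, the same “match by local name only” idea can be mimicked outside XPath too. This is a hedged Python sketch (not any real engine’s code): ElementTree exposes qualified tags as '{uri}local', so stripping the URI part plays the role of local-name():

```python
import xml.etree.ElementTree as ET

xml_doc = ('<HR xmlns="http://foo.com/HR" Company="Foo Inc.">'
           '<Dept id="1" name="Board">'
           '<Emp id="20"><Name>Jane Smith</Name><Salary>100000</Salary></Emp>'
           '</Dept></HR>')
root = ET.fromstring(xml_doc)

def local_name(tag):
    # ElementTree stores qualified tags as '{uri}local'; stripping the URI
    # part mirrors what XPath's local-name() returns
    return tag.rsplit('}', 1)[-1]

# Equivalent of //*[local-name()='Dept' and @name='Board']
#                 /*[local-name()='Emp' and Name='Jane Smith' and Salary=100000]
matches = [
    emp
    for dept in root.iter()
    if local_name(dept.tag) == 'Dept' and dept.get('name') == 'Board'
    for emp in dept
    if local_name(emp.tag) == 'Emp'
    and any(local_name(c.tag) == 'Name' and c.text == 'Jane Smith' for c in emp)
    and any(local_name(c.tag) == 'Salary' and float(c.text) == 100000 for c in emp)
]
print(len(matches))  # 1
```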

Enjoy!


Friday, April 9, 2010

Oracle + Sun + Iron Man 2: Awesome!

A cool Iron Man 2 teaser...

Saturday, March 13, 2010

Don’t be smart. Never implement a resource bundle cache!

by Eduardo Rodrigues

Well, first of all, I’d like to apologize for almost 1 year of complete silence. Since I’ve transferred from Oracle Consulting in Brazil to product development at the HQ in California, it’s been a little bit crazy here. It took a while for me to adjust and adapt to this completely new environment. But I certainly can’t complain, cause it’s been AWESOME! Besides, I'm of the opinion that, if there’s nothing really interesting to say, then it’s better to keep quiet :)

Having said that, today I want to share an interesting experience I had recently here at work. First I’ll try to summarize the story to provide some context.

The Problem

A few months ago, a huge transition happened in Oracle’s internal IT production environment when 100% of its employees (something around 80K users) were migrated from old OCS (Oracle Collaboration Suite) 10g to Oracle’s next generation enterprise collaboration solution: Oracle Beehive. Needless to say, the expectations were big and we were all naturally tense, waiting to see how the system would behave.

Within a week of the system being up and running, some issues related to the component I work on (the open source Zimbra x Oracle Beehive integration) started to pop up. Among those, the most serious was a mysterious memory leak, which had never been detected during any stress test, or even after more than a year in production, but was now causing containers in the mid-tier cluster to crash after a certain period.

After a couple of days analyzing heap dumps and log files, we discovered that the culprits were 2 different resource caches maintained by 2 different components in Zimbra’s servlet layer, both related to its internationalization capabilities. In summary, one was a skin cache and the other was a resource bundle cache.

Once we dove into Zimbra’s source code, we quickly realized we were not really facing a memory leak per se, but an implementation that clearly underestimated the explosive growth in memory consumption that a worldwide deployment like ours could trigger.

Both caches were simply HashMap objects and, ironically, their keys were actually the key to our problem. The map keys were defined as a combination of the client’s locale and user agent and, in the case of the skin cache, the skin name as well. Well… you can probably imagine how many different combinations of these elements are possible in a worldwide deployment, right? Absolutely! In our case, each HashMap would quickly reach 200MB. Of course, consuming 400MB out of 1GB of configured heap space with only 2 objects is not very economical (to say the least).

So, OK. Great! We had found our root cause (which is awesome enough with this kind of hard-to-analyze, production-only bug). But then came the harder part: how could we fix it?!

The Solution

First of all, it’s important to keep one crucial aspect in mind: we were dealing with source code that wasn’t ours; therefore, keeping changes as minimal as possible was paramount.

One thing we noticed right away was that we were most likely creating multiple entries in both maps that ended up containing identical copies of the same skin or resource bundle content. That’s because our system only supported 15 distinct locales, meaning every unsupported client locale would fall back to one of the supported locales, ultimately the default English locale. However, the map key would still be composed with the client’s locale, thus creating a new map entry and, even worse, mapping it to a new copy of the fallback locale’s content. Yes, locales and skins that had already been loaded and processed were constantly being reloaded, reprocessed and added to the caches.
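The idea behind the fix can be sketched in a few lines. This is an illustrative Python sketch, not the actual Zimbra patch; the supported-locale set and names are made up. The point is to normalize any unsupported client locale to its fallback before it becomes part of a cache key:

```python
# Hypothetical stand-in for the 15 supported locales
SUPPORTED = {'en_US', 'fr_FR', 'ja_JP'}

def normalize(locale):
    # Collapse any unsupported client locale to its fallback *before* it is
    # used as a cache key, so 'en_AU', 'en_IN', ... all share one entry
    if locale in SUPPORTED:
        return locale
    lang = locale.split('_')[0]
    for supported in sorted(SUPPORTED):
        if supported.startswith(lang + '_'):
            return supported
    return 'en_US'  # ultimate fallback, like the default English locale

cache = {}
for client in ['en_US', 'en_AU', 'fr_FR', 'fr_CA', 'de_DE']:
    cache.setdefault(normalize(client), object())
print(sorted(cache))  # ['en_US', 'fr_FR'] -- 2 entries instead of 5
```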

So, our first approach was to perform a small intervention with the sole intention of preventing any unsupported client locale from originating a new entry in those maps. Ideally, we would have changed the maps’ key composition, but we were not very comfortable with that idea, mainly because we were not sure we fully understood all its consequences, and fixing one problem by causing another was not an option.

Unfortunately, days after patching the system, our containers were crashing with OutOfMemory errors again. As we discovered – the hard way – simply containing the variation of the locale component in the maps’ key composition was enough to slow down heap consumption, but not enough to avoid the OOM crashes.

Now it was time to put our “fears” aside and dig deeper. And we decided to dig in two simultaneous fronts: the skin cache and the resource bundle cache. In this post, I’ll only talk about the resource bundle front leaving the skin cache front to a next post.

When I say “resource bundle”, I’m actually referring to Java’s java.util.ResourceBundle, more specifically its subclass java.util.PropertyResourceBundle. With that in mind, 2 strange things caught my attention while looking carefully into the heap dumps:

  1. Each ResourceBundle instance had a “parent” attribute pointing to its next fallback locale, and so on, until the ultimate fallback: the default locale. This means each loaded resource bundle could actually encapsulate 2 other bundles.
  2. There were multiple ResourceBundle instances (each with a different memory address) for one and the same locale.

So, number 1 made me realize that the memory consumption issue was even worse than I thought. But number 2 made no sense at all. Why have a cache that is only stocking objects but is not able to reuse existing ones? So I decided to take a look at the source code of class java.util.ResourceBundle in JDK 5.0. The Javadoc says:

Implementations of getBundle may cache instantiated resource bundles and return the same resource bundle instance multiple times.

Well, it turns out Sun’s implementation (the one we use) DOES CACHE instantiated resource bundles. Even better, it uses a soft cache, which means all content is stored as soft references, granting the garbage collector permission to discard one or more of its entries if it decides it needs to free up heap space. Problem solved! – I thought. I just needed to completely remove the unnecessary resource bundle cache from Zimbra’s code and let it take advantage of the JVM’s internal soft cache. And that’s exactly what I tried. But, of course, it wouldn’t be that easy…
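As a rough analogy for readers who think better in code: Python has no soft references, but a weakref.WeakValueDictionary shows the same idea of a cache whose entries the runtime is allowed to reclaim (weak references are more aggressive than Java’s soft references, so treat this purely as a sketch):

```python
import weakref

class Bundle:
    """Stand-in for a loaded resource bundle."""

# The JVM's bundle cache holds soft references; a WeakValueDictionary is a
# (more eager) Python analogy: entries vanish once no strong reference
# to the value remains
cache = weakref.WeakValueDictionary()
bundle = Bundle()
cache['en_US'] = bundle

hit = 'en_US' in cache       # True while the bundle is strongly referenced
del bundle                   # drop the last strong reference...
miss = 'en_US' not in cache  # ...and CPython's refcounting reclaims the entry
print(hit, miss)
```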

Since at this point I already knew exactly how to reproduce the root cause of our problem, I started debugging my modified code, and I was amazed to see that the JVM’s internal cache was also stocking up multiple copies of bundles for identical locales. The good thing was that now I could understand what was causing #2 above. But why?! The only logical conclusion was, again, to blame the cache’s key composition.

The JVM’s resource bundle cache also uses a key, composed of the bundle’s name + the corresponding java.util.Locale instance + a weak reference to the class loader used to load the bundle. But then, how come a second attempt to load a resource bundle named “/zimbra/msg/ZmMsg_en_us.properties”, corresponding to the en_US locale and using the very same class loader, was not hitting the cache?

After a couple of hours thinking I was losing my mind, I finally noticed that, in fact, each time a new load attempt was made, the class loader instance, although of the same type, was never the same. I also noticed that its type was actually an extended class loader implemented by the inner class com.zimbra.webClient.servlet.JspServlet$ResourceLoader. When I checked that code, I immediately realized that the class com.zimbra.webClient.servlet.JspServlet, itself an extension of the real JspServlet being used in the container, was overriding the service() method, creating a new private instance of the custom class loader ResourceLoader and forcefully replacing the current thread’s context class loader with this custom one, which was then used to load the resource bundles.
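The consequence of that service() override is easy to model. Below is a hypothetical Python sketch (names invented, no real Zimbra code): because a fresh loader object is created per request and participates in the cache key, every request misses the cache:

```python
bundle_cache = {}

class ResourceLoader:
    """Stand-in for Zimbra's custom class loader (identity-based hashing)."""

def service(request_no):
    # A *fresh* loader per request, as the overridden service() method did;
    # since the loader is part of the cache key, no request ever hits the cache
    loader = ResourceLoader()
    key = ('ZmMsg', 'en_US', loader)
    if key not in bundle_cache:
        bundle_cache[key] = 'freshly loaded bundle for request %d' % request_no
    return bundle_cache[key]

for i in range(3):
    service(i)
print(len(bundle_cache))  # 3 entries for one and the same bundle
```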

My first attempt to solve this mess was to make the custom class loader also override hashCode() and equals(Object) so they would proxy the parent class loader (which was always the original one replaced in service()). Since the web application’s class loader instance would always be the same during the application’s entire life cycle, both hashCode and equals for the custom loader would consistently return the same results at all times, thus causing the composed keys to match and cached bundles to be reused instead of reloaded and re-cached. And I was wrong once again.

Turns out, as strange as it may look at first sight, when the JVM’s resource bundle cache tries to match keys in its soft cache, instead of calling the traditional equals() to compare the class loader instances, it simply uses the “==” operator, which compares their memory addresses. Actually, if we think about it, we can understand why it was implemented this way: class loaders are never expected to be instantiated over and over again during the life cycle of an application, so why pay the overhead of a method call to equals()?
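That difference between equals() and == is the whole story, and it can be illustrated in a few lines. Here’s a hedged Python sketch, using `is` and id() to play the role of Java’s reference comparison:

```python
class Loader:
    # hashCode()/equals() delegating to a shared parent -- the fix that
    # looked right but wasn't enough
    def __init__(self, parent):
        self.parent = parent
    def __eq__(self, other):
        return isinstance(other, Loader) and self.parent is other.parent
    def __hash__(self):
        return hash(self.parent)

parent = object()
a, b = Loader(parent), Loader(parent)
equal = (a == b)      # True: equals() says the loaders match...
identical = (a is b)  # False: ...but Java's == (identity) says they don't

# A cache that compares keys by identity, as the JVM's bundle cache does,
# misses even though the keys compare as equal
identity_cache = {id(a): 'cached bundle'}
hit = id(b) in identity_cache
print(equal, identical, hit)
```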

Finally, I knew for sure what the definitive solution was. I just needed to turn the private instances of ResourceLoader into a singleton, keeping all the original logic. Bingo! Now I could see the internal bundle cache being hit as it should be. Problem solved, at last!
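The shape of that fix, again as an illustrative Python sketch rather than the actual Java change, is just a classic singleton:

```python
class ResourceLoader:
    # Singleton version: every request reuses the same instance, so
    # identity-keyed cache lookups finally match
    _instance = None
    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

a, b = ResourceLoader(), ResourceLoader()
print(a is b)  # True -- one loader for the application's whole life cycle
```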

In the end, I completely removed the custom resource bundle cache implemented in Zimbra’s servlet layer and made the changes necessary for Zimbra to take full and proper advantage of the built-in bundle cache offered by the JVM. Instead of wasting a lot of time and memory reloading and storing hundreds of resource bundle instances, mostly multiple copies of identical bundles, the JVM’s bundle cache now held no more than the bundles corresponding to the 15 supported locales, despite all the different client locales coming in with clients’ requests. With that, we had finally fixed the memory-burning issue for good.

Conclusion

As this article’s title suggests, don’t try to be smarter than the JVM itself without first checking whether it’s doing its job well enough. Always read the Javadocs carefully and, if needed, check your JVM’s source code to be sure about its behavior.

And remember: never implement a resource bundle cache in your application (at least if you’re using Sun’s JVM), and be very careful when implementing and using your own custom class loaders too.

That’s all for now and…
Keep reading!

Tuesday, July 8, 2008

There's a cook in JDev's development team indeed

Some time ago I was surprised by a peculiar "tip of the day" which simply presented a traditional angel cake recipe.

Well, today I got the confirmation. There's certainly a cook amongst JDev's developers!

Look at the "tip" shown to me today:



Hmmmm... interesting... :)

Wednesday, April 16, 2008

Is there a cook in JDev's team?

This week I found something at least very curious when I launched my JDeveloper 10.1.3.3 as I do almost every morning. This was the "Tip of the Day" it showed me:



Well, I don't know what this means but, anyway, here is a full recipe, just in case: http://www.foodnetwork.com/food/recipes/recipe/0,1977,FOOD_9936_15602,00.html

:)