Monday, May 24, 2010

How to upgrade your Dell’s BIOS directly from Ubuntu

I know this post is totally off topic, but I faced this issue last week and I'm pretty sure it will come in handy for a lot of people out there. So why not share it, right?!

Many people worldwide are migrating from Microsoft Windows to Linux nowadays, especially to Ubuntu, which is probably the friendliest and most stable distribution currently available. I'm sort of one of those people. I've always used Unix, but mainly at work. I recently decided to switch one of my desktop PCs, a Dell Optiplex 755, from 32-bit Windows XP Professional to a brand new 64-bit Ubuntu 10.04. And so far I'm an extremely happy user, I must say (finally making full and intelligent use of my 4 GB of RAM, and certainly much more efficient use of my Core 2 Duo CPU).

Problem was, as I was eager to get rid of my old Windows XP, I didn't pay attention to details such as the PC's BIOS version. By the time I realized it was still A14 while the most recent was A17, I had already installed Ubuntu without any dual-boot option and spent hours installing cool apps and tweaking everything to my own personal taste. As you probably know, flashing the newest BIOS release from Dell without any DOS or Windows partition on the PC would now be quite a hassle: it would probably involve using some freeware to create a bootable DOS or Windows recovery CD and then executing Dell's flash utility from a USB drive, or something like that.

As usual, I sought help from our good friend Google and found some promising blogs and forums on the subject. However, none gave me a complete solution. After some deeper research, I was able to put all the necessary pieces together and compile the easiest steps to flash virtually any Dell's BIOS directly from Ubuntu's terminal, using the most reliable source: Dell's own upgrade utility.

So, these are the magical steps, compiled from several different forums and articles (they might work on other Linux distributions as well, but I've only tried them on Ubuntu); a consolidated terminal session is sketched right after the list:

  1. sudo apt-get update (not always needed, but it won't hurt)
  2. sudo apt-get install libsmbios-bin
  3. sudo getSystemId (displays info about your Dell, including BIOS version, System ID and Service Tag)
  4. Download the most recent BIOS upgrade for your system from http://support.dell.com, just as you would if you were using Windows
  5. Execute the downloaded utility from a command line to extract the .hdr file. For example: O755-A17.exe -writehdrfile
    This step can be executed on any Windows machine or directly on Ubuntu using Wine (sudo apt-get install wine).
  6. (you can skip this step if Wine was used in step 5) FTP or copy the extracted .hdr file to your Ubuntu machine
  7. sudo modprobe dell_rbu (loads the dell_rbu kernel driver)
  8. Trigger the BIOS upgrade using sudo dellBiosUpdate -u -f <the .hdr file from step 5>
  9. Reboot the machine (the BIOS will be automatically upgraded during boot)
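
For convenience, here's the whole procedure as a single terminal session. This is just a sketch: the O755-A17 file names match my Optiplex 755 example (and the exact name of the extracted .hdr may differ), so substitute the files for your own model.

sudo apt-get update
sudo apt-get install libsmbios-bin wine
sudo getSystemId                         # note your System ID and current BIOS version
# download the matching update from http://support.dell.com, e.g. O755-A17.exe
wine O755-A17.exe -writehdrfile          # extracts the .hdr file into the current directory
sudo modprobe dell_rbu                   # load the Dell BIOS update kernel driver
sudo dellBiosUpdate -u -f O755-A17.hdr   # stage the new BIOS image
sudo reboot                              # the upgrade is applied during the next boot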

Voilà! You have safely upgraded your Dell's BIOS directly from Ubuntu, without having to create any external boot CD or USB drive.

Hope this helps. Enjoy!

Wednesday, May 12, 2010

Micromanaging Memory Consumption

by Eduardo Rodrigues

As we all know, especially since Java 5.0, the JVM team has been doing a good job and has significantly improved many key aspects of the platform, especially performance and memory management, which basically translates into our good old friend, the garbage collector (a.k.a. GC).

In almost every article I've read on the subject of memory, including those from Sun itself, a couple of recommendations were always present. Briefly, they are:

The JVM loves small, short-lived objects.

Don't "null out" variables (myObject = null;) when you decide they aren't needed anymore as a way of hinting to the GC that the objects those variables referenced may be disposed of.

I guess, after reading this "message" so many times, I finally internalized it in the form of a programming style, if I may. It's actually very simple and takes advantage of a very basic structure, one so common it's taken for granted: the well-known code block. Yes, I'm talking about those code snippets squeezed and indented between a pair of curly braces, like { <my lines of code go here> }.

In general, most programmers use these structures just because they have to; they're mandatory in so many parts of the Java syntax. You need them when declaring classes, methods, try-catch-finally blocks, multi-line for loops, multi-line if-else blocks, etc. But the detail many programmers seem to forget is that a code block may actually be defined anywhere in a method body, unattached to any particular keyword or statement. Even more, code blocks can be nested as well.

Besides being syntactically mandatory in those places, code blocks demarcated by opening and closing curly braces also imply a very important feature of the language: code blocks define variable scopes! I'll explain…

Any variable that happens to be declared inside a curly-brace-demarcated code block "exists" only within the context of that particular block (its scope). Such variables are called "local variables". In fact, if we try to use a local variable outside of its scope, we get a compilation error, because that variable literally doesn't exist outside the code block (or scope) where it was declared. And right there lies the very core of this best-practice tip, as the snippet below illustrates.
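
Here's a trivial, self-contained illustration (a sketch of mine, not from any particular codebase): the bare inner block below is perfectly legal Java, and the variable it declares simply doesn't exist outside of it.

public class ScopeDemo
{
    public static void main(final String[] args)
    {
        { // a bare, nested code block: legal anywhere inside a method body
            int x = 42;
            System.out.println(x); // fine: x is in scope here
        }
        // System.out.println(x); // compilation error: x doesn't exist in this scope
    }
}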

Specifying well-defined scopes for all your local variables is actually a better way of hinting to the GC which strong references are still in use when it kicks in. Simply put, any strong reference held by a variable declared in a scope that is no longer executing is clearly out of use as far as the GC is concerned, which increases the chances of proper and prompt disposal of the referenced object (provided, of course, that no other strong references to it exist).

So, to better illustrate the point, here's a simple example. First, consider this very innocent piece of code:

import java.io.File;
import java.net.URL;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Node;

public class Foo
{
    public static void main(final String[] args)
    {
        try
        {
            DocumentBuilderFactory builderFactory = DocumentBuilderFactory.newInstance();
            DocumentBuilder builder = builderFactory.newDocumentBuilder();
            // getClass() isn't available in a static context, so use the class literal
            URL cfgUrl = Foo.class.getClassLoader().getResource("config.xml");
            File cfgFile = new File(cfgUrl.toURI());

            Document cfg = builder.parse(cfgFile);
            XPath xpath = XPathFactory.newInstance().newXPath();
            Node cfgNode = (Node)xpath.evaluate("//*[local-name()='config']", cfg, XPathConstants.NODE);

            (...)

        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }
}

Considering that builderFactory, builder, cfgUrl and cfgFile are not really needed once cfg has been parsed, rewriting that part like this would be preferable:

// (same imports as in the previous listing)
public class Foo
{
    public static void main(final String[] args)
    {
        try
        {
            Document cfg;

            {
                DocumentBuilderFactory builderFactory = DocumentBuilderFactory.newInstance();
                DocumentBuilder builder = builderFactory.newDocumentBuilder();
                URL cfgUrl = Foo.class.getClassLoader().getResource("config.xml");
                File cfgFile = new File(cfgUrl.toURI());
                cfg = builder.parse(cfgFile);
            }

            XPath xpath = XPathFactory.newInstance().newXPath();
            Node cfgNode = (Node)xpath.evaluate("//*[local-name()='config']", cfg, XPathConstants.NODE);

            (...)

        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }
}

With that, once execution moves past the inner code block, all local variables declared in it cease to exist for all practical purposes. This simple example is a mere illustration and certainly doesn't represent any major benefit by itself but, believe me, in real-life code this approach of well-defined scopes for local variables can have a significant, positive impact on your application's memory-consumption profile.

As you can see, this is indeed a very simple Java best-practice tip. It's easy to adopt, has no side effects whatsoever and can prove to be very powerful. So, why not use it?

Enjoy!

Friday, May 7, 2010

The X (Path) File

by Eduardo Rodrigues

This week I came across one of those mysterious problems: I had some test cases that needed to verify the content of a few DOM trees to guarantee the tests went fine. The best way to achieve this, of course, is with XPath queries. And because the DOM trees involved were all quite simple, I figured writing the XPath queries to verify them would be a walk in the park. It wasn't.

I spent hours trying to figure out what I was doing wrong, googling around, but nothing seemed to make any sense at all. Then, just as I was about to give up and throw myself out the window, I finally noticed the tiny detail that explained everything and pointed me to the right solution. The culprit was the default namespace specified in my root element!

Turns out, whenever a namespace URI is specified without a prefix (like xmlns="http://foo.com/mynamespace"), it is considered the document's default namespace, and it usually doesn't affect parsers. But, as I found out, it does affect XPath big time. XPath, by definition, always considers namespaces, even the default one. The problem is that, because the default namespace has no prefix, we completely lose the ability to use the simple, path-like syntax when writing queries to locate nodes in the DOM tree.

Here’s a very simple example that illustrates the issue very well. Consider the following well-formed XML:

<?xml version="1.0" encoding="iso-8859-1"?>
<HR Company="Foo Inc.">
    <Dept id="1" name=”Board”>
        <Emp id="1">
            <Name>James King</Name>
            <Salary>150000</Salary>
        </Emp>
        <Emp id="10">
            <Name>Jon Doe</Name>
            <Salary>100000</Salary>
            <ManagerId>1</ManagerId>
        </Emp>
        <Emp id="20">
            <Name>Jane Smith</Name>
            <Salary>100000</Salary>
            <ManagerId>1</ManagerId>
        </Emp>
    </Dept>
</HR> 

If I want to check if there’s really an employee named “Jane Smith” earning a 100K salary in the “Board” department, a very simple XPath query such as “//Dept[@name='Board']/Emp[string(Name)='Jane Smith' and number(Salary)=100000]” would easily do the job.

Now just add an innocent default namespace to the root element:

<HR xmlns="http://foo.com/HR" Company="Foo Inc.">

and try that very same XPath query that worked so well before. In fact, even the simplest of queries, such as "/HR", won't match anything anymore. That's because XPath takes the default namespace into account and therefore requires it to be referenced in the query, but we have no way of referring to that namespace, since there's no prefix associated with it. My personal opinion is that this represents a huge design flaw in the XPath spec, but that's a completely different (and now pointless) discussion.

Unfortunately, there's no magic in this case. To keep using XPath in this kind of situation, we need a more generic (and less compact) syntax that lets us state explicitly whether we want fully qualified (expanded) names, taking namespaces into consideration, or just local names, ignoring namespaces. Below is the very same query in this more generic syntax, in these two flavors, both producing the exact same outcome:
  1. If you need (or want) to consider the namespace:
    //*[namespace-uri()='http://foo.com/HR' and local-name()='Dept' and @name='Board']/*[namespace-uri()='http://foo.com/HR'
    and local-name()='Emp' and string(Name)='Jane Smith' and number(Salary)=100000]
  2. If you only care about the elements' names, just drop the namespace-uri() conditions:
    //*[local-name()='Dept' and @name='Board']/*[local-name()='Emp' and string(Name)='Jane Smith' and number(Salary)=100000]
The reason I prefer the local-name() function over name() is simply that, together with namespace-uri(), it is the most generic way of selecting nodes: local-name() doesn't include the prefix, even if there is one. In other words, even for a node such as <hr:Dept>, local-name() returns simply "Dept", while name() returns "hr:Dept". The prefix bound to a particular namespace is much more likely to vary among different XML files than its actual URI. Therefore, predicates combining namespace-uri() and local-name() should work in any case, regardless of which prefixes happen to be in use.
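
To make this concrete, here's a minimal sketch of evaluating the prefix-free query (flavor 2) with the standard javax.xml.xpath API; the hr.xml file name is just an assumption for this example:

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class HRQuery
{
    public static void main(final String[] args) throws Exception
    {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        // namespace awareness is required if you also want namespace-uri() (flavor 1) to work
        dbf.setNamespaceAware(true);
        Document doc = dbf.newDocumentBuilder().parse(new File("hr.xml"));

        String query = "//*[local-name()='Dept' and @name='Board']"
                     + "/*[local-name()='Emp' and string(Name)='Jane Smith'"
                     + " and number(Salary)=100000]";
        Element emp = (Element)XPathFactory.newInstance().newXPath()
                          .evaluate(query, doc, XPathConstants.NODE);
        System.out.println(emp == null ? "not found" : "found Emp id=" + emp.getAttribute("id"));
    }
}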

Enjoy!

Saturday, May 1, 2010

The easy-small-simple-quick-step-by-step-how-to article on AspectJ you’ve been looking for is right here

by Eduardo Rodrigues

That's right. Have you ever spent hours of your precious time googling the Web, trying to find an easy, small, simple, quick, step-by-step tutorial, article or sample on how to use AspectJ to solve that very simple use case where you only need to add some trace messages when certain methods of a certain library are called (and you don't have access to its source code)? If your answer is "yes", then this post is exactly what you've been looking (maybe even praying) for.

For convenience, readers may download all files mentioned below at http://sites.google.com/site/errodrigues/Home/aspectj_sample.zip?attredirects=0&d=1

Step 0: get the latest stable version of AspectJ

AspectJ can be downloaded from http://www.eclipse.org/aspectj/downloads.php. I recommend using the latest stable release, of course. As of this writing, that's release 1.6.8.

To install the package, just run "java -jar aspectj-1.6.8.jar" and follow the installer's prompts.
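
After the installer finishes, it helps to point your environment at the install directory. For example (the /scratch/aspectj path below is simply where I chose to install it; adjust it to yours):

export ASPECTJ_HOME=/scratch/aspectj
export PATH=$PATH:$ASPECTJ_HOME/bin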

Step 1: write the aspect code

Well, AspectJ doesn't really have what I would call a very intuitive syntax, so I won't try to explain it beyond what's strictly necessary for this post. A good way to start learning is the official documentation at http://www.eclipse.org/aspectj/docs.php.
In my case, I only needed to write a single, very simple aspect capable of capturing all calls to one particular method of one particular class and then logging those calls whenever a private member of the target object held a particular value. In my opinion, this "trace calls" use case is the simplest, most obvious and probably one of the most common for any AOP (Aspect-Oriented Programming) solution, so this simple example should cover the needs of most readers. That's what I hope, at least. So here is my aspect, FULLProjection.aj:

import java.util.logging.Logger;
import java.util.logging.Level;
import com.foo.Projection;

/**
 * Simple trace AspectJ class. In summary, what it does is:
 * Before executing any call to method setProjection() on any instance
 * of class com.foo.Projection, execute method logFullProjection() defined
 * in this aspect.
 *
 * Modifier "privileged" gives this aspect access to all private members
 * of the captured object.
 */
privileged aspect FULLProjection {

    private static final Logger logger = Logger.getLogger(Projection.class.getName());

    /**
     * AspectJ syntax: defining the execution points to be captured at runtime.
     * target(p) sets the target object to be the instance of
     * class Projection which method setProjection() is called on.
     */
     pointcut tracedCall(Projection p):
          call(void Projection.setProjection()) && target(p);

    /**
     * AspectJ syntax: defining what should run right before the
     * pointcut specified above is executed.
     * Argument p will contain the exact instance of class Projection
     * which is being executed at runtime.
     */
     before(Projection p): tracedCall(p) {
          // m_projectionName is a private member of object p.
          // that's why this aspect must be declared as privileged.
          if("FULL".equals(p.m_projectionName))
               logFullProjection(); // call our plain Java method, which does the actual trace
     }

    /**
     * Standard Java method to be executed.
     * Just logs the call if it came from any class under mypackage.*
     */
     private void logFullProjection()
     {
          try {
               StringBuilder c = new StringBuilder("FullProjectionCall: ");
               StackTraceElement st[] = Thread.currentThread().getStackTrace();

               for (StackTraceElement e : st)
               {
                    if(e.getClassName().startsWith("mypackage."))
                    {
                         c.append(e.getClassName())
                          .append(":")
                          .append(e.getMethodName())
                          .append(":")
                          .append(e.getLineNumber());
                         logger.log(Level.WARNING, c.toString());
                         break;
                    }
               }
          } catch(Throwable t) {
               return;
          }
     }
}

Recent versions of AspectJ offer 3 distinct ways of performing the necessary instrumentation (weaving) on the target classes:
  1. instrument the original source code directly (when available) at compile time;
  2. weave the compiled class files before using or deploying them, as a post-compilation step;
  3. weave classes on demand at runtime (a specific agent must be configured when starting the JVM; see the one-liner right after this list).
I won't discuss the pros and cons here. In my case, option 1 was discarded because I didn't have access to the source code, and option 3 was no good because I didn't want to mess with the container's setup. So I chose option 2.
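
Just for reference, the load-time weaving of option 3 is typically enabled with AspectJ's weaving agent, along these lines (the application jar and main class here are hypothetical):

java -javaagent:$ASPECTJ_HOME/lib/aspectjweaver.jar -classpath myapp.jar mypackage.Main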

Step 2: instrument classes with AspectJ

Since I decided to use option 2 above, my compilation process didn't need to change a bit. All I needed was a post-compile step in which the necessary AspectJ instrumentation would be performed on the already compiled classes. The right tool for that job is AspectJ's command-line compiler: ajc.

This was the hardest part for me, because I couldn't find any tutorial or example on the Web that showed, in a direct and simple way, how to use ajc. So, instead of making the same mistake and trying to describe the compiler and all of its options, I'll simply paste the shell script I used in my case:

#!/bin/bash

# AspectJ's install dir
ASPECTJ_HOME=/scratch/aspectj
# Defining the classpath to be used by ajc
MYCLASSPATH=<include here all elements needed by the classes to be instrumented>

$ASPECTJ_HOME/bin/ajc -classpath $MYCLASSPATH -argfile my_aspects.cfg

The trick here was to use file my_aspects.cfg to define all other parameters to be passed to ajc. Here is its content:

-1.5          # Java 1.5 compatibility
-g            # add debug info
-sourceroots  # path containing the aspects to be compiled (FULLProjection.aj in my case)
/scratch/WORK/aspects
-inpath  # path containing the classes to be instrumented (com.foo.Projection inside platform.jar in my case)
/scratch/WORK/jlib/platform.jar
-outjar  # output JAR file (I preferred to have a separate JAR and keep the original)
/scratch/WORK/platform-instrumented.jar

This file must contain only one command-line option (or option value) per line; the # annotations above are just explanations and shouldn't appear in the actual file. When an option requires a value (like -sourceroots), the value must be declared on its own line, right after the option. A complete reference for ajc can be found at http://www.eclipse.org/aspectj/doc/released/devguide/ajc-ref.html.

Notice that, as long as you don't choose the runtime weaving mode, there's no need to deploy your aspects with the application. They can be kept completely separate from your source code as well.
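
One caveat worth remembering: classes woven by ajc still reference AspectJ's small runtime library, so aspectjrt.jar must be on the classpath when the instrumented code runs. Something along these lines (the main class is hypothetical):

java -classpath /scratch/WORK/platform-instrumented.jar:$ASPECTJ_HOME/lib/aspectjrt.jar:$MYCLASSPATH mypackage.Main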

Conclusion

AspectJ, like any other AOP solution, can be a very powerful and useful ally to any software developer, even as an architectural element in itself. Its possible applications go far beyond the simple use case shown in this post. So, if you're still undecided, give it a try. Don't be afraid of using it when necessary and, if you have the opportunity, consider including it as an active component from the very beginning of your application's development cycle.

And remember: if possible, share your experiences with the community (in an objective and clear way, please).

Enjoy!