Logging needs some lazy evaluation

If there's one situation where lazy evaluation is needed in Java, it's logging. Until something better comes along and logging gets injected via AOP or something similar, a log message will be just the result of an extra line in our Java files, and this is a problem.

A normal log message is something like this:

log.fine("Some parameter is: " + someVariable);

This looks quite harmless, especially since we know that, depending on the log level, our message might or might not be saved.

But say we have an expensive function:

log.fine("The extra information starts at " + reallyLongLastingFunction());

The problem above is obvious: the log string will be built no matter what, and our reallyLongLastingFunction() will be called every time, even when the log won't actually be saved.

The solution to this is to pollute your code with something like:

if (log.isLoggable(Level.FINE)) {
    log.fine("The extra information starts at " + reallyLongLastingFunction());
}

This way, the string creation and the expensive function call happen only if the log message really is needed. But this adds boilerplate to the code and makes you maintain the log level in two places (for example, if I change the line to log.finer, I have to update the if as well).

If all the log methods had lazy evaluation, this problem would go away: the code wouldn't be executed until actually needed, and there would still be only one line in our files.
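Since Java 8, java.util.logging offers exactly this: every log method has an overload taking a Supplier&lt;String&gt;, which is only evaluated when the level is actually loggable. A minimal sketch (reallyLongLastingFunction() is a stand-in for the expensive call from the example above):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class LazyLogDemo {
    private static final Logger log = Logger.getLogger(LazyLogDemo.class.getName());

    // Stand-in for the expensive computation from the example above.
    static String reallyLongLastingFunction() {
        return "0x1000";
    }

    public static void main(String[] args) {
        log.setLevel(Level.INFO); // FINE is below the threshold here

        // The lambda is only invoked if FINE is actually loggable, so
        // reallyLongLastingFunction() is never called in this configuration.
        log.fine(() -> "The extra information starts at " + reallyLongLastingFunction());
    }
}
```

One line in the code, no if check to maintain, and changing fine to finer requires no further edits.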

The AOP style is to inject the logging using bytecode engineering. Perhaps it would be nice to post-process the resulting JAR artifacts and replace all log calls with something that injects that if check, etc.

And speaking of memory and CPU wasted on logging, there's nothing like discovering that your biggest CPU consumer is the log level you increased while debugging. Profilers should know how to filter out log calls, including the time spent building the log strings; otherwise they don't really help.

In Romania, voting is still not an important right

I couldn't believe it tonight when they closed the voting booths with people waiting at the door and afterwards shouting that they wanted to vote.

It's just surreal! On one hand, every politician tells people to please go vote; on the other, here are people willing to vote, and the booths are being closed on them because it's 21:00 sharp.

Mind you, it didn't matter that I had been standing in line since at least 20:30; I had no right to vote because I hadn't physically advanced far enough in the queue to reach the booth by 21:00.

I was totally certain they would allow all of us to vote. How wrong I was!

I have never been particularly proud of being Romanian, but tonight I felt like a non-citizen. It's sad how low we are as a country on Maslow's hierarchy of needs: still just a hungry and tired bunch of people who just wanted to be done with this "voting" thing by closing the booths as soon as possible.

Just Sad.

Official NetBeans build with Romanian localization

Head over to the NetBeans download website and notice the language popup also has Romanian now.

More info about the localization on the Joseki Bold dedicated page.

Aversion towards localization a sign of technological barbarism

English is obviously the lingua franca of everything computer and computer-science related. Having a single language does help everybody, since it easily allows people to communicate and exchange ideas.

The side effect of using English for everything computer-related is that it reduces the use of the local language for computer-related discussions. Or, if the local language is used, it is filled with English words! The more complex the discussion becomes, the more English is used, until it's almost easier to switch to English full-time and only revert to the local language when some explaining by example is required. I think this is why some multinationals adopt English as their official language: for computer-related workers it doesn't affect productivity, especially since people of different nationalities might end up working together.

Well, this means that while English has evolved into a technology-centric language, most other languages either try to play catch-up or, more likely, don't enter the race at all and just import all the English words.

In my country, for example, developers dislike applications localized into the native language. The more technical the application (like a developer tool), the more foreign it seems to them to see the text in the local language instead of English. Native words disturb them, and metaphors seem weird once they make the cognitive connection: a mouse is actually a rodent! I'm pretty sure English speakers also thought of rodents the first time they heard "mouse" in a phrase -- but that has changed by now. Computers have become so ubiquitous that "mouse" usually means the computer peripheral; "firewall" is not a fireman's expression or a burning wall, but something computer-related, etc.

By focusing so much on English and not allowing themselves to jump-start native-language metaphors for computer science, developers are the main culprits in keeping the native language in a state of technological barbarism.

And then they act surprised that the local market barely exists, and that their parents can't understand a thing about computers and need help with the simplest tasks (usually because they can't understand the language on the screen, which is English).

Green software

Long ago I wrote a blog post mentioning that, for an always-on (wall-plugged) workstation, the then-current fad of lowering power consumption is not that essential: as a developer, all you care about is overall machine speed to get the job done, and the cost of power is negligible compared to the cost of one developer hour (and then rent, administrative overhead, etc.).

Well, that is one aspect. The other aspect is when power consumption does matter. It is clearly a major factor for large datacenters, where a big chunk of the cost is power (for the machines and for cooling), so they keep a keen eye on performance per watt. The specifics of the business are different there: you don't have a developer on top of each machine; you have hundreds or thousands of machines providing some service to remote users. The cost of maintaining the datacenter determines the price you sell your services at and your overall competitiveness.

Another scenario I've personally noticed of late (see my other, somewhat related article) is the importance of performance per watt when working on your laptop's battery!

Now, the overall system performance per watt is a given of the machine you happen to own. You can't really tweak it much, beyond a hardware upgrade here and there and some operating system optimization.

So what you are left with is the software you use every day and its performance per watt. Let's call that productivity per watt. Lower-performing software might exhibit several kinds of issues:


  • Consuming too much CPU
  • Hitting the disk too often. IDEs are notorious culprits here: a normal clean build deletes and recreates hundreds of files on disk. Even a feature like compile-on-save doesn't help much, since developers save often, and all this disk writing might stop the disk from ever going to sleep (and thus consuming less power).
  • Hitting the network too often or too much. There's no point in checking for software updates while the user is on battery, and no point downloading a 200MB "update" file -- updates should consist of binary diffs and be as small as possible (Google is looking into this, as it becomes very important the more users you have). Also, laptops generally use WiFi when they are not plugged in, and (I think) that consumes even more power than Ethernet.


These are all optimization issues, but the main culprit is not scaling down when on battery. This includes being smart about redundant tasks, like re-indexing the Maven repository or checking for non-essential upgrades, which could be deferred to a time when you're not on laptop battery.
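One way to scale down is simply to ask the OS whether we are on battery before kicking off such redundant work. A rough, Linux-only sketch follows; the sysfs path and the shouldDefer helper are my own illustration, not an API any IDE actually exposes:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class PowerAwareTasks {
    // On Linux, battery state is exposed under sysfs; a status of
    // "Discharging" means the machine is currently running on battery.
    static final Path BATTERY_STATUS = Path.of("/sys/class/power_supply/BAT0/status");

    // Decide, from the raw status string, whether to defer background work.
    static boolean shouldDefer(String batteryStatus) {
        return "Discharging".equalsIgnoreCase(batteryStatus.trim());
    }

    static boolean onBattery() {
        try {
            return shouldDefer(Files.readString(BATTERY_STATUS));
        } catch (IOException e) {
            return false; // no battery info available: assume wall power
        }
    }

    public static void main(String[] args) {
        if (onBattery()) {
            System.out.println("On battery: deferring repository re-indexing");
        } else {
            System.out.println("On AC power: running background maintenance now");
        }
    }
}
```

The same check could gate update downloads, index rebuilds, or any other deferrable task.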

What we seem to be missing is a new metric for evaluating applications -- productivity per watt -- and teaching users to pick applications the same way they pick an A+ energy-rated fridge.

Looking forward to seeing which IDE uses less power to refactor a class, or just to stay idle.

The most complex simple GUI: VirtualBox snapshot handling

It's amazing how the people behind VirtualBox (purchased by Sun, and now by Oracle) have managed to screw up their snapshot mechanism so badly, and specifically the GUI that lets the user handle it:

1. Let's start with the easy pickings: they support linear snapshots only. What did they pick to display this? Obviously not a list, but a tree!

2. Their snapshot documentation is less than one page of the PDF help file. Half of that is spent explaining how to "take" a snapshot, which is probably the easiest thing they support. A quarter of the page is a scary note about possible data loss, referencing the VBoxManage interface -- a command-line tool that has nothing to do with the GUI.

Also, their documentation doesn't have a single screenshot, or even pictures of the buttons users are supposed to press.

The remaining quarter of the documentation briefly mentions reverting and discarding snapshots.

At no point do they bother explaining how the mechanism works; you kind of have to read the philosophy between the lines.

3. When you press "Discard snapshot", it actually merges changes. So you're not actually losing data; you're just losing the little point-in-time marker the snapshot gave you. They couldn't have picked a more confusing wording, and it's not what you would expect.

4. Next to the notion of a "snapshot" they have a separate notion of the "current state", which is somehow related but kind of different. I'm not entirely certain whether "state" is a placeholder I can fill with different snapshots, or just the tip of the snapshot list. Their wording makes it sound like either.

Also, not only did they pick such confusing wording and do such a poor job of explaining what the application is supposed to do, they don't even seem to find this important: a bug report on their website complaining about this very thing is marked as minor!

I guess this is a sign of a certain geek mentality: the assumption that people must train on the uber-software and actually read and memorize the 250-page PDF (at some point it probably all makes sense). The sad thing is that the GUI is really simple, and this minimalism should have let them focus on the details -- yet a third of it is broken.

Obviously I'm not talking about the "Settings" GUI where you configure the machine; that one is very complex.

I'm talking about the normal GUI the user sees every time after creating the virtual machine, with its three tabs: "Details", "Snapshots" and "Description". The only one the user will interact with for the rest of the virtual machine's lifetime is "Snapshots" -- and that's the one they consider minor. Go figure.

Matrix thoughts

AI lemma (anti-Matrix):
We do not live in a simulation run by a universe similar to ours, since a simulation would consume more energy than the real existence it simulates.

A corollary would be:
The universe simulating our existence might be so different that the above lemma doesn't apply.

iPhone Location Manager taking forever

On the iPhone, the Location manager that provides the GPS location is a nice API to use.

It does have some issues, though: CLLocationManager doesn't work if it's called from another thread!

I first noticed something was really wrong when my delegate wasn't being called at all.

Neither -locationManager:didUpdateToLocation:fromLocation: nor -locationManager:didFailWithError: was called, and my application just sat there forever waiting for some GPS information.

My first thought was that it was some issue with my memory management as I wasn't holding a reference to the location manager in any class, just in the method where it was created. But still, it didn't work.

Then, I thought it was a problem with the threading model being used (I waited for the GPS location on another thread in order not to block the GUI). Sure enough, that seemed to be the problem, and at least one other person has complained about it. I'm not sure whether it's a matter of threading or of the memory pool being used.

But more to the point: always create your CLLocationManager instance on the main thread, not on a background thread. Having a singleton accessor which is called from the main thread ensures the location manager is created with the proper thread/pool.

Developer surprise on OSX

I had a strange bug in the OSX Address Book application: I had created a rule (a smart group) that included all the address cards not present in any other rule.

This worked initially, but after an update Address Book got confused and entered an infinite cycle (probably trying to ignore the cards in the rule itself and then resolving that recursively).

Anyhow, the good thing was that the application crashed only if I scrolled on top of that particular rule. And since I had quite a lot of them, I could at least still open the application.

But still, a semi-buggy application isn't fun to use. So I went and looked at the Address Book file format, which seemed to be a sqlite3 database, but I couldn't fix the problem from there.

To my surprise, Apple has a public API for the Address Book!

So I wrote these short lines of code:
    ABAddressBook *AB = [ABAddressBook sharedAddressBook];
    NSArray *groups = [AB groups];

    // Find the group holding the offending rule and delete it.
    for (int i = 0; i < [groups count]; i++) {
        ABGroup *group = [groups objectAtIndex:i];
        NSString *name = [group valueForProperty:kABGroupNameProperty];

        if ([@"BadBadRule" compare:name] == NSOrderedSame) {
            [AB removeRecord:group];
            [AB save];
        }
    }

and that was it! No more Address Book crashes! It turns out OSX is really nice to tweak if you are willing to code a bit.

My Slicehost / VPS analysis

First-time VPS user



Starting a few months back, I have had a VPS from Slicehost. It's the cheapest one they've got, with only 256MB of RAM.

I had never worked on a VPS before; I had only had either dedicated physical servers in the company datacenter (at my previous job) or CPanel-based hosted accounts (for some other clients).

All in all, a VPS is just what one might expect: almost like a normal server, only slower.

And the slowness is starting to bug me a bit, specifically because I don't know how slow it is supposed to be.

The fixed technical details from Slicehost are that you get 256MB of RAM, 10GB of disk storage and 100GB of bandwidth.

Now, there are two issues here: one that seems quite obvious, and another I'll introduce later.

CPU



OK, the first problem is that you don't know how many CPU cycles you are going to get. Being a VPS, it runs on some beefy server (Slicehost says it's a quad-core machine with 16GB of RAM).

According to Slicehost's FAQ:

Each Slice is assigned a fixed weight based on the memory size (256, 512 and 1024 megabytes). So a 1024 has 4x the cycles as a 256 under load. However, if there are free cycles on a machine, all Slices can consume CPU time.


This basically means that, under load, each slice gets CPU cycles in proportion to the RAM it has (i.e. the price you pay). A 256MB slice gets one share, a 512MB slice gets two, a 1GB slice gets four, and so on.

The problem here, of course, is that you can't be certain there is only a fixed maximum number of slices on the server; Slicehost is clearly overselling, as top usually displays a "steal time" of around 20%.

So, assuming a machine is filled 100% with 256MB slices and there is no overselling, each slice gets 6.25% of a single CPU under load (64 slices sharing four cores).

6.25% isn't much at all, but considering that the machine isn't always under load, the slice seems to get a decent amount of CPU nonetheless.

If we factor in the overselling and the 20% of time Xen steals to give to other VPSes, we get down to an even 5%.
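The arithmetic above, as a small sketch; the slice count, core count and steal fraction are this post's assumptions, not Slicehost's published numbers:

```java
public class SliceShare {
    // Under load, the host's cores are divided equally among equal-weight
    // slices; Xen's steal time then removes a further fraction of that.
    static double shareOfOneCpu(int slices, int cores, double steal) {
        return ((double) cores / slices) * (1.0 - steal);
    }

    public static void main(String[] args) {
        int slices = 16 * 1024 / 256; // 16GB host filled with 256MB slices = 64

        System.out.printf("ideal:     %.2f%%%n", 100 * shareOfOneCpu(slices, 4, 0.0)); // 6.25%
        System.out.printf("20%% steal: %.2f%%%n", 100 * shareOfOneCpu(slices, 4, 0.2)); // 5.00%
    }
}
```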

Now, this might not be as bad as it sounds CPU-wise: I've noticed Xen stealing time when my CPU share is basically idle anyhow, so maybe it doesn't affect my overall performance.

For example, ./pi_css5 1048576 takes about 10 seconds, which is more than decent.

IO



The bigger problem with a VPS seems to be that hard drives aren't nearly as fast as RAM. And when you have a lot of processes competing for the same disk, it's bound to be slow.

What Slicehost doesn't mention is whether the "fixed weight" sharing rule they use for CPU cycles applies to disk access too. My impression is that it does.

After trying to use my VPS as a build server, I noticed it grind to a halt.

top shows something like this:


Cpu(s): 0.0%us, 0.0%sy, 0.0%ni, 62.2%id, 20.9%wa, 0.0%hi, 0.0%si, 16.9%st


but the load average for a small build is something like


load average: 1.73, 2.06, 1.93
and it easily goes to 3, 4 and even 9(!) when I also try to do something else there.

So, looking at the information above: 62.2% of the time the CPU is just idle, while the actually "working" tasks (the 20.9%) are waiting for IO. The remaining 16.9% of CPU time is stolen by Xen and given to other virtual machines, which I don't think really matters given that the load is clearly IO-bound.

And here lies the problem: just how fast might Slicehost's hard drives be? And how many drives per slice? Or rather: how many slices per drive?

From a simple test I made, a build that takes 30 seconds on my MacBook Pro (2.4GHz, 2GB RAM, 5400rpm laptop hard drive) takes about 20 minutes on the slice. This means the VPS is 40 times slower at IO-bound tasks.

Another large build that takes around 40 minutes on my laptop took 28 hours on the server, which fits the roughly-40-times-slower rule.

Now, considering the numbers above and a 20% steal time, I'd expect about 20% overselling of slices on a physical machine; at 16GB per machine, that's roughly 76 slices of 256MB each. Taking into account the 1:40 rule above for IO speed, this suggests they have about two hard drives per server.
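The back-of-the-envelope estimate, under the (strong) assumption that disk time is shared among slices as fairly as CPU time:

```java
public class DriveEstimate {
    // If N slices share D identical drives fairly, an IO-bound task runs
    // roughly N / D times slower than it would with a whole drive to itself.
    // Solving for D from an observed slowdown: D ~= N / slowdown.
    static long estimateDrives(int slices, int observedSlowdown) {
        return Math.round((double) slices / observedSlowdown);
    }

    public static void main(String[] args) {
        // ~76 slices per host (20% oversold) and the observed 1:40 IO slowdown
        System.out.println(estimateDrives(76, 40) + " drive(s) per server");
    }
}
```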

Conclusions



It's certainly liberating to have complete control over a server. CPanel solutions just don't cut it when you need to run various applications on strange ports. Of course, the downside is that you also have to do all the administration yourself: securing the machine, etc.

The Slicehost services are very decent price-wise, and the "administrator" panel provides everything you need, even a virtual terminal connected to tty1 of the machine (very handy if SSH doesn't work for some reason).

Even the smallest slice, which I'm using right now, has enough disk space, RAM and bandwidth for small tasks. If you use it sparingly during business hours, the "fixed weight" sharing rule gives you enough CPU and IO for most tasks.

But for heavy usage, I think the solution is either to get a more expensive slice or start building your own machine.

IO-bound tasks are almost impossible to run due to the 1:40 slowdown noted above. You would need at least the 4GB slice to run them decently; of course, that's $250 compared to the $20 slice I have right now.

The CPU doesn't seem to be a problem, at least for my kind of usage: responsive enough during normal load and mostly idle under heavy load (so idle that Xen gives my CPU cycles to other virtual machines). Initially I expected the CPU to be the major problem in moving my build server there, but boy, was I wrong: the CPU limitations don't even compare to the IO limitations.

Getting 5% or more of a fast CPU is nothing compared to getting 2.5% of a much slower resource, the hard drive, when you are compiling.

Further experiments



Back when I was still expecting the CPU to be my future bottleneck, I was wondering which option would be better: 2 x 256MB slices or one bigger 512MB slice.

According to their rules and offering, the two configurations are directly comparable. Moreover, by their sharing rule, 2 x 256MB slices should get at least the same CPU cycles under load as the single 512MB one. (Further emails with Slicehost's support led me to believe the rule might be oversimplified, but they didn't tell me in what way; I suspect the weight of the smallest slices is even smaller, with the bigger slices getting more than their proportional share.)

So, if they get the same CPU cycles under load, then whenever the machine has cycles to spare, I have two candidate slices to receive them.

So the question was: for the 5% price premium of 2 x 256MB slices over a single 512MB slice, would I get at least 5% more CPU cycles?

With the data I've gathered, I'm still not certain that would happen. And the new question now is: would I get at least 5% more IO operations?


Non-agression



The above post isn't a rant against Slicehost; I think they provide a decent service for the price. It is interesting, though, to see what kind of usage a VPS can take and what is better run on the server in the basement.


512MB update



Well, isn't this interesting: a vertical upgrade to 512MB of RAM is another world entirely. Maybe the new VPS is on a less-loaded machine, but at first sight it's looking way better: the build that previously took 28 hours (fresh) now takes only 40 minutes for a small update. I'll try a clean build later this week and see how fast that is.

So it seems it wasn't only a problem of slow IO; it was also a big problem of not enough RAM, leading to swap thrashing.