Wednesday, November 23, 2011

Java knowhow-2

By now it's quite obvious that the double-checked locking idiom for lazily initializing a singleton is broken. This has been comprehensively demonstrated based on the Java memory model and out-of-order writes. So, what's the alternative (while still initializing lazily)?
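For reference, here is a minimal sketch of the broken idiom (the class name is mine, just for illustration):

```java
// The classic (broken, pre-Java-5) double-checked locking idiom.
// Without 'volatile' on 'instance', a second thread may observe a
// non-null reference to a not-yet-fully-constructed object, because
// the write of the reference can be reordered before the writes
// performed inside the constructor.
public class BrokenSingleton {
    private static BrokenSingleton instance; // the bug: not volatile

    private BrokenSingleton() { }

    public static BrokenSingleton getInstance() {
        if (instance == null) {                      // 1st check, unsynchronized
            synchronized (BrokenSingleton.class) {
                if (instance == null) {              // 2nd check, under the lock
                    instance = new BrokenSingleton();
                }
            }
        }
        return instance;
    }
}
```

Since the Java 5 memory model revision, declaring the field volatile makes this pattern safe, but that still pays a small cost on every read.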

The most famous solution seems to be from Bill Pugh, who employs an ingenious way of doing the same without the need for any synchronization. He does this by exploiting the Java language's guarantee that a class is initialized only when it is referenced for the first time.

Here's the implementation based on his solution (courtesy: Wikipedia).

public class Singleton {
        // Private constructor prevents instantiation from other classes
        private Singleton() { }
 
        /**
        * SingletonHolder is loaded on the first execution of Singleton.getInstance() 
        * or the first access to SingletonHolder.INSTANCE, not before.
        */
        private static class SingletonHolder { 
                public static final Singleton INSTANCE = new Singleton();
        }
 
        public static Singleton getInstance() {
                return SingletonHolder.INSTANCE;
        }
}

More on the JMM here (http://www.ibm.com/developerworks/java/library/j-jtp08223/). The same article also gives an awesome explanation of how ConcurrentHashMap is implemented.

A JMM overview

Before we jump into the implementation of put(), get(), and remove(), let's briefly review the JMM, which governs how actions on memory (reads and writes) by one thread affect actions on memory by other threads. Because of the performance benefits of using processor registers and per-processor caches to speed up memory access, the Java Language Specification (JLS) permits some memory operations not to be immediately visible to all other threads. There are two language mechanisms for guaranteeing consistency of memory operations across threads -- synchronized and volatile.


According to the JLS, "In the absence of explicit synchronization, an implementation is free to update the main memory in an order that may be surprising." This means that without synchronization, writes that occur in one order in a given thread may appear to occur in a different order to a different thread, and that updates to memory variables may take an unspecified time to propagate from one thread to another.

While the most common reason for using synchronization is to guarantee atomic access to critical sections of code, synchronization actually provides three separate functions -- atomicity, visibility, and ordering. Atomicity is straightforward enough -- synchronization enforces a reentrant mutex, preventing more than one thread from executing a block of code protected by a given monitor at the same time. Unfortunately, most texts focus on the atomicity aspects of synchronization to the exclusion of the other aspects. But synchronization also plays a significant role in the JMM, causing the JVM to execute memory barriers when acquiring and releasing monitors.

When a thread acquires a monitor, it executes a read barrier -- invalidating any variables cached in thread-local memory (such as an on-processor cache or processor registers), which will cause the processor to re-read any variables used in the synchronized block from main memory. Similarly, upon monitor release, the thread executes a write barrier -- flushing any variables that have been modified back to main memory. The combination of mutual exclusion and memory barriers means that as long as programs follow the correct synchronization rules (that is, synchronize whenever writing a variable that may next be read by another thread or when reading a variable that may have been last written by another thread), each thread will see the correct value of any shared variables it uses.

Some very strange things can happen if you fail to synchronize when accessing shared variables. Some changes may be reflected across threads instantly, while others may take some time (due to the nature of associative caches). As a result, without synchronization you cannot be sure that you have a consistent view of memory (related variables may not be consistent with each other) or a current view of memory (some values may be stale). The common -- and recommended -- way to avoid these hazards is of course to synchronize properly. However, in some cases, such as in very widely used library classes like ConcurrentHashMap, it may be worth applying some extra expertise and effort in development (which may well be many times as much effort) to achieve higher performance.
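As a small illustration of the visibility aspect (this sketch is mine, not from the article; class and field names are made up):

```java
// A worker thread spins on a flag written by another thread. With
// 'volatile', the write to 'running' is guaranteed to become visible
// to the worker; without it, the worker could legally spin forever
// on a stale, thread-locally cached value.
public class VisibilityDemo {
    private volatile boolean running = true;

    public Thread startWorker() {
        Thread t = new Thread(() -> {
            while (running) { } // busy-wait until the flag flips
        });
        t.start();
        return t;
    }

    public void stop() {
        running = false; // volatile write: guaranteed visible to the worker
    }

    public static void main(String[] args) throws InterruptedException {
        VisibilityDemo demo = new VisibilityDemo();
        Thread worker = demo.startWorker();
        demo.stop();
        worker.join(1000); // terminates promptly thanks to the volatile read
        System.out.println("worker stopped: " + !worker.isAlive());
    }
}
```

Try removing `volatile`: the program may still terminate on your machine, but the JMM no longer guarantees it.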

Thursday, November 17, 2011

Java knowhow - 1

Often I run into situations where I want multiple constructors that do pretty much the same things, except that each calls a different version of the superclass constructor. So I end up creating a private initialization method that is called from each of these constructors. Too annoying. Instance initializer blocks come in handy in these situations: the block of code is guaranteed to be copied into every constructor you write. So, no more annoying private initialization methods. :)
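A quick sketch of the idea (class and field names are mine, purely illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// Instance initializer block: the compiler copies it into every
// constructor, right after the superclass constructor call, so the
// shared setup is written exactly once.
public class Connection {
    private final List<String> log = new ArrayList<>();
    private final String host;

    // This block runs for BOTH constructors below --
    // no private init method needed.
    {
        log.add("common initialization done");
    }

    public Connection() {
        this.host = "localhost";
    }

    public Connection(String host) {
        this.host = host;
    }

    public String getHost() { return host; }
    public List<String> getLog() { return log; }
}
```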

Sunday, November 13, 2011

Rabin-Karp Algorithm

A cool algorithm that can be a very good interview question. This algorithm deals with searching a set of strings in a large text. The typical use case is to detect plagiarism.

Here is the pseudo code from Wikipedia entry.

 1 function RabinKarp(string s[1..n], string sub[1..m])
 2     hsub := hash(sub[1..m]);  hs := hash(s[1..m])
 3     for i from 1 to n-m+1
 4         if hs = hsub
 5             if s[i..i+m-1] = sub
 6                 return i
 7         hs := hash(s[i+1..i+m])
 8     return not found
 
The trick here is to not let line 7 run in O(m). This can be achieved by leveraging the fact that we already have the hash computed for s[i..i+m-1]. If we use a rolling-hash technique (for example, a polynomial hash over a base b), the update takes constant time:

hash(s[i+1..i+m]) = (hash(s[i..i+m-1]) - s[i] * b^(m-1)) * b + s[i+m]

The real benefit of this algorithm is seen when we search for multiple patterns within a text. We could use a bloom filter to store the hashes.
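For the single-pattern case, here is a Java sketch of the idea (the base and modulus are arbitrary choices of mine, not prescribed by the algorithm):

```java
// A minimal single-pattern Rabin-Karp sketch using a polynomial
// rolling hash modulo a large prime.
public class RabinKarp {
    private static final long BASE = 256;
    private static final long MOD = 1_000_000_007L;

    // Returns the first index of 'sub' in 's', or -1 if absent.
    public static int search(String s, String sub) {
        int n = s.length(), m = sub.length();
        if (m == 0) return 0;
        if (m > n) return -1;

        long pow = 1; // BASE^(m-1) mod MOD, used to drop the leading char
        for (int i = 0; i < m - 1; i++) pow = (pow * BASE) % MOD;

        long hsub = 0, hs = 0;
        for (int i = 0; i < m; i++) {
            hsub = (hsub * BASE + sub.charAt(i)) % MOD;
            hs = (hs * BASE + s.charAt(i)) % MOD;
        }

        for (int i = 0; i + m <= n; i++) {
            // Verify char-by-char on a hash match to rule out collisions.
            if (hs == hsub && s.regionMatches(i, sub, 0, m)) return i;
            if (i + m < n) {
                // Roll: drop s[i], append s[i+m], in O(1).
                hs = ((hs - s.charAt(i) * pow % MOD + MOD) * BASE
                        + s.charAt(i + m)) % MOD;
            }
        }
        return -1;
    }
}
```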

For the single pattern matches, a better algorithm is provided by Knuth-Morris-Pratt. The Wikipedia article on the same is really informative.

Saturday, November 12, 2011

Java HashMap implementation, load factor, synchronization etc.,

HashMap is one of the most commonly used data structures in Java, and I've always been curious about what exactly goes on behind its implementation. On the surface, it looks fairly straightforward: it has to be using some simple hashing technique (with collision resolution, of course). But what bothered me was where exactly the keys reside! It would have occurred to me if I had put a little more thought into how collision resolution is done. Basically, entries whose hash codes map to the same bucket are chained together in a linked list. So how do we retrieve the value for a given key among the candidates within that bucket? Simple: store the key along with the value, so you can compare and return the value for the matching key. This also cleared up my confusion about where the keys reside, which is what lets an iterator-style function pick up all the keys present in the map.
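To make that concrete, here's a minimal sketch of my own (not the real java.util.HashMap source) of chained buckets that store the key alongside the value:

```java
// A toy chained-bucket hash map: each bucket holds a linked list of
// (key, value) entries, and get() walks the chain comparing keys.
public class TinyHashMap<K, V> {
    private static class Entry<K, V> {
        final K key; V value; Entry<K, V> next;
        Entry(K key, V value, Entry<K, V> next) {
            this.key = key; this.value = value; this.next = next;
        }
    }

    private final Entry<K, V>[] buckets;

    @SuppressWarnings("unchecked")
    public TinyHashMap(int capacity) {
        buckets = (Entry<K, V>[]) new Entry[capacity];
    }

    private int indexFor(K key) {
        return (key.hashCode() & 0x7fffffff) % buckets.length;
    }

    public void put(K key, V value) {
        int i = indexFor(key);
        for (Entry<K, V> e = buckets[i]; e != null; e = e.next) {
            if (e.key.equals(key)) { e.value = value; return; } // overwrite
        }
        buckets[i] = new Entry<>(key, value, buckets[i]); // insert at head
    }

    public V get(K key) {
        int i = indexFor(key);
        for (Entry<K, V> e = buckets[i]; e != null; e = e.next) {
            if (e.key.equals(key)) return e.value; // key stored with value
        }
        return null;
    }
}
```

Note the insertion at the head of the chain, which matters for the resize discussion below. (Resizing itself is omitted here.)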

Another important thing to note about HashMap (or collections in general) is their synchronization. The Collections class offers a bunch of static methods that return synchronized versions of the collections passed to them. But to what extent does this synchronization help? The documentation says it only guarantees that individual operations are synchronized; any compound operation built from them must be synchronized manually. Note that the internal synchronization uses the wrapper object itself as the lock, so we need to lock on that same object as well.
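A small sketch of the rule (the counter example is mine):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Collections.synchronizedMap makes each individual call thread-safe,
// but a compound check-then-act must manually lock the wrapper itself,
// since that is the monitor the wrapper uses internally.
public class SyncMapDemo {
    public static int increment(Map<String, Integer> counts, String key) {
        synchronized (counts) {           // same lock the wrapper uses
            Integer cur = counts.get(key);
            int next = (cur == null) ? 1 : cur + 1;
            counts.put(key, next);
            return next;
        }
    }

    public static void main(String[] args) {
        Map<String, Integer> counts =
                Collections.synchronizedMap(new HashMap<>());
        counts.put("hits", 1);            // single op: already synchronized
        increment(counts, "hits");        // compound op: locked manually
        System.out.println(counts.get("hits"));
    }
}
```

Iterating over such a map also counts as a compound operation and must be wrapped in the same `synchronized (counts)` block.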

An interesting question here is: if we don't use the synchronized version of the collection, what are the side effects? The hint again lies in the collision resolution. Assume that when we are about to do a put, the map's load-factor limit is reached. When this happens, Java resizes the table to twice the current number of buckets and rehashes every entry. Also note that the linked list used for collision resolution inserts elements at the front of the list rather than at the end (to avoid traversing to the tail on insertion). Because of this, when rehashing into the new table, the elements of each list end up in the reverse order. Given all this, when two threads perform the resize concurrently, it can lead to an infinite loop: one thread can change the next pointer of an element to point to the previous element, while the other thread changes the next pointer of that previous element to point back to the first. To avoid this tricky problem, synchronization is definitely needed on the collection if multiple threads are manipulating it.