Thursday, October 27, 2011

Maximum in a sliding window

Given an array of n integers & a window of size w, keep sliding the window from the left-most to the right-most while emitting the maximum of the current window before every move.

The brute-force solution works in O(nw): for each of the (n-w+1) possible positions of the window, we need to look at all the w elements in that window.

A slight improvement is possible by using a heap data structure. Maintain a max-heap of (element, position) pairs. On every move, insert the incoming element, and whenever the root's position has fallen out of the current window, keep removing it until a valid root surfaces. In the worst case this needs O(n) space & runs in O(n log n) time.
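A Python sketch of this heap variant, using lazy deletion of stale entries (function name is my own):

```python
import heapq

def window_max_heap(nums, w):
    """Sliding-window maximum via a max-heap of (value, position) pairs.

    Python's heapq is a min-heap, so values are negated. Stale entries
    (positions that have slid out of the window) are removed lazily,
    only when they surface at the root.
    """
    heap, out = [], []
    for i, x in enumerate(nums):
        heapq.heappush(heap, (-x, i))      # O(log n) insert
        if i >= w - 1:
            # Discard roots whose position is outside the window [i-w+1, i].
            while heap[0][1] <= i - w:
                heapq.heappop(heap)
            out.append(-heap[0][0])
    return out
```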

The best approach for this problem is to use a double-ended queue (deque) data structure. The idea is to keep just the minimal number of elements in this queue while doing the operations. For example, when the current queue has (3 1 5) and the incoming element is 7, there's no need to keep any of the elements in the queue other than the incoming one. So, before inserting an element at the back of the queue, keep popping from the back every element smaller than it. Also, if you store the positions instead of the actual numbers in the deque, it is easy to eliminate the stale ones from the front as the window slides. This runs in O(n), because each element is inserted & removed at most once.
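A quick Python sketch of the deque approach (function name is my own):

```python
from collections import deque

def window_max(nums, w):
    """O(n) sliding-window maximum. The deque holds positions whose
    values are in decreasing order; the front is the current maximum."""
    dq, out = deque(), []
    for i, x in enumerate(nums):
        # Drop smaller elements from the back: they can never be a
        # maximum while x is still inside the window.
        while dq and nums[dq[-1]] <= x:
            dq.pop()
        dq.append(i)
        # Drop the front if it has slid out of the window.
        if dq[0] <= i - w:
            dq.popleft()
        if i >= w - 1:
            out.append(nums[dq[0]])
    return out
```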

Wednesday, October 26, 2011

Propagate zeros to rows & columns of a matrix

Given a matrix of integers, wherever there is a zero, make the whole of the corresponding row & column zeros. Be as space-conservative as possible.

The most straightforward solution to the problem is to have 2 arrays, one marking the rows that contain zeros & the other the columns. Obviously, we have to do better than this.

We can do the same with a single array too. The basic idea is to scan the matrix either row-by-row or column-by-column; let's assume a row-wise scan. While scanning, mark in the array the columns that contain zeros. If a row contains a zero, make that whole row zeros once it has been scanned. After completing the row-wise scan, look at the array for the columns having zeros & make them all zeros in a second pass.

There's a beautiful third solution to this problem. Here we use just 2 variables of extra state. Scan the first row & the first column, and record in the two variables whether each contains a zero. Now, scan the rest of the matrix, excluding the first row & column. If you encounter a zero, set the first element of its row & the first element of its column to zero. Once this pass is completed, use the information in the first row & column to propagate the zeros to the other rows & columns. Finally, based on the information in the 2 variables, zero out the first row / column as required.
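A sketch of this constant-extra-space approach in Python, modifying the matrix in place (function name is my own):

```python
def propagate_zeros(m):
    """Zero out rows & columns in place, using the first row and first
    column as the marker arrays, plus two booleans for themselves."""
    rows, cols = len(m), len(m[0])
    first_row_zero = any(m[0][j] == 0 for j in range(cols))
    first_col_zero = any(m[i][0] == 0 for i in range(rows))
    # Record zeros of the inner matrix in the first row / column.
    for i in range(1, rows):
        for j in range(1, cols):
            if m[i][j] == 0:
                m[i][0] = 0
                m[0][j] = 0
    # Propagate the recorded zeros to the inner matrix.
    for i in range(1, rows):
        for j in range(1, cols):
            if m[i][0] == 0 or m[0][j] == 0:
                m[i][j] = 0
    # Finally, handle the first row & column themselves.
    if first_row_zero:
        for j in range(cols):
            m[0][j] = 0
    if first_col_zero:
        for i in range(rows):
            m[i][0] = 0
    return m
```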


Sunday, October 23, 2011

Insert & Median

Maintain a data structure that supports two operations 'insert' & 'get median'. Preferably the median operation should be in constant time.

First solution that came to mind was maintaining the elements in sorted order. With a sorted array, the insert position can be found in O(log n) by binary search, but the insertion itself costs O(n) for shifting; with a linked list, even finding the position is O(n). At least the median lookup is O(1).

When looking for a better solution, I stumbled upon this ingenious solution of using 2 heaps to achieve the same. The basic idea is to maintain 2 heaps plus a counter of the total number of elements seen so far. The first heap holds the smaller half of the elements & is maintained as a max heap, whereas the second heap holds the larger half & is maintained as a min heap. This way, the median is always the root of the max heap (if the #elements is odd & that heap holds the extra element) or the average of the roots of both heaps (if the #elements is even). The trick is in how we maintain this property throughout the insertions. As the heaps are (almost) equally split, we know upfront which heap a given insertion should land in. Comparing the new element with the two roots tells us whether it fits into that heap directly, or whether a root must first move to the other heap to restore the balance. Using this approach we get an O(1) 'get median' operation & O(log n) insertion time.
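A minimal Python sketch of the two-heap scheme (heapq only provides a min-heap, so the max-heap side stores negated values; the class name is my own):

```python
import heapq

class MedianKeeper:
    """Running median via two heaps: 'low' is a max-heap (negated values)
    holding the smaller half, 'high' is a min-heap holding the larger half.
    Invariant: len(low) == len(high) or len(low) == len(high) + 1."""

    def __init__(self):
        self.low, self.high = [], []

    def insert(self, x):                       # O(log n)
        heapq.heappush(self.low, -x)
        # Move the largest of the low half over, then rebalance sizes.
        heapq.heappush(self.high, -heapq.heappop(self.low))
        if len(self.high) > len(self.low):
            heapq.heappush(self.low, -heapq.heappop(self.high))

    def median(self):                          # O(1)
        if len(self.low) > len(self.high):
            return -self.low[0]
        return (-self.low[0] + self.high[0]) / 2
```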

Heap is my favorite data structure. Something about it just fascinates me no end. I would love to find out if there are other such interesting problems that can be solved using heaps.

Saturday, October 8, 2011

Indexing

Found this site http://puzzles.nigelcoldwell.co.uk/index.htm recently. It has a lot of good puzzles. I'd heard about 50% of those in school & college. Nice to work out those things again.

On a similar note, the site http://www.ihas1337code.com (now moved to http://www.leetcode.com/) is a very valuable source for interview preparation for the big giants.

Sunday, October 2, 2011

Find the median of two sorted arrays

This is one of those rare problems where the solution for the generalized problem is just as costly as the solution for a specific case. In other words, we can find any kth item of this combined array with the same complexity.

The brute-force approach is to merge the two arrays & find the median. The cost of this would be O(n+m), where n is the size of the first array & m the size of the second. A second, simple improvement is to see that we don't need a complete merge, only the first k elements, bringing the cost down to O(k). Obviously, we need something better than this. Turns out that this can be done in O(lg n + lg m).

First, compare the middle elements of A and B, call them A[i] and B[j]. If A[i] lies between B[j-1] and B[j], we have just found the (i+j-1)th smallest element, since A[i] beats i-1 elements of A and j-1 elements of B. Therefore, if we choose i and j such that i + j = k + 1, we are able to find the kth smallest element. Basically, the above can be stated as,

Maintaining the invariant
i + j = k + 1,

If B[j-1] < A[i] < B[j], then A[i] must be the kth smallest,
or else if A[i-1] < B[j] < A[i], then B[j] must be the kth smallest.

When one of the above conditions is satisfied for our initial i & j, we are done. Otherwise, we need to decide which parts to keep for the further comparisons.

Say A[i] < B[j]. Since A[i] was not the answer, we must have A[i] <= B[j-1]. This means that the chosen A[i] is too low and we must include bigger elements from A (i.e., from i+1 to n). On the other hand, the chosen value from B is already too large, hence we shall not consider anything above it (i.e., we keep only 1 to j-1). So, basically we are shrinking our sample space in a logarithmic fashion & hence the final cost is O(lg n + lg m).
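The shrinking idea can be sketched in Python as follows; this is a slightly different but equivalent formulation that discards up to k/2 candidates from one array at each step (function names are my own):

```python
def kth_smallest(a, b, k):
    """k-th smallest (1-based) of two sorted arrays in O(lg n + lg m).

    Each iteration probes k//2 elements into each array; the array with
    the smaller probe cannot contain the answer in that prefix, so the
    prefix is discarded and k shrinks accordingly.
    """
    lo_a = lo_b = 0
    while True:
        if lo_a == len(a):
            return b[lo_b + k - 1]             # a exhausted
        if lo_b == len(b):
            return a[lo_a + k - 1]             # b exhausted
        if k == 1:
            return min(a[lo_a], b[lo_b])
        # Probe k//2 elements into each array (clamped to what is left).
        i = min(k // 2, len(a) - lo_a)
        j = min(k // 2, len(b) - lo_b)
        if a[lo_a + i - 1] <= b[lo_b + j - 1]:
            lo_a += i                          # first i of a are < kth
            k -= i
        else:
            lo_b += j                          # first j of b are < kth
            k -= j

def median_two_sorted(a, b):
    n = len(a) + len(b)
    if n % 2:
        return kth_smallest(a, b, n // 2 + 1)
    return (kth_smallest(a, b, n // 2) + kth_smallest(a, b, n // 2 + 1)) / 2
```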


Number of rectangles in a chessboard

I was asked by an interviewer to find out the number of squares in a chessboard to start with. I simply took the iterative approach: a 1x1 board has 1, a 2x2 board has 5, a 3x3 board has 14 & so on. It all easily fits the series 1^2, 1^2 + 2^2, 1^2 + 2^2 + 3^2, etc. Hence, for an 8x8 chessboard, it is the sum of the squares of the numbers from 1 to 8, i.e., 204.

Then he asked me if the same approach would work for finding out the number of rectangles in the board. I tried for a couple of minutes & the counts didn't seem to form any pattern. Later, while looking up the solution on the net, I realized that for counting the rectangles, an iterative approach is simply futile.

The actual solution to the problem is very simple. Let's see how it works for a square first. The basic idea is to enumerate the possible sizes of squares & count the positions each size can occupy. For example, a square of size 1x1 fits in 64 positions, a square of size 2x2 in 49 positions, a square of size 3x3 in 36 positions, and so on.

For a rectangle, the same idea applies. We first identify the various possible shapes: anything from 1x1 to 1x8 and 8x1 to 8x8. Then we count the positions each shape can occupy & sum them all up.
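The counting described above reduces to two small sums; a sketch in Python (function names are my own):

```python
def count_squares(n=8):
    # A k x k square fits in (n-k+1)^2 positions on an n x n board.
    return sum((n - k + 1) ** 2 for k in range(1, n + 1))

def count_rectangles(n=8):
    # A w x h rectangle fits in (n-w+1) * (n-h+1) positions.
    return sum((n - w + 1) * (n - h + 1)
               for w in range(1, n + 1)
               for h in range(1, n + 1))
```

For the standard 8x8 board these give 204 squares & 1296 rectangles; the latter also equals C(9,2)^2 = 36^2, since a rectangle is determined by choosing 2 of the 9 grid lines in each direction.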

Learning JavaScript

Though a little late, I finally realized that I would pretty soon be ostracized if I don't learn JavaScript. Being in an Internet company, it's more of a mandate than a nice-to-know language. Given that, I started learning it today & thought it would be good to capture some of the important points for later reference.
  1. JS doesn't have the notion of Classes. Everything is an Object. They call it the 'prototypal OO' language to mark this distinction. To remember this fact, here is an example. In classic OO language, we would say "create an object called Bob which is of class Person" whereas in JS, we've to say, "Take the object Person & reuse it as a prototype for a new object called Bob".
  2. In JS, there are no private/public/protected access modifiers. Everything is public. But the effect of the same is achieved using scope.
  3. Nice things like 'Infinity', 'NaN' and '2e5' are available to express the numbers very clearly.
  4. All functions have a hidden argument called 'arguments' that gives access to all the parameters passed to the function as an array-like object.
  5. Variables not declared using 'var' are implicitly declared with global scope.
  6. In JS, functions are data. That is, a function with name 'foo' is actually a variable of type 'function' with the value as the body of the function.
  7. 'Scope Chaining' exists in JS. That is, an inner function can access variables defined in the scope of the outer function. Also, it follows the usual 'Lexical Scope' for functions. That is, the environment is created while the function is defined rather than when it is run.
  8. An object property (or an implicitly-declared global) can be removed using the 'delete' operator; variables declared with 'var' cannot be deleted.
  9. 'Closure' is a concept where a function keeps a link to its parent's scope even after the parent has returned. There are various ways in which this can happen & it should be handled very carefully. The access protection mentioned in point 2 can be achieved using this.