My 5 minute lightning talk on Cascalog

Cascalog is a Clojure data processing library for building MapReduce jobs. It makes it a lot simpler to build, for example, distributed strategy backtesters over terabytes of market data. I've been spiking out a data processing project with it at work for the past couple of weeks, so I thought I might as well give a lightning talk about it at our monthly developers meetup. Here are my presentation slides introducing Cascalog and outlining its features.
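
To give a taste of what queries look like, here is the canonical word-count query, essentially the example from the Cascalog README; the namespace and the tiny in-memory sentences generator are my own stand-ins:

(ns demo.wordcount
  (:use cascalog.api)
  (:require [cascalog.ops :as c]))

;; Tiny in-memory generator; a real job would read from an HDFS tap.
(def sentences [["hello world"] ["hello cascalog world"]])

;; Custom operation: split a sentence into words, one output tuple per word.
(defmapcatop split [^String s]
  (seq (.split s "\\s+")))

;; Count the occurrences of each word across all sentences.
(?<- (stdout)            ; sink the results to standard output
     [?word ?count]      ; output fields
     (sentences ?s)      ; generator: source tuples
     (split ?s :> ?word) ; map each sentence to its words
     (c/count ?count))   ; aggregate a count per distinct ?word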

The possibilities...

Algorithmic ownage

I felt good when I simplified one of my algorithms and sped it up 10 times. I felt so good that I even wrote an entire blog post about it, patting myself on the back. Then last week I got an email from Kevin of Keming Labs suggesting a few alternatives.

;; Three ways to count the number of occurrences in a collection
;; ("orange" "bottle" "coke" "bottle") => [("bottle" "coke" "orange") (2 1 1)]
(let [c '("orange" "bottle" "coke" "bottle")]

  ;; Solution 1: reduce, merging a one-entry count map into the accumulator
  (let [counts (reduce #(conj %1 (hash-map %2 (inc (get %1 %2 0))))
                       {} c)]
    [(keys counts)
     (vals counts)])


  ;; Solution 2: group-by identity, then count each group
  (let [l (for [[k v] (group-by identity c)]
            [k (count v)])]
    [(map first l)
     (map second l)])


  ;; Solution 3: pair every item with 1, then merge-with +
  (let [counts (apply merge-with +
                      (map #(apply hash-map %)
                           (partition 2 (interleave c (repeat 1)))))]
    [(keys counts)
     (vals counts)]))

First of all, his solutions looked much cleaner than mine. Then over the weekend I incorporated his three algorithms into my program. I ran a few benchmarks; here are the averages of two runs using a dataset of 28,760 items.

  • My algorithm. Elapsed time: 68372.026532 msecs.
  • Kevin's solution #1. Elapsed time: 156.940976 msecs.
  • Kevin's solution #2. Elapsed time: 60.165483 msecs.
  • Kevin's solution #3. Elapsed time: 296.162042 msecs.
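
Those "Elapsed time" lines match the output format of Clojure's built-in time macro; here is a minimal sketch of how such a timing can be taken, using random stand-in data rather than the original dataset:

;; Stand-in dataset of 28,760 items; the original data isn't public.
(def dataset (vec (repeatedly 28760 #(rand-nth ["orange" "bottle" "coke" "water"]))))

;; Wrap any of the solutions above; prints "Elapsed time: ... msecs".
(time
 (let [l (for [[k v] (group-by identity dataset)]
           [k (count v)])]
   [(doall (map first l))
    (doall (map second l))]))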

Total ownage. That's what I like about sharing my work; once in a blue moon, a reader drops by and generously shows me how I can improve a solution 1,000 times! Now the ball is in my court to understand what he has done and improve myself. Collaborating and learning, that's why I open source.

Update: I've done some more digging, and it seems that one reason for the drastic improvement in performance is the use of transients inside the built-in functions. Lesson of the day: leverage the language's inherent optimizations by staying with core data structures and functions as much as possible.
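
To see what transients buy, here is a minimal sketch of an occurrence counter built on a transient map, which mirrors how clojure.core/frequencies is implemented; in fact, frequencies alone could replace all of the solutions above:

;; Count occurrences by mutating a transient map, then freezing it.
;; This mirrors the implementation of clojure.core/frequencies.
(defn count-occurrences
  [coll]
  (persistent!
   (reduce (fn [counts x]
             (assoc! counts x (inc (get counts x 0))))
           (transient {})
           coll)))

;; (count-occurrences '("orange" "bottle" "coke" "bottle"))
;; => {"orange" 1, "bottle" 2, "coke" 1}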

Eureka moment on design patterns for functional programming

Understanding design patterns for object-oriented programming made my life easier as a Java programmer, so I have been looking for a comparable book for functional programming ever since my foray into this age-old paradigm. It looks as though I'm not the only one looking, either. But here's the thing: I think I've just had a revelation of sorts.

There is one and only one guideline to designing functional architectures: keep it simple. Simple as in each function has a single purpose. Simple as in work with the data directly and don't conjure unnecessary intermediaries. Simple, as elaborated by Rich Hickey in his talk, Simple Made Easy. Much of this is conveyed in Bloch's Effective Java, for example:

Item 5 – Avoid creating unnecessary objects
Item 13 – Minimize the accessibility of classes and members

As Majewski said in a Stack Overflow reply (update 2013: the question is no longer available on Stack Overflow),

The patterns movement started because object-oriented programming
was often turning into spaghetti objects ... Functional programming
is a restricted style of programming, so it didn't need to grow a set
of restricted conventions to limit the chaos.

As such, there is no design pattern book for functional programming. I didn't get that earlier this year, but something clicked recently. During the past few months, I've been doing some consulting and open source projects solving algorithmic problems with Clojure.

One of the problems I was faced with in a project this week was calculating the number of occurrences of each distinct element within a list of elements. Say we have a list, coll = ("orange", "bottle", "coke", "bottle"). The output would be something like [("orange", "bottle", "coke") (1 2 1)].

This is my first solution.

(defn eval-decompose
  [coll]
  (let [super-d  (distinct coll)      ;; the unique elements
        freqs    (loop [ps  []        ;; counts accumulated so far
                        d   super-d]  ;; distinct elements left to process
                   (if (seq d)
                     ;; scan the whole collection for the next distinct element
                     (let [c  (count (filter (partial = (first d)) coll))]
                       (recur (conj ps c) (rest d)))
                     ps))]
    ;; pair each distinct element with its count
    (map #(vector %1 %2) super-d freqs)))

The specs are not exactly as I described, but the concept remains. What I did was use tail recursion (it's supposed to be fast, isn't it?) to aggregate each counter into a vector of counts. Then I map each distinct fragment to its corresponding count to generate the final output collection. Sounds overly complicated, doesn't it?

This is the first warning sign of a bad functional design. For a collection of 30,000 items, this function took 11 minutes to compute on my notebook. This looks like a good place to exploit the parallel nature of the problem.

Specifically, the counting of each fragment is independent of the other fragments, so there's no need for the program to wait for one fragment to finish before processing the next. I simplified the program to remove this inherent assumption of sequential processing. Here is the gist of the refactored code, where each function does only one job. Since the processing is modularised, I can parallelize the algorithm easily by using pmap instead of map on the last line, as shown below.

(defn match-count
" Given key, k, returns number of occurrences of k in collection, coll.
"
  [k coll]
  (let [match?  (fn [i x]
                  (if (= k x)   ;; closure on k
                    (inc i)
                    i))]
    (reduce match? 0 coll)))

(defn calc-counts
" Returns a list of counts for the occurrences of each key of keys, ks,
  within the collection, coll.
"
  [ks coll]
  (pmap #(match-count % coll) ks))
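
The third function is just glue; a hypothetical version (the name eval-counts and its shape are my guess, not code from the post) could look like this:

;; Hypothetical glue function: the third of the three, not shown in the post.
(defn eval-counts
  [coll]
  (let [ks (distinct coll)]
    (map vector ks (calc-counts ks coll))))

;; (eval-counts '("orange" "bottle" "coke" "bottle"))
;; => (["orange" 1] ["bottle" 2] ["coke" 1])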

I've split the first function into three functions (two shown here). As Hickey said in his talk, simplifying can often produce more functions, not fewer. Yet the program is not only easier to read, it also runs in less than a minute. An order of magnitude faster! There is still a lot for me to learn. I want to find more challenging projects to push my own limits. But rather than solving arbitrary problems, I prefer to tackle real-world challenges. So if you know of anyone who could benefit from collaborating with a functional developer to build robust and scalable software, please pass along my contact.

Follow up: Kevin Lynagh showed me three better ways of doing this in a follow-up post – Algorithmic ownage. Humbled.

Local Hadoop test cluster up and running

Thanks to Cloudera's CDH3 image, I have a virtual machine with Hadoop on CentOS 5 working. I'm more of an Ubuntu guy, so CentOS is new to me, but nothing Google couldn't solve. I also ran into a Hadoop exception about Java heap space; I couldn't find a solution online, so I just bumped up the memory on the virtual machine, which solved the problem. In any case, I managed to run the pi calculation example on my local Hadoop cluster (screenshot below).
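
For reference, the pi estimator ships with the Hadoop examples jar; on a CDH3 install the invocation looks something like the following (the jar's name and path vary between distributions), where 10 is the number of map tasks and 1000 the number of samples per map:

hadoop jar /usr/lib/hadoop/hadoop-examples.jar pi 10 1000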

[Screenshot of the pi calculation example: http://www.quantisan.com/static/images/2011/09/hadoop-example-pi.jpg]
