Oracle/Sun JDK on EC2 Amazon Linux

Here is a cheat sheet on how to get an Amazon Linux EC2 instance tricked out with Oracle JDK instead of the default OpenJDK.

Remove OpenJDK without removing dependencies:
$ sudo rpm --erase --nodeps java-1.6.0-openjdk java-1.6.0-openjdk-devel

Download the Oracle/Sun JDK:
$ wget --no-check-certificate --no-cookies --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2Ftechnetwork%2Fjava%2Fjavase%2Fdownloads%2Fjdk-7u3-download-1501626.html;" http://download.oracle.com/otn-pub/java/jdk/7u25-b15/jdk-7u25-linux-x64.rpm

Install the new JDK:
$ sudo yum install jdk-7u25-linux-x64.rpm

Create new alternatives entries and switch to them:
$ for i in /usr/java/jdk1.7.0_25/bin/* ; do \
f=$(basename $i); echo $f; \
sudo alternatives --install /usr/bin/$f $f $i 20000 ; \
sudo update-alternatives --config $f ; \
done

Create symlink to JDK for use by aws tools:

$ cd /etc/alternatives
$ sudo ln -sfn /usr/java/jdk1.7.0_25 java_sdk
$ cd /usr/lib/jvm
$ sudo ln -sfn /usr/java/jdk1.7.0_25/jre jre

Amazon AWS: Elastic Beanstalk now supports Python

Amazon Web Services recently announced Python support for Elastic Beanstalk. Support for Java, PHP and .NET was already available.

Elastic Beanstalk comes with support for Amazon RDS. It lets you quickly deploy your Django apps (or apps built on any other Python framework that uses WSGI) with a MySQL backend, taking care of scaling and availability issues for you.

Now, using the “eb” command-line tool and “git”, one can get Heroku-like productivity to develop, test and deploy your PHP, Java, .NET and now Python apps. Go AWS!

HTML5 FileSystem API

Someone complains that the HTML5 filesystem API is too verbose, and has implemented a library to make things easier:

http://kybernetikos.com/2012/07/27/fsapi/

Link: “Why are AWS Command-Line Tools so Slow?”

Diomidis D. Spinellis hunts down sources of latency in the AWS command-line tools. Excerpt:

Amazon’s Elastic Compute Cloud command-line tools are useful building blocks for creating more complex shell scripts. They allow you to start and stop instances, get their status, add tags, manage storage, IP addresses, and so on. They have one big disadvantage: they take a long time to run. For instance, running ec2-describe-instances for six instances takes 19 seconds on an m1.small AWS Linux instance. One answer given is that this is caused by JVM startup overhead. I found that hard to believe, because on the same machine a Java “hello world” program executes in 120ms, and running ec2-describe-instances --help takes just 321ms. So I set out to investigate, and, using multiple tracing tools and techniques, this is what I found.
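The JVM-startup figure in the excerpt is easy to sanity-check yourself. Here is a minimal program to time (the class name and the timing commands are mine, not from the linked post):

```java
// HelloWorld.java -- a minimal program for timing bare JVM startup.
// Nearly all of the wall-clock time of running this is JVM startup,
// not the println itself.
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("hello, world");
    }
}
```

Compile and time it with something like `javac HelloWorld.java && time java HelloWorld`; if JVM startup alone explained the 19 seconds, this would be slow too.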

Backbone.js views done right

While everyone is going gaga over backbone.js, here is something to keep in mind when designing large scale websites. Backbone.js views done right – http://pulse.me/s/ah2sc

Clojure: a pretty good lisp

I had had a look at Clojure when it first came out. It looked interesting, but was in its infancy. I recently started looking at it again. It is slightly different from Common Lisp. A very interesting thing about Clojure is that, being “hosted” on the Java virtual machine, it has access to Java libraries.

Here is a little Clojure program I wrote (following the book “The Joy of Clojure”):

(ns tut1.gui
  (:gen-class))

(defn f-values
  "Return a lazy seq of [x y v] triples for every x,y combination,
  where v is (f x y) taken mod 256."
  [f xmax ymax]
  (for [x (range xmax)
        y (range ymax)]
    [x y (rem (f x y) 256)]))

(defn draw-values [frame f xmax ymax]
  (let [gfx (.getGraphics frame)]
    (.clearRect gfx 0 0 xmax ymax)
    (doseq [[x y v] (f-values f xmax ymax)]
      (.setColor gfx (java.awt.Color. v v v))
      (.fillRect gfx x y 1 1))))

(defn draw-graphic
  "Draw f(x y) for all x,y combinations on an xmax by ymax GUI frame."
  [frame f xmax ymax]
  (.setVisible frame true)
  (.setSize frame (java.awt.Dimension. xmax ymax))
  (draw-values frame f xmax ymax))

(defn remove-graphic [frame]
  (.dispose frame))

And I call it from the repl like so:

tut1.gui=> (def frame (java.awt.Frame.))
#'tut1.gui/frame
tut1.gui=> frame
#<Frame java.awt.Frame[frame0,0,22,0x0,invalid,hidden,layout=java.awt.BorderLayout,title=,resizable,normal]>
tut1.gui=> (draw-graphic frame bit-xor 300 300)
nil

which draws a pretty graphic. But, but, but! The important point in all this is that we are using Java libraries and objects, from a REPL, dynamically. We can instantiate Java objects, call methods on them, etc. For example, when I get tired of the graphic drawn above, I can remove it by running this at the REPL:

tut1.gui=> (.dispose frame)

Measuring performance of Lisp functions

Consider the following exercise (Ex. 1.5 in “Paradigms of Artificial Intelligence Programming” by Peter Norvig):

Write a function to compute the dot product of two sequences of numbers, represented as lists. The dot product is computed by multiplying corresponding elements and then adding up the resulting products. Example:
(dot-product '(10 20) '(3 4)) => 110

Here are four implementations (the first one is mine; the other three are the solutions to the exercise given in the book):

1. My solution: Recursion with accumulator — tail call optimized (TCO)

(defun dot-product (lst1 lst2 &optional (acc 0))
  "Computes the dot product of two sequences, represented as lists."
  (if (not lst1)
      acc
      (let ((x (first lst1))
	    (y (first lst2))
	    (lst11 (rest lst1))
	    (lst22 (rest lst2)))
	(dot-product lst11 lst22 (+ acc (* x y))))))

2. Solution 1 in PAIP: apply and mapcar

(defun dot-product1 (lst1 lst2)
  (apply #'+ (mapcar #'* lst1 lst2)))

3. Solution 2 in PAIP: recursive without TCO

(defun dot-product2 (lst1 lst2)
  (if (or (null lst1) (null lst2))
      0
      (+ (* (first lst1) (first lst2))
	 (dot-product2 (rest lst1) (rest lst2)))))

4. Solution 3 in PAIP: iteration and indexing

(defun dot-product3 (lst1 lst2)
  (let ((sum 0))
    (dotimes (i (length lst1))
      (incf sum (* (elt lst1 i) (elt lst2 i))))
    sum))

Performance

We test the solutions like this:

Set up some test data:

CL-USER> (defparameter *a* (make-list 100000 :initial-element 1))

dot-product: Use the time macro to measure the performance of the function dot-product:

CL-USER> (time (dot-product *a* *a*))
Evaluation took:
  0.002 seconds of real time
  0.001302 seconds of total run time (0.001301 user, 0.000001 system)
  50.00% CPU
  3,450,092 processor cycles
  0 bytes consed

100000

The function does not allocate any new memory, uses 50% CPU on average, and finishes in 0.001302 seconds for a 100,000-element list.

Similarly, we test the other three functions:

dot-product1:

CL-USER> (time (dot-product1 *a* *a*))
Evaluation took:
  0.003 seconds of real time
  0.002625 seconds of total run time (0.002611 user, 0.000014 system)
  100.00% CPU
  6,967,544 processor cycles
  3,205,632 bytes consed

100000

dot-product2:

CL-USER> (time (dot-product2 *a* *a*))
Control stack guard page temporarily disabled: proceed with caution
...

This one aborted because it ran out of stack space before it could complete.

dot-product3:

CL-USER> (time (dot-product3 *a* *a*))
Evaluation took:
  58.350 seconds of real time
  58.347480 seconds of total run time (58.330376 user, 0.017104 system)
  99.99% CPU
  155,215,653,812 processor cycles
  0 bytes consed

100000

So it seems my solution is twice as fast as the fastest one in PAIP and has the advantage of not using any extra memory, whereas that PAIP solution conses nearly 3MB (32 bytes per element). My solution is 44,813 times faster than the PAIP solution that does not allocate extra memory. Admittedly, the PAIP solutions are not about performance but about showing how to write code in Lisp, and the author has not introduced tail-call optimization at that point in the book, so I am not criticizing PAIP at all.

Well, dot-product1 looks very concise and elegant, doesn’t it? What if we try to run it with a larger list?

CL-USER> (defparameter *a* (make-list 1000000 :initial-element 1))
*A*

And now, alas, it also exhausts the stack space:

CL-USER> (time (dot-product1 *a* *a*))
Control stack guard page temporarily disabled: proceed with caution
; Evaluation aborted on #<SB-KERNEL::CONTROL-STACK-EXHAUSTED {100341FE93}>.

Presumably this is because we are trying to pass 1,000,000 arguments to apply, and the arguments have to be pushed onto the stack. (Indeed, Common Lisp implementations only promise that apply handles up to call-arguments-limit arguments, which may be far fewer than a million.) Perhaps we can tweak this a bit so that the arguments are not pushed onto the stack:

(defun dot-product4 (lst1 lst2)
  (reduce #'+ (mapcar #'* lst1 lst2) :initial-value 0))

And now this function does run:

CL-USER> (time (dot-product4 *a* *a*))
Evaluation took:
  0.212 seconds of real time
  0.210735 seconds of total run time (0.182024 user, 0.028711 system)
  [ Run times consist of 0.143 seconds GC time, and 0.068 seconds non-GC time. ]
  99.53% CPU
  563,737,032 processor cycles
  47,974,560 bytes consed
  
1000000

Hmm… it seems to be allocating even more bytes per element than before (about 48 bytes per element), and there is now some GC overhead. What about my function?

CL-USER> (time (dot-product *a* *a*)) ; *a* is a list with a million elements
Evaluation took:
  0.014 seconds of real time
  0.013474 seconds of total run time (0.013462 user, 0.000012 system)
  92.86% CPU
  36,021,059 processor cycles
  0 bytes consed
  
1000000

Still jolly good!

Update

Svente posted another (more efficient, very readable) solution in the comments below:

(defun dot-product5 (lst1 lst2)
  (loop for element1 in lst1
       for element2 in lst2
       sum (* element1 element2)))

This code uses two parallel for clauses (stepping down both lists together, not nested loops) and an implicit, hidden local variable to hold the sum. How does it perform?

CL-USER> (time (dot-product5 *a* *a*))
Evaluation took:
  0.007 seconds of real time
  0.007470 seconds of total run time (0.007470 user, 0.000000 system)
  100.00% CPU
  19,859,546 processor cycles
  0 bytes consed
  
1000000

It performs very well indeed; better than my recursive solution.

Summary

CPU, time, and memory taken with a list of 100,000 elements:

Solution      CPU %    Time (s)   Cycles/element   Bytes consed
dot-product   50.0%    0.001302   34.5             0
dot-product1  100.0%   0.002625   69.7             3,205,632
dot-product2  (does not run to completion)
dot-product3  99.99%   58.35      1,552,157        0

CPU, time, and memory taken with a list of 1,000,000 elements:

Solution      CPU %    Time (s)   Cycles/element   Bytes consed
dot-product   100.0%   0.010221   27.2             0
dot-product4  99.21%   0.1259     336              48M
dot-product5  100.0%   0.0079     21.2             0

Lisp on Mac OS X Lion

I did not have any Lisp installed on my newish MacBook Pro. I thought I’d revisit my previous post on how to get SBCL + Emacs + SLIME working on Linux and update it for Mac OS X, using Aquamacs instead of Emacs. However, I found that someone has already written a very good post on getting Aquamacs + SBCL + SLIME working on Mac OS X Lion.

Realizing A Service Provider Framework with Java EE / CDI

Consider the following scenario: you are writing some code that requires a service. There may be multiple implementations of the service, and possibly not all of the implementations are known a priori. You’d like the client code not to depend on any particular implementation, but you would like the system to be able to select a particular implementation at runtime and use it.

This is what is called a service provider framework: a system in which multiple service providers implement a service, and the system makes the implementations available to its clients, decoupling them from the implementations. Of course, since the client code uses the implementations at some point in time, there is still a runtime dependency between the client code and the individual implementations of the service, but the important point is that the client code does not have to change when new implementations are introduced.

Let’s consider a very simple scenario: there is an interface, called ProviderInterface, and there can be multiple implementations unknown at design time (say, ProviderA and ProviderB). Some client code, call it ProviderLister, would like to discover which implementations are present and present them to the user for selection. The situation is illustrated in the figure below.

How would one go about implementing this pattern? In this article, I will explore how to do it using Java EE CDI services.

Consider the implementation / component diagram below:

Here is the code that can implement this pattern.

The framework

The framework is in the spi jar file. The jar contains the following code in net.nihilanth.demo.spi.IProvider.java:

package net.nihilanth.demo.spi;

public interface IProvider {
    public String getName();
}

And that’s it. No other code, no dependencies.

The plugins

The plugins are also jar files. The plugin1 jar contains the following code in net.nihilanth.demo.plugin1.ProviderA.java:

package net.nihilanth.demo.plugin1;

import net.nihilanth.demo.spi.IProvider;

public class ProviderA implements IProvider {

    @Override
    public String getName() {
        return "ProviderA";
    }

}

The jar also contains a META-INF/beans.xml file which will tell our Java EE container to inspect the contents of the JAR for CDI injectable classes:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://java.sun.com/xml/ns/javaee"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/beans_1_0.xsd">
</beans>

As you can see, the file is actually empty apart from the appropriate namespace declarations. Its presence is enough to let CDI know that it should inspect our JAR and find injectable beans (ProviderA, in our case).

plugin2 contains very similar code, except that it provides net.nihilanth.demo.plugin2.ProviderB class.

Note that plugin1 and plugin2 jars will have a compile time dependency on the spi jar, since the ProviderX classes have a hard dependency on the IProvider interface, which is packaged in the spi jar.

The client code

Our client code is in the container war file. It contains a WEB-INF/beans.xml file with code similar to above to tell CDI to find injectable beans. Besides that, it contains our client code in net.nihilanth.demo.container.ProviderLister.java file, which is a JSF managed bean.

package net.nihilanth.demo.container;

import java.util.ArrayList;
import java.util.List;
import javax.enterprise.inject.Instance;
import javax.faces.bean.ManagedBean;
import javax.faces.bean.RequestScoped;
import javax.inject.Inject;
import net.nihilanth.demo.spi.IProvider;

@ManagedBean
@RequestScoped
public class ProviderLister {

    @Inject
    private Instance<IProvider> providerSource;

    public ProviderLister() {
    }

    public List<String> getProviderList() {
        List<String> names = new ArrayList<String>();

        for (IProvider provider : providerSource) {
            names.add(provider.getName());
        }

        return names;
    }
}

The important code here is:

    @Inject
    private Instance<IProvider> providerSource;

This is the line that tells CDI to inject an Instance object. We can then use this object to iterate over all implementations of IProvider. Which implementations will be found? That depends on which implementations have been bundled in the war. I use Maven to build my artifacts, so in my war’s pom.xml I include all the provider implementations I am interested in at runtime:

<project ...>
    <dependencies>
        <dependency>
            <groupId>net.nihilanth.demo</groupId>
            <artifactId>spi</artifactId>
            <version>1.0-SNAPSHOT</version>
        </dependency>
        <dependency>
            <groupId>net.nihilanth.demo</groupId>
            <artifactId>plugin1</artifactId>
            <version>1.0-SNAPSHOT</version>
            <scope>runtime</scope>
        </dependency>
        <dependency>
            <groupId>net.nihilanth.demo</groupId>
            <artifactId>plugin2</artifactId>
            <version>1.0-SNAPSHOT</version>
            <scope>runtime</scope>
        </dependency>        
    </dependencies>
</project>

The important thing to note is that the dependency on the spi jar is a compile-time dependency (the default in Maven if you don’t override it with a scope directive), whereas the dependencies on the plugin jars are runtime dependencies: this essentially means that code in the war cannot directly refer to any code in the plugin jars and therefore cannot be coupled to them.

Now we just add a JSF page to list our discovered provider implementations (I am using the Facelets variant of JSF):

<?xml version='1.0' encoding='UTF-8' ?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:h="http://java.sun.com/jsf/html"
      xmlns:f="http://java.sun.com/jsf/core">
    <h:head>
        <title>Facelet Title</title>
    </h:head>
    <h:body>
        Hello from Facelets
        <h:dataTable value="#{providerLister.providerList}" var="provider">
            <h:column>
                <f:facet name="header">Provider Name</f:facet>
                #{provider}
            </h:column>
        </h:dataTable>
    </h:body>
</html>

And that’s it! Now your war will be able to list IProvider implementations that were bundled with the war and show them. Of course, you still have to do something useful with them, and I hope to cover that in another post.

What’s going on here

What’s going on is that the Java EE CDI service has figured out that the client code wants to list all implementations of IProvider interface. It inspects the included jars for injectable beans (using the presence of beans.xml files to guide which jars to look into) and registers the found implementations internally. Then, on demand, it provides us with those implementations.

How to add more plugins

So, imagine that in the future someone writes a ProviderC class. They should package it into a jar with a beans.xml file, and the author of the war just needs to add this jar to the Maven dependency list in the war’s pom.xml. The new class will be available at runtime.
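To make that concrete, here is a sketch of what such a plugin class might look like. ProviderC and its package are hypothetical, following the plugin1/plugin2 convention above; the IProvider interface is repeated inline here only so the snippet is self-contained, whereas in the real jar it would be imported from the spi jar.

```java
// Hypothetical plugin3 jar: net/nihilanth/demo/plugin3/ProviderC.java
// In the real jar the file would start with:
//   package net.nihilanth.demo.plugin3;
//   import net.nihilanth.demo.spi.IProvider;
// The interface is inlined below only to keep this snippet self-contained.
interface IProvider {
    String getName();
}

public class ProviderC implements IProvider {
    @Override
    public String getName() {
        return "ProviderC";
    }
}
```

Bundle the compiled class together with an empty META-INF/beans.xml into the jar, add it as a runtime-scoped Maven dependency of the war, and ProviderLister will list it without any code change.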

Related stuff

In his excellent book Effective Java, Joshua Bloch discusses the service provider framework. His framework does not assume the presence of anything like CDI, so his providers must include code to register themselves with the framework somehow. I have shown that with CDI, we can get the container to do this registration for us.
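For contrast, a manual registration scheme in the spirit Bloch describes might look something like the sketch below. All the names here are mine, not Bloch’s actual code; the point is the registerProvider call, which is exactly the step CDI makes unnecessary.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A minimal Bloch-style service provider framework (sketch):
// providers must register themselves explicitly with the registry.
public class Services {
    // The service interface and provider interface of the framework.
    public interface Service { }
    public interface Provider { Service newService(); }

    private static final Map<String, Provider> providers =
            new ConcurrentHashMap<String, Provider>();

    // Provider registration API: each implementation must call this,
    // typically from a static initializer in the provider class.
    public static void registerProvider(String name, Provider p) {
        providers.put(name, p);
    }

    // Service access API used by clients.
    public static Service newInstance(String name) {
        Provider p = providers.get(name);
        if (p == null) {
            throw new IllegalArgumentException("No provider registered: " + name);
        }
        return p.newService();
    }
}
```

With CDI, the container discovers the beans.xml-marked jars and performs the equivalent of registerProvider for us.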

Architecture vs Design

What makes Software Architecture different from Software Design?

This is a provocative question. What is the motivation behind it? Does it matter whether one is doing architecture or design? Isn’t the point of both to figure out how a system is going to be built, and to communicate that?

Well, yes. In smaller teams, a single person or a few people may be doing architecture and design, along with development, and it is the final product that really matters anyway, so there is no need to split hairs.

In other projects, there may be a mandate to keep distinct architecture and design documents, with distinct people involved in system architecture and design. In fact, most companies have dedicated Architects, who do this activity called System Architecture and who produce System Architecture Documents. What do they do that is distinct from design?

Definitions, definitions…

The core problem is that there is no canonical definition of the term Software Architecture. So, different people have different takes on it. Consider some definitions of Software Architecture:

  • “…is a depiction of the system that aids in the understanding of how the system will behave.” — SEI
  • “…is the set of structures needed to reason about the system, which
    comprise software elements, relations among them, and properties of
    both.” — Wikipedia, quoting Documenting Software Architectures: Views and Beyond, Second Edition
  • “The architecture of a system is its ‘skeleton’. It’s the highest level of abstraction of a system. What kind of data storage is present, how do modules interact with each other, what recovery systems are in place… Software design is about designing the individual modules / components. What are the responsibilities, functions, of module x?” — Someone on the internet

Ad infinitum. One can see that there are many definitions: some broad and some specific, some vague and some precise, some that seem to make sense and some that you might not agree with. Most of them seem to cluster around an unsaid consensus: none might claim to be the definition, but on aggregate they seem to be talking about the same thing. Some definitions, while making sense on their own, give no guidance on how Architecture is different from design. Others, focusing on the distinction, seem to say things that don’t seem quite right (“skeleton”? Really? And what is meant by the “skeleton of a system”? When does a skeleton stop being a skeleton?)

It seems, the definition of Software Architecture is like the definition of Object Oriented: There is broad consensus, at least in terms of people who practice the art, but just try to get a bunch of people to agree on the one true definition of it and work will come to a halt while debates rage. On the other hand, people seem to be doing fairly ok with creating object oriented software and are able to learn and teach it, without, gasp!, the one true definition.

This does not help the Architect, who needs to decide when to stop refining and detailing and declare the Software Architecture as done, and pass the baton to the downstream people to design and implement.

Issues

The problem with thinking of architecture vs design as “high level” vs “low level”, or “abstract” vs “detailed”, is that it just restates the question in even more vague language. I mean, just try finding a clear-cut definition of abstract vs detailed.

Perhaps the answer lies not in the definitions of the terms, but in the goals? What is an Architect trying to achieve with his Software Architecture that is different from the Designer?

Goals

Let’s agree that (a) a system’s life begins with a set of requirements, (b) an Architect will produce a System Architecture Document (SAD), and (c) developers will develop the system according to that document, with some design documents being produced before or during the development phase.

A system has functional requirements, and some non-functional ones. There are also quality attributes: flexibility, maintainability, and—oh—multiple people need to be able to develop the system in parallel. It is the architect’s job to describe the system to be built in a way, and in enough detail, that all the things that need to be described to meet these goals are documented, and anything else is left to the discretion of the designers and developers.

So, it’s not really about the detail or abstraction level. If some particular part of the system can be adequately described as “…does <well understood task> in 20ms”, then so be it: the architecture does not need to go into more detail. On the other hand, if there is a critical SOAP service whose granularity, operations, and semantics will affect the function, evolution and performance of the system, then the Architect must go into the nitty-gritty details and demonstrate just how the needs of the stakeholders will be met. No hand-waving will do.

What about modules? Does an Architecture have to deal with Modules? What about sub-modules? Sub-sub-modules? Modules are recursive: they can always be broken down into smaller modules (and again, there is no universal definition of the term).

Regarding how far to go in decomposing a system into modules, the Architect must balance two needs: (a) a project manager, development manager, or team lead needs enough decomposition that teams or individuals can be put to work on design or development in parallel, so the decomposition must support the desired level of parallelism; (b) the developers should have maximum design flexibility: they should be able to implement the system in any way possible as long as it meets the requirements and quality attributes.

As soon as the Architect has decomposed the system into sufficiently granular parts that parallel development can be scheduled, the decomposition can stop. The Architect is responsible for spelling out how the modules will work together, and for imposing any conditions or restrictions on the individual modules’ design and development that affect the overall system requirements and quality; the rest can be left for “downstream”.

An individual module may be large or complex enough to merit its own Architecture, but from the outside, the Architect has done his job for the overall system if the module has been described in sufficient detail for the team responsible for it to design and develop it correctly.

Conclusion

In conclusion, an architecture is what an Architect does, and the Architect needs to do enough of it that the requirements of all the stakeholders of the system will be met if his/her architecture is followed. An architect’s job is to ensure that the correct system will be built by the people who follow his architecture. This implies that the Architect needs to consider the context: what are those things that must be specified and constrained for the system to meet its objectives, versus those things that can be left to the discretion of the implementers?

And here is a fairly detailed description of all the things that an architecture needs to consider or might affect.
