Thursday, March 20, 2014

The Minimum Viable Canvas

If you haven't already read my earlier post, The Minimum Viable Product, please do so before continuing.

The Minimum Viable Canvas is a subset of the Lean Canvas by Ash Maurya. The Lean Canvas itself is an adaptation of Alex Osterwalder's Business Model Canvas. You can read about the two canvases in Ash Maurya's post here.

The Minimum Viable Canvas is not meant to replace the Lean Canvas or the Business Model Canvas. The objective is to split these canvases into two parts: the "Product" part and the "Market" part. For me this was most easily done with the Lean Canvas, which is why the Minimum Viable Canvas is a subset of the Lean Canvas.

The proposition I am making here is that the two parts should be done by two separate persons/teams. They can be done by the same person/team too, if they are strong in both product development and customer development. Also, they have to be done in order: first the Product part, then the Market part.

The Minimum Viable Canvas will use the iterative procedure required by Steve Blank's Lean Startup approach.


So how did I come about this? In the last three or four months I have been learning about lean startups almost full time: reading books and articles, studying actual cases, and so on. I must have studied over a hundred cases in detail. Here is what I observed.

  1. The person/team has a good problem to solve, but comes up with a solution and value proposition that is not unique.
  2. The team has a good problem and a unique value proposition, but still comes up with a solution that is not unique. This is the wrong way of going about it that I discussed in my previous post.
I also observed that this has something to do with how technically strong the team is.
  1. If the team is not technically strong, there is a high probability that they would end up in either of the two situations described above.
  2. The team is technically strong and still ends up in either of the two situations above. This, I observed, may be due to the team getting distracted by the "Market" side of things. They are not focussing enough on the "Product" side.
The Minimum Viable Canvas is my attempt at a solution to the problem. And this is how it should work.
  1. Initially, the team must work on the Minimum Viable Canvas only, until they arrive at a Low Fidelity MVP, a High Fidelity MVP, or the MVP itself, whichever may be the case depending on their technical capabilities and the complexity of the problem.
  2. They should know how they are going to "make" the MVP. (I have discussed this in the last post). They should write a one page brief on how they are going to "make" the solution.
  3. The solution must result in, or comply with, a unique value proposition. (I have discussed this in the earlier post). This is an iterative process until you arrive at the solution and unique value proposition.
Once you have the Minimum Viable Canvas right, you can move on to the Lean Canvas or the Business Model Canvas. If it is the Lean Canvas, just fill in the equivalent sections. You will now have a canvas with the "Market" side of things blank. The marketing team then takes over, works on the rest of the canvas, and validates the hypotheses. Again, this is an iterative process. If you are wrong and need to pivot, start with the Minimum Viable Canvas again.

I have seen many teams trying to build a better TripAdvisor, or a better Flipboard/newsreader. Some take an existing solution and improve upon it by adding AI/ML. Unfortunately, almost all of them were going about it the wrong way, as discussed in my earlier post. The Minimum Viable Canvas tries to constrain the team into thinking only about the problem, solution and unique value proposition. Thus, they can improve their chances of coming up with the optimal solution.

The Minimum Viable Canvas is a Product-focussed canvas. The order in which to work on the canvas is as follows.
  1. Problem
  2. Unique Value Proposition
  3. Solution
  4. Cost Structure
  5. Key Metrics (How are you going to test the value proposition?)
You must also write a one page brief on how you will make the minimum viable product, and how you will test the value proposition against it.
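To make the Product/Market split concrete, here is one way to sketch the two halves as plain data. The example values, and the exact field names on the "Market" side, are my own illustration (adapted from the Lean Canvas), not an official format:

```javascript
// The "Product" half: filled in first, in the order listed above.
var minimumViableCanvas = {
    problem: 'Search results are poor for Problem X',          // 1. Problem
    uniqueValueProposition: 'Relevant results, via Problem X', // 2. UVP
    solution: 'An algorithm that solves Problem X',            // 3. Solution
    costStructure: 'One engineer, hosting',                    // 4. Cost Structure
    keyMetrics: 'Click-through rate on results'                // 5. Key Metrics
};

// The remaining Lean Canvas fields belong to the "Market" half and
// stay blank until the marketing team takes over.
var marketSide = {
    customerSegments: null,
    channels: null,
    revenueStreams: null,
    unfairAdvantage: null
};
```

Only once every "Product" field has survived iteration does the "Market" half get filled in.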

Tuesday, March 18, 2014

The Minimum Viable Product

For those not familiar with this subject, here is a link.

A lot has been said about the minimum viable product (MVP), and there are many articles discussing its pros and cons. In this post I want to take a different angle on the subject. The opinions expressed here are my own, based on my experience.

Quoting from an article on Apple designer Jonathan Ive from Time.

“Objects and their manufacture are inseparable. You understand a product if you understand how it’s made,”

He was talking about hardware, but this can also be applied to web based products and services, and I think to other areas too. Here, we will only look at web based products and services.

So let us rewrite the above quote for web based products.

A value proposition and its MVP are inseparable. You understand an MVP if you understand how it is made.

Sounds so simple. Then, why do we go wrong over and over again? To answer this question let us look at some examples.

Let's say we have a value proposition: we can build a better search engine than Google and Bing. These search engines have a problem; we will call it Problem X. We have the solution to Problem X. And here is our plan of action to build our MVP.

First we build a search engine that works like Google and Bing. Next we add our solution to Problem X to it. And we have a better search engine. Wrong! This is the wrong way to go about it.

The right way to go about it is to solve Problem X first, and build a search engine around it. In fact, this is exactly what Google did. A lot has been written about it, so I will not go into that here. When Google first launched, they did not try to emulate the other search engines of the time. Instead, they started with their solution to Problem X and built the search engine around it. This is also the reason why Bing is not better than Google: Bing simply did not have a Problem X to start with.

The example above was an extreme case. Let us look at a real-life scenario. There is a marketing person. He knows for sure people have a certain problem, Problem X. He has a value proposition, but no solution to Problem X. He knows this is a problem worth pursuing, and he needs a software engineer to build the MVP.

If the software engineer builds the MVP without the solution to Problem X, with the hope that he can add the solution later, he is doing it wrong.

A value proposition and its MVP are inseparable.

The software engineer can start with the solution to Problem X only if he knows the solution. He can know the solution only if he knows how to make it.

You understand an MVP if you understand how it is made.

So as an entrepreneur, how do you know if your software engineer has come up with the optimum solution?

Problem -> Solution -> Unique Value Proposition

If your software engineer has come up with a solution that produces a value proposition similar to your competitors', he has got it wrong. His solution must produce a Unique Value Proposition.

It is all about the "Jonathan Ive" in your team.







Sunday, August 26, 2012

The Best Basic Software Engineering tools

Over the years I have settled on the following basic tools configuration for my software development work, which I think is the best configuration for a software engineer as of today. This is also a kind of advice for people new to software engineering. All of this is my personal opinion, and I hope someone out there will find it useful. If you really are a good developer you should already know all this by now, and this post may not be very useful for you. Also, I am not going to give any reasons or justification for what I say below.

The OS


OS X is the best OS for software development as of today, and that should be your number one choice. If you don't have a Mac, then Linux should be your second choice. If you are using a Mac, the 13" MacBook Pro should be your ideal choice; prefer a larger size if you do more design than coding. If you are using Linux, go for the 32 bit version; the 64 bit version causes some problems if you are a developer (I experienced this myself). You should also use an SSD because it will speed up your work right from boot onwards. My 13" MacBook Pro boots in 15 seconds flat with an SSD, which is faster than a 13" MacBook Air.

Developing software on Windows is like trying to compete with one arm tied behind your back.

The Editor


Sublime Text 2. Period.
Unless you are an Emacs guru!

Version Control


Git. Period.
Make sure you read the Git Book.
Your remote repository should be Github.
If you want a private and free remote repository use Bitbucket.

Books to read


Effective Programming: More Than Writing Code (as of today the Kindle edition is free).
The Regular Expressions Cookbook.
From Nand to Tetris.

Languages to know


Any Lisp-like language. Here is a quote from Eric Raymond.
"Lisp is worth learning for the profound enlightenment experience you will have when you finally get it; that experience will make you a better programmer for the rest of your days, even if you never actually use Lisp itself a lot." - Eric Raymond

And if you want an easy Lisp like language, learn LispyScript. (I am the author).

JavaScript is getting more mainstream by the day. It is now popular on the server side too, thanks to node.js.

Python of course.

And one of the C's, (C, C++, Objective C).




Sunday, July 24, 2011

Browser supported Single Sign on with Email Addresses


In this post I would like to explore Single Sign On with email addresses, with the support of the user's browser. Browsers do not currently support Single Sign On. Recently Mozilla showcased their concept of BrowserID.

I am not comfortable with their use of asymmetric keys, because it requires the user to manage his own private/public keys. Indeed, BrowserID will ease the process for the user, but he still needs a private key on every computer he uses, and this may include public computers at browsing centers etc.

So here I will present a Single Sign On process that will not require asymmetric keys. For the sake of this post we will call the user's email provider "email.com", the site he wants to sign into "site.com", and the process "BSSO" for Browser Supported Single Sign On. This article will not get into the details of algorithms etc., because each step described here can be carried out in many ways, and has already been implemented in some form or another by other protocols. A good example is OpenID 2.0.

First, I will describe the process when the user is already signed into "email.com", and wants to sign into "site.com".

Case 1 - User signed into email.com


Step 1
The user browses to "site.com". "site.com" needs to indicate to the user's browser that it supports BSSO. This can be done in many ways; I will give one example here. On site.com's page it can include two elements. One element in the HTML head part, like below:
<link href="https://site.com/bsso" rel="bsso_end_point"/>
In the body part it can have an element with id "bsso_sign_in_button".
The rel="bsso_end_point" link element indicates to the browser that this site supports BSSO and that it should listen to the click event of the element with id="bsso_sign_in_button".

Step 2
When the user clicks the "Sign In" button for "site.com", the browser makes an authentication request to "email.com" on behalf of "site.com", with "site.com"'s end point. This requires the user to have pre-selected his preferred email address(es) in the browser's BSSO setup; if not, the browser will show a popup asking the user to select his preferred email address. The browser may also have discovered "email.com"'s end point during setup using webfinger.

Step 3
"email.com" returns a positive assertion of the user's email address. This is not a problem because the user is currently signed into "email.com". Also a private "association key" is included along with the assertion.

Steps (2) and (3) are transparent to the user. The browser makes a cross-domain Ajax request to "email.com". This is possible because it is the browser making the request, not any JavaScript on "site.com"'s page.

Step 4
The browser now directs the user to "site.com"'s end point URL with an HTTP POST request, with the assertion returned from "email.com" in the POST body.

Step 5
"site.com" will now verify the assertion by sending the assertion along with the association directly to "email.com"s endpoint. "site.com" would have also followed the webfinger protocol to determine the end point. It is possible for "site.com" to request a time bound association with "email.com", so that Step 5 and 6 can be avoided in subsequent requests.

Step 6
"email.com" will respond with success or failure. 

Case 2 - User Not signed into email.com

In the case where the user is not signed into "email.com", "email.com" responds in Step 3 with a "user not signed in" response, along with a sign in URL that might have an encoded token in its query parameter. (The encoded token is for preventing phishing; I am not yet sure if this token is required.) The browser will pop up a window, direct the user to the returned sign in URL, and listen for the popup's close event. After sign in, "email.com" must "close" the popup via JavaScript. When the popup is closed, the browser continues with Step 2 again. In case the popup was closed without the user signing in, the browser will receive "user not signed in" a second time, in which case it has to query the user again.
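The browser-side decision logic for Case 2 might look like the sketch below. The response shape ({status, signInUrl}) is an assumption for illustration:

```javascript
// Sketch: how the browser could react to "email.com"'s Step 3 response.
// The {status, signInUrl} shape is made up for illustration.
function nextAction(response, alreadyRetried) {
    if (response.status === 'ok') {
        return { action: 'post_assertion' }; // continue to Step 4
    }
    if (response.status === 'user_not_signed_in') {
        if (alreadyRetried) {
            // Popup was closed without signing in: ask the user again.
            return { action: 'query_user' };
        }
        // Pop up a window at the sign in URL and retry Step 2 on close.
        return { action: 'open_popup', url: response.signInUrl };
    }
    return { action: 'fail' };
}
```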

Some Notes
This may look like a lot of steps, but the user only "sees" steps (1) and (4). Also, (5) and (6) are not required after "site.com" and "email.com" have established an association.

Phishing is not possible, because there are no redirects from "site.com".

The user can sign in from anywhere; there is no need to have any private keys on the computer being used.

Unlike BrowserID, "email.com" will be aware of the sites the user signs into. I don't know how much of a problem this is; it's a debatable issue I guess.

Monday, October 25, 2010

Android App Development in Scheme

You can now develop Android apps in Scheme. I find this very exciting because you can develop in a simple, functional manner. I have started a tutorial for this here.
http://androidscheme.blogspot.com

Tuesday, September 28, 2010

Fast Track Clojure

Fast Track Clojure is a tutorial I am developing for learning the Clojure language. The objective of the tutorial is to get you developing in Clojure as soon as possible.

Clojure is a Lisp based language, and here I would like to quote Eric Raymond on why you should learn Lisp.

"Lisp is worth learning for the profound enlightenment experience you will have when you finally get it; that experience will make you a better programmer for the rest of your days, even if you never actually use Lisp itself a lot."

You can start with the tutorials here.
http://fasttrackclojure.blogspot.com/2010/09/lesson-1-hello-clojure.html

Thursday, July 01, 2010

Released Eezee MVC

I have released Eezee MVC, an easy Model-View-Controller framework for Google App Engine.
Features

  • Has a Controller Class that does routing, handling and rendering templates
  • Your controllers reside in the controllers folder, views (html Django templates) in views folder,
    models in models folder
  • Allows a Controller to receive GET/POST parameters as function arguments.
  • Uses S Nakajima's excellent gdispatch router
URL: http://eezeemvc.appspot.com/
Project URL: http://code.google.com/p/eezeemvc

Tuesday, June 23, 2009

Federated Identity in your Browser

In this post I am going to discuss the background information that will make a case for Federated Identity Management in Browsers. With the advent of new Browser capabilities, and leveraging the new technologies that will be adopted by Federated Identity specifications, I hope to show how Federated Identity Management can be achieved using Browsers.

Federation of Identity serves to enable portability of identity information across otherwise autonomous security domains. In other words, Federated Identity is about using a single identity to sign into different web sites (over-simplifying a bit here). This is not only about your "username" and "password" but also about other information that identifies a person, like real name, address, nick name, email etc.

Examples of common Federated Identity usage are using your Google or Yahoo account to log in to other sites like Blogger, YouTube etc. In this case the web site allows authentication via any "Provider" that follows a Federated Identity standard, e.g. OpenID. This is different from using your Facebook or Twitter accounts to sign into third party sites; the latter is called Delegated Identity. In other words, the web site you are signing into has delegated authentication to Facebook or Twitter.

The way Federated Identity log ins work is that when you visit a site, you are redirected to your Identity Provider, e.g. Google. You log in at your provider, "allow" your provider to share additional information (your name, email etc), and are then redirected back to the web site. For one, this method is prone to phishing: if a user inadvertently visits an untrustworthy site, the site could redirect the user to a site that looks like the user's provider and steal his user name and password.

Another problem is that when you visit a site that supports Federated Identity, you cannot log in just by clicking a button. The site, for various reasons, will choose to support a selected list of providers from which you have to choose, leading to what is called Nascarization.

Another problem is that of data portability. Let us say you have your identity, profile and social contacts at Google, and you want to change to Yahoo. There is no way to do that seamlessly as of now.

All this begs the question: where is the best place to keep your identity information? With you! That's the obvious answer. And the closest that can get to "you" is your browser. Unfortunately, in the current state of affairs of Federated Identity, the browser only plays the role of broker between the identity provider and the web site. If the browser were to manage your identity, you could solve all three problems above in one fell swoop!

However there are two reasons why browsers do not play a greater role in Federated Identity Management.
  1. There is no commonly accepted standard that would allow browser vendors to support this. It would require a specification that allows the browser, identity provider and web sites to speak a "common" language.
  2. Another solution would have been to implement browser plug-ins. This would still require the common standard, but at least we would not have to wait for the browser vendors. The problem here is that developing plug-ins for all types of browsers is not easy (at least up to now).
New browser developments like Jetpack from Mozilla Labs allow you to develop browser plug-ins very easily using JavaScript. Opera Unite is another effort to empower the browser. All browser vendors are moving in the direction of empowering the browser. What all this means is that extending your browser to support a Federated log in standard is going to be trivial.

So what we really need is a "Federated Identity Standard for Browsers". There should be a working group for this at one of the standards bodies like OpenID, Open Web, Kantara etc. I have not seen such a working group yet.

In a future post, I intend to demonstrate how a simple Federated Identity standard can be implemented using Mozilla Jetpack and some minor tweaks to existing Federated Identity provider and consumer software.


Tuesday, June 16, 2009

Opera Unite, will I really use it?

Opera has released its new web-server-in-a-browser called Opera Unite. This is not a new idea, and the idea of running your own server is, I guess, only appealing to geeks. Having said that, it does give the lay user the ability to run some basic services, like picture sharing and chat, from his own PC.

I can't see a killer application from among the ones they have available now. So we have to wait and see what applications developers will come up with.

Also, we have to consider why people don't usually run web servers from their PCs. One reason is of course bandwidth. If you are connected via ADSL or something like that, this is a bad idea. You can do some limited stuff with a small group of friends, but nothing for public consumption.

The second problem is discovery. Users may only have temporary IP addresses. Opera solves this by being a proxy server that allows users to connect to your PC. That means you have to sign up for the Opera Unite service. The part I don't like is accepting their terms of service: "By uploading Content to Opera's site, you grant Opera an unrestricted, blah blah blah ...".

So I don't think this is going to replace my Blogger, Facebook, Twitter etc. accounts. But I can see where I could use it. For one, to delegate my OpenID. So my OpenID could be something like
http://home.mynickname.operaunite.com/openid.

Now before you run off and download Opera, I would say hang on. I haven't figured out how to do the above myself yet, and it's been 45 minutes since I downloaded Opera. So it's not as simple as just editing your "index.html". It looks to me like somebody has to write an "OpenID" Opera Unite Service, so that my Opera Unite OpenID actually points to an Opera Unite Service by adding "/openid". And he has to upload the service, and Opera has to approve it! Or has anybody figured out how to edit index.html yet?

But you see, there is potential here. If you have an application that stores your personal profile data and provides it to applications as and when required, we have the beginnings of data portability. But then the problem becomes porting your data from browser to browser instead of from web site to web site!

Update.
To set up your OpenID on your browser, do the following. You cannot set it on your default home page. After downloading Opera Unite, install the web server application by clicking on the web server tab. Select a folder to be your web server root, e.g. C:\openid. Click on "automatically create index.html file". Set access control to public and save. Edit the index.html and add the following in the HEAD part, changing the hrefs to point to your provider.

<link rel="openid2.provider" href="http://www.myopenid.com/server"/>

<link rel="openid2.local_id" href="http://myname.myopenid.com"/>

Your OpenID is
http://home.mynickname.operaunite.com/webserver


Friday, October 31, 2008

Viewport, Column Container and nested layouts

I have added a viewport class and a column container class to JX. Please read my earlier posts if you haven't.

The viewport takes over the document body element, so you can have only one viewport instance in your application. If you have content in your body element, the viewport will hide it, so it is unobtrusive too: clients with no JavaScript will see your content. Viewport extends JX.Container, so it lays out its components vertically.

The column container works like the container in my previous post, except that it lays out its components horizontally. You can set fitWidth: true on one of its components, and that component's width will expand to the remaining width of the parent minus its siblings' widths.

You can nest the containers within each other to create complex layouts. I will create a border layout using the viewport and column container. The border layout will resize accordingly when you resize your browser.

Here is the code to create the border layout.
$(document).ready(function() {
    var viewport = new JX.Viewport({
        css: {padding: '0px', margin: '0px'},
        items: [{
            height: 50,
            css: {backgroundColor: '#aaaaaa', padding: '20px'},
            text: $('#north').text(),
            fitWidth: true
        },{
            jxtype: 'columncontainer',
            fitHeight: true,
            items: [{
                text: $('#east').text(),
                width: 150,
                css: {backgroundColor: '#cccccc', padding: '20px'},
                fitHeight: true
            },{
                text: $('#center').text(),
                fitWidth: true,
                css: {backgroundColor: '#eeeeee', padding: '20px', overflow: 'hidden'},
                fitHeight: true
            },{
                text: $('#west').text(),
                width: 150,
                css: {backgroundColor: '#cccccc', padding: '20px'},
                fitHeight: true
            }]
        },{
            height: 50,
            css: {backgroundColor: '#aaaaaa', padding: '20px'},
            text: $('#south').text(),
            fitWidth: true
        }]
    });
});                               


I have added a new jxtype 'columncontainer' for column containers. Other jxtypes I have added are 'container' and 'component'. You need not specify jxtype if you are creating the component with 'new'. The default is 'component' (a 'div').


Thursday, October 30, 2008

A jQuery Container Class

Further to my previous post, I have created a container class called JX.Container. The container lays out components vertically, just as if you had appended them, but it has a few extra features.
In the config options it has an 'items' config, which is an array of components or component configs.
You can set 'fitWidth' on each component and the component will expand to the width of the container. You also have to call doLayout() on the container; this is required because of 'fitWidth'.
We will use the container class to create a login box. Here is the code.
$(document).ready(function() {
    var loginbox = new JX.Container({
        width: 200,
        css:{
            background: 'lightcyan',
            border: '1px solid darkblue',
            padding: '20px',
            fontSize: '12px',
            fontFamily: 'Arial, Helvetica',
            fontWeight: 'bold',
            color: 'darkblue'
        },
        appendTo: document.body,
        items: [{
            text: 'Enter your User Name'
        },{
            jxtype: 'input',
            attr: {type: 'text'},
            fitWidth: true
        },{
            text: 'Enter your Password'
        },{
            jxtype: 'input',
            attr: {type: 'password'},
            fitWidth: true
        },{
            jxtype: 'input',
            attr: {type: 'button', value: 'Login'}
        }]
    });
    loginbox.doLayout();
});                               
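For intuition, here is roughly what a layout pass with fitWidth could compute. This is a hypothetical sketch of the idea only, not JX's actual implementation:

```javascript
// Hypothetical sketch: in a vertical container, items marked
// fitWidth: true stretch to the container's width.
function layoutVertical(containerWidth, items) {
    return items.map(function (item) {
        return { width: item.fitWidth ? containerWidth : item.width };
    });
}

var rows = layoutVertical(200, [{ width: 50 }, { fitWidth: true }]);
// rows[1].width is 200: the second item stretched to fill the container
```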

Wednesday, October 29, 2008

Configurable jQuery Components

I wanted to create jQuery components easily by passing in config options that also take jQuery method parameters in the config option. You have to read my previous post to understand what is going on here. First I will show you how it works. In the code below I create a button object by passing config options to JX.Component.
$(document).ready(function() {
    var mybutton = new JX.Component({
        jxtype: 'div',
        text: 'Click Me',
        width: 100,
        css:{background: 'darkblue', color: 'lightblue', textAlign: 'center'},
        appendTo: document.body,
        click: function() {
            alert("You clicked a button with text: " + $(this).text());
        },
        hover: [
            function() {
                $(this).css({cursor: 'pointer', opacity: '0.5'})
            },
            function() {
                $(this).css({cursor: 'default', opacity: '1'})
            }
        ]
    });
});                               

You will notice that all config options except jxtype are actually jQuery method names, whose values are the parameters to the jQuery method. jxtype tells the component what type of DOM element to create; if you don't specify jxtype, the default is 'div'. Also note that the 'hover' config option takes an array as its value. So wherever the corresponding jQuery method takes more than one argument, you must use an array here.
Here is the code for the new JX.Component.
JX.Component = function() {
    if (JX.isObject(arguments[0])) {
        var config = arguments[0];
        config.jxtype = config.jxtype ? config.jxtype : 'div'; // default type is div
        JX.Component.superclass.init.call(this, document.createElement(config.jxtype));
        this.applyConfig(config);
    } else
        JX.Component.superclass.init.apply(this, arguments);
};

JX.extend(JX.Component, jQuery, {
    applyConfig: function(config) {
        for (var key in config) {
            var a = JX.isArray(config[key]) ? config[key] : [config[key]];
            this[key].apply(this, a);
        };
    },
    jxtype: function(jxtype) { 
        this._jxtype = jxtype;
    }
});
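The applyConfig dispatch can be seen in isolation with a plain object. No jQuery is involved here; the recorder object is made up just to show each config key being dispatched to the method of the same name:

```javascript
// Standalone sketch of the applyConfig idea: each config key names a
// method on the target, and its value holds that method's argument(s).
function applyConfig(target, config) {
    for (var key in config) {
        var args = Array.isArray(config[key]) ? config[key] : [config[key]];
        target[key].apply(target, args);
    }
}

// A made-up target with two "methods" that record their calls.
var recorder = {
    calls: [],
    text: function (t) { this.calls.push(['text', t]); },
    size: function (w, h) { this.calls.push(['size', w, h]); }
};

applyConfig(recorder, { text: 'Click Me', size: [100, 50] });
// recorder.calls is now [['text', 'Click Me'], ['size', 100, 50]]
```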

To make reusable components you can create a factory function like this.
function buttonFactory(config) {
 var buttonconfig = jQuery.extend({
        jxtype: 'div',
        width: 100,
        appendTo: document.body,
        click: function() {
            alert("You clicked a button with text: " + $(this).text());
        },
        hover: [
            function() {
                $(this).css({cursor: 'pointer', opacity: '0.5'})
            },
            function() {
                $(this).css({cursor: 'default', opacity: '1'})
            }
        ]
 }, config);
 return new JX.Component(buttonconfig);
};

$(document).ready(function() {
    var mybutton = buttonFactory({
        text: 'Click Me',
        css:{background: 'darkblue', color: 'lightblue', textAlign: 'center'},
    });
});


Monday, October 27, 2008

Extending jQuery the Object Oriented Way

Though I have used quite a few Ajax libraries in the last couple of years, nothing has impressed me like jQuery. However, jQuery is not object oriented in the traditional sense. You cannot instantiate a jQuery object like this
var myobject = new jQuery();

or extend jQuery like this.
extend(MyClass, jQuery);

So what if you could object-orientify jQuery? Imagine all your classes/widgets as extensions of jQuery. You could call all the jQuery functions from within your own 'this', like
this.addClass('classname');
this.click(function() {alert('you clicked me')});

But then, you cannot extend jQuery because it does not have a constructor! It is itself an object. But that's where JavaScript comes to your rescue: JavaScript does not differentiate an object from a class. And the jQuery object has some features that help. It has a prototype object, and though it does not have a constructor, it does have an init method. Some of you are already figuring out where I am headed. But before we go ahead, we need a function that will do a proper object oriented extend. I will also add a namespace and call it "JX" for jQuery Extend. Here is the code.
var JX = {
    extend: function(bc, sc, o) {
        var f = function() {};
        f.prototype = sc.prototype;
        bc.prototype = new f();
        bc.prototype.constructor = bc;
        bc.superclass = sc.prototype;
        for (var m in o)
            bc.prototype[m] = o[m];
    }
};

JX.extend() is a function that takes three parameters: a base class constructor, a superclass constructor, and an object of functions to override any superclass methods. Now let us use this function to create a new class called JX.Component that will be the base class for all our widgets.
JX.Component = function() {
    JX.Component.superclass.init.apply(this, arguments);
};
JX.extend(JX.Component, jQuery, {});

The arguments passed can be any argument you pass into the jQuery $() function. Voila! You have a class that extends jQuery!
Now try this.

$(document).ready(function() {
    var mydiv = new JX.Component(document.createElement('div'));
    mydiv.text("Hello World").click(function(){alert("You Clicked Me!")});
    mydiv.appendTo(document.body);
});

Now let us get down to something more useful. Let us extend JX.Component to make a button class.

JX.Button = function() {
    JX.Button.superclass.constructor.apply(this, arguments);
    this.initialize();
};
JX.extend(JX.Button, JX.Component, {
    initialize: function() {
        var component = this;
        this.hover(
            function() {
                component.css({cursor: 'pointer', opacity: '0.5'})
            },
            function() {
                component.css({cursor: 'default', opacity: '1'})
            }
        );
        this.click(function() {
            alert("You clicked a button with text: " +component.text());
        });
    }
});

Now try your button class with this code.

$(document).ready(function() {
    var mybutton = new JX.Button(document.createElement('div'))
        .text("Click Me")
        .css({background: 'darkblue', color: 'lightblue', textAlign: 'center', width: '100px'})
        .appendTo(document.body);
});    

As you may have noticed, the Button constructor calls its "superclass.constructor", whereas JX.Component, when it extends jQuery, calls its "superclass.init". And that's the trick in extending jQuery.
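To see why the init call is needed, here is a minimal mock of jQuery's shape (FakeQuery is a made-up stand-in, not real jQuery): the constructor function itself is empty, and the real setup logic lives on prototype.init, so a subclass in the style of JX.Component must bootstrap itself through init rather than through the constructor.

```javascript
// FakeQuery imitates jQuery's structure: an empty constructor
// whose real setup logic lives on prototype.init.
function FakeQuery() {}
FakeQuery.prototype = {
    init: function(selector) {
        this.selector = selector;
        return this;
    }
};

// A subclass wired up the same way JX.extend does it,
// calling superclass.init because the constructor body is empty.
function Widget() {
    Widget.superclass.init.apply(this, arguments);
}
var f = function() {};             // the same intermediate-function trick
f.prototype = FakeQuery.prototype;
Widget.prototype = new f();
Widget.prototype.constructor = Widget;
Widget.superclass = FakeQuery.prototype;

var w = new Widget("#menu");
console.log(w.selector);             // "#menu"
console.log(w instanceof FakeQuery); // true
```

Had Widget called superclass.constructor instead, nothing would have been initialized, because FakeQuery's constructor body (like jQuery's) does no work of its own.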

Friday, September 05, 2008

First ever Printed book had Chinese hardware and Indian software and Open Source Licence!

I was reading a book by Amartya Sen (Nobel Prize winner in Economics, 1998) and found an interesting little fact I wasn't aware of, so I thought I must post it here. Let me quote from the book:

"The first printed book in the world with a date (corresponding to 868 CE), which was a Chinese translation of a Sanskrit treatise, the so called 'Diamond Sutra', (Kumarajiva had translated it in 402 CE), carried the remarkable motivational explanation: 'for universal free distribution'".

And from the footnote.

"Kumarajiva was a half-Indian half-Kucian scholar who studied in India but had a leading position in the Institute of Foreign Languages and Literature in Xian, from 402 CE".

So here we have the first printed book in the world: the machinery it was printed on (the hardware) was Chinese, while the contents (the software) are attributable to the original author and the translator, both Indian.

Wednesday, September 03, 2008

Google's Chrome Strategy

If you haven't yet downloaded the new Google Chrome browser, launched yesterday, you can download it here. It's worth it. This is what a browser really should be. However, this post is not about the workings of the browser; there have been plenty of articles about that in the last couple of days. This post is about Google's strategy for knocking Microsoft out of the race.

Really, why would Google want to create a new browser when we have at least half a dozen competent browsers already? As a matter of fact, Google pays to support Firefox, and that agreement is valid through 2011. And the Google Chrome browser itself is based on the WebKit rendering engine, which is already used by Apple's Safari browser. So really no advantage there, except for the fact that Chrome runs each tab you open as a separate process.

So what does Chrome have that the other browsers don't? A JavaScript compiler. That's right: other browsers have JavaScript interpreters, but Chrome's V8 is really a JavaScript compiler, which compiles JavaScript into machine code and runs that, and this is much faster than interpreting.

That's the key. Faster JavaScript means you can run larger applications in your browser. If you have seen Google Docs, you already have a spreadsheet, an editor, and presentation software running in the browser. With faster JavaScript you could have software in your browser that competes with standalone applications like Microsoft Office.

Think about this: over the years I have been using fewer standalone applications and more applications in the browser. I use my browser for email, editing, spreadsheets and presentations. The only other applications I use are for watching a movie or listening to music on my computer. Even that can be moved to the browser.

So there are many days when the only application I need to run on my computer is a browser. And Google has understood this for years now: all your applications will eventually move to the browser. Google Docs and all the other Google apps are a step towards that goal.

Imagine the scenario. All the applications you need now run in your browser. Does it matter what operating system you have in that case? No. Does it matter if you have an operating system at all? No. Your browser itself could be the OS! So boot up your browser and shut down your browser. Far-fetched or not, I bet this is already sending shivers down Microsoft's spine.

Monday, September 01, 2008

Three Rules for Successful Software Development

In my years of software development I have been looking for that "holy grail" of software development methodology. There are many methodologies: Six Sigma, Agile and Extreme Programming, to mention a few. However, these methodologies have concrete rules to be followed, and they seem to restrict the element of creativity in the process of development.

If you look at the successful software developed in recent years, especially the open source projects, none of them were developed using the above methodologies. On the contrary, they have evolved into what they are today over the years. It is in this context that I would like to set out three rules for successful software development. Look at your software as an evolving organism.

1) Only software that can be partially useful when implemented partially can succeed. You need your software to evolve into your ultimate goal. Start with a small release that you think will be useful to a small group of people or will solve a small set of problems. Based on feedback, let it evolve into the next stage, and so on, until you have what you need.

2) Your software is never in sync with its current requirements. Because of the very nature of the evolutionary process, your software has adapted to the previous requirements and is currently being adapted to the present ones.

3) Finally, Orgel's rule, named after the evolutionary biologist Leslie Orgel: "Evolution is smarter than you are". Let your current users point out what is missing. Let them set your agenda. No one person or group of developers or analysts can point out what is missing better than your current users.

This post was inspired by an excellent article called "In Praise of Evolvable Systems" by Clay Shirky.

Wednesday, March 14, 2007

Cross Browser Keyboard Handler

If you ever thought of writing a JavaScript editor, you will know what I am talking about. That's right: I am writing a cross-browser WYSIWYG in-place JavaScript editor! This post is not about the editor itself, though; I may write about that some time in the future.

The first problem you have to solve is cross-browser keystroke handling, because almost every browser has some quirks. I needed a keystroke handler that really was cross-browser. Searching the net, I didn't find a complete solution, though there were a lot of interesting partial solutions, so I had to put these together and come up with my own.

Here is an excellent article on keystroke detection: http://www.quirksmode.org/js/keys.html

As you should know by now, all browsers report the keypress, keydown and keyup events, and each of these events should report a keyCode and a charCode. Of course, it doesn't always happen that way. Ideally, for each key event the browser should report the charCode (the ASCII character pressed) and the keyCode (the actual key). So if you hit "a", the charCode would be 97 and the keyCode 65.

For the sake of this discussion, let us separate key events into character events and non-character events; you will see why soon. Character events occur when a printable character is pressed, and non-character events occur when a non-printable key (navigation keys, backspace, delete, etc.) is pressed.

My ideal browser (there is none) would, for character events, give the charCode and a keyCode of 0. For non-character events it would give the keyCode and a charCode of 0. Your keystroke handler would then be simple to write.

Mozilla comes closest to this ideal, but it also reports keyCodes for character events, as in the case of the character "a" above. Even so, it would be easy to write the keyboard handler if all browsers behaved the same.

Why do we need both keyCodes and charCodes? Why not just one code? Well, for starters, character codes and non-character key codes overlap in the range 33 to 47. The character "%" has charCode 37, the same as keyCode 37 of the non-character "left arrow" key.
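You can see the overlap directly by turning the numeric codes back into characters, which you can try in any JavaScript console:

```javascript
// The same number means two different things depending on whether
// you read it as a charCode (keypress) or a keyCode (keydown):
console.log(String.fromCharCode(37)); // "%" (charCode 37 on keypress)
// ...whereas keyCode 37 on keydown is the left-arrow key,
// which has no printable character at all.
console.log(String.fromCharCode(97)); // "a" (charCode of an unshifted "a")
// ...while the same keystroke's keyCode is 65, naming the physical "A" key.
```

So a single code cannot tell you whether the user typed "%" or pressed the left arrow; you need to know which event and which property the number came from.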

There are a lot of inconsistencies between browsers, as highlighted in the article linked above. However, to solve my problem I needed to find the consistencies between browsers rather than the inconsistencies. Fortunately, there are some.

For the keypress event, all browsers are consistent when reporting character events: either the charCode or the keyCode will give you the right character code. But the keypress event is useless for non-character events; in fact, IE does not even fire the keypress event for non-characters.

However, for the keydown event, all browsers are consistent when reporting non-character event codes: they correctly give you the right keyCode. Even Safari, which reports keyCodes greater than 64000 for navigation keys on keypress, reports them correctly on keydown.

Keyup also works more or less like keydown, except that on IE I wasn't able to capture the backspace key on keyup. It is important to capture the backspace key, because otherwise the browser resorts to its default behaviour, mimics the back button, and takes you off your page completely!

Here is the code.



document.onkeydown = function(e) { handleKeys(e); };
document.onkeypress = function(e) { handleKeys(e); };
var nonChar = false;

function handleKeys(e) {
    var code;
    var evt = (e) ? e : window.event; // IE reports window.event, not the argument
    if (evt.type == "keydown") {
        code = evt.keyCode;              // note the capital C: evt.keycode is undefined
        if (code < 16 ||                 // non-printables (backspace, tab, enter...)
            (code > 16 && code < 32) ||  // control keys, skipping shift (16)
            (code > 32 && code < 41) ||  // navigation keys
            code == 46) {                // delete key (add to these if you need)
            handleNonChar(code);         // function to handle non-characters
            nonChar = true;
        } else {
            nonChar = false;
        }
    } else {                             // this is keypress
        if (nonChar) return;             // already handled on keydown
        code = (evt.charCode) ? evt.charCode : evt.keyCode;
        if (code > 31 && code < 256)     // Safari and Opera report odd codes outside this range
            handleChar(code);            // function to handle characters
    }
    if (e)                               // non-IE
        Event.stop(evt);                 // using the Prototype library
    else if (evt.keyCode == 8)           // catch IE backspace
        evt.returnValue = false;         // and stop it!
}
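The handler above references handleChar and handleNonChar, which are yours to define, and Event.stop from the Prototype library. Here is a minimal sketch of the missing pieces; the two handler bodies are placeholders, and stopEvent is a name I made up for a plain-DOM substitute if you are not using Prototype:

```javascript
// Placeholder: a printable character arrived; turn the code back into text.
function handleChar(code) {
    var ch = String.fromCharCode(code);
    console.log("character: " + ch);
    return ch;
}

// Placeholder: a navigation/editing key arrived; dispatch on the keyCode.
function handleNonChar(code) {
    console.log("non-character keyCode: " + code);
    return code;
}

// Plain-DOM replacement for Prototype's Event.stop(evt):
function stopEvent(evt) {
    if (evt.preventDefault) evt.preventDefault(); // standards browsers
    else evt.returnValue = false;                 // old IE
}
```

In a real editor, handleChar would insert the character into the document and handleNonChar would move the caret or delete text, but that logic belongs to the editor itself.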