On the left “is an experimental plot that compares five popular browsers, which we hope to update as new stable versions of the browsers are released. We created this chart by running Sputnik in each of the five browsers and then plotting each browser such that the fewer tests a browser fails, the closer it is to the center, and the more failing tests two browsers have in common, the closer they are placed to each other.
In this example, when running Sputnik on a Windows machine, we saw the following results: Opera 10.50: 78 failures, Safari 4: 159 failures, Chrome 4: 218 failures, Firefox 3.6: 259 failures and Internet Explorer 8: 463 failures,” Hansen stated.
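The layout rule Hansen describes can be sketched as a pairwise-distance computation over each browser's set of failing tests: more shared failures means a smaller distance. This is only an illustrative reconstruction under assumed names and a Jaccard-style formula, not Google's actual plotting code, and the test IDs are placeholders rather than real Sputnik test names.

```typescript
// Failing-test IDs per browser (illustrative placeholders, not real Sputnik IDs).
const failures: Record<string, Set<string>> = {
  "Opera 10.50": new Set(["t1", "t2"]),
  "Safari 4": new Set(["t1", "t3", "t4"]),
};

// Jaccard-style distance: 1 - |shared failures| / |union of failures|.
// Browsers that fail many of the same tests end up close together.
function distance(a: Set<string>, b: Set<string>): number {
  let shared = 0;
  for (const t of a) {
    if (b.has(t)) shared++;
  }
  const union = a.size + b.size - shared;
  return union === 0 ? 0 : 1 - shared / union;
}

// One shared failure (t1) out of four distinct failing tests → distance 0.75.
console.log(distance(failures["Opera 10.50"], failures["Safari 4"]).toFixed(2));
```

A browser's distance from the center would then simply be proportional to its total failure count, matching the 78-to-463 spread reported below.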
Google has indeed tested the latest versions of all the top five browsers worldwide, but the Mountain View search giant was forced to compare recent releases of Chrome, Opera, Firefox and Safari with a copy of Internet Explorer that is almost a year old. This, of course, is because of the slow pace of new IE releases.
IE8 was released in March 2009 and has seen no major upgrades since. By contrast, Opera 10.50 was launched on March 2nd, Firefox 3.6 on January 21st and Chrome 4 on February 11th, 2010. Still, the third edition of the ECMA-262 specification, which Sputnik tests, has been around long enough for a one-year difference between browser releases to be largely irrelevant.
One aspect of continuous browser development is the need to adapt to evolving web standards. More often than not, browser makers fail to synchronize when tailoring their products to the latest versions of those standards, generating incompatibility problems.
Specifically, a website that renders fine in one browser will offer an inferior experience to end users in another. This causes great pain to developers, who must do extra work to tailor their content to each browser individually, while only being able to dream of a “write once, use across all browsers” scenario.
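The per-browser tailoring described above often takes the form of feature detection with a fallback. As a hedged illustration (the helper name is my own, not from the article): `Array.prototype.indexOf` was only standardized in ECMAScript 5 and is missing in IE8, so code targeting all five browsers of the era had to supply a manual substitute.

```typescript
// Feature-detection sketch: prefer the native method when the runtime
// provides it, otherwise fall back to a manual scan -- the "extra work"
// developers do to cover older browsers such as IE8.
function indexOfCompat(arr: string[], value: string): number {
  if (typeof Array.prototype.indexOf === "function") {
    // Modern engines: delegate to the built-in.
    return arr.indexOf(value);
  }
  // Legacy path: linear search, returning -1 when the value is absent.
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === value) return i;
  }
  return -1;
}

console.log(indexOfCompat(["ie", "firefox", "chrome"], "chrome")); // → 2
```

Multiplied across dozens of such gaps, this pattern is exactly why conformance suites like Sputnik matter: the more tests every browser passes, the less of this shim code developers have to write.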