UPDATE – June 4, 2014: Added new results for Oracle and IBM to the JOPS/core table. However, I have not done the cost comparison for those results, as it takes time to get a hardware quote from Oracle to calculate their cost.
There is a saying in Russia: “If you want to keep your glass intact, do not throw stones into other people's windows” (the English version: “People who live in glass houses should not throw stones”). I am sure other cultures have similar idioms (which I would love to see in the comments section below). At the Oracle OpenWorld 2013 conference, Oracle threw more than a handful of “stones” into IBM's proverbial “windows”, and I would like to clarify a few performance-related things in this post.
Oracle posted this press release claiming WebLogic performance superiority over WebSphere (you did not expect otherwise, did you?). In the press release Oracle claims to have the highest virtualized performance, and in sessions during the OpenWorld conference Oracle claimed higher performance per socket compared to IBM Power7+ chips. Elisabeth Stahl described her point of view quite nicely in her blog posts “Guns and Butter at OpenWorld” and “Born to run benchmarks“.
I agree with Elisabeth – people do not buy sockets or cores, they buy performance. Nor do people get sockets or cores for free; all of these things cost money. Oracle comes up with all kinds of metrics to measure its software, so why did they settle on performance per socket? Why not performance per cubic foot, or per meter of copper wire? Are WebLogic Suite, Oracle SOA Suite, or the other enterprise products that Oracle sells priced per socket? No, they are not. Only the Standard editions of a handful of Oracle products have socket-based pricing, and even then with restrictions – many advanced features (clustering being one example) and add-on options are not available in the Standard Edition products. So why does Oracle like the meaningless per-socket performance metric so much?
Another flaw in Oracle's performance claims is the comparison of old IBM results to the latest Oracle results. Would it be fair to compare the latest Galaxy S4 to the iPhone 4S? Oracle is doing exactly this with their selective benchmark comparisons: they pick what they like and dismiss what does not fit their goal. Why not compare historical SPECjEnterprise2010 results mapped over time? I looked at the posts I wrote over the past two years responding to Oracle press releases and copied all of those results into two simple tables. The SPECjEnterprise2010 site lists more results than those shown below, but I do not have unlimited time in my day to calculate costs for every hardware configuration published by IBM and Oracle, so I used what I had collected so far, added Oracle's September 2013 result, and arranged it all into a simple table format:
As you can see from the tables above, IBM has mostly dominated SPECjEnterprise2010 over the past two years in performance per core and cost per JOPS. In 2010 IBM published 10 results and Oracle published only 1 – I did not do the cost calculation on those results and will leave it as an exercise for the reader. Oracle tends to publish big numbers by throwing large hardware at the test, but how many customers actually run at that level? The majority of the market is not doing 50,000+ transactions per second. The majority of the market is interested in low to medium scale and, more importantly, in cost per transaction when you count hardware and software together. My second table above shows exactly this – the cost per transaction per core, considering hardware and software costs together. After all, enterprise software is licensed by core, and this is true for the enterprise versions of WebSphere, WebLogic, JBoss, tc Server and others. (FYI – IBM does have other licensing options.)
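The cost-per-transaction arithmetic behind that second table is simple enough to sketch. The figures below are hypothetical placeholders, not the actual prices or core counts from the published results – the point is only to show how hardware and per-core software costs combine into a single cost-per-EjOPS number:

```python
# Sketch of the "cost per EjOPS" calculation described above.
# All prices and core counts here are HYPOTHETICAL examples,
# not figures from any published SPECjEnterprise2010 result.

def ejops_per_core(ejops: float, cores: int) -> float:
    """Throughput normalized by core count."""
    return ejops / cores

def cost_per_ejops(ejops: float, hw_cost: float,
                   sw_cost_per_core: float, cores: int) -> float:
    """Total (hardware + per-core software) cost divided by throughput."""
    total_cost = hw_cost + sw_cost_per_core * cores
    return total_cost / ejops

# Made-up example configuration:
ejops = 10000.0          # benchmark throughput
cores = 16               # cores in the system under test
hw = 50000.0             # hypothetical server price
sw = 3000.0              # hypothetical per-core license price

print(ejops_per_core(ejops, cores))            # 625.0
print(cost_per_ejops(ejops, hw, sw, cores))    # 9.8
```

Comparing systems on `cost_per_ejops` rather than raw EjOPS is what makes the per-socket framing fall apart: a bigger box can always post a bigger number, but only the cost-normalized figure tells you what a transaction actually costs.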
Putting it all together and setting the two best IBM results against the two best Oracle results – on x86, Power and SPARC respectively – here is what we get:
- IBM WebSphere delivers 80% more performance at almost half the cost on Power7+ compared to Oracle WebLogic on SPARC T5-8
- IBM WebSphere is 17% faster per core on Intel Sandy Bridge at less than half the cost compared to Oracle WebLogic
Please note that the costs above are calculated using WAS ND and WLS Enterprise pricing.
Perhaps this is why customers such as PT Bank ANZ Indonesia (and others) are migrating from WebLogic to WebSphere?
You can read my earlier posts comparing IBM and Oracle SPECjEnterprise2010 results here: http://whywebsphere.com/?s=SPECj
Obligatory legal stuff:
SPEC and SPECjEnterprise2010 are registered trademarks of the Standard Performance Evaluation Corporation. Results from http://www.spec.org:
- Oracle WebLogic 12c on SPARC T5-8, 36,571.36 SPECjEnterprise2010 EjOPS
- Oracle WebLogic 11g on SUN SPARC T5-8, 57,422.17 SPECjEnterprise2010 EjOPS
- Oracle WebLogic 11g on SUN Fire X4170M3, 8,310.19 SPECjEnterprise2010 EjOPS
- Oracle WebLogic 11g on SUN Blade Server X6270 M2, 5,427.42 SPECjEnterprise2010 EjOPS
- Oracle WebLogic 11g on SUN SPARC T4-4, 40,104.86 SPECjEnterprise2010 EjOPS
- Oracle WebLogic 11g on Dell x86, 11,946.60 SPECjEnterprise2010 EjOPS
- IBM WebSphere 8.5 on Power730+, 13,161.07 SPECjEnterprise2010 EjOPS
- IBM WebSphere 8.5 on Power780+, 10,902.30 SPECjEnterprise2010 EjOPS
- IBM WebSphere 8.5 on System x3650 M4 Intel Sandy Bridge, 9,696.43 SPECjEnterprise2010 EjOPS
- IBM WebSphere 8.5 on IBM HS 22 Blade, 6,295.46 SPECjEnterprise2010 EjOPS
- IBM WebSphere 8.5 on IBM HS 22 Blade, 3,694.35 SPECjEnterprise2010 EjOPS
- IBM WebSphere 8.5 on IBM HS 22 Blade, 2,341.12 SPECjEnterprise2010 EjOPS