Zhu said the code tweaks, which are claimed to reduce memory footprint and start-up time, will be reflected in the Edge browser in the upcoming Windows 10 Anniversary Update.
However, as with all benchmark tests, the results are subject to interpretation and can conflict with other measurements.
Perhaps a more important question is the value of these benchmark tests to real-world users or even Web developers. Do they really provide much value? Even the testers themselves are quick to quote caveats.
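Those caveats are easy to illustrate. The sketch below is a hypothetical toy micro-benchmark (not code from any real suite): it times the same function twice in a row, and the two numbers often disagree because the first run pays interpreter and JIT warm-up costs. If a one-line timing loop can't produce stable figures, whole-suite comparisons across browsers deserve at least as much skepticism.

```javascript
// Toy micro-benchmark sketch (hypothetical names and workload).
// Times the same pure function twice; run-to-run variance and JIT
// warm-up commonly make the "cold" run slower than the "warm" one.

function sumSquares(n) {
  let total = 0;
  for (let i = 1; i <= n; i++) total += i * i;
  return total;
}

function timeIt(fn, arg) {
  const start = Date.now();
  const result = fn(arg);
  return { result, ms: Date.now() - start };
}

const cold = timeIt(sumSquares, 10000000); // first run: includes warm-up
const warm = timeIt(sumSquares, 10000000); // second run: usually faster
console.log(`cold: ${cold.ms} ms, warm: ${warm.ms} ms`);
```

Real suites such as JetStream mitigate this by running many iterations and aggregating scores, but the underlying sensitivity to warm-up, workload choice, and platform remains.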
"Benchmarks are an incomplete representation of the performance of the engine and should only be used for a rough gauge," Red Hat warned. "The benchmark results may not be 100 percent accurate and may also vary from platforms to platforms."
Microsoft also offered a disclaimer.
Over time, of course, benchmark results and superiority claims will change.
"The JetStream benchmark, which focuses on modern Web applications, has a surprising winner: Edge," Digital Trends said. "Microsoft's been working hard on optimizing its new browser, and it shows. Safari, Chrome, and Vivaldi aren't too far behind, though."
Indeed, the market seems tight, the results seem close, and the question of relevance remains.
"A casual user probably won't notice a difference in the rendering speed between today's modern browsers," Digital Trends said. (Note: Digital Trends also reported on this week's kerfuffle between Microsoft and Opera about browser battery use).
The Microsoft research concluded "benchmarks are not representative of many real Web sites and that conclusions reached from measuring the benchmarks may be misleading."
[Chart: benchmark timings for Chrome 13.0.754.0 canary, IE 9.0.8112.16421 64 and Safari 5.0.5 (7533.21.10); smaller is better]
Noting that the Microsoft research debunked the then-available benchmarks, Crawford said the study "showed that benchmarks are not representative of the behavior of real Web applications. But lacking credible benchmarks, engine developers are tuning to what they have. The danger is that the performance of the engines will be tuned to non-representative benchmarks, and then programming styles will be skewed to get the best performance from the mistuned engines."
So why do the benchmark tests keep getting published, caveats and all, some six years later?
I don't know, but it makes good copy for an end-of-the-week tech blog.
Posted by David Ramel on June 24, 2016 at 9:59 AM