In this week’s poll, we bring to the NextPit community yet another intense internal debate from the editorial team: how relevant are benchmarks to the audience that follows our site? Do the tools used to measure performance differences between devices actually reflect real-world use, or do they merely give a relative idea of how the smartphones on the market stack up against one another?
Controversies involving benchmarks abound, ranging from component manufacturers who cheat on tests to phones that relax their internal power and temperature limits when they detect a benchmark app running. Such cases help explain why some smartphones simply overheat and crash during benchmarks.
The practice isn’t new: one of the most emblematic cases, the infamous “quack3.exe”, recently turned 20 years old. Still, benchmarking tools remain a constant presence in reviews and comparisons of processors, graphics cards, and, of course, smartphones and tablets.
All of which brings us to the first question:
Part of the popularity of benchmarks, including among everyday consumers, comes from how easy they are to install and run: there is no need to write scripts that launch apps and time their use. It also helps greatly that the scores racked up are easy to compare, with publishing tools and even public rankings built from tests submitted by users.
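For the curious, the "manual" alternative mentioned above looks roughly like the sketch below: time a workload yourself over several runs and take the median. This is a minimal, hypothetical Python example (the workload stands in for launching an app or running a task, and the function names are ours), not how any particular benchmark app works internally.

```python
import statistics
import time

def sample_workload() -> int:
    """Hypothetical stand-in for launching an app or running a task."""
    total = 0
    for i in range(100_000):
        total += i * i
    return total

def time_runs(func, runs: int = 5) -> float:
    """Time several runs of func and return the median duration in seconds.

    The median is less sensitive to one-off hiccups (background tasks,
    thermal throttling) than a single measurement would be.
    """
    durations = []
    for _ in range(runs):
        start = time.perf_counter()
        func()
        durations.append(time.perf_counter() - start)
    return statistics.median(durations)

median_seconds = time_runs(sample_workload)
print(f"median run time: {median_seconds * 1000:.2f} ms")
```

A benchmark app essentially packages this loop, a set of standardized workloads, and a score-publishing backend into one tap, which is exactly why it beats hand-rolled scripts for most users.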
Which benchmarks really matter?
But for those who care about the numbers revealed by apps, which are the ones that really matter? With so many options available on the market, do you value any particular benchmark more when choosing a new smartphone or tablet?
Of course, benchmarks make up only a small part of the reviews here at NextPit, but we do wonder whether we should reduce (or increase) the amount of time spent running tests and analyzing the scores. This concern is even greater considering how many devices on the market offer plenty of bang for your buck. After all, numerous devices use basically the same components under the hood, with slight variations that fall comfortably within the margin of error of performance benchmarks.
Feel free to weigh in, critique, and elaborate on your responses to this week’s poll. Would you like to see an article explaining how each test we use translates to real-world device usage? Please use the comments field.