
new benchmarks that are more self-explaining



With the new Geekbench 4, users can benchmark tablets, phones and computers, and the results can be compared to each other, so you can see whether your phone or your notebook is faster.

A few months after the program was released, over a million results from benchmarked devices are online, including upcoming tech like Kaby Lake.

I fear that AIDA64 cannot compete with that, or with the more self-explaining benchmarks Geekbench offers, so I want to suggest adding new benchmarks such as LZMA compression, HTML5 parsing, PDF rendering, and speech recognition, perhaps instead of the Julia, SinJulia and Mandel benchmarks.

Even more important (from my point of view) are GPU compute benchmarks, preferably in the benchmark menu (where all the other benchmarks are) rather than in the Tools / GPGPU Benchmark window.

Those benchmarks could use OpenCL and could calculate histogram equalization, face detection, physics, or camera RAW development/conversion instead of Julia and Mandel.

Since people cope better with two scores than with many, I would suggest that AIDA64 report two scores after all benchmarks are done: one for the CPU and one for the GPU.

This is just brainstorming - please don't take it personally.


1) We do not really compete with Geekbench or other similar benchmark software, mainly because AIDA64 has no online database where users could submit their scores. Until we have that, we don't really see the type of benchmarks we've got as an issue.

2) We haven't added the OpenCL benchmarks to the regular page tree simply because those scores can vary greatly not only across different hardware, but across different drivers on the same hardware too. So, for example, simply upgrading your video drivers may cause the scores to jump up or down by 20 or 50 percent. Such inconsistency would never happen with native (x86/x64) CPU or FPU benchmarks. OpenCL is a very different beast, and so we have to treat it as a different beast too :)

3) As for producing a single benchmark score, from time to time we reconsider that idea, but we always end up dropping it. Even though producing a single score may sound quite convenient, it would represent basically nothing about the actual performance of the system or CPU. It's also very difficult to come up with a reasonable weighting of scores. If you take all our existing benchmarks and add all the scores up as percentages of a certain reference hardware (let's take the Core 2 Extreme X6800 as the 100% reference, for example), then it may sound like a great solution ... as long as hardware accelerated AES, AVX and FMA don't distort the picture :) Not to mention what AVX-512 and hardware accelerated SHA would mean for next-generation processors...
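The distortion described above is easy to demonstrate with a toy calculation. This is just a sketch of the idea, not anything from AIDA64 itself, and all benchmark names and scores below are invented: a single hardware-accelerated subtest (AES here) can completely dominate an arithmetic average of percentage ratios, while a geometric mean dampens the outlier but still doesn't eliminate it.

```python
import math

# Invented scores: the reference machine is defined as 100% in every subtest.
reference = {"CPU Queen": 100.0, "CPU ZLib": 100.0, "FPU Julia": 100.0, "CPU AES": 100.0}

# A newer CPU: roughly 1.7-1.9x faster in most subtests, but hardware AES
# acceleration makes that one subtest a 25x outlier.
candidate = {"CPU Queen": 180.0, "CPU ZLib": 170.0, "FPU Julia": 190.0, "CPU AES": 2500.0}

def arithmetic_score(ref, cand):
    # Average of per-subtest ratios, scaled so the reference machine = 100.
    ratios = [cand[k] / ref[k] for k in ref]
    return 100.0 * sum(ratios) / len(ratios)

def geometric_score(ref, cand):
    # Geometric mean of the same ratios; less sensitive to a single outlier.
    ratios = [cand[k] / ref[k] for k in ref]
    return 100.0 * math.prod(ratios) ** (1.0 / len(ratios))

print(f"arithmetic mean: {arithmetic_score(reference, candidate):.0f}%")  # 760%
print(f"geometric mean:  {geometric_score(reference, candidate):.0f}%")   # ~347%
```

Neither number reflects how the CPU feels in everyday use: the arithmetic mean says "7.6x faster" on the strength of one accelerated instruction set, and even the geometric mean is pulled well above the ~1.8x the non-AES subtests show.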

