lab501: Hi Tamas, please tell us a few things about yourself. How did you start programming? How does it feel to still be in the business after all these years?
Tamas Miklos: I was not much interested in computers until the age of 10. That was when I switched secondary school, and in the new class we had computer studies. My first computer “encounter” was with Commodore 16 and Commodore +4 machines, and I was simply blown away by BASIC from day one. I just couldn’t get enough of writing small programs to interact with the user, to draw various graphs and mathematical graphics. I quickly learnt my way through BASIC, and switched to assembly to start learning the “internals” of Commodore computers. I think that was the point where the whole diagnostic software programming idea originated, although it took me 6 more years to actually start working on my first sysinfo program, called ASMDemo. It was a long time ago, and sometimes it’s hard to fathom that I’m still in this business after 15 very long years, and that I still enjoy every bit of developing diagnostic software.
lab501: From ASMDEMO in 1995 to AIDA64 in 2010, you had 15 years full of hard work and interesting software. What is the difference in the benchmark software market now, compared to the early years?
Tamas Miklos: It was so much simpler in the old days. The major difference comes from the operating system: 15 years ago Windows was not very popular, or at least a lot of users still preferred DOS, mostly because games and applications were only available and fully functional under plain DOS. Without Windows and without multitasking, measuring system performance used to be a lot easier, since benchmark programs had full control over all the system resources. Today multitasking is common, and measuring accurate, stable benchmark scores is a huge challenge, especially on modern multi-core processors. Back in 1995 every home user had just one processor in their computer, with only one core, and of course no HyperThreading support. Developing benchmarks that use just one CPU, one core, one thread is quite easy. Designing benchmarks that can utilize 4 or 8 processor cores is very difficult, especially if you want to use all available cores in any system (including server systems), and drive them all to 100%. Fortunately the way cores are organized is unlikely to change in the near future, so by already having multi-threaded benchmarks in AIDA64, all we have to do is make sure they can scale up to more and more cores, as ever more massive multi-core processors will soon be rolled out by AMD and Intel.
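The multi-core scaling idea described here — drive every core to 100% with its own worker, then aggregate the per-core results into one score — can be sketched roughly as below. The `busy_work` kernel and the scoring are hypothetical stand-ins for illustration, not AIDA64’s actual benchmark code:

```python
import multiprocessing as mp
import time

def busy_work(n):
    # Hypothetical integer workload standing in for a real benchmark kernel.
    total = 0
    for i in range(n):
        total += i * i
    return total

def worker(iterations, queue):
    # Each worker times its own run, so per-core throughput can be summed.
    start = time.perf_counter()
    busy_work(iterations)
    queue.put(iterations / (time.perf_counter() - start))

def run_benchmark(iterations=200_000):
    # One process per logical core, so every core can be driven to 100%.
    cores = mp.cpu_count()
    queue = mp.Queue()
    procs = [mp.Process(target=worker, args=(iterations, queue))
             for _ in range(cores)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    # Aggregate the per-core throughputs into a single multi-core score.
    return sum(queue.get() for _ in range(cores))

if __name__ == "__main__":
    print(f"score: {run_benchmark():.0f} ops/s")
```

A real implementation would additionally pin each worker to a specific core and repeat runs to stabilize the score against OS multitasking noise, which is exactly the difficulty described above.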
lab501: You have been working with Lavalys since 2003. What made you take a different path after 7 years of collaboration?
Tamas Miklos: It was fun working with Lavalys for all those years. I’ve gone through a lot; most importantly, I got used to developing not freeware, but commercial software.
It’s much easier to provide support for free software: you can simply say that a particular software bug or incompatibility cannot be fixed, and users will understand. However, when those users are actually paying customers, telling them that a software issue cannot be fixed can trigger rather mixed responses, and sometimes such a response would simply upset the customer. It was nice to learn how to find my way through such difficulties, and it was still fun. However, since 2007 the joy in working at Lavalys has slowly disappeared, and I found myself in a constant struggle to change and improve how things were organized internally in the company. The Canadian half and the Hungarian half — me and my colleagues here in Budapest — had very different ideas about the future of Lavalys, and the future of the software products we made. In September 2010 the internal arguments and fights caused the company to break in two. The Hungarian staff then formed a new company to continue developing diagnostic and benchmark software, which we named AIDA64.
lab501: What was the reason for the complete rebranding of the product? Does Lavalys have any rights over the Everest name, or did you just want a fresh start with AIDA64?
Tamas Miklos: Even in the Lavalys era our users still remembered the old days with AIDA16 and AIDA32, and even today they often connect those software names with a memory of “a very useful sysinfo software from the past”. I always considered AIDA16 and AIDA32 my personal software development project, while Everest was a project in cooperation with Lavalys. It seemed logical to somehow switch back to using the software brand “AIDA”, now that it became a personal project again — albeit with a whole software development and technical support team now, led by me. I never wanted to use the name Everest again, it wasn’t even considered.
lab501: Tell us a few words about FinalWire. The first software we see is AIDA64, with its different versions. Any plans for any other FinalWire software in the future?
Tamas Miklos: Even though FinalWire is a brand-new company, it is formed by a group of very skilled Hungarian developers, each an expert in their respective field. I’ve been working together with them for over 10 years now, and I consider them my best friends too. We wanted to start with AIDA64, with high hopes of making it commercially successful and building up a customer base quickly. As soon as we feel comfortable working as a software publisher, we’ll start to grow by hiring more developers and begin working on new software projects. One of the new products would be a fully automated network management solution, one that would automatically collect network inventory, verify and track hardware and software changes throughout a whole company network, and provide unique features not available in competing products. All that on the same kind of intuitive, clean, easy-to-use interface that AIDA64 features.
lab501: Just like its predecessor, AIDA64 is a very complex analysis and benchmark tool. What are the main changes you implemented compared to the last available Everest version?
Tamas Miklos: The most important change is of course the implementation of 64-bit benchmarks and 64-bit system stress testing. Since the release of Windows 7, a lot of users have decided to finally migrate to 64-bit Windows, so we felt this was the perfect time to re-introduce our existing benchmarks, now fully ported to 64-bit. This way our users can compare the performance of their systems using the most advanced memory and CPU benchmarks. We of course kept the old 32-bit benchmarks as well, so users with a legacy (32-bit) Windows installation can keep using them.
We’ve also added SSD support, with unique features like identifying the SSD controller or the onboard SSD cache DRAM size, and controller-specific S.M.A.R.T. disk health status detection.
lab501: What are the main reasons behind the fact that we cannot compare results obtained with different versions of this type of benchmarking software? Is it mostly due to the fact that performance changes when you add support for new CPUs, new chipsets and so on?
Tamas Miklos: The major factor is that unlike most benchmark software, AIDA64 doesn’t use the same benchmark code for all processors. Instead, we developed a whole series of benchmark methods with very different designs. Some of the methods use optimizations like SSE, MMX, and soon AVX, while others are unoptimized for those CPU features, so they can be used on legacy processors that, for example, don’t support SSE or MMX. We test each method on every applicable processor variant, and then assign the best performing method to the particular CPU core tested. For example, we may use an SSE3-optimized benchmark on an Intel Core 2, but an SSE2-optimized benchmark variant on its AMD counterpart. Since we constantly extend the list of available optimized benchmark methods, the measured performance of a specific processor may change (improve) when upgrading from an old AIDA64 to a newer version.
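The per-CPU dispatch scheme described here can be sketched as a best-first table mapping required CPU features to a benchmark kernel. The feature names and kernels below are illustrative stand-ins, not AIDA64’s real routines:

```python
def kernel_sse3(data):
    # Stand-in for an SSE3-optimized routine.
    return sum(data)

def kernel_sse2(data):
    # Stand-in for an SSE2-optimized routine.
    return sum(data)

def kernel_generic(data):
    # Unoptimized fallback for legacy CPUs without SSE or MMX.
    total = 0
    for x in data:
        total += x
    return total

# Best-first list: the first entry whose feature requirements are met wins.
DISPATCH_TABLE = [
    ({"sse3"}, kernel_sse3),
    ({"sse2"}, kernel_sse2),
    (set(), kernel_generic),
]

def select_kernel(cpu_features):
    # Pick the fastest kernel this CPU can actually execute.
    for required, kernel in DISPATCH_TABLE:
        if required <= cpu_features:
            return kernel
    return kernel_generic
```

In native code the feature set would come from the CPUID instruction; the point of the pattern is simply that each processor gets the best method it supports, which is why scores shift when new optimized methods are added to the table.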
The other factor is the constantly improving CPU architectures. Currently, we only have 3DNow!, MMX, SSE, SSE2, SSE3, SSSE3 and SSE4 optimized benchmarks, even for upcoming processors like Intel Sandy Bridge. Sandy Bridge will introduce the AVX extensions, and even with the current series of benchmark methods we will be able to measure very high benchmark scores on that new processor. However, after we develop, for example, AVX-optimized fractal benchmark methods, the FPU Julia benchmark results in AIDA64 will improve on Sandy Bridge, since it will take less time to calculate the Julia fractal using AVX than using only the old MMX or SSE optimizations. It would be unfair to compare a new AIDA64 benchmark score that reflects fractal calculation performance using AVX optimizations with an old score obtained with a benchmark that didn’t use AVX optimizations. Hence, it is very important to only compare benchmark scores produced by the same AIDA64 version.
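For reference, the Julia fractal workload mentioned above boils down to iterating z → z² + c for every pixel until the point escapes or an iteration cap is reached. This minimal scalar sketch (with arbitrary viewport and constants chosen for illustration) shows why wider vector units help: a vectorized SSE or AVX version would iterate 4 or 8 pixels at once:

```python
def julia_iterations(zx, zy, cx, cy, max_iter=256):
    # Iterate z -> z^2 + c until |z| escapes the radius-2 circle
    # or the iteration cap is reached.
    for i in range(max_iter):
        if zx * zx + zy * zy > 4.0:
            return i
        zx, zy = zx * zx - zy * zy + cx, 2.0 * zx * zy + cy
    return max_iter

def render_julia(width, height, cx=-0.7, cy=0.27015, max_iter=256):
    # Map each pixel to a point in [-1.5, 1.5] x [-1.0, 1.0] and iterate.
    # A SIMD implementation would process several pixels per instruction,
    # which is why AVX shortens the calculation time.
    total = 0
    for py in range(height):
        zy0 = (py / height) * 2.0 - 1.0
        for px in range(width):
            zx0 = (px / width) * 3.0 - 1.5
            total += julia_iterations(zx0, zy0, cx, cy, max_iter)
    return total  # total iteration count, proportional to the work done
```

The benchmark score then follows from how quickly this fixed amount of work completes, so an AVX build finishing the same frame faster produces a higher, incomparable score.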
lab501: How often do you think you will bring updates for AIDA64?
Tamas Miklos: With Lavalys we didn’t release enough Everest updates, usually only 2 or 3 a year. With AIDA64 we plan to accelerate development to make it possible to release a new stable version at least 5 or 6 times a year. We will also release regular weekly beta updates, so those users who prefer to be at the “cutting edge” of diagnostic tools can constantly keep their AIDA64 installation up-to-date.
lab501: Any plans for introducing a 3D benchmarking module in AIDA64 in the future, for video card performance analysis? What about a 3D stress tester?
Tamas Miklos: We certainly don’t want to come up with a competitor to 3DMark. Instead, we plan to ride the wave of GPGPU technology by introducing an OpenCL-based GPGPU benchmark, as well as an OpenCL-based GPGPU stress test. That way we can utilize all available video cards and GPUs, and also drive them all to 100% utilization to check whether the whole system is fully stable. We are planning to introduce those new GPGPU features around the same time AMD introduces official support for OpenCL in their Catalyst video drivers.
lab501: Thank you for your time!
Tamas Miklos: Thank you!