
Nuendo / SX Benchmarks :.


Politics, Prejudice, Objectivity and Truth - Part I :.


I am quite disturbed that I even need to be writing this overview of the ongoing efforts by a very small, select group to cast as much shadow and doubt on this project as possible.

What I originally set out to do was create an ongoing project to which DAW builders, end users, audio manufacturers and the like could contribute on a level playing field, for the benefit of all involved. I have been very careful to present the project with as little self promotion as possible, as that was never the primary aim. Despite that, I have been repeatedly targeted by the same sector, who have consistently had the loudest voice and opinion on what the variables being encountered actually represent to the end user, despite either never actually contributing, or having a very strained and questionable participation. On top of that, any opinions I present are dismissed as subjective due to my preferred business model, etc, etc..

Here is just some of the stink in glorious detail.

 
 
Benchmarks are Flawed, Subjective, Unscientific, & Not Representative of Real World Working Environments:

This is something that is continually levelled at the Benchmarks by those who feel that simply scaling the sub systems does not in any way correlate to Real World working environments.

I can understand that point to some degree, especially for those whose working environments rely more on large numbers of I/O than on low latency ASIO performance.

That does not in any way negate the collated data in respect to the scaling capabilities and related issues that can be experienced running these sub systems at lower latencies, something these tests have brought to light.

As for the Session being flawed, well that's a whole can of worms that is directly related to the issue of the Performance Droop on Save/ReOpen, as well as the performance variables being experienced across respective systems.

This has been an extremely hard pill for some to swallow.

The fact that end users were experiencing major issues running Blofelds on Opteron/Tyan/RME systems was dismissed and basically totally ignored by the select sector, who coincidentally have a little more than a slight slant towards that particular combination.

Amazingly, we then have an attempt by Scott and the team at ADK to recreate a near identical version of the Blofelds DSP test, which claims to have "resolved" the "Performance Droop" aspect.

 

The collective cry of "Hallelujah, that proves the session was flawed all along" was heard, which of course would dispel any and all data to the contrary that has been collated over the last 12-18 months.

There are also claims that the "new" test is more Objective, Scientific and Accurate??

Not So Fast.. !!

It's the same bloody test, except that the audio is not all constantly streaming as in Blofelds, i.e. some tracks are short comps.

That is a significant variable in itself.

What makes the above claims even more ludicrous is that, after screaming that this style of test was useless for testing the current crop of MultiCore systems, the team at ADK duplicated the whole methodology to a T, including the Magneto plugins that they bitched so heavily about when they were clutching at straws over what was causing the original performance variables, let alone Save/ReOpen.

Not surprisingly, the rest of the select sector, who repeatedly dismissed the testing as irrelevant, suddenly find an identical methodology viable because someone else re-did the test and removed the offending issue, without stating exactly what they did to achieve it.. convenient?!

While the Save/ReOpen seems to be the most controversial issue, there are also the performance variables that were evident simply running the session, but of course no mention has been made of that.

 

Also, the simple fact that the Save/ReOpen issue has been reproduced across multiple DAW applications, using 3rd party plugins, totally negates the argument that the variables were caused by a "flaw" in the session, something that is conveniently ignored by those who continue to posture that opinion.

I am way past needing to defend the work that we have all put into this project over the last 18+ months. I have answered all of the queries multiple times, and no-one has been able to provide any evidence to the contrary.


Simply claiming that the variables presented were due to a "flaw" in the test session, without presenting the evidence to prove it, holds about as much water as a sieve.

I will continue to test and report on the newer architectures as they come online; the information can then be weighed up by end users in respect to how it translates to their individual Real World working environments.

I am done arguing with the select sector that seems to think that anything I do is somehow weighted one way or the other..

This is nothing more than personal prejudice, and I am more than fine with that; at least it makes clear where exactly everyone sits in all of this.

Stay Tuned for Part II.


© AAVIMT 2006