 

Nuendo / SX Benchmarks :.

       

Queries, Qualms and FAQ :.

   

I have had a few queries and qualms regarding not only the methodology employed, but also general concerns about the variances being reported.

Over the course of the project, a lot has been discussed at length; however, the information is scattered through the 20 or so pages of the Nuendo Forum thread and is very difficult to find in most cases. I have decided to write a summary of the most frequently asked and discussed points, to save picking through the archive. I will also take the opportunity to update the summary with information relating to the latest technological advancements, and to clear the deck of some extra baggage that still persists.

 
Why the Methodology : CPU load v Progressive Load :    

I was motivated to develop this style of test as an alternative to the CPU Dyno styled benchmarks, largely due to what I felt was a glaring hole in the earlier methodology. Sure, they were a lot easier to run, took far less time, and gave us a quick reference for some aspects of system performance.

However, trying to get an accurate reference from a wildly fluctuating CPU meter, with all of the inherent issues of how exactly the end user reported the fluctuations, was not something I was comfortable with.

Also, the question I posed myself was: what is the CPU reading actually showing or proving as an overall system performance indicator? A quick example:

If you have a system returning a result of 40% at 256 Samples, it may draw some initial gasps from the gallery, but if that same system hits the wall at 60-70%, then the shine wears off rather quickly.

With the methodology of progressively loading the session until it breaks, the CPU meter reading is irrelevant, and we get a clearer picture of the actual performance of the systems at the respective latency settings.
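For those who have asked what the progressive loading actually looks like in practice, here is a minimal sketch in Python. The session interface (set_buffer_size, add_plugin_instance, plays_cleanly) is purely hypothetical; in reality the instances are added by hand inside Nuendo and the break point is judged by dropouts and audio glitches.

```python
# Minimal sketch of the progressive-load methodology. The 'session' object and
# its methods are hypothetical placeholders, NOT a real Nuendo API; in practice
# the plugins are added manually and the break point is judged by ear.

def progressive_load(session, buffer_size, template_plugins=81):
    """Add plugin instances until playback breaks; return the last clean count."""
    session.set_buffer_size(buffer_size)   # the latency setting under test
    total = template_plugins               # the fixed 81-plugin template
    while session.plays_cleanly():         # no dropouts or spikes yet
        session.add_plugin_instance()      # load one more instance
        total += 1
    return total - 1                       # last count that still played cleanly

# Collect the break point at each latency setting of interest, e.g.:
# results = {bs: progressive_load(session, bs) for bs in (512, 256, 128, 64)}
```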

         
The Magnetos Are The Possible Cause Of The Variances :    

This has been brought up numerous times throughout the course of the testing; suspicion was originally cast due to the variance in the Thonex I results, which was caused solely by the different Magneto versions between Nuendo 2 and 3.

In that case, the results posted across different versions of Nuendo would differ greatly due to the added CPU loading of the N3-version Magnetos.

 

It has absolutely no relevance to this test session, as the session itself will not load on any version of Nuendo prior to N3, and also, as was proven when running the test again using the Dynamics plugin, the results were identical.

If the Magnetos or any other plugin were the cause of the variance, it would show across the board for both platforms, as well as across the audio hardware.

 

There has been further investigation into the most controversial of the variances being reported, the "Performance Droop" on Save/Re-Open, using both Nuendo's standard and 3rd party plugins, as well as other DAW applications.

Read More Here

Which now leads us to the next pearler.

Save Re-Open Performance Droop is caused by a Flaw in the Test Session :
This is the latest accusation leveled at the project by the same group of individuals who also claimed that the Magnetos were the source of the variances. Let's stop beating around the bush: this accusation was made by Scott and the team at ADK, who were on the original qualification team but refused to participate simply because they didn't have an answer to the variables being presented.

What makes the situation even more ridiculous is that after screaming from the rooftops that the variables were caused by the Magnetos, the exact same individual who postured that view then "developed" a watered-down version of the Blofelds Test, which he claimed resolved the Save/Re-Open issue, using Magneto plugins as the source for the progressive loading... ??

I have written a detailed report Here.

I have repeatedly asked Scott and co. to qualify the accusation and to deliver some tangible evidence to support the thesis. Of course, they would rather froth on about having "resolved" the S/R issue, without having any idea of the mechanics involved, than actually make an effort to find the cause.

         
Nuendo and/or Standard Settings Cannot Be Trusted as a Constant :

There has been discussion on whether Nuendo's standard settings can be relied on to accurately test the respective hardware, and also on whether the standard settings and default settings should be the same for all configs.

Standard settings are what Nuendo applies on install, yet the Default settings actually differ?

Confused.. ?

In the Advanced Audio menu, a multi-core / multi-CPU system will have all 3 settings ticked on install: Lower Latency, Multiprocessing, and Adjust for Record Latency.

 

Hitting Default actually unticks both Multiprocessing and Adjust For Record Latency; one obviously skews the results, while the other has been the alleged possible cause of the variances.

This curve ball was thrown up very late in the proceedings, when an end user reported that on his particular system, "Adjust For Record Latency" affected low latency performance.

On multiple other systems tested, that setting did not have any effect.

 

It has not been reported or repeated by anyone else, except maybe the select sector using like systems, but they are not telling.

The performance with it unticked was still way below similar systems using different audio hardware; again, this is absolutely irrelevant in the context of collating the data, as it only affects very select configurations, i.e. Dual DualCore Opteron / Tyan / RME HDSP.

Read Full Report Here

This select sector has been the most vocal in opposition to this test, and continues to be so.

 

         
The Variances Are Not Directly Related To The Audio Hardware Alone :    

True, to an extent. We have covered quite a bit of ground on this one; inter-relationships with chipsets, memory bandwidth, as well as the respective driver/buffer implementations have all played a part, but the largest variance on identical systems with identical settings has been witnessed by simply swapping the audio cards.

 

A number of us have tested our respective systems with multiple pieces of audio hardware, that being the only variable, and consistently the results have reflected that the implementation of the respective driver had the most influence on the end result.

 

This is an area that continues to be a bone of contention and heated debate in certain circles.

The evidence collated still strongly reflects the earlier analysis, and until there is evidence to the contrary, it still stands.

       
The Test Has Been Developed To Favour One Platform Over The Other :    

No matter how objectively this project has been approached by all involved, it was inevitable that this type of B.S. would be postured by some.

This ludicrous statement was leveled at me even before the individual had actually run the test... enough said.

 

Considering I had numerous people collectively developing this from the get go, on both Intel and AMD systems, this is just plain ridiculous, but I thought I would share it to highlight some of the baggage I have had to deal with.

 

Sadly, there are those that continue to posture that view, despite the evidence to the contrary.

As always, we find our own truth..

         
The Now Infamous Lynx v RME Debate :    

This has probably been the most controversial aspect of this test all along, and it is something that I am sick to death of, to be honest.

From the moment I discovered a possible issue on this test with the RME hardware, which was one of the chosen reference units decided on by all involved in the offline professional qualification, this thing has been a roller coaster ride of back-room politics, bitching, back-stabbing, slurs and counter-slurs. I am over it.

It cost me the qualification project because the other parties involved were not willing to continue unless the issues were addressed. RME dismissed the issues as irrelevant, and continue to do so. My decision to try and go it alone with only the Lynx as a reference card was instantly attacked when I first proposed opening the testing to the public, etc, etc.

 

This was never meant to be a stress test specifically aimed at ASIO driver efficiency; that is just one of the areas of investigation that this test has thrown up.

It was never meant to be Lynx v RME. Hell, I actually freaked when I first discovered the issue, as I knew the political ramifications it could cause.

I was more interested in PCI v FireWire, to be honest, but as soon as the curve ball was thrown, this area of investigation grew a mind of its own, and hopefully it will continue to do so; if the end result is better and more efficient drivers, then we all benefit.

Obviously the issue is far more sensitive to some than to others.. :-)

 

But I am over the politics, and I am way over walking on eggshells over this.

I will continue to report the numbers as they fall; those that have an issue with the results can quite easily qualify and challenge those results if need be. So far, we still have little more than hot air emanating from certain sectors who continue to dismiss the issues reported.

What it has brought to light for me is the polar opposite attitudes that respective audio manufacturers present not only to the end users, but also to the people on the front lines, such as myself and the other vendors, who have to keep coming up with excuses when promises and/or expectations fall short.

         
Intel V AMD : Well I have left this one for last.., but not least.    

This project and the accompanying thread(s) have actually set a record for not only remaining refreshingly civil, but also, for the most part, staying devoid of the usual platform bickering that has been so prevalent in the past.

However, there has still been some tension between the 2 camps, admittedly between myself and one of my long-time online adversaries, who a little while ago, during the Stage I portion of the project, pulled out the calculator and offered some in-depth analysis of some preliminary D955 v FX 60 results posted by a Swedish hardware site, which was using less than appropriate audio hardware.

I had already said my piece about what I actually thought of comparative benchmarks of those 2 top-tier chips: basically, that they are more suited to the review sites and marketing monkeys. I had no qualms whatsoever that the FX 60 would wipe the floor with the D955; the clock discrepancy between the respective architectures was enough to ensure that.

I would rather focus on and collate results for comparable cross-platform chips that the majority actually buy and use, i.e. X2 3800 v D930, X2 4200/4400 v D940, X2 4600/4800 v D950.
 

So I boldly grabbed that same calculator and re-did the analysis using the collated results from the Stage I portion of the testing.

I have added some further comment below in regards to Stage II to bring the analysis up to date.

 
         
Stage I : Standard Clocking Results :    
Using my collated results for the X2 / D900 / Lynx :

Quote: " First, as a percentile difference between the two CPUs at the same settings. Second, as a percentile difference for each CPU with itself as the buffer size is reduced. "

To me, the second table is the one most reflective of comparable performance, but you can choose whichever you feel is most appropriate.
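For clarity, this is all the "calculator" really amounts to: a minimal sketch of both comparisons in Python, with hypothetical plugin counts standing in for the collated Stage I figures.

```python
# Sketch of the two comparisons quoted above. The plugin counts below are
# hypothetical placeholders, NOT the actual collated Stage I results.

def pct_diff(a, b):
    """Percentage advantage of count a over count b at the same buffer size."""
    return (a - b) / b * 100.0

def pct_reduction(at_higher_buffer, at_lower_buffer):
    """Percentage of plugins lost by one CPU as the buffer size is reduced."""
    return (at_higher_buffer - at_lower_buffer) / at_higher_buffer * 100.0

amd   = {512: 116, 256: 103, 128: 84}   # hypothetical total plugin counts
intel = {512: 112, 256: 100, 128: 83}   # hypothetical total plugin counts

# Table 1: percentile difference between the two CPUs at the same settings
for bs in (512, 256, 128):
    print(f"{bs} Samples: AMD {pct_diff(amd[bs], intel[bs]):+.1f}% vs Intel")

# Table 2: percentile difference for each CPU with itself as the buffer drops
for name, cpu in (("Intel", intel), ("AMD", amd)):
    for higher, lower in ((512, 256), (256, 128)):
        print(f"{name} {higher}->{lower}: "
              f"{pct_reduction(cpu[higher], cpu[lower]):.1f}% reduction in plugs")
```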

 

Number of Magnetos over the 81 PlugIns in the Template :

512 Samples : AMD +10% vs Intel :

256 Samples : AMD +11% vs Intel : Intel = 11% Reduction in Plugs : AMD = 10% Reduction in Plugs

128 Samples : AMD +3.5% vs Intel : Intel = 18% Reduction in Plugs : AMD = 24% Reduction in Plugs

Total Number of PlugIns including the 81 PlugIns in the Template :

512 Samples : AMD +3% vs Intel :

256 Samples : AMD +3% vs Intel : Intel = 3.5% Reduction in Plugs : AMD = 3.25% Reduction in Plugs

128 Samples : AMD +1% vs Intel : Intel = 4.5% Reduction in Plugs : AMD = 7.5% Reduction in Plugs

         
Note : The point that was emphasised in the earlier analysis, found on page 10 of the original Nuendo thread, was that as the buffers were lowered, the delta in scaling proportionally increased due to the limitations of one platform's architecture versus the other's. Hmmmm. Well, here the results are reversed.. so much for the superior architecture lecture we have had pumped into us over and over again.
     
Stage I : Overclocking Results :

Again using my collated results for the X2 / D900 / Lynx :

Let's run the same calculator across the overclocked results.

The results are pretty consistent with the standard clocking, except that the Intel's scaling improved at the lowest latency, actually edging ahead plugin-wise as well as proportionally; again, totally at odds with the earlier analysis.. :-)

Number of Magnetos over the 81 PlugIns in the Template :

512 Samples : AMD +5.5% vs Intel :

256 Samples : AMD +4% vs Intel : Intel = 11.5% Reduction in Plugs : AMD = 13% Reduction in Plugs

128 Samples : Intel +2.5% vs AMD : Intel = 15.5% Reduction in Plugs : AMD = 19% Reduction in Plugs

Total Number of PlugIns including the 81 PlugIns in the Template :

512 Samples : AMD +2.2% vs Intel :

256 Samples : AMD +1.5% vs Intel : Intel = 4.5% Reduction in Plugs : AMD = 5.2% Reduction in Plugs

128 Samples : Intel +1% vs AMD : Intel = 4.5% Reduction in Plugs : AMD = 7.7% Reduction in Plugs

 

Note: Of course, back at the earlier analysis, I dared to suggest that I wouldn't be making any conclusions until more data was available, and I was instantly berated. Now that we have a wider scope of data to form a detailed analysis, the earlier drumming doesn't ring true, and it's all quiet on the Western front.., surprise.

Also, it should be noted that the above AMD X2 results were obtained using earlier AGP-chipset motherboards, which delivered a reported 10% improvement over the later PCIe variants.

         
         
Stage II : Standard Clocking Results : Clockspeeds of 2.60/2.66    

Moving on to Stage II, the cross-platform comparison changed dramatically with the release of the new Intel Core2 line of CPUs.

While AMD's architecture is still fundamentally identical, performance-wise, to the systems that were tested in Stage I, the new Intel chips have basically hit the ball out of the park.

To put it into perspective, let's use the % gain in the total number of plugins, including the 81 in the template, where in the past the variance was between 1% and less than 3%:

  Intel Core2 V AMD X2 : DualCore :

Total PlugIns @ 256 Samples : 161 / 127 - C2D +26% performance gain :

Total PlugIns @ 128 Samples : 151 / 116 - C2D +30% performance gain :

Total PlugIns @ 064 Samples : 141 / 089 - C2D +58% performance gain :

If we take the new single Quadcore into account, the % deltas increase to anywhere from 88% to 105% between 256 and 064 samples.

It's obviously a very different ball game to 12 months ago.

  Intel Woodcrest V AMD Opteron : Dual DualCore :

Total PlugIns @ 256 Samples : 240 / 183 - Woodcrest +31% performance gain :

Total PlugIns @ 128 Samples : 211 / 159 - Woodcrest +32% performance gain :

Total PlugIns @ 064 Samples : 182 / 137 - Woodcrest +32% performance gain :

Total PlugIns @ 032 Samples : 146 / 000 - Woodcrest + ??% performance gain :
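The percentage gains quoted in the Stage II tables follow directly from the plugin counts; here is a quick arithmetic check using the counts from the tables above (rounding differences of a point or so are to be expected).

```python
# Arithmetic check of the Stage II gains, using the plugin counts quoted above.
pairs = {
    "C2D v X2 @ 256":            (161, 127),
    "C2D v X2 @ 128":            (151, 116),
    "C2D v X2 @ 064":            (141,  89),
    "Woodcrest v Opteron @ 256": (240, 183),
    "Woodcrest v Opteron @ 128": (211, 159),
    "Woodcrest v Opteron @ 064": (182, 137),
}
for label, (intel_count, amd_count) in pairs.items():
    gain = (intel_count - amd_count) / amd_count * 100.0
    print(f"{label}: +{gain:.0f}% performance gain")
# e.g. 141 / 89 -> +58%, matching the table; the others land within a point
# of the quoted figures depending on rounding.
```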

         
Conclusions :    

Stage I:

The above figures show only that the performance delta between the 2 competing platforms was negligible, something that I had always maintained and had always been confronted over. As always, I let the numbers do the talking..!!

The continuing AMD v Intel debate was boring, overblown and consistently overstated; unless we were talking about the Dual DualCore arena, there was nothing in it. It came down purely to personal preference and specific hardware compatibility issues, if any. While the Intels were relatively clear on that front, the AMD X2 landscape was not so clear, so it basically comes down to horses for courses.. whatever floats your boat.

 

Stage II:

The above figures show the incredible increase in scaling that has been achieved by the new Intel Core2 line of processors.

Until AMD actually responds to the new architecture, there is little to do in regards to comparing the performance of the respective platforms, as the new Intels, in terms of native horsepower, are literally in a class of their own.

Compatibility issues with the newer chipsets have been minimal to non-existent, so it is going to be a hard road back for AMD.

 

Quadcore and Beyond:

The move to Quadcore CPUs has been relatively smooth on the single-CPU platform, and less so with the dual-CPU systems so far.

It is still very early days in that regard, so I will reserve judgement until some early kinks are ironed out.

Running OctoCores on the respective applications and XP is still extremely bleeding edge, so the next few months will definitely be interesting in regards to navigating the minefield.

         
   

© AAVIMT 2006