
GL benchmarks of the 3D's Processor...

haha nice one earlymon...definitely putting that in the quote book...and yes, I know benchmarks aren't completely accurate vs real-world use...just pointing this out is all...nonetheless still excited as ever for the 3D ^^
 
I plagiarized that one from sremick at the Captivate forum (remember: don't shield your eyes, simply plagiarize!).

I think it's going to be a screaming beast.

If I get time, I may invent the lame dog/screaming beast (LD/SB) benchmark.

Meanwhile - we'll be seeing a lot of benchmarks in the months to come.

I sure wouldn't mind some carhop service while they're running - I'm always up for a good malt. ;)
 
Drop it to 1.2GHz with v-sync on and the results will be similar to Tegra 2/SGX540, probably just a little faster.

So basically, a year on, HTC finally releases something with a GPU that's competitive with my Galaxy S.
 
Surpasses your Galaxy S, as the next gen will surpass this.

I'm not sure by how much, though. The Adreno 220 does look faster, but it has a 1.5GHz dual-core CPU backing it up in these benchmarks, which won't be the case in the Evo 3D; that will be clocked at 1.2GHz.

The next generation is almost upon us: SGX543MP2 and Mali-400 MP. I've seen benchmarks for both showing similar performance.

Yes, I'm aware of the AnandTech results; that was not the final version.
 
I'd say the Evo 3D will be at about 25-35, next to the Optimus. Even if it's lower than that, it will be fairly fast; the original Evo is almost at the bottom, but to me it's fairly fast.
 
Here it is:
AnandTech - Hands on and Benchmarks of two MSM8x60 Phones - HTC Sensation 4G and HTC EVO 3D

*Edit:
While benchmarks don't mean much in the real world, it's interesting (for me, anyway) to see. Multiplying the Evo 3D's score by 1.35 to "normalize" the scores to an 800x480 display resolution, I get the following:

GLBenchmark 2.0 Egypt - 28.35fps
GLBenchmark 2.0 Pro - N/A (over the 60fps cap)
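
The x1.35 factor above is just the pixel-count ratio between qHD and WVGA. A minimal sketch of that normalization, assuming the benchmark is fill-rate bound (so fps scales with pixels drawn, which is only a rough approximation):

```python
# Normalize an fps score measured at one resolution to another resolution
# by scaling with the pixel-count ratio. Rough assumption: performance is
# dominated by pixels drawn, not geometry or CPU work.

def normalize_fps(fps, from_res, to_res):
    from_pixels = from_res[0] * from_res[1]
    to_pixels = to_res[0] * to_res[1]
    return fps * from_pixels / to_pixels

qhd = (960, 540)   # Evo 3D / Sensation
wvga = (800, 480)  # Galaxy S, Atrix, original Evo

factor = (qhd[0] * qhd[1]) / (wvga[0] * wvga[1])
print(round(factor, 2))                 # 1.35
print(normalize_fps(21.0, qhd, wvga))   # 28.35 - matches the Egypt number above
```

The 21 fps input is the measured qHD Egypt result mentioned later in the thread; 21 x 1.35 gives the 28.35 fps figure.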

Does anyone have the numbers for these 2 benches for the Samsung GS2?

Note, the Evo 3D scores higher than the Atrix in both of these tests, but it's running Android 2.3 vs 2.2, so I'm not sure how much of the performance advantage comes from the software.

*Edit 2
It seems that Anandtech will only be doing the review for the Sensation, and not the Evo 3D :(

*Edit 3
Now I really wish HTC had made the back of the Evo 3D like the Sensation's...
 
Sadly, no Galaxy S2 benchmarks. Odd, but the Evo 3D was consistently a hair faster than the Sensation. Last but not least, he noted the 3D hardware layering uses a bit of power; I wonder what this does for battery life under 2D.

It does have 256MB more RAM; that probably explains the difference.
 
Last but not least, he noted the 3D hardware layering uses a bit of power; I wonder what this does for battery life under 2D.

I interpreted his statement to mean that the parallax barrier uses almost no power. "Single digit mA" means less than 10mA on a 1700+ mAh battery; that shouldn't be a big enough impact to explain the need for a larger battery on its own. He ends with speculation that the larger battery is instead there because of the WiMAX power usage.
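
The arithmetic behind that reading, as a rough sketch - the 10mA ceiling is the "single digit" worst case rounded up, and the 1730 mAh capacity is an assumed figure for the "1700+ mAh" battery; real draw will vary with usage:

```python
# Worst-case continuous drain from the parallax barrier ("single digit mA",
# rounded up to 10 mA) against an assumed ~1730 mAh battery.

battery_mah = 1730.0   # assumed capacity ("1700+ mAh" per the thread)
barrier_ma = 10.0      # worst-case "single digit" draw, rounded up

hours_to_drain = battery_mah / barrier_ma          # if it alone drained the battery
percent_per_hour = barrier_ma / battery_mah * 100  # share of capacity per hour

print(hours_to_drain)               # 173.0
print(round(percent_per_hour, 2))   # 0.58
```

At roughly half a percent of capacity per hour in the worst case, the barrier alone can't plausibly motivate a larger battery, which is consistent with the WiMAX speculation.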
 
I'll expect the SGS2 to do better in these particular benchmarks but we'll just have to see.

I base that on an interesting fps benchmark that Shocky noted elsewhere here in the forums - but it was for the Nenamark2 ... ah, here's the link:

http://androidforums.com/htc-sensat...ther-comparisons-impressions.html#post2762394

FWIW - The AnandTech article also did the normalizing in the opposite direction: scaling from previous results, about 23 fps was expected, but 21 was measured.

In the Nenamark2 example, the Sensation got 26 fps on average, so it's right in that same ballpark - whereas the SGS2 is reported at 46 fps.

Look at the big difference in variability of results, tho - that could be interesting.
 
Shouldn't matter unless access speed is different or the application is memory constrained (not so likely I would think).

We have no clue how much Sense junk is auto-running and waking and sleeping in the background while these "tests" went on.

More memory very well _may_ mean fewer management cycles stolen by Android.

Likely, it was out of the box, install the app, run it, publish the numbers. I see no caveats about configuration control in the article, only that it's a first look.
 
Yeah, I saw those benchmarks a while ago. What I don't know is how optimized the benchmarks they're running are for a particular architecture. As we know, optimization can easily yield a 2x or 3x improvement if it means the difference between using dedicated hardware that would otherwise sit idle, or reordering pipelines. (I really don't know much about these processors and their architecture - i.e., are they heavily dependent on pipelining and accurate branch prediction, and do they have special hardware instructions for matrix operations and the like?)

It would also be interesting to see the results of a Galaxy S2 with a Tegra processor thrown into the mix. Surprised we don't see them showing up in the benchmarks.

 
At this point in the thread, I want to give the same caution I gave in the SGS2 and Sensation forums where claims of optimized or deficient benchmarks arise:

The benchmarks attempt to measure and show how hardware responds to a specific set of app calls to an OpenGL software library, usually made in some stressful way (if the benchmark is worth anything).

It's tempting to explain away unfavorable results, but in truth, if some app you need or want is coded in any way similarly to the benchmark in question, then that app is likely to run less well on your phone.

In the end, looking at all benchmarks is a good idea - but the best use of the graphics benchmarks is for app developers to choose which OpenGL calls to make to serve their audience - because there's more than one way to do just about anything in graphics programming.

The way not to use benchmarks is as the results of a horse race.

There is no mystery whatsoever as to what the hardware can do. Sign a non-disclosure agreement with an SoC maker as a recognized member of the hardware industry with a need to know and you can get the raw chip benchmarks straight from the horse's mouth. I absolutely promise that Qualcomm and Samsung and TI know precisely the performance of their graphics cores measured on bare metal.

At one time not long ago, they usefully published that in the open on the web. My favorite was the blog-published benchmark showing that Hummingbird could do more millions of triangles per second than Samsung measured and spec'd - by a wide margin. IOW - what the blogs reported was flatly unpossible for one particular measurement by one particular benchmark.

So - yep - it's a fine line. Look for benchmarks that exaggerate and throw them out - but consider unfavorable benchmarks carefully, because you might get an unfavorable app some day.

This whole rant goes back to my common claim - benchmarks have to correlate to the real world - and that ain't easy when you think about it. ;)

Anyways - I promise if I had the answers, I'd tell you. ;) ;)
 
Yeah, but in a way you didn't answer. What you said is true; but conversely, the only thing required to improve global graphics performance when using OpenGL is to improve the OpenGL implementation for the device. So the converse question (and what we, or I, sort of want to know) is: how close are the Nenamark results for the various devices to what the devices are actually capable of? It might very well be that the drivers are better optimized on the Sensation/Evo 3D and the Galaxy S2 is capable of doing significantly better :)

So, do you know much about the raw hardware architecture that you can share (i.e., not under NDA), and how much room exists for improvement?


Exactly true on all points - in the end, it all comes down to final integration, because you don't run an SoC.

We run apps inside the Dalvik Virtual Machine that call underlying Linux services that thread through the kernel (and drivers/infrastructure code) that interfaces with a lot of important silicon on the motherboard (including the SoC) as well as the touchscreen display.

Any number of chip choices, board layouts, other hardware integrations and then - finally - kernel/infrastructure code could make or break letting the capabilities of the SoC shine through.


I post SoC teardowns, functional block diagrams, manufacturer data sheets, and insider analyses wherever and whenever I find them on the open web. One of our members here is a handset maker and he commented often that I had published more data here than he was seeing anywhere else, and in many cases, even before he saw it through his means.

Sadly, even some of the links I published last year are dead now, with no replacements. Have no doubt - if they slip up and open the doors just a crack, I'll have it first, and you'll have it next.

I'm not some great insider with all of the answers, but I do know a few and I do my best to share what I can. ;)
 
Well, the entire point of my post and question is that while the Evo 3D is 1/2 the speed of the Galaxy S2 for rendering, the software is very new for both platforms. It sounds from your comment like we have no clue whether things will get better, or are as good as they're going to get, when software updates arrive for either platform (i.e., whether huge room for improvement exists in the Android/OpenGL layering). That's really all I'm trying to ask. It could very well be that the Evo 3D layer is highly optimized with very little room for improvement, while the Galaxy S2 is highly unoptimized and will show significant improvement as revisions are pushed to the device. Right?

 
We run apps inside the Dalvik Virtual Machine that call underlying Linux services that thread through the kernel (and drivers/infrastructure code) that interfaces with a lot of important silicon on the motherboard (including the SoC) as well as the touchscreen display.

Small point: this benchmark, like most non-trivial 3D games, is not running through the Dalvik VM. It's running through the NDK, which is as close as you can get to "coding on the metal" in Android.
 

Yeah, I get it now - I wasn't trying to be evasive. I was perhaps unclear on questions before.

Almost.

My point of view -



  1. Yes, only time will tell if the 3vo is as good as it gets as delivered. The Qualcomm development platform for the 8660 strongly suggests there's still room for some improvement over the HTC delivery, maybe as much as around 10% at the present clock speed.
  2. Until more benchmarks are in, we won't know if the 3vo is 1/2 the rendering speed of the SGS2. So far, we believe the SGS2 is faster by some possibly significant amount.
  3. HTCs tend to come out of the box sluggish until properly configured by the user.
  4. Independent kernel devs usually improve the performance of any of the great phones, so history suggests that neither the HTC nor Samsung kernels are likely performance-optimized - if anything, we expect them to be over-cautiously designed and verified with stability as the prime directive. Note: kernel changes can make huge performance differences without changing voltages or clock speeds, and more still if you add those in. We have no idea of the true upper-bound clock rates for either device yet.
  5. The tug of war between OpenGL and the makers' GPU preferences will never end and usually only improves - only Swami knows if those changes are on the near horizon, or if the makers/carriers will be involved in any updates to those components. My expectation again, because both makers claim to adhere to unlocked bootloaders, is that higher-performance ROMs will also come from the independent dev communities for both the SGS2 and 3vo.
  6. Unless we know that game apps will need these potential OpenGL changes, I'd care more about the right kernel and user configurations than anything like that.
  7. I am not a hardcore gamer, but I do know that hardcore gamers complain at rates below 90 fps on PCs, so I don't know the true expectation for hardcore gamers on phones.


I guess I focused before on more of a different direction - everyone that touches either device is reporting great results regardless of benchmarks, except for that one naggingly-bad browser showing on the Sensation compared to the SGS2.

FWIW - one UK owner of a well-running Desire HD (kinda like a Thunderbolt) with Gingerbread noted his new Sensation ran like the previous phone, just a little bit smoother overall. Note again: the DHD was running full Sense 2, while the Sensation pulled that off running full Sense 3.

Am I helping and answering, or still babbling? Please give feedback.
 
I think the bottom line is only time will tell. As for games, my limit is somewhere between 24 and 30 fps; but there is a lot more than games. Browsing is usually limited by JavaScript/Flash/transfer speed as well as some complex rendering, and is very subjective to layering; but video, depending on encoding, can be rather CPU-intensive, so a fast device can help there (and I do use these devices for VOD). Anyway, I guess I should try to dig around for some architecture descriptions if I care about that stuff.

 
Two things about Anandtech's testing:

1) The device being tested is running at a higher clock than the Evo 3D - 1.5GHz versus 1.2GHz (everyone knows this)

2) The device being tested is running at a lower resolution than the Evo 3D - WVGA 800x480 instead of qHD 960x540.

The only question is whether there have been any further hardware or software optimizations since the development of the MDP, but with the OMAP 4430/SGX540, Tegra 2, and 1GHz Hummingbird doing as well as they do, it looks like the Evo 3D will only be as fast or slightly faster.
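
A back-of-the-envelope way to combine those two factors - assuming fps scales linearly with clock (and that the GPU clock tracks the CPU clock, which it may not) and inversely with pixel count. Both assumptions are crude, and the MDP score below is a hypothetical illustration value, not a measured number:

```python
# Crude estimate: scale a hypothetical MDP result (1.5 GHz, 800x480)
# to Evo 3D conditions (1.2 GHz, 960x540), assuming linear clock
# scaling and fill-rate-limited rendering.

mdp_fps = 28.0                            # hypothetical MDP score, for illustration
clock_factor = 1.2 / 1.5                  # 0.8 - Evo 3D runs at a lower clock
pixel_factor = (800 * 480) / (960 * 540)  # ~0.74 - Evo 3D pushes more pixels

estimate = mdp_fps * clock_factor * pixel_factor
print(round(estimate, 1))   # ~16.6 under these assumptions
```

Both factors cut the same way, which is why the dev-platform numbers likely overstate what the shipping phone will post.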
 