Nvidia's solution with a companion core is interesting.
Yes, BUT it really depends on how much multitasking you actually have going on! If the system keeps bouncing between these cores (power -> low power -> power -> low power), I guess it really makes no sense, but if you only keep one application running then it would make sense... But that's not what fandroids want to hear...
I guess this is why Apple never did anything to get SpeedStep working on their chips (at least in production operating systems). So no SpeedStep even when idling...
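To put a rough number on that ping-ponging worry, here's a toy sketch in Python. It is pure illustration: the load trace, thresholds, dwell time, and per-switch cost are all made-up values, not measurements of Tegra 3 or any other SoC.

# Toy model: how often does a switcher flip between a fast core and a
# low-power companion core, with and without a minimum dwell time?
# Every number here is invented for illustration.

SWITCH_COST_MS = 2.0       # assumed cost of each migration between core types
HIGH, LOW = 0.6, 0.4       # assumed load thresholds (with a dead band)

def count_switches(load_trace, min_dwell=0):
    """Count core-type switches, enforcing a minimum dwell (in samples)."""
    on_fast, dwell, switches = False, 0, 0
    for load in load_trace:
        dwell += 1
        # Stay on the fast core until load drops below LOW; move to it above HIGH.
        want_fast = load > (LOW if on_fast else HIGH)
        if want_fast != on_fast and dwell >= min_dwell:
            on_fast, dwell, switches = want_fast, 0, switches + 1
    return switches

# A bursty load that straddles the thresholds -- the worst case for ping-ponging.
trace = [0.7, 0.3, 0.8, 0.2, 0.9, 0.3, 0.7, 0.2] * 10

for dwell in (0, 4):
    n = count_switches(trace, min_dwell=dwell)
    print(f"min dwell {dwell}: {n} switches, ~{n * SWITCH_COST_MS:.0f} ms of migration overhead")

If the load keeps straddling the threshold, the dwell time is what keeps the migration overhead from eating the power savings; if you really do run one steady application, almost any policy looks fine.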
Jeez, you didn't need to quote the whole thing. Kind of defeats my ignore list...
I suggest that you put anybody who dares quote me on the block list too.
When you walk outside, do you wear a bicycle helmet?
That's what caught my eye. Sounds like the low-performance core will be running most of the time and they'll only fire up the quad core for really CPU-intensive stuff. That could save energy, although possibly at the expense of performance when in 'low speed' mode.
I believe that the A5 and A6 have on-demand cores, which means they only need to run one core unless the workload demands more. That achieves much of the same thing without requiring a separate chip. Time will tell which approach is better.
Quad-core Tegra plus one companion (low-speed) core.
Yes, but history suggests that being skeptical is appropriate. They made all sorts of performance claims in the past that weren't true, so it makes sense to take their current claims with a grain of salt.
But IF Nvidia were able to offer a clear performance advantage, there would be nothing stopping Apple from switching.
Nvidia is not king in the embedded space next to ImgTec. They don't stack up.
May I use this mock dialogue on some other site?
Sure, go right ahead.
Hook me up and I'm there.
As I said, give me custom case and motherboard fabrication and I could design a strong competitor for that segment using stock components for everything else.
Yes, the offer stands. Find any manufacturer willing to risk an afternoon meeting, and we'll see what we can do.
It's heartwarming that you're defending the companies you normally vilify, and I guess I should be grateful that you're merely calling names here rather than threatening physical violence as you've been known to do, so kindly allow me to return the favor:
Could you please explain how a MacBook Air isn't possible without a CPU other than the one it uses?
Hint: the MBA already exists, and it pretty much rocks, stock CPU and all.
I never said that a new CPU was needed. I simply pointed out that your assertion that you know more than the rest of the computer industry is absurd.
And I would appreciate it if you would stop spreading blatant lies about me. I never threatened physical violence to anyone. Here or anywhere else.
Surely you mean "But is it faster for games than the PowerVR SGX543MP2"? That's what's pushing those polygons.
No, the A5 is an SoC and the GPU is part of it.
J.
" ... The company claims that, when matched up against Apple's custom A5 chip, the Tegra 3 is two times faster at video transcoding and photo stitching. ..."
But is it faster for games than the A5? How many polygons per second?
J.
Looked it up myself. It seems that the Tegra 3 is about 2.5 times slower in MFLOPS.
I suspect it's the same story for polygons per second.
J.
My contention was that it's self-evident that making a computer that competes with the MBA does not require a different CPU than the one the MBA uses. That's the claim by those companies, and we both agree it's silly.
You can continue arguing about that if you like, and no doubt you will, but it won't make any more sense no matter how much you type.
If you had said simply that it didn't require a new CPU, you would have been correct and no one would have disagreed with you. But you said you could do a better job than Dell, HP, Acer, Asus, Intel, Lenovo, and everyone else. That is, obviously, BS.
Technically you may be right, since it was more of a fantasy of enjoying physical violence than a direct threat when you wrote that you "enjoy slapping you Android shills around".
But your willingness to retract your previous attraction to violence is heartwarming. Apology accepted.
Are you that stupid in real life? Or are you simply so uneducated that you've never heard the expression 'slapping around' to apply to defeating someone's arguments?
Nvidia's only chance here is if the GPU portion (their own creation) is demonstrably better than the competition. Considering that Imagination is mopping up the competition at the embedded level with PowerVR, I kind of doubt Nvidia can really make a beachhead here.
Nvidia is simply using a multi-core A9. EVERYONE is going to have products with this same core, and larger-volume customers with in-house design teams, like Apple, will be able to tweak the cores to good effect.
As for the ARM A7 chip on the side: bah. ARM has already announced that the A15 will run in a big.LITTLE configuration with an A7 processor mated to an A15. Vendors won't even have to alter their software significantly.
I can't get all that excited about knowing a quad-core A9 is coming when the A15 is so much better, being a deeper-pipelined, out-of-order design.
I think the A6 = multi-core A9, and the A7 = an A15/A7 combo with Rogue graphics.
Fine. Put your money where your mouth is. The Ultrabook market is a multibillion dollar market. Go ahead and make one and release it to the market.
Or, why not offer your super-advanced expertise to one of the existing vendors for, say, $10,000,000?
Please stop making yourself look foolish. It really bugs me when the idiot trolls think that their ability to type a sentence on a forum like this instantly makes them an expert.
... kind of a "jack of no trades -- and master of all"
Better yet:
"el que todo sabe y de nada entiende"
and
"an ocean of knowledge, an inch deep"
Hooray! It's the megahertz war all over again!
Please. That race ended a decade ago. The new race is about core stuffing.
Please. That race ended a decade ago. The new race is about core stuffing.
Don't say stuffing this close to Thanksgiving.
As a current owner of the Asus Transformer, the Prime is high on my tech shopping list. I really enjoy using it, and I'm not sure why all the bad comments about Android; it might not be as polished as iOS, but I've started using it a lot more than my iPad. No need for iTunes to install a program, codec support is awesome, Flash, better multitasking, and all those cool custom ROMs make my Transformer a pretty neat little multimedia machine. Of course the iPad 3 will also be high on my shopping list. Gosh, I love tablets.
If you mean desktop iTunes, I rarely use it to install programs; I download everything over the air.
Codec support - I use Azul and it plays everything I need it to.
Flash - Dead
Multitasking - does what I need it to do.
I'm actually thankful for Android because it's delivering to all of us better and more competitive products regardless of platform.
I think the next battle is the ecosystem: Google's web-centrism versus Apple's iCloud versus Amazon's online store and data centers.
That's what caught my eye. Sounds like the low-performance core will be running most of the time and they'll only fire up the quad core for really CPU-intensive stuff. That could save energy, although possibly at the expense of performance when in 'low speed' mode.
From the demo (I know, I know, nVidia demos need to be taken with a hefty helping of salt), it seems pretty seamless:
http://www.youtube.com/watch?v=R1qKdBX4-jc
I believe that the A5 and A6 have on-demand cores, which means they only need to run one core unless the workload demands more. That achieves much of the same thing without requiring a separate chip. Time will tell which approach is better.
I think Kal-El has this as well, unless I'm reading the AnandTech article on it wrong:
The A9s in Tegra 3 can run at a higher max frequency than those in Tegra 2. With 1 core active, the max clock is 1.4GHz (up from 1.0GHz in the original Tegra 2 SoC). With more than one core active however the max clock is 1.3GHz. Each core can be power gated in Tegra 3, which wasn't the case in Tegra 2. This should allow for lightly threaded workloads to execute on Tegra 3 in the same power envelope as Tegra 2. It's only in those applications that fully utilize more than two cores that you'll see Tegra 3 drawing more power than its predecessor.
http://www.anandtech.com/show/5072/n...cture-revealed
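Reading that literally, the clock-cap rule is simple enough to write down. A minimal sketch, assuming only the figures in the quote; the 0.5 GHz companion-core ceiling is my guess, not something stated there.

def tegra3_max_clock_ghz(active_cores, on_companion=False):
    """Max CPU clock per the quoted description (companion cap is assumed)."""
    if on_companion:
        return 0.5                      # assumed companion-core ceiling, not quoted
    if active_cores < 1:
        raise ValueError("at least one main core must be active")
    # Idle main cores are power gated; the cap depends only on how many are active.
    return 1.4 if active_cores == 1 else 1.3

print(tegra3_max_clock_ghz(1))                      # 1.4 -- lightly threaded work
print(tegra3_max_clock_ghz(4))                      # 1.3 -- all four cores busy
print(tegra3_max_clock_ghz(1, on_companion=True))   # 0.5 -- assumed low-power mode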
Yes, but history suggests that being skeptical is appropriate. They made all sorts of performance claims in the past that weren't true, so it makes sense to take their current claims with a grain of salt.
nVidia has a reputation for over-promising and under-delivering, so skepticism and salt are warranted.
Hooray! It's the megahertz war all over again!
Get out of the 1990s.
This is 2011.
It's not a "megahertz" war but rather a "number of cores" war.
That's what caught my eye. Sounds like the low-performance core will be running most of the time and they'll only fire up the quad core for really CPU-intensive stuff. That could save energy, although possibly at the expense of performance when in 'low speed' mode.
I believe that the A5 and A6 have on-demand cores, which means they only need to run one core unless the workload demands more. That achieves much of the same thing without requiring a separate chip. Time will tell which approach is better.
Technical demo video from nVidia's YouTube channel:
http://www.youtube.com/watch?v=R1qKdBX4-jc
Oh, by the way, this is an nVidia PATENT PENDING technology.
That little bit of information seems relevant in this day and age. :P