Ah, since when? Pioneer showed new plasma displays, and Panasonic is very heavily invested in them, as are Fujitsu, LG and Samsung. Plasmas aren't going anywhere. Plasmas have certain strengths and weaknesses, and LCDs have others. Frankly, I'd prefer a plasma over an LCD; much nicer colour and blacks. It's just a shame about the sizes.
It is thought that plasma is on the way out except for very large displays.
This year we will see LCD displays equal in size to all but the larger plasmas. We will also see LCD prices coming down so that they will actually be below those of plasma. We saw 42" models become fairly common. 45-50" models are beginning to trickle out, and will be here in good numbers before spring. 42-50" are also the most popular plasma sizes.
Also, LCD displays are more easily made as HD in less expensive models, whereas plasma HD units will always remain at about twice the price of ED models. We saw tier 1 32" LCD displays for $999 this holiday season, and tier 2 models going for $800. I saw 42" models (HD) for under $2,500.
We are also going to see Canon and Toshiba's new display technology during the holiday season. Their micro-emitter technology has a higher display quality than either plasma or LCD, while being price competitive (though maybe not for the first models). We might also see a couple of other technologies later this year.
I do think that Apple should get into the consumer TV monitor business. If they make a good unit that also looks good, and is priced right, it would sell very well.
We can look to HP as a model for this. They are doing very well with their consumer products such as TV monitors and cameras. I have their 65" DLP 1080p model, and it is very good, one of the best units out there.
Apple can do well also. This is Apple's time. They should take advantage of it.
Remember our little comparison of the Mini running at 1.25GHz with that bus starved G4 vs. the 2.7 GHz G5 PM with the HT bus at 1.25 GHz?
Remember how the numbers scale simply with frequency?
The bus seems to contribute nothing at all here.
Maybe there is some difference for Intel vs. AMD. But there doesn't seem to be any advantage for our PPC line. Cache seems to make the contribution. That's why the dual-core 2.5 outperforms the dual-processor 2.7 at many tasks, even though we thought that it would get pounded because both CPUs go through the same bus.
Hmmm... there is a flaw in your reasoning. The processor performance could scale linearly because of the improved FSB. Look back at earlier G4s and you'll see that they do not scale linearly with frequency because they are being choked by the slower bus. Yes, cache helps alleviate that but for a broad class of applications whose working set exceeds the cache size, the bus is a critical factor.
Furthermore, bus and memory latency matter as well. It is possible to write software that is less sensitive to latency and takes full advantage of bandwidth, but most developers don't... I won't get into why that is here.
The 2.5 outperforms the 2.7 for many things because its cache is bigger, even though its bus isn't faster. That says nothing about the importance of the bus; it just points out that cache is good (duh!).
BTW: The HT bus in the G5 is largely irrelevant to processor performance because it is not the main system bus like it is in AMD machines. What matters is the G5's FSB, which is quite a good bus IMO (and really, who are you going to listen to?).
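The frequency-scaling argument above can be put in a toy Amdahl-style model (my own sketch; the cycle counts, miss counts, and latency numbers are made up for illustration, not measurements of any of these machines): time per task is a compute part that shrinks with clock speed, plus memory stalls that don't.

```python
def run_time(clock_ghz, compute_cycles, misses, miss_latency_ns):
    """Seconds per task: core cycles at the given clock, plus cache-miss
    stalls that the core clock cannot speed up."""
    return compute_cycles / (clock_ghz * 1e9) + misses * miss_latency_ns * 1e-9

# Working set fits in cache: few misses, so doubling the clock
# nearly doubles the speed.
in_cache_slow = run_time(1.25, 1e9, 1e4, 150)
in_cache_fast = run_time(2.50, 1e9, 1e4, 150)

# Working set blows out the cache: stalls dominate, and doubling
# the clock buys far less than 2x.
thrash_slow = run_time(1.25, 1e9, 1e7, 150)
thrash_fast = run_time(2.50, 1e9, 1e7, 150)

print(in_cache_slow / in_cache_fast)  # close to 2
print(thrash_slow / thrash_fast)      # ~1.2
```

The dual-core 2.5 vs. 2.7 result fits the same picture: a bigger cache moves more workloads from the second regime into the first, which says nothing either way about the bus.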
I've done lots of measurements over the years on my machines. More often than not I've found a pretty good scaling effect on CPU-intensive tasks. Even on my old 9500s, putting in a CPU that was twice as fast resulted in a doubling of processing speed for many classes of PS work, and of video editing rendering speeds. Sometimes even more than doubling on some files. This is replacing one 603 with another. I've found the same thing often since then as well. But not always.
Insofar as the G5 goes, there has been plenty of criticism of Apple's memory controller. Apparently, that reduces memory throughput, so it doesn't help.
I recently learned Intel chips are not designed to use HyperTransport at all.
And likely never will, since HT is an AMD technology. It's possible, but the odds are very low.
Quote:
The Ars article made it clear Intel's front-side bus is inferior. Intel will run into data bottlenecks in the near future, and has nothing better to offer than AMD's HT.
The Ars article is only relevant for enterprise server systems with main boards consisting of 4+ CPU sockets. In such systems, an on-die memory controller and a fast CPU interconnect are nice, while shared-memory systems such as Intel's Pentium systems will be memory starved. AMD will have an interconnect advantage for a while, but that doesn't necessarily mean that Intel can't respond in some fashion.
In regards to Apple and its lineup of laptops, desktops and cluster nodes, the article isn't really relevant. I.e., Apple isn't in the big-iron enterprise server market. A shared-memory system like Apple's G5 or Intel's is fine, where the FSB typically matches, or is higher than, the bandwidth of the memory system.
Quote:
Apple is a member of the HyperTransport Consortium, so they obviously find it valuable. It's these little inconsistencies that really make me feel we only have bits of the full story.
Forget HT. Apple isn't going to use it anymore.
Quote:
I've done lots of measurements over the years on my machines. More often than not I've found a pretty good scaling effect on CPU-intensive tasks. Even on my old 9500s, putting in a CPU that was twice as fast resulted in a doubling of processing speed for many classes of PS work, and of video editing rendering speeds. Sometimes even more than doubling on some files. This is replacing one 603 with another. I've found the same thing often since then as well. But not always.
Insofar as the G5 goes, there has been plenty of criticism of Apple's memory controller. Apparently, that reduces memory throughput, so it doesn't help.
You mean 604. As I said, it really depends on the software you are running -- both what it is doing and how it was written. The bus bandwidth on the G4 was usually a limiting factor, but latency was quite good. On the G5, bus speed is less of an issue than the memory & controller itself. The G5's bandwidth is quite good in most cases, but the latency is pretty terrible, and most software is latency bound. When the latency can be masked properly and the issue becomes throughput, then the machine flies. Unless you are doing very detailed profiling of the software, it is hard to distinguish the two and get an in-depth understanding of why it performs the way it does. Sadly, most of the things a typical Mac user does (and most of the software that does those things) is rather latency sensitive.
The move to Intel will help reduce the memory latencies, albeit not to the levels that AMD currently enjoys.
Yeah, 604. I've had so many machines...
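The latency-bound vs. bandwidth-friendly distinction above can be sketched with two access patterns (my own illustration; in Python the interpreter hides the hardware effect, so this only shows the shape of the two patterns, not their real-world speed gap):

```python
import random

N = 1 << 14

# Bandwidth-friendly pattern: sequential, independent accesses that
# hardware can prefetch and pipeline.
data = list(range(N))
streamed = sum(data)

# Latency-bound pattern: a random pointer chase. Each step's address
# depends on the previous load, so nothing can be overlapped and the
# full memory latency is paid on every iteration.
order = list(range(N))
random.shuffle(order)
next_idx = [0] * N
for i in range(N):
    next_idx[order[i]] = order[(i + 1) % N]  # one big random cycle

i, chased = order[0], 0
for _ in range(N):
    chased += i
    i = next_idx[i]

# Both walks touch every element exactly once, so the sums agree;
# on real hardware only the access pattern differs -- and so does the speed.
assert streamed == chased == N * (N - 1) // 2
```

Linked structures, hash tables, and object graphs look like the second pattern, which is one reason so much everyday software is latency sensitive.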
Plasma did dominate the flatscreen market. Pioneer and Panasonic have invested lots of money in plasma and essentially will ride out their investment.
Sony, Samsung, and LG have actually devoted billions to the manufacture of LCD. Sony is focused more on 3LCD and SXRD projection technology. These two technologies in particular combine the benefits of CRT and LCD without the drawbacks of plasma.
HyperTransport:
Quote:
The Ars article is only relevant for enterprise server systems with main boards consisting of 4+ CPU sockets.
Ars does say this:
Quote:
This is bad news for those of us who're pumped about Merom/Conroe, because, as any Apple fan who uses a G4 can tell you, you can have the baddest processor on the market, but if you're starving it by sticking it on an outdated FSB then a lot of potential performance is going to waste. Furthermore, this problem gets worse rapidly as you increase the number of cores per socket.
I can't say if this is true or not; much of it is outside my realm of knowledge.
Everyone who writes about HyperTransport has something different to say about its benefit or negligibility based on their own biases and allegiances. It seems those who claim its usefulness are AMD fans; those who say it's not important are Intel fans. Hannibal from Ars seems to be pretty agnostic, though.
All I can say is that with this transition Apple needs every performance and functionality advantage possible. If they don't use HyperTransport, hopefully that means it's not important.
He was talking about MORE than 4 processors. I don't think Apple will have to worry about that for a while.
A snippet from a post about AMD's current advantage with HyperTransport.
Quote:
In a situation where Merom and Athlon64 have very similar architectures, AMD's advantage will be significant. The on-die controller gives them better latency, and Intel's going to have trouble using DDR2-400, while AMD will get dual-channel DDR2 very soon and be able to use it effectively.
Since this thread is about Viiv technology, it seems as good a place as any to discuss it. If the rumor is true, Apple will use this in an Apple HDTV, in Kaleidoscope (the successor to the Mac mini), and possibly in new iBooks.
Viiv-powered HDTVs are a certainty, as Intel has stated. The question is, will Apple join in the fun and come out with its own version running OS X? If it doesn't, it'd be a huge mistake. Apple can't afford to lose its lead in the digital content delivery race. They've done quite well with music, but now they have to succeed with video, and the best way to view that is on a TV, not on a Mac. The video iPod is the portable solution.
With a Core Duo Viiv platform running OS X and Front Row, coupled with an HDTV panel, you'd have access to all the best consumer apps Apple has, right in your living room (front room in the UK). With a built-in iSight, as in the iMac, you could do 4-way videoconferencing with friends and family scattered across the country. Of course, music and video downloads. Perhaps a feature-length movie download service. Effortless H.264 decompression, of course. All the iLife apps at your command.
Control it all with an iPod-like remote with an LCD. Access photos, movies, and slideshows, and record TV shows with an easy-to-use DVR without a subscription. Add a wireless keyboard and mouse, or a trackpad built into the keyboard, and you have a complete home computer system plus HDTV.
I really want to see this happen with both a set-top box and an AIO HDTV. If Apple doesn't do it, others will.
Quote:
Sony, Samsung, and LG have actually devoted billions to the manufacture of LCD. Sony is focused more on 3LCD and SXRD projection technology. These two technologies in particular combine the benefits of CRT and LCD without the drawbacks of plasma.
Correct me if I am wrong, but there is nothing about either tech that has anything to do with CRT, one is simple LCD and the other is simple LCOS.
What the heck is 3LCD? From the 3LCD site, it only looks to be about having three LCD panels; what is so special about that? I thought that's how all LCD projection has been done, save for the really old ones that used a 10" or so computer display panel and projected it.
What is special about SXRD technology? Isn't it just another goofy rebrand of LCOS, or do they actually have a technological differentiation from other LCOS systems?
hypertransport for apple's laptop strategy will not be important for the first half of 2006. faster, cooler, widely available cpus for apple laptops is the key. and core duo is the answer. hopefully merom will address the bus issues to some degree.
remember that pentium M vs amd turion64 shootouts generally show intel in the lead. so while amd is so bloody obviously the better choice on the desktop vs. intel, when we are talking about the mobile space, for first half 2006, intel owns. and a g4 alternative is well, really, nowhere to be seen, no?
That Ars article is a little exaggerated. Conroe will have an 8.5-10.6 GB/s FSB. At that time, Athlons will have 10.6-12.8 GB/s of memory bandwidth. The difference isn't that large.
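Those figures fall out of simple peak-bandwidth arithmetic: transfers per second times 8 bytes for a 64-bit bus, times two channels for dual-channel memory. The MT/s ratings below are my assumption about where the thread's numbers come from:

```python
def peak_gb_s(mt_per_s, channels=1):
    """Peak bandwidth in GB/s of a 64-bit (8-byte-wide) bus."""
    return mt_per_s * 8 * channels / 1000

fsb_low = peak_gb_s(1066)              # ~8.5 GB/s (quad-pumped 266 MHz FSB)
fsb_high = peak_gb_s(1333)             # ~10.7 GB/s, usually quoted as 10.6
mem_low = peak_gb_s(667, channels=2)   # ~10.7 GB/s (dual-channel DDR2-667)
mem_high = peak_gb_s(800, channels=2)  # 12.8 GB/s (dual-channel DDR2-800)
```

Note these are theoretical peaks; sustained throughput and latency are where the real differences show up.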
HDTVs are already expensive; I can't imagine cramming a $1,000 computer into one.
Who said anything about a $1000 computer? What does a Mac mini cost? How much would it cost with a Viiv chipset?
How can anyone say they can't imagine what Intel has stated will happen? They said quite clearly there will be Viiv/HDTV AIO sets. That's a given. What we don't know is whether Apple will be selling them. I mean, all Apple has to do is put its OS on it, and it'd be cool if it could choose the type of flat panel and determine the design of the whole package. Add an LCD remote and Front Row and you've got one heckuva home entertainment system and digital hub. Apple definitely will make the set-top box version.
BTW, Intel says the dual core CPU in its Viiv platform is a 64-bit chip. Must be a desktop version of the Yonah. The portable version is a 32-bit chip. Seems like the Viiv CPU would have much of the capability of a dual core G5 but since it uses a 65nm process, it'll run much cooler.
Quote:
Who said anything about a $1000 computer? What does a Mac mini cost? How much would it cost with a Viiv chipset?
That's kind of irrelevant, since a Mac can't be Viiv by definition. Viiv PCs will be expensive, since they have to include a dual-core CPU, Windows MCE, a TV tuner, and a remote. So I think all Viiv PCs will be over $1000.
Integrating a PC into a TV is bad for other reasons; the PC part will be obsolete in 2-3 years, but the TV part won't be. It's like the iMac, but much more expensive.
Quote:
BTW, Intel says the dual core CPU in its Viiv platform is a 64-bit chip. Must be a desktop version of the Yonah. The portable version is a 32-bit chip. Seems like the Viiv CPU would have much of the capability of a dual core G5 but since it uses a 65nm process, it'll run much cooler.
Viiv can use either the Pentium D (64-bit) or Yonah (32-bit). There is no 64-bit desktop Yonah -- the closest thing is Conroe, which is 6-9 months away.
Quote:
What the heck is 3LCD? From the 3LCD site, it only looks to be about having three LCD panels; what is so special about that? I thought that's how all LCD projection has been done, save for the really old ones that used a 10" or so computer display panel and projected it.
Not every projection television uses LCD. 3LCD is an organization that promotes LCD projection in general versus other display tech (plasma, DLP, etc.).
Are you saying that because LCD projection already exists, there is no way these companies can pool their resources and improve it?
Quote:
What is special about SXRD technology? Isn't it just another goofy rebrand of LCOS, or do they actually have a technological differentiation from other LCOS systems?
SXRD is a variation of LCOS. But considering no one else (including Intel) really got LCOS to work, Sony actually shipping an LCOS product is an achievement.
Yes, SXRD is superior. SXRD chips have smaller spaces between pixels, which eliminates the screen-door effect and allows more pixels to be packed on each chip. The liquid crystal layer is thinner, which allows more light to pass through, producing better contrast and faster response time. Sony uses vertically aligned crystals vs. the common twisted nematic crystals. Vertically aligned crystals naturally display black, while twisted nematic crystals naturally display white. Displaying strong blacks vastly improves contrast.
The result of all that technical crap is that SXRD is able to project up to 4096x3112, a resolution no other electronic projection technology has yet been able to achieve.
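For scale, that resolution works out to a lot of pixels (simple arithmetic on the numbers above; 1080p is the obvious comparison point):

```python
sxrd_pixels = 4096 * 3112     # 12,746,752 pixels (~12.7 megapixels)
hd_1080 = 1920 * 1080         # 2,073,600 pixels (~2.1 megapixels)
print(sxrd_pixels / hd_1080)  # ~6.1: over six times the pixels of a 1080p panel
```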
Quote:
Correct me if I am wrong, but there is nothing about either tech that has anything to do with CRT, one is simple LCD and the other is simple LCOS.
I did not say they had anything to do with CRT. Because they are projection technologies, they provide some of the benefits of CRT. Creating a whole picture from three pure light sources is an advantage that plasma and LCD TFT lack.
Quote:
Conroe will have an 8.5-10.6 GB/s FSB. At that time, Athlons will have 10.6-12.8 GB/s of memory bandwidth. The difference isn't that large.
From what I understand, HyperTransport allows the bus to double-pump (double data rate), which sends data on both the rising and falling edges of the clock signal.
So even at the same clock speed, a bus with HT can send more data than a bus without it.
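Double-pumping is just two transfers per clock cycle, so the effect on peak bandwidth is straightforward. The link width and clock below are example figures I chose for illustration, not the specs of any particular machine's HT link:

```python
def link_gb_s(clock_mhz, width_bits, double_pumped=True):
    """Per-direction peak bandwidth of a point-to-point link in GB/s."""
    # Double data rate: a transfer on both the rising and falling clock edge.
    transfers_per_sec = clock_mhz * 1e6 * (2 if double_pumped else 1)
    return transfers_per_sec * width_bits / 8 / 1e9

print(link_gb_s(800, 16))                       # 3.2 GB/s with DDR
print(link_gb_s(800, 16, double_pumped=False))  # 1.6 GB/s without
```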
Comments
Originally posted by Telomar
Ah since when? Pioneer showed new plasma displays, Panasonic is very heavily invested in them as is Fujitsu, LG and Samsung. Plasmas aren't going anywhere. Plasmas have certain strengths and weaknesses, LCDs have some. Frankly I'd prefer a plasma over an LCD though. Much nicer colour and blacks it's just a shame about the sizes.
It is thought that plasma is on the way out except for very large displays.
This year we will see LCD displays equal in size all but the larger plasmas. We will also see LCD displays coming down in price so that they will actually be below that of plasma. We saw 42" models become fairly common. 45-50" models are beginning to trickle out, and will be here in good numbers before spring. 42-50" are also the most popular plasma sizes.
Also, LCD displays are more easily made as HD in less expensive models, whereas plasma HD units will always remain at about twice the price as ED models. We saw tier 1 32" LCD displays for $999 this holiday season, and 2nd tier models going for $800. I saw 42" models for under $2,500 (HD).
We are also going to see Canon and Toshiba's new display technology during the holiday season. Their micro-emitter technology has a higher display quality than either plasma or LCD, while being price competitive (though maybe not for the first models). We might also see a couple of other technologies later this year.
I do think that Apple should get into the consumer Tv monitor business. If they make a good unit, that also looks good, and is priced right, it would sell very well.
We can look to Hp as a model for this. They are doing very well with their consumer products such as Tv monitors and cameras. I have their 65" DLP 1080p model, and it is very good, one of the best units out there.
Apple can do well also. This is Apple's time. They should take advantage of it.
Originally posted by melgross
Remember our little comparison of the Mini running at 1.25GHz with that bus starved G4 vs. the 2.7 GHz G5 PM with the HT bus at 1.25 GHz?
Remember how the numbers scale simply with frequency?
The bus seems to contribute nothing at all here.
Maybe there is some difference for Intel vs. AMD. But there doesn't seem to be any advantage for our PPC line. Cache seems to make the contribution. That's why the dualcore 2.5 outperforms the dual processor 2.7 at many tasks. Even though we thought that it would get pounded because both cpu's go through the same bus.
Hmmm... there is a flaw in your reasoning. The processor performance could scale linearly because of the improved FSB. Look back at earlier G4s and you'll see that they do not scale linearly with frequency because they are being choked by the slower bus. Yes, cache helps alleviate that but for a broad class of applications whose working set exceeds the cache size, the bus is a critical factor.
Furthermore, bus and memory latency matters as well. It is possible to write software that is less senstive to latency and takes full advantage of bandwidth, but most developers don't... I won't get into why that is here.
The 2.5 outperforms the 2.7 for many things because its cache is bigger and its bus isn't faster. That says nothing about the importance of the bus, it just points out that cache is good (duh!).
BTW: The HT bus in the G5 is largely irrelevant to processor performance because it is not the main system bus like it is in AMD machines. What matters is the G5's FSB, which is quite a good bus IMO (and really, who are you going to listen to?
Originally posted by Programmer
Hmmm... there is a flaw in your reasoning. The processor performance could scale linearly because of the improved FSB. Look back at earlier G4s and you'll see that they do not scale linearly with frequency because they are being choked by the slower bus. Yes, cache helps alleviate that but for a broad class of applications whose working set exceeds the cache size, the bus is a critical factor.
Furthermore, bus and memory latency matters as well. It is possible to write software that is less senstive to latency and takes full advantage of bandwidth, but most developers don't... I won't get into why that is here.
The 2.5 outperforms the 2.7 for many things because its cache is bigger and its bus isn't faster. That says nothing about the importance of the bus, it just points out that cache is good (duh!).
BTW: The HT bus in the G5 is largely irrelevant to processor performance because it is not the main system bus like it is in AMD machines. What matters is the G5's FSB, which is quite a good bus IMO (and really, who are you going to listen to?
I've done lots of measurements over the years on my machines. More often than not I've found a pretty good scaling effect on cpu intensive tasks. Even on my old 9500's, putting a cpu that was twice as fast into it resulted on a doubling of processing for many classes of PS work, and video editing rendering speeds. Sometimes even more than doulbing on some files. This is replacing one 603 with another. I've found the same thing often since then as well. But not always.
Insofar as the G5 goes, there has been planty of criticsm of Apple's memory controller. Apparently, that reduces memory throughput, so it doesn't help.
Originally posted by TenoBell
I recently learned Intel chips are not design to use HyperTransport at all.
And likely never will, since HT is an AMD technology. It's possible, but the odds are very low.
The Ars article made it clear Intel's front side bus is inferior. Intel will run into data bottle necks in the near future, and has nothing to offer better than AMD's HT.
The Ars article is only relevant for enterprise server systems with main boards consisting of 4+ CPU sockets. In such systems, an on-die memory controller and a fast CPU interconnect is nice while shared memory systems such as Intel's Pentium systems will be memory starved. AMD will have an interconnect advantage for awhile, but it doesn't necessarilly mean that Intel can't respond in some fashion.
In regards to Apple and its lineup of laptops, desktops and cluster nodes, the article isn't really relevant. Ie, Apple isn't in the big iron enterprise server market. A shared memory system like Apple's G5 or Intel's is fine, where the FSB typically matches, or is higher than, the memory bandwidth of the memory system.
Apple is a member of the HyperTransport Consortium so they obviously find it valuable. Its these little inconsistencies that really make me feel we only have bits of the full story.
Forget HT. Apple isn't going to use it anymore.
Originally posted by melgross
I've done lots of measurements over the years on my machines. More often than not I've found a pretty good scaling effect on cpu intensive tasks. Even on my old 9500's, putting a cpu that was twice as fast into it resulted on a doubling of processing for many classes of PS work, and video editing rendering speeds. Sometimes even more than doulbing on some files. This is replacing one 603 with another. I've found the same thing often since then as well. But not always.
Insofar as the G5 goes, there has been planty of criticsm of Apple's memory controller. Apparently, that reduces memory throughput, so it doesn't help.
You mean 604. As I said, it really depends on the software you are running -- both what it is doing and how it was written. The bus bandwidth on the G4 was usually a limiting factor but latency was quite good. On the G5 bus speed is less of an issue that the memory & controller itself. The G5's bandwidth is quite good in most cases, but the latency is pretty terrible and most software is latency bound. When the latency can be masked properly and the issue becomes throughput, then the machine flies. Unless you are doing very detailed profiling of the software it is hard to distinguish the two and get an in-depth understanding of why it performs the way it does. Sadly, most of the things a typical Mac user does (and most of the software that does those things) is rather latency senstive.
The move to Intel will help reduce the memory latencies, albeit not to the levels that AMD currently enjoys.
Originally posted by Programmer
You mean 604. As I said, it really depends on the software you are running -- both what it is doing and how it was written. The bus bandwidth on the G4 was usually a limiting factor but latency was quite good. On the G5 bus speed is less of an issue that the memory & controller itself. The G5's bandwidth is quite good in most cases, but the latency is pretty terrible and most software is latency bound. When the latency can be masked properly and the issue becomes throughput, then the machine flies. Unless you are doing very detailed profiling of the software it is hard to distinguish the two and get an in-depth understanding of why it performs the way it does. Sadly, most of the things a typical Mac user does (and most of the software that does those things) is rather latency senstive.
The move to Intel will help reduce the memory latencies, albeit not to the levels that AMD currently enjoys.
Yeah, 604. I've had so many machines...
Sony, Samsung, and LG actually have devoted billions into the manufacture of LCD. Sony is focused more on 3LCD and SXRD projection technology. These two technologies in particular combine the benefit of CRT and LCD without the drawbacks of plasma.
HyperTransport:
The Ars article is only relevant for enterprise server systems with main boards consisting of 4+ CPU sockets. The Ars article is only relevant for enterprise server systems with main boards consisting of 4+ CPU sockets.
Ars does say this:
This is bad news for those of us who're pumped about Merom/Conroe, because?as any Apple fan who uses a G4 can tell you?you can have the baddest processor on the market, but if you're starving it by sticking it on an outdated FSB then a lot of potential performance is going to waste. Furthermore, this problem gets worse rapidly as you increase the number of cores per socket.
I can't say if this is true or not much of it is outside my realm of knowledge.
Everyone who writes about HyperTransport has something different to say about its benefit or negligibility based on their own biases and allegiances. It seems those who claim its usefulness are AMD fans those who say its not important are Intel fans. Hannibal from Ars seems to be pretty agnostic though.
All I can say is with this transition Apple needs every performance and functionality advantage possible. If they don't use HyperTransport hopefully that means its not important.
Originally posted by TenoBell
Ars does say this:
quote:
This is bad news for those of us who're pumped about Merom/Conroe, because?as any Apple fan who uses a G4 can tell you?you can have the baddest processor on the market, but if you're starving it by sticking it on an outdated FSB then a lot of potential performance is going to waste. Furthermore, this problem gets worse rapidly as you increase the number of cores per socket.
I can't say if this is true or not much of it is outside my realm of knowledge.
Everyone who writes about HyperTransport has something different to say about its benefit or negligibility based on their own biases and allegiances. It seems those who claim its usefulness are AMD fans those who say its not important are Intel fans. Hannibal from Ars seems to be pretty agnostic though.
All I can say is with this transition Apple needs every performance and functionality advantage possible. If they don't use HyperTransport hopefully that means its not important. [/B]
He was talking about MORE than 4 processors. I don't think Apple will have to worry about that for awhile.
In a situation where Merom and Athlon64 have very similar architechtures, AMD's advantage will be significant. The on-die controller gives them better latency, and Intel's going to be having trouble using DDR2-400 while AMD will get dual-DDR2 very soon and be able to use it effectively.
Viiv-powered HDTVs are a certainty, as Intel has stated. The question is, will Apple join in the fun and come out with its own version running OS X. If it doesn't, it'd be a huge mistake. Apple can't afford to lose its lead in the digital content delivery race. They've done quite will with music but now they have to succeed with video and the best way to view that is on a TV, not on a Mac. The video iPod is the portable solution.
With a Core Duo Viiv platform running OS X and Front Row coupled with a HDTV panel, you'd have access to all the best consumer apps Apple has right in your living room (front room in the UK). With built-in iSight, as in the iMac, you could do 4-way videoconferencing with friends and family scattered across the country. Of course, music and video downloads. Perhaps a feature length movie download service. Effortless H.264 decompression, of course. All the iLife apps at your command.
Control it all with an iPod-like remote with an LCD. Access photos, movies, and slideshows, and record TV shows with an easy-to-use DVR without a subscription. Add a wireless keyboard and mouse, or a trackpad built into the keyboard, and you have a complete home computer system plus HDTV.
I really want to see this happen with both a set top box and an AIO HDTV. If Apple doesn't do it, others will.
Originally posted by TenoBell
Sony, Samsung, and LG have actually invested billions in the manufacture of LCD. Sony is focused more on 3LCD and SXRD projection technology. These two technologies in particular combine the benefits of CRT and LCD without the drawbacks of plasma.
Correct me if I am wrong, but there is nothing about either tech that has anything to do with CRT, one is simple LCD and the other is simple LCOS.
What the heck is 3LCD? From the 3LCD site, it only looks to be about having three LCD panels; what is so special about that? I thought that's how all LCD projection has been done, save for the really old ones that used a 10" or so computer display panel and projected it.
What is special about SXRD technology? Isn't it just another goofy rebrand of LCOS, or do they actually have a technological differentiation from other LCOS systems?
Remember that Pentium M vs. AMD Turion 64 shootouts generally show Intel in the lead. So while AMD is so bloody obviously the better choice on the desktop vs. Intel, when we are talking about the mobile space, for the first half of 2006, Intel owns. And a G4 alternative is, well, really, nowhere to be seen, no?
HDTVs are already expensive; I can't imagine cramming a $1,000 computer into one.
Originally posted by wmf
HDTVs are already expensive; I can't imagine cramming a $1,000 computer into one.
Who said anything about a $1000 computer? What does a Mac mini cost? How much would it cost with a Viiv chipset?
How can anyone say they can't imagine what Intel has stated will happen? They said quite clearly there will be Viiv/HDTV AIO sets. That's a given. What we don't know is whether Apple will be selling them. I mean, all Apple has to do is put its OS on it and it'd be cool if it could choose the type of flat panel and determine the design of the whole package. Add an LCD remote and Front Row and you've got one heckuva home entertainment system and digital hub. Apple definitely will make the set-top box version.
New Mergers of TV and Computer Technology Explored
BTW, Intel says the dual core CPU in its Viiv platform is a 64-bit chip. Must be a desktop version of the Yonah. The portable version is a 32-bit chip. Seems like the Viiv CPU would have much of the capability of a dual core G5 but since it uses a 65nm process, it'll run much cooler.
Originally posted by Rolo
Who said anything about a $1000 computer? What does a Mac mini cost? How much would it cost with a Viiv chipset?
That's kind of irrelevant, since a Mac can't be Viiv by definition. Viiv PCs will be expensive since they have to include a dual-core CPU, Windows MCE, a TV tuner, and a remote. So I think all Viiv PCs will be over $1000.
Integrating a PC into a TV is bad for other reasons; the PC part will be obsolete in 2-3 years but the TV part won't be. It's like the iMac, but much more expensive.
BTW, Intel says the dual core CPU in its Viiv platform is a 64-bit chip. Must be a desktop version of the Yonah. The portable version is a 32-bit chip. Seems like the Viiv CPU would have much of the capability of a dual core G5 but since it uses a 65nm process, it'll run much cooler.
VIIV can use either Pentium D (64-bit) or Yonah (32-bit). There is no 64-bit desktop Yonah -- the closest thing is Conroe which is 6-9 months away.
Originally posted by wmf
VIIV can use either Pentium D (64-bit) or Yonah (32-bit). There is no 64-bit desktop Yonah -- the closest thing is Conroe which is 6-9 months away.
You mean Merom.
Originally posted by Rolo
Who said anything about a $1000 computer? What does a Mac mini cost? How much would it cost with a Viiv chipset?
Viiv computers start at $900 so far, according to Intel.
What the heck is 3LCD? From the 3LCD site, it only looks to be about having three LCD panels, what is so special about that? I thought that's how all LCD projection has been done save for the really old ones that used a 10" or so computer display panel and projected it.
Not every projection television uses LCD. 3LCD is an organization that promotes LCD projection in general versus other display tech (Plasma, DLP, etc).
Are you saying because LCD projection already exists there is no way these companies can pool their resources and improve it?
What is special about SXRD technology? Isn't it just another goofy rebrand of LCOS, or do they actually have a technological differentiation from other LCOS systems?
SXRD is a variation of LCOS. But considering no one else (including Intel) really got LCOS to work, Sony actually shipping an LCOS product is an achievement.
Yes, SXRD is superior. SXRD chips have smaller spaces between pixels, which eliminates the screen-door effect and allows more pixels to be packed on each chip. The liquid crystal layer is thinner, which allows more light to pass through, producing better contrast and faster response times. Sony uses vertically aligned crystals vs. the common twisted nematic crystals. Vertically aligned crystals naturally display black, while twisted nematic crystals naturally display white. Displaying strong blacks vastly improves contrast.
The result of all that technical crap is that SXRD is able to project up to 4096x3112, a resolution no other electronic projection technology has yet been able to achieve.
Correct me if I am wrong, but there is nothing about either tech that has anything to do with CRT, one is simple LCD and the other is simple LCOS.
I did not say they had anything to do with CRT. Because they are projection technologies, they provide some of the benefits of CRT. Creating a whole picture from three pure light sources is an advantage that plasma and TFT LCD lack.
Conroe will have an 8.5-10.6 GB/s FSB. At that time, Athlons will have 10.6-12.8 GB/s of memory bandwidth. The difference isn't that large.
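Those GB/s figures fall out of a simple width-times-rate calculation. The bus widths and transfer rates below are my assumptions about the parts being discussed (a 64-bit FSB and dual-channel DDR2), given only to show where numbers in that range come from:

```python
# Peak bandwidth = (bus width in bytes) x (transfers per second).
# Widths and rates below are assumptions for illustration.

def peak_gbs(width_bits, mega_transfers):
    """Peak bandwidth in GB/s for a bus of width_bits at mega_transfers MT/s."""
    return width_bits / 8 * mega_transfers * 1e6 / 1e9

# A 64-bit FSB:
print(round(peak_gbs(64, 1066), 1))   # 1066 MT/s -> ~8.5 GB/s
print(round(peak_gbs(64, 1333), 1))   # 1333 MT/s -> ~10.7 GB/s

# Dual-channel DDR2 (two 64-bit channels, 128 bits total):
print(round(peak_gbs(128, 667), 1))   # DDR2-667 -> ~10.7 GB/s
print(round(peak_gbs(128, 800), 1))   # DDR2-800 -> ~12.8 GB/s
```

So the two ranges quoted above do sit close together at the top end; the bigger AMD advantage is latency, not raw bandwidth.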
From what I understand, HyperTransport allows the bus to be double-pumped (double data rate), sending data on both the rising and falling edges of the clock signal.
So even at the same clock speed, a bus with HT can send more data than a bus without it.
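The double-pumping idea can be sketched numerically: a double-data-rate link moves one chunk of data on each clock edge, so its effective transfer rate is twice the clock frequency. The clock figure below is purely illustrative:

```python
# Double data rate: data moves on both the rising and falling clock edges,
# so transfers per second = clock frequency x edges used per cycle.
# The 200 MHz clock here is illustrative, not a spec for any real link.

def effective_transfers(clock_hz, edges_per_cycle):
    """Transfers per second for a link clocked at clock_hz."""
    return clock_hz * edges_per_cycle

single_pumped = effective_transfers(200e6, 1)  # one edge  -> 200 MT/s
double_pumped = effective_transfers(200e6, 2)  # both edges -> 400 MT/s

print(int(single_pumped / 1e6), "MT/s vs", int(double_pumped / 1e6), "MT/s")
```

Same clock, twice the transfers, which is why quoting only the clock frequency understates what such a bus can move.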
Mostly, to me it sounds like a marketing thing more than a distinctive product.
Intel seems to take elements that already exist, combine them together, and call it Viiv.
I don't see this as something big Apple has to jump onto.