Originally posted by aegisdesign
That'll depend on the applications. If Adobe, Macromedia and Apple don't release Intel-native versions of the software I use, then there's no speed boost.

Well, you've got a point there, but since I'm not into using those programs, it's not a big deal for me. Hopefully these companies will port over far more optimized code than what we generally see on Macs today. Frankly, I'm hoping that we see far more code being ported to Macs once the Intel hardware comes out.
Maybe that is reaching too far, as there seems to be a big aversion to the platform currently with respect to the CAD and EDA vendors. But one can only hope that the dismissal of PPC will remove one of the obstructions to more specialized applications being ported.
That is what one side of me hopes for; the other seems to be saying fat chance of that happening.
Dave
Originally posted by wizard69
Maybe that is reaching too far, as there seems to be a big aversion to the platform currently with respect to the CAD and EDA vendors. But one can only hope that the dismissal of PPC will remove one of the obstructions to more specialized applications being ported.
Nobody does CAD on a laptop so that's not the issue. OpenGL is the issue.
Originally posted by wizard69
I disagree with the bit about 64-bit, as I see that as a very important part of the equation. That address space is very important for the future.
As to legacy support, that is an issue. With modern process sizes, though, it's not a big deal.
Nothing to do with process sizes or address space. The IA32 architecture is creaky and needs replacing. The AMD 64-bit architecture brought with it more registers and instructions, and it runs completely separately from the old IA32. On AMD chips at least, it's faster than 32-bit, but more importantly, mixing 32-bit and 64-bit applications is significantly harder to do than with PowerPC, which has always been a 64-bit architecture with direct support for 32-bit applications. There's no thunking to be done to get old apps working.
Apple going back to 32-bit is a retrograde step in design, architecture and performance for short-term commercial goals. That's why I say it's a road apple. It won't look like it now, or even for the next year perhaps, but eventually it will be. A Yonah book will be the toilet seat iBook.
Originally posted by melgross
I'm not so sure that Yonah isn't a necessary step for Apple. It isn't the end of a line, but the beginning. Yonah is based on the architecture that Merom will be using. It's not part of the NetBurst family, which is going away. The "M" architecture is what is replacing it.
No doubt Merom will be a better chip, as well as being 64-bit. Do we need 64-bit laptops? But for now, a Yonah machine should be important. What matters is how Apple implements it.
None of the machines that are contemplated to come out in January are 64-bit machines now. Therefore, Apple won't be going backwards if these machines use IA-32 right away.
Which machines will be out (if any)? iBook, PB, Mini? All of them?
Will the iBook and Mini get a single core chip? If so, at what speed?
Will the PM get a dual core chip? Again, at what speed?
What other innovations will we see? Much better graphics? Less weight? Longer run time? Lower prices? The low end iBook is selling for $799 right now on Amazon!
Any of these improvements would be worthwhile to a group of people. Combinations of them will be worthwhile to large groups of people.
Merom will most likely be high end when it ships. I think Yonah is essential to any viable laptop lineup, whether it be from Apple or an x86 Wintel vendor.
I'm looking forward to seeing a bevy of dual-core laptops by summer 2006.
Yonah and Sossaman are based on Dothan and the old Pentium M architecture, i.e. the P6 core.
Merom/Conroe/Woodcrest are the new architecture - P8.
Pentium 4 was the P7 core - Netburst.
http://www.intel.com/technology/computing/ngma/
Merom isn't the same architecture as Yonah.
I hope this isn't in response to my post. Nowhere did I state that Merom and Yonah are the same chip. I'm well aware of the differences as divulged thus far.
Though thank you for clearing that up for people who may not understand the differences.
Originally posted by hmurchison
I hope this isn't in response to my post. Nowhere did I state that Merom and Yonah are the same chip. I'm well aware of the differences as divulged thus far.
Sorry, no, that was to melgross but I didn't want to requote his double post.
And I still think people are missing the obvious low-end chip for the iBook and Mini: the Celeron M.
Single core, 1MB cache, cheap. Essentially today's Dothan. It's released the same week as the Yonah.
Originally posted by aegisdesign
And I still think people are missing the obvious low end chip for the iBook and Mini - the Celeron M.
Single core, 1MB cache, cheap. Essentially today's Dothan. It's released the same week as the Yonah.
Apple has never been in the ultra-low-end business. I don't think they'll start in January unless the computers come bundled with a price nobody can refuse.
Nice guess though.
Originally posted by aegisdesign
And I still think people are missing the obvious low end chip for the iBook and Mini - the Celeron M.
Single core, 1MB cache, cheap. Essentially today's Dothan. It's released the same week as the Yonah.
I agree. Hell, think about how many computers Apple could sell if they simply expand their line.
Dreams of Mac minis at $349 dance in my head. Apple really needs to beef up its software sales. Shipping lots of boxes is one way.
Originally posted by kim kap sol
Apple has never been in the ultra-low-end business. I don't think they'll start in January unless the computers come bundled with a price nobody can refuse.
Nice guess though.
I really don't think a 1.7GHz Pentium-M with 1MB cache *is* ultra-low-end, but apart from that, you can buy Celeron/Pentium-M laptops for about half the price of the iBook these days. They've cut huge corners to get the GHz number everyone looks at, of course, but I think Apple will have to come down market further.
Originally posted by aegisdesign
...but I think Apple will have to come down market further.
I think people have been hoping for this for 10+ years... it hasn't happened yet. Although, if there was a first for anything, January 2006 would have to be the date.
The magic eightball says 'Not likely' though.
Originally posted by melgross
But Apple has a better record than most. Even laptops that ran hot did so without problems. Heat just can't be avoided at times. But as long as it's understood, it can be directed where it won't harm the computer.
Any engineers in the making who want to work heat transfer problems, allow me to show you the stupidest statement you will ever see. This is not how you go into a heat management problem unless you want to melt something down (that reminds me of a funny blast furnace story, but anyway...). You do not just direct heat within a system to where it won't harm the computer, because if you don't have decent heat flow out, it will inevitably overheat.
Basic heat balance: whatever heat the computer produces must be removed once everything is running. Work out your temperature tolerances, calculate your heat production and heat transfer potentials for a variety of ambient conditions and major areas, and work out how you're going to remove it. If you solve that problem, you rarely need to worry about the rest, as you've generally solved it in the process.
Now, you can use solutions to move the heat from high-density places to areas where it is easier to remove, but you sure as hell can never move it to an unused corner and ignore it. Thinking like that is where some monumental screw-ups come from.
Apple's challenge is simply that they aim for very tight tolerances in some cases where other manufacturers don't. Design work is never simple, though, and anybody who has ever been around any QA knows the time it takes and how difficult it is when done right.
Originally posted by Telomar
Any engineers in the making who want to work heat transfer problems, allow me to show you the stupidest statement you will ever see. This is not how you go into a heat management problem unless you want to melt something down (that reminds me of a funny blast furnace story, but anyway...). You do not just direct heat within a system to where it won't harm the computer, because if you don't have decent heat flow out, it will inevitably overheat.
Basic heat balance: whatever heat the computer produces must be removed once everything is running. Work out your temperature tolerances, calculate your heat production and heat transfer potentials for a variety of ambient conditions and major areas, and work out how you're going to remove it. If you solve that problem, you rarely need to worry about the rest, as you've generally solved it in the process.
Now, you can use solutions to move the heat from high-density places to areas where it is easier to remove, but you sure as hell can never move it to an unused corner and ignore it. Thinking like that is where some monumental screw-ups come from.
Apple's challenge is simply that they aim for very tight tolerances in some cases where other manufacturers don't. Design work is never simple, though, and anybody who has ever been around any QA knows the time it takes and how difficult it is when done right.
Boy, that was a cruddy response.
Directing it so that it won't harm the computer means to direct it out. This should be obvious to anyone. But you're looking for a fight. Unnecessary.
Heat can't be avoided. It can be directed. Apple does this all of the time. Older laptops directed the heat downward and back, out of the unit. That's why they were hot on the bottom. Heat can't be directed upwards, or straight out the back because of the connectors.
All modern laptops produce a great deal of heat. Removing it is difficult, but doable. The engineering is well understood.
Originally posted by aegisdesign
Merom isn't the same architecture as Yonah.
Yonah and Sossaman are based on Dothan and the old Pentium M architecture, i.e. the P6 core.
Merom/Conroe/Woodcrest are the new architecture. - P8.
Pentium 4 was the P7 core - Netburst.
http://www.intel.com/technology/computing/ngma/
What you are saying isn't entirely true.
The article itself says:
"Planned for introduction in the second half of 2006, the new microarchitecture will combine key strengths from Intel's current Intel® NetBurst® and Intel® Pentium® M microarchitectures, as well as new innovations to optimize power, performance, and scalability."
It's basically what I was saying. The Merom is an outgrowth of the "M" architecture, which Yonah also uses. They will use some features from Netburst as well, such as hyperthreading. But that is basically being put to rest.
I didn't say these were the same chips, only faster. But they are based upon the same architecture. They have improvements and changes as well, of course. A different way of doing cache, for example.
Originally posted by melgross
What you are saying isn't entirely true.
Is so. ;-)
Intel doesn't call Merom the 'Next Generation Micro Architecture' for nothing. It's supposed to be a ground-up new design and the basis of all their new chips. It's not the P6 core with the P7's bus.
Sure, it builds on the 'strengths' of P6 and P7 by using the P6's low power and short pipeline and the P7's NetBurst bandwidth and fast FSB, but they've added quite a lot, such as shared L2 caches that dynamically size themselves to what's needed (to save power and increase performance), and reading data before the currently processed data is written.
It's also 64-bit, unlike P6.
Merom won't have hyperthreading.
Originally posted by aegisdesign
Intel doesn't call Merom the 'Next Generation Micro Architecture' for nothing. [..] Merom won't have hyperthreading.
http://en.wikipedia.org/wiki/Merom
The Intel Next Generation Microarchitecture is Intel's new processor architecture, to be released in the second half of 2006. It will replace the current NetBurst microarchitecture.
The architecture features low power usage, multiple cores, virtualization, 64-bit technologies, and hyperthreading.
The mobile computing version will be known as Merom, while the desktop version will be Conroe and the server version Woodcrest.
Wikipedia disagrees with you.
Originally posted by Chucker
Wikipedia disagrees with you.
And it also disagrees with David Perlmutter, VP and General Manager of Intel's Mobility Group, who said it wouldn't have HT.