Yonah hot n hungry?


Comments

  • Reply 21 of 55
    smalm Posts: 677
    Quote:

    Originally posted by THT

    By 2H 2006, Intel will have Merom out.



    I hate Merom - it makes me think about buying a notebook instead of a desktop
  • Reply 22 of 55
    akac Posts: 512
    Quote:

    Originally posted by hmurchison

    A Pentium M is at best equal to a G4, without taking the FSB into account. I doubt that a 1.5 GHz single-core Yonah is going to be much of an attraction to people who need the advantages of a PowerBook.



    I own two Pentium M 2.1 GHz laptops, a dual 2 GHz G5, a 2 GHz iMac, and two 1.25 GHz G4 PowerBooks. I can tell you right now, as a matter of fact, that a 1.5 GHz Pentium M IS faster than the 1.67 GHz G4 that's in the current laptops (my friend has one). It's not just the software; OS X runs very, very nicely on that machine. But running similar software, and even dissimilar software (Safari vs. PC Firefox), the Intel procs run faster. Thinking anything less is just putting your head in the sand.



    I love the G5 architecture. I hate that IBM has piddled around with it.
  • Reply 23 of 55
    chucker Posts: 5,089
    Quote:

    Originally posted by Elixir

    ok, i love all the tech talk but i need to be sold on a potential laptop buy.



    what should i wait for?





    my activities usually run as so





    powerpoint/keynote/word/excel

    heavy internet browsing

    Reason or any equivalent musical

    program

    and an occasional photoshop



    A current PowerBook will do a great job handling all those.
  • Reply 24 of 55
    Quote:

    Originally posted by Akac

    I love the G5 architecture. I hate that IBM has piddled around with it.



    Oh please.
  • Reply 25 of 55
    tht Posts: 5,913
    Quote:

    Originally posted by Splinemodel

    I don't really want to get started with this, but from a theoretical/academic standpoint, Itanium[2] is very elegant and practical. It's just that we've seen that it takes a while for theory to trickle down to the commodity stage, and that's what's happening with Itanium. But it will happen, and the game industry looks like it will be leading the charge.



    The game console software companies are being forced to design multithreaded software to take advantage of simple but multi-core CPUs. They are doing thread-level parallelism.



    Itanium extracts parallelism out of one thread, one single instruction stream. It is doing instruction-level parallelism.



    The two don't really meet each other. Intel bet that they could design a compiler to extract lots and lots of parallelism from an instruction stream through a three-pronged attack: a new instruction set architecture with provisions to help a compiler extract parallelism, a compiler that can extract that parallelism, and programmers who design software so the compiler can do it. It's almost like superscalar and out-of-order execution taken to the extreme.



    Suffice it to say, they weren't that successful. Itanium 2's big SPECfp2000 score (2500+) is really due to its two FMADD-capable FPUs plus a whole lot of cache, I think. Maybe in another few years they can get the compiler to a point where real gains in ILP can be made.



    As for the PS3 and Xbox 360 programmers, they are in the process of designing multithreaded games for 2006+. We'll see how good their TLP kung fu is.



    Quote:

    One thing is for certain though: stick a fork in superscalar CPUs . . . they're done. If it weren't for a little thing called "installed base" they'd be gone already.



    Not sure what you're talking about, but superscalar ain't it. Every future CPU design I've seen includes superscalar features. Heck, I'm not exactly sure, but I wouldn't be surprised if future PDA CPUs use superscalar designs too.
  • Reply 26 of 55
    mate, give me an iBook with a Pentium M 745 (1.8 GHz, 2 MB L2 cache, 400 MHz FSB, DDR2 PC2-4200 RAM), and that will do just nice. i predict most macintel universal binaries will perform about equivalent to a powerbook on this machine. but here's the catch: iApps will be nice and fluid, while adobe/macromedia/ms office will be running via rosetta, so their performance will come in just under the equivalent powerbook ppc apps. it will work out fine and i am really hoping for a mobility radeon x600 64mb
  • Reply 27 of 55
    Quote:

    the Intel procs run faster. Thinking anything less is just putting your head in the sand.



    As THT has pointed out, Intel procs have better integer performance, hence the perception of speed, but what you are seeing with the Pentium M is also the effect of a good memory throughput system. Sadly, even the G5 has a markedly less efficient memory subsystem than the Pentium 4, and the G4 is even worse in this regard. However, there's nothing magical about the Pentium M processor. The G4+ has 7 pipeline stages compared to the 13 or so of the Pentium M, so it's harder to clock as high. I believe the G4 has 4 execution units compared to the Pentium M's 3, so the Pentium M must clock a tad higher to keep pace.



    If you could strap a 7448 G4 proc at 1.8 GHz with a 450 MHz FSB and 1 MB of cache into a laptop, I'd take that over a 2 GHz single-core Pentium M.



    The G4 is a nice core that simply wasn't pushed forward enough. I don't want to overhype the Pentium M to people. A 1.5 GHz single-core Pentium M isn't going to blow your mind. It'll be serviceable, but the fun really doesn't start until you go dual-core.
  • Reply 28 of 55
    Originally posted by hmurchison

    ....If you could strap a 7448 G4 proc at 1.8 GHz with a 450 MHz FSB and 1 MB of cache into a laptop I'd take that over a 2 GHz single-core Pentium M.....




    totally. a 1.8 GHz g4, 450 MHz fsb and 1 MB onboard cache with altivec/"ppc legacy" applications would be beautiful. unfortunately, this will *never* be a reality for the iBook
  • Reply 29 of 55
    chucker Posts: 5,089
    Quote:

    Originally posted by hmurchison

    If you could strap a 7448 G4 proc at 1.8 GHz with a 450 MHz FSB and 1 MB of cache into a laptop I'd take that over a 2 GHz single-core Pentium M.



    They don't say whether it's a 7448, but it clocks at 1.92 GHz. If it's a 7448, that means 1 MB cache and a 200 MHz FSB. Not quite 450... but it's a start.
  • Reply 30 of 55
    Quote:

    Originally posted by Powerdoc

    TDP is not power consumption. It's a specification delivered to OEM engineers so they can build the right cooling system.



    That said, Merom will be way more impressive in those areas.




    Bah, guess that settles it. I'm waiting for rev B.
  • Reply 31 of 55
    I'm totally sceptical of all these claims, because I remember how phenomenal the G5 was supposed to be. Claims of a clock-for-clock 2x-4x improvement were quoted: a 1.4 GHz G5 will be like a 3.2 GHz G4, low power, low heat, etc. All of which ended up being less amazing than hoped for.



    So until these things are real shipping products I'm inclined to believe we won't be blown away. Same goes for Merom.
  • Reply 32 of 55
    well, that's what the guinea pig early adopters are for. of which there won't be any scarcity.
  • Reply 33 of 55
    wmf Posts: 1,164
    http://pc.watch.impress.co.jp/docs/2.../kaigai225.htm

    Some pretty interesting graphs.
  • Reply 34 of 55
    Quote:

    Originally posted by Roadmap

    Oh please.



    Yes, yes, oh, yeeeees.
  • Reply 35 of 55
    Quote:

    Originally posted by wmf

    http://pc.watch.impress.co.jp/docs/2.../kaigai225.htm

    Some pretty interesting graphs.




    indeed. if these charts are accurate, then, imo, the L2400 proves The Register is out to lunch.



    1.67 GHz, dual-core, 667 MHz FSB, 2 MB L2 cache, all at 15 W.



    i'll take one! well, maybe rev 2.
  • Reply 36 of 55
    Should I bother pointing out that Apple's decision to move to Intel is based on long-term potential? Long term is not 6 months, or 1 year, or even 2 years. The bet is that Intel is going to do better in the 3-to-20-year time frame. Until 65 and 45 nm parts arrive there isn't much reason to think Intel can do enormously better than Moto's 90 nm part (the 7448, which Apple hasn't used). And IBM hasn't really tried for a laptop processor, and it doesn't look like they will, since nobody is paying them to do so.





    As for the comparison to Itanium: all processors these days have instruction-level parallelism; this is not strictly an Itanium thing. The main difference is that in the Itanium it is explicitly encoded in the instruction stream, whereas other architectures attempt to find it dynamically at runtime. The Itanium approach hasn't worked out as well as Intel/HP promised, partly because the compilers haven't measured up, and partly because a lot of the potential parallelism is truly dynamic and not available for compile-time optimization. We're starting to see a few processors with simpler cores, dropping some of the expensive dynamic ILP-finding in favor of higher clock rates and putting the burden back on the programmers. I expect this trend to continue, and in some sense it validates the Itanium approach... in another sense it kinda makes you wonder how the Itanium guys could have gotten such poor results after spending so much money trying it.
  • Reply 37 of 55
    programmer, much respects, but a dual-core 65 nm yonah may very well be in a shipping apple product by the middle of 2006, just over 7 months away. in that short space of time, a dual-core 65 nm yonah could be all apple really needs for all its consumer offerings.



    that said, yes, definitely the intel move is a decision that will have very long-term impacts. i guess, like the salivating dogs we are, we can't wait to reap some of the benefits, like, now. particularly if yonah truly is the next evolutionary step of the wildly proven and successful pentium M... with osx86, it means that single-core pentium Ms, with the cache and FSB to give them an edge over the 7448 *if it ever comes out*, and single/dual-core yonahs are all very viable options within the next 6 months.
  • Reply 38 of 55
    Quote:

    Originally posted by Akac

    I can tell you right now, as a matter of fact, that a 1.5 GHz Pentium M CPU IS faster than the 1.67 GHz G4 that's in the current laptops.



    I find it kind of amusing that no matter which Apple fan site I go to the word is 'PPC rocks! x86 sucks! Talk to the hand'



    My experience with my 1.42 GHz Mac Mini with 1 GB RAM is that the G4 seems sluggish.



    As we all know, the G4 can't run 720p H.264 video at full frame rate. My Mini runs it at about 50% or 66%. My brother's girlfriend's 1.7 GHz Dell laptop, which cost her slightly less than an entry 12" iBook, runs 720p H.264 video at full frame rate, no problem.



    A while ago I ran the Cinebench benchmark on two PCs and my Mini. The last test, the scene render, took the following:



    Mac Mini 1.42 GHz, 1 GB - 201 seconds.

    Pentium M 1.4 GHz, 256 MB - 141 seconds.

    Pentium 4 3 GHz, 2 GB - 80 seconds.



    At some point I'm hoping to do a Lightwave 8.5 render and Vue 5 render to see the differences there.



    I've also tried Warcraft III. It ran smoothly on a 1.1 GHz Athlon with 768 MB RAM and a Radeon 8500, but does not on my Mini. But that may be down to running it at 1280x1024 on the Mini instead of 1024x768.



    Folding@home: it takes my Mini about 6-7 days to process a work unit. My P4 3 GHz takes about 2 days.



    I've yet to do any tests with rendering audio or video but I'm still expecting the G4 to lose.



    OK, I'm not expecting a G4 to beat a P4, but you guys make it sound like x86 sucks, period.



    You might say a dual-processor/dual-core G5 kicks a P4's arse in this benchmark or that benchmark. It might do. But the machine costs twice as much as a P4-based machine, and it doesn't give twice the performance.



    And take World of Warcraft. I run it on my P4 3 GHz with a 6800GT with all settings maxed out. It always runs incredibly smoothly. From what I've read, even if you've got a dual-processor 2.5 GHz G5 with a 6800 Ultra, you have to pull the draw distance back. And Doom 3 doesn't perform too well on a Mac. Again, silky smooth on my setup, even with vsync on.



    When Steve confirmed the move to Intel, I was happy. I'll be buying myself an Intel-based Mac laptop next year without any worries. I think you guys should give up slagging off Intel. It's not Intel that sucks, it's Microsoft's Windows OS that does.
  • Reply 39 of 55
    cubist Posts: 954
    Quote:

    Originally posted by Programmer

    ... putting the burden back on the programmers. I expect this trend to continue, and in some sense it validates the Itanium approach... in another sense it kinda makes you wonder how the Itanium guys could have gotten such poor results after spending so much money trying it.



    I disagree with you re Apple's decision being based on "long term potential". That's just rationalization of the fact that it seems a poor near-term decision. None of us know what Apple's decision was really based on.



    But you are right on regarding the Itanium. I have been wondering for some time exactly what you said. The best engineers and programmers in the industry couldn't get it to perform near expectations, despite near-infinite expenditures.



    You know, much the same could be said for the G4, too. Sure, it's crippled by its FSB, but it never seems to have approached its potential either.



    I wonder if the whole problem is that compiler technology is lagging processor design? And it's been pointed out, too, that C is a difficult language to optimize, due to all kinds of restrictions placed on it by ANSI and portability requirements. Fortran compiler writers never had to be concerned about type and length of intermediate results.
  • Reply 40 of 55
    Quote:

    Originally posted by cubist

    I disagree with you re Apple's decision being based on "long term potential". That's just rationalization of the fact that it seems a poor near-term decision. None of us know what Apple's decision was really based on.



    Okay, I confess that Mr. Jobs didn't call me up and tell me his real reason. On the other hand, it seems pretty obvious to me. Intel has always led on process, and they are one of the few who can afford to continue investing in it at the level required, and they are (by definition) the only company that is 100% guaranteed to be competing with Intel in 10-20 years. Their processor design is also focused on server, desktop, and laptop (unlike IBM).



    Quote:

    But you are right on regarding the Itanium. I have been wondering for some time exactly what you said. The best engineers and programmers in the industry couldn't get it to perform near expectations, despite near-infinite expenditures.



    Not sure what you are asking / stating here...?



    Quote:

    You know, much the same could be said for the G4, too. Sure, it's crippled by its FSB, but it never seems to have approached its potential either.



    I disagree -- I think there is actually quite a bit of code out there which pushes the G4 about as hard as it is possible to push it. Sure there is plenty that doesn't, but my little machine sings pretty good for a mere 1 GHz w/ slow FSB. The short pipelines make it a bit easier to get closer to peak performance, even though it lacks the OoOE capabilities of the G5 (and the G5 doesn't really sing until you throw a lot of floating point math at it, and do so in a bandwidth-centric fashion).



    Quote:

    I wonder if the whole problem is that compiler technology is lagging processor design? And it's been pointed out, too, that C is a difficult language to optimize, due to all kinds of restrictions placed on it by ANSI and portability requirements. Fortran compiler writers never had to be concerned about type and length of intermediate results.



    The problem isn't the compilers, that is a symptom. The problem (as you astutely point out) is the languages, and this is much deeper than just the type/length of intermediates. This is going to become more obvious going forward.