AppleInsider › Forums › Mac Hardware › Future Apple Hardware › First Intel Macs on track for January

First Intel Macs on track for January - Page 7

post #241 of 452
Quote:
Originally posted by BenRoethig
"ive installed tiger on an inspiron 6000

no wifi, no scrolling, no power managemnt, no external mouse, overall pretty bad and useless.

as for osx itself, totally not worth the hassle of getting it installed. functionally provides nothing over xp, and wont be compatible with many of your apps."

This was a post at a message board I frequent from a PC user. This is exactly why Apple needs to provide a stable supported version for non-Apple computers. An unstable pirated version is exactly what the PC industry will see. They will tell everyone they know about their experiences with the hacked version and expect it to work just as badly on Apple's own computers. Casual Mac buyers will hear this and not buy the Mactels as a result. Apple cannot enjoy the same proprietary advantages on x86. They either need to go all the way or stay with an ISA where this is not a problem.



I must eat crow on that one. In fact it appears that OS heX runs on many PC boxen (as I've found out since my previous post).

On your negative argument, ain't going to happen, not as long as Apple's in the HW biz! Word of mouth from Joe hackerz to Joe average (Who doesn't even know that Apple currently uses PowerPC's?)? No, the word of mouth will be "If you want a good OS experience, get OS X ON an Apple computer!" Otherwise, "If you want a POS experience, get WinDOS ON a WalMart computer!"

PS - Found this on the extremetech website, and I quote, "Link removed. Please do not post links to copyrighted material. We do not sanction piracy or the violation of anyone's copyright. Thanks." Like I said ROTFLMAO!

Every eye fixed itself upon him; with parted lips and bated breath the audience hung upon his words, taking no note of time, rapt in the ghastly fascinations of the tale. NOT!
post #242 of 452
Quote:
Originally posted by RazzFazz
This is just the writer's being sloppy (which does seem to be rather common these days). As is clearly stated in the PPC 970MP user manual, the processor bus is double-pumped.

OK, I'll take your word for it.
It's a shame that Apple's writers are so sloppy.
post #243 of 452
Quote:
Originally posted by RazzFazz
Well, no offense, but after the whole "my cordless phone works at several GHz" argument, I tend to think that that statement actually applies to you...




Similar maybe, but the complexity differs by orders of magnitude.




Of course the basic issues are the same; we're talking about physics, after all. It's just that the complexity of coping with them varies enormously depending on what exactly you try to do.

As an example, your typical run-of-the-mill serial bus uses LVDS with embedded clocks. As such, you can by definition never have bits coming in early or late, and don't have to worry about piecing them back together in the proper way -- which just so happens to be a huge deal for high-speed parallel buses.
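To put rough numbers on why skew is such a big deal: every bit of a parallel word has to land inside the same bit window, and at GHz rates that window gets tiny. A back-of-the-envelope sketch (my own illustrative figures, not from any spec; the 1.35 GHz double-pumped rate is just a hypothetical G5-class bus):

```python
# Rough timing-budget arithmetic for a high-speed parallel bus.
# All numbers here are illustrative assumptions, not from a datasheet.

def bit_window_ps(clock_hz: float, double_pumped: bool = True) -> float:
    """Width of one bit window in picoseconds."""
    transfers_per_sec = clock_hz * (2 if double_pumped else 1)
    return 1e12 / transfers_per_sec

# A hypothetical 1.35 GHz double-pumped bus:
window = bit_window_ps(1.35e9)            # ~370 ps per bit

# Signals propagate very roughly 0.15 mm per ps on FR-4 board material,
# so if we allow, say, 20% of the window for trace-to-trace skew:
skew_budget_ps = 0.2 * window             # ~74 ps
max_mismatch_mm = skew_budget_ps * 0.15   # ~11 mm of trace-length mismatch

print(round(window), round(skew_budget_ps), round(max_mismatch_mm))
```

A serial link with an embedded clock never has to meet that budget across dozens of traces at once, which is the whole point of the comparison above.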




Yeah. But just because it's feasible to hand-tune every trace on a simple MMIC doesn't mean the same applies to the complex, multi-layer PCBs found in computers. It's a totally different order of complexity.

In any case, whether you believe me or not, the point that nobody in the industry uses GHz-speed off-chip parallel buses should kinda tip you off to the fact that it's a lot less trivial than you seem to think.




Well, if I'm not mistaken, it was you who brought up the "my phone runs at GHz speeds" point...

First of all, I never said it would be easy, like the old "rolling off the log" saying. But it isn't THAT impossible either. If Apple wanted to implement it they could. Would the cost go up slightly? Sure.

Very few engineers would equate a cell's radio frequency circuitry with that of a magnetron in a microwave.

8-layer boards were developed for two purposes. One was to allow traces to cross each other in complex circuits. The other was to allow for shielding planes in high-frequency designs. This is not new technology.

What you are also forgetting is that the G5 was originally designed to scale past 3GHz. The problems that beset the chip industry, which have little to do with our discussion here, have put an end to that for the time being.

IBM had stated a couple of years ago that the successors to the 970 would get to 6GHz and beyond. With no reason to believe that the memory bus would change its relationship to the chip clock speed, we could be seeing the very 3GHz memory bus you are denying the possibility of.

It was just four years ago that the Express bus, which runs at vastly higher speeds than the PCI bus, was thought of as being much too expensive to implement. That was one of the reasons why PCI X was developed (the other reason was opposition to another Intel standard).

As Gateway released a series of machines recently using the Express bus starting at $649, we can see how that prediction held up.

Your lack of confidence in modern manufacturing and engineering knowhow reminds me of the definitiveness of some in the industry over PCI X and Express.

Unfortunately, with Apple leaving IBM, we will never find out which one of us would be right.
post #244 of 452
Quote:
Originally posted by Gene Clean
You mentioned different versions, which implied that there are non-Linux versions. There aren't.



Hardly, seeing as Crossover is just an implementation of WINE with a GUI.



Of course they are, as the benefit of getting thousands of small Windows apps to run would be minimal. People want IE to test their pages on, Word for that file OpenOffice can't open (rarely) and they want to run Photoshop and perhaps iTunes (in Linux). Nobody cares for the entire library of Windows-compatible apps because there are very good replacements for them.

And CrossOver = WINE with a GUI slapped on it. I don't see how that is "more useful" in running those 75% of other applications, unless you mean that people can use a mouse now instead of typing words for programs to start. Yep, useful as hell.

I'm amazed sometimes how people can parse words so finely that the concept of what is being said is lost.

There is a version for the PPC Mac. If you went to the web site, you would have seen that for yourself.

Is it ready for distribution? Not at all! I didn't say it was. But I've played around with it myself, and could see that it had promise.

They have been working on the x86 version for several months. This is even further behind. When it will be usable, I have no idea. But it most likely will be.

I'm not advocating its use. Nor am I stating that it will be the best way to go. If you bothered to read my posts on this, you would have seen that I'm somewhat skeptical about it as a general-purpose replacement for Windows, for those who want an easy, generally useful Windows experience.

But the version for Linux works well enough for those willing to go through the hassle of setting it up and installing the programs that do work.

Crossover makes this process very easy. Like it or not, for what it's intended, it works well. Read the article I posted, and you can see that for yourself. He is, by no means, the only writer to publish on this issue.

There are those who buy a Mac these days to use the Unix environment rather than the GUI that Apple prefers we use. Does that make those users wrong because many Mac programs won't work? No, it doesn't.
post #245 of 452
Quote:
Originally posted by melgross
The mechanical clock (for want of a better way of saying it) is 1/4 speed. The bus is then double pumped to 1/2 speed. Then each direction's traffic is 1/4 speed. Add them together and you again have 1/2 speed.

Could you explain that "each direction's traffic is 1/4 speed"?
post #246 of 452
Quote:
Originally posted by melgross
Intel will never make a 64-bit Yonah. That is strictly a 32-bit design. It will be replaced in mid to late 2006 by Merom, which WILL be a 64-bit design.

Intel stated at the last IDF that Yonah will coexist with Merom until Merom's transition to the 45nm process in 2007.
post #247 of 452
Quote:
Originally posted by melgross
First of all, I never said it would be easy, like the old "rolling off the log" saying. But it isn't THAT impossible either. If Apple wanted to implement it they could. Would the cost go up slightly? Sure.

Very few engineers would equate a cell's radio frequency circuitry with that of a magnetron in a microwave.

Which was exactly my point -- equally few engineers would equate a cell's (or cordless phone's) RF circuitry to a high-speed processor interconnect.

(Oh, and for the record, yeah, having tubes in your cell would kinda ruin the whole operating-on-battery experience. )


Quote:
8-layer boards were developed for two purposes. One was to allow traces to cross each other in complex circuits. The other was to allow for shielding planes in high-frequency designs. This is not new technology.

Of course it's not -- and yet nobody seems to be running their FSBs at several GHz. Doesn't that seem kind of odd to you, given that you seem to think it's an almost trivial thing to implement?


Quote:

What you are also forgetting is that the G5 was originally designed to scale past 3GHz. The problems that beset the chip industry, which have little to do with our discussion here, have put an end to that for the time being.

I wholeheartedly agree that this has little to do with the discussion at hand.


Quote:

IBM had stated a couple of years ago that the successors to the 970 would get to 6GHz and beyond. With no reason to believe that the memory bus would change its relationship to the chip clock speed, we could be seeing the very 3GHz memory bus you are denying the possibility of.

What exactly makes you think the CPU to bus ratio would remain constant? Just take a look at how the clock frequencies of CPU cores and front side buses have been scaling for the past decade or so. Following that argument, we'd have been running our FSBs at a 1:1 ratio all along ever since the 8086... (Note that I'm talking about actual clock frequencies, not "it's DDR so we'll just go ahead and multiply the frequency by two in all our marketing brochures".)

Also, note that I'm not saying a 3 GHz FSB is somehow physically impossible to implement (that would be a rather stupid statement to make); it's just not feasible with current technology, and there would be little point for Apple to integrate such a capability in their northbridge when it's not likely to ever actually get used during the life cycle of said north bridge.


Quote:

It was just four years ago that the Express bus, which runs at vastly higher speeds than the PCI bus, was thought of as being much too expensive to implement. That was one of the reasons why PCI X was developed (the other reason was opposition to another Intel standard).

To my knowledge, the major reason for PCI-X's coming into being was that, unlike PCIe, the former actually allowed for backwards compatibility with many existing PCI cards; I don't think anybody ever seriously expected it to become mainstream (and a quick glance at just what types of cards and motherboards are available as PCI-X versions kinda supports that notion...).

Also, as far as I'm aware, a single PCIe lane is actually cheaper to implement than PCI-X, while still offering much better performance than traditional PCI. No way to reuse your old PCI cards, tho.


Quote:

As Gateway released a series of machines recently using the Express bus starting at $649, we can see how that prediction held up.

Your lack of confidence in modern manufacturing and engineering knowhow reminds me of the definitiveness of some in the industry over PCI X and Express.

Well, see above for my take on that.


Quote:

Unfortunately, with Apple leaving IBM, we will never find out which one of us would be right.

True, I guess. Unless you somehow happen to run into that document again which you quoted earlier.
post #248 of 452
Quote:
Originally posted by smalM
Could you explain that "each directions' traffic is 1/4 speed"?

The bus is split in half. One half goes from the CPU out to memory, and the other half goes from memory back in to the CPU. Intel's chips have to negotiate every time a memory access is done: does the CPU have priority, or does the memory? The advantage to the split design, though, is that a higher peak memory speed is possible in each direction.
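The arithmetic behind those fractions works out like this, if you assume (as on the G5) that the bus runs at half the core clock and carries 32 data bits in each direction. The 2.5 GHz figures are just an example:

```python
# Bandwidth sketch for a split (unidirectional) bus like the 970's.
# Assumes a bus at half the core clock with 32 data bits per direction;
# numbers are illustrative ratios, not measured throughput.

def split_bus_bandwidth(core_hz: float, bus_ratio: float, bytes_wide: int):
    """Return (per-direction, aggregate) bandwidth in GB/s."""
    effective_rate = core_hz * bus_ratio           # transfers/sec per link
    per_direction = effective_rate * bytes_wide / 1e9
    return per_direction, 2 * per_direction        # read link + write link

# 2.5 GHz core, bus at half the core clock, 4 bytes each way:
per_dir, total = split_bus_bandwidth(2.5e9, 0.5, 4)
print(per_dir, total)  # 5.0 GB/s each way, 10.0 GB/s aggregate
```

The aggregate number only helps when reads and writes actually overlap, which is why the peak figure is "in either direction" rather than a single shared pipe.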
post #249 of 452
Quote:
Originally posted by RazzFazz
(Oh, and for the record, yeah, having tubes in your cell would kinda ruin the whole operating-on-battery experience. )

I don't even like them in my hi end audio equipment.



Quote:
Of course it's not -- and yet nobody seems to be running their FSBs at several GHz. Doesn't that seem kind of odd to you, given that you seem to think it's an almost trivial thing to implement?

I didn't say it's trivial. I said that it could be done. Lack of desire doesn't always mean lack of capability.

Apple just went to DDR2 533. Why? It offers little advantage. Even cheap PC mobo's are using 667 speeds. There are even 800 speed boards coming out.

Are you going to say that Apple didn't want to use that because it was so expensive and difficult to implement? Because it isn't. They just chose not to.


Quote:
What exactly makes you think the CPU to bus ratio would remain constant? Just take a look at how the clock frequencies of CPU cores and front side buses have been scaling for the past decade or so. Following that argument, we'd have been running our FSBs at a 1:1 ratio all along ever since the 8086... (Note that I'm talking about actual clock frequencies, not "it's DDR so we'll just go ahead and multiply the frequency by two in all our marketing brochures".)

What makes you think that they wouldn't?

Quote:
Also, note that I'm not saying a 3 GHz FSB is somehow physically impossible to implement (that would be a rather stupid statement to make); it's just not feasible with current technology, and there would be little point for Apple to integrate such a capability in their northbridge when it's not likely to ever actually get used during the life cycle of said north bridge.

My point here is just that it would have been put to good use for dual-core chips. Each core doesn't need the full speed, but two cores running over the same line could use the higher bandwidth.

Quote:
To my knowledge, the major reason for PCI-X's coming into being was that, unlike PCIe, the former actually allowed for backwards compatibility with many existing PCI cards; I don't think anybody ever seriously expected it to become mainstream (and a quick glance at just what types of cards and motherboards are available as PCI-X versions kinda supports that notion...).

That was just another attempt to break Intel's stronghold over mobo standards. Give the customer another reason not to go Intel. Only a few boards have ever been made with the dual power ability, and with PCI X dying out quickly now, it's unlikely that too many new boards will continue to be available. It was being pushed for mainstream use. But with Intel making around 50% of all mobo's out there, it didn't catch on. Only workstation and server manufacturers adopted it.

Quote:
Also, as far as I'm aware, a single PCIe lane is actually cheaper to implement than PCI-X, while still offering much better performance than traditional PCI. No way to reuse your old PCI cards, tho.

If X had become mainstream, then the costs would have taken a dive. But too many manufacturers were afraid to take the plunge.

And you are right about one thing here. PC users are ever the cheapest people around. If they could figure out a way to use their 33k ISA modems today, they would do it!
post #250 of 452
Yeah, but will they ship with Mighty Mice as opposed to standard mice? That's the real question.
post #251 of 452
Excuse me for interrupting, but there is some news (perhaps a new thread is needed):

EXCLUSIVE: Apple Planning Intel-Ready iBook Debut for January
post #252 of 452
Quote:
Originally posted by melgross
Lack of desire doesn't always mean lack of capability.

Apple just went to DDR2 533. Why? It offers little advantage. Even cheap PC mobo's are using 667 speeds. There are even 800 speed boards coming out.

Are you going to say that Apple didn't want to use that because it was so expensive and difficult to implement? Because it isn't. They just chose not to.

I don't quite see how this really relates to our argument -- it appears Apple felt it didn't make business sense for them to go with the higher speed DDR2. That, however, doesn't in any way imply that all their decisions are made purely for political / business reasons.


Quote:

What makes you think that they wouldn't?

Do you even read what I write? Again, take a look at how the average core-to-bus frequency ratio has been developing in the past decade. At least until the great brick wall was hit, on-chip frequencies increased a lot faster than off-chip ones did.
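To put approximate numbers on that trend (widely published clocks, and note these are actual bus clocks, not DDR/quad-pumped "effective" rates):

```python
# Rough core-to-FSB clock ratios over the years. Figures are
# approximate, commonly cited values for representative parts.

chips = [
    # (name, core MHz, FSB MHz)
    ("8086 (1978)",           8,    8),
    ("Pentium (1995)",      200,   66),
    ("Pentium III (2000)", 1000,  133),
    ("Pentium 4 (2004)",   3000,  200),  # marketed as an "800 MT/s" quad-pumped bus
]

for name, core, fsb in chips:
    print(f"{name:22s} ratio {core / fsb:5.1f}:1")
```

The ratio drifts from 1:1 to roughly 15:1, which is the scaling gap being argued here.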


Quote:

My statements here are just that it would have been put to a good use for dual core chips. Each core doesn't need the full speed, but dual cores running over the same line could use a higher bandwidth.

Absolutely. I fully agree the FSB is a weak point of the 970MP's design -- especially considering that all communication, even between cores on the same chip, has to go through the north bridge.


Quote:

That was just another attempt to break Intel's stronghold over mobo standards. Give the customer another reason not to go Intel. Only a few boards have ever been made with the dual power ability, and with PCI X dying out quickly now, it's unlikely that too many new boards will continue to be available.

What exactly are you referring to by "dual power ability"?


Quote:
It was being pushed for mainstream use. But with Intel making around 50% of all mobo's out there, it didn't catch on. Only workstation and server manufacturers adopted it.

If they were really trying to push it for mainstream use, they were doing a rather lousy job. There's just not much point in PCI-X for Joe Average: for graphics, PCI-X didn't really provide any advantage over AGPx8. Most high-speed peripherals a consumer would be likely to encounter have a pretty good chance of already being integrated into the chipset, and for low-bandwidth stuff such as TV, sound or WLAN cards, PCI-X doesn't provide any advantage at all. That leaves you with (10)GigE, FibreChannel and other workstation- and server-type hardware -- which is a) pretty far from mainstream at this point, and b) happens to be exactly what's available as PCI-X cards.


Quote:
If X had become mainstream, then the costs would have taken a dive. But too many manufacturers were afraid to take the plunge.

Sure, it would have gotten cheaper, but that equally applies to PCIe. And, as I said, PCIe x1 is cheaper to implement than PCI-X, and "good enough" for most uses.


Quote:

And you are right about one thing here. PC users are ever the cheapest people around. If they could figure out a way to use their 33k ISA modems today, they would do it!

Glad we agree on something after all.
post #253 of 452
Quote:
Originally posted by RazzFazz
Absolutely. I fully agree the FSB is a weak point of the 970MP's design -- especially considering that all communication, even between cores on the same chip, has to go through the north bridge.

That is a falsehood. Inter-core communication has to pass through the FSB control unit on the chip, but it does not have to actually travel out the FSB and back. This makes a huge difference to its performance.
Providing grist for the rumour mill since 2001.
post #254 of 452
Quote:
Originally posted by Programmer
That is a falsehood. Inter-core communication has to pass through the FSB control unit on the chip, but it does not have to actually travel out the FSB and back. This makes a huge difference to its performance.

The documentation I've seen explicitly states that this is not the case. (Which kinda makes sense, seeing as the cache coherency protocol relies on the north bridge to reflect coherency traffic as necessary...)
post #255 of 452
Quote:
Originally posted by PB
Excuse me for interrupting, but there is some news (perhaps a new thread is needed):

EXCLUSIVE: Apple Planning Intel-Ready iBook Debut for January

No kidding. Someone start a thread. If it has a dual core, I'm there.
Hard-Core.
post #256 of 452
Quote:
Originally posted by PB
Excuse me for interrupting, but there is some news (perhaps a new thread is needed):

EXCLUSIVE: Apple Planning Intel-Ready iBook Debut for January

That's more like it. That makes sense.
post #257 of 452
Quote:
Originally posted by vinney57
That's more like it. That makes sense.

Exactly. Now that I am thinking again about this, it makes much more sense to start the transition with the entry level consumer machines (iBook, Mac mini), and not with machines like the Powerbook (pro) or the iMac (consumer but high end). It is a low-risk approach, and at the same time Apple shows it is determined and the transition is not a joke.
post #258 of 452
Quote:
Originally posted by RazzFazz
I don't quite see how this really relates to our argument -- it appears Apple felt it didn't make business sense for them to go with the higher speed DDR2. That, however, doesn't in any way imply that all their decisions are made purely for political / business reasons.

DDR 2 speeds relate to the argument because they point up an area in which Apple could have easily made a necessary improvement, but didn't. There would have been practically zero cost differential. Neither of us knows why they didn't implement this one either.



Quote:
Do you even read what I write? Again, take a look at how the average core-to-bus frequency ratio has been developing in the past decade. At least until the great brick wall was hit, on-chip frequencies increased a lot faster than off-chip ones did.

Of course I read what you wrote. But you are wrong. If you look at APPLE's bus speeds on the G5 PM's over the years (I'm not interested in what Intel has been doing because Apple has a completely different bus and controller design), you will see that they scale perfectly with the CPU speeds. You make the assumption, with no reason to, that at some point there will be some disconnect between bus and CPU speeds. There has been no evidence for that. If anything, the evidence is quite the opposite.



Quote:
Absolutely. I fully agree the FSB is a weak point of the 970MP's design -- especially considering that all communication, even between cores on the same chip, has to go through the north bridge.

And that's why I'm suggesting a solution.


Quote:
What exactly are you referring to by "dual power ability"?

PCI is a 5 volt logic powered board, and PCI X is either a 3.3 or 3.5 (I forget which right now) volt logic powered board. 5 volt boards won't work on PCI X, and 3+ volt boards won't work on PCI. Some boards can use either.


Quote:
If they were really trying to push it for mainstream use, they were doing a rather lousy job. There's just not much point in PCI-X for Joe Average: for graphics, PCI-X didn't really provide any advantage over AGPx8. Most high-speed peripherals a consumer would be likely to encounter have a pretty good chance of already being integrated into the chipset, and for low-bandwidth stuff such as TV, sound or WLAN cards, PCI-X doesn't provide any advantage at all. That leaves you with (10)GigE, FibreChannel and other workstation- and server-type hardware -- which is a) pretty far from mainstream at this point, and b) happens to be exactly what's available as PCI-X cards.

I was very disappointed when Apple went for PCI X, but they had no choice. PCI isn't up to running the boards that would take full advantage of the G5 and its fast memory bus. Since Express wasn't ready when Apple came out with the G5, they went with PCI X instead.

PCI X isn't just about AGP. Express serves no purpose for the majority of users either. The 16x slot is needed only by those buying the highest performing boards, and the rest of the slots are at least twice as fast as the slots in PCI X, which in turn are twice as fast as those in PCI.

Like it or not, technology moves on, but yes, they did a lousy job of pushing it.


Quote:
Sure, it would have gotten cheaper, but that equally applies to PCIe. And, as I said, PCIe x1 is cheaper to implement than PCI-X, and "good enough" for most uses.

It's odd that you would say that, because the opinion stated many times over the years leading up to the introduction was that Express would be more expensive than PCI X. That was one of the reasons PCI X was developed, and one of the reasons given in public statements.

Everything electronic gets cheaper over the years, and sometimes by the time something is introduced, the price has dropped past expectations. Even Intel admitted that Express would be more expensive than PCI X. And I'll state again that the reason PCI X wasn't adopted wasn't because of its expense, but because Intel makes at least 50% of the mobo's used by manufacturers, and they were afraid of getting stuck with something that Intel's boards would swamp. It was politics, and the fact that the PCI X manufacturers had no clear upgrade path from version one.
post #259 of 452
Quote:
Originally posted by PB
Exactly. Now that I am thinking again about this, it makes much more sense to start the transition with the entry level consumer machines (iBook, Mac mini), and not with machines like the Powerbook (pro) or the iMac (consumer but high end). It is a low-risk approach, and at the same time Apple shows it is determined and the transition is not a joke.

Now the interesting thing is that these machines will have to compete price-wise (ballpark; Macs should always be a little more expensive) and they may well 'rock' in terms of performance AND looks. If Apple gets this right they could have a hit on their hands. It doesn't matter if they out-perform PowerBooks in the short term (as they probably will). It's a transition phase and a hit product would be more important for Apple.
post #260 of 452
Just out of curiosity, what is the general impression about the relative speed of the two major OS's that will be competing on the same chips?

e.g. Say Apple and some hack-job PC maker *cough*Dell*cough* both make laptops using the same Intel CPU.

Will X, assuming it is fully optimized for the chip, run at a par at the OS level (e.g. copying, finder-tasks, etc.)? Third-party software will, it seems to me, be a grab-bag; but the OS's are what I am thinking about here....

I am just an ideologue; I have no idea what I am talking about--- but it seems to me that Windows is awfully bloated to run at a par with X on the same hardware, but I could be totally wrong.

Geniuses... chime in?

Hope Springs Eternal,

Mandricard
AppleOutsider



Post Scriptum: ( OR, conversely, will the chips that Intel gives Apple not be available to Dell?)
post #261 of 452
Quote:
Originally posted by Mandricard
Just out of curiosity, what is the general impression about the relative speed of the two major OS's that will be competing on the same chips?

e.g. Say Apple and some hack-job PC maker *cough*Dell*cough* both make laptops using the same Intel CPU.

Will X, assuming it is fully optimized for the chip, run at a par at the OS level (e.g. copying, finder-tasks, etc.)? Third-party software will, it seems to me, be a grab-bag; but the OS's are what I am thinking about here....

I am just an ideologue; I have no idea what I am talking about--- but it seems to me that Windows is awfully bloated to run at a par with X on the same hardware, but I could be totally wrong.

Geniuses... chime in?

Hope Springs Eternal,

Mandricard
AppleOutsider



Post Scriptum: ( OR, conversely, will the chips that Intel gives Apple not be available to Dell?)

Well, I would NEVER perform anything of questionable legality, but if I happened to have played around with OS X on an Intel chipset already, I guess I would say that the two are surprisingly similar. Now, if I happened to have tried the original 10.4.1 dev version of OSx86 on a Celeron M 1.3GHz laptop with 512 megs of RAM (fairly similar to the base iBook), I'd say that neither XP nor OSx86 had a clear advantage. But note that this is hacked beta software on unoptimized hardware, nowhere near what it will be like on the actual Intel Macs. I've got my money on it running slightly faster on equivalent CPUs, but not very much.

All speculative, of course... *cough cough*

post #262 of 452
Quote:
Originally posted by Mandricard

Will X, assuming it is fully optimized for the chip, run at a par at the OS level (e.g. copying, finder-tasks, etc.)? Third-party software will, it seems to me, be a grab-bag; but the OS's are what I am thinking about here....

That's not exactly taxing on the CPU though. The gripe with OSX is usually its 3D graphics implementation, be it the drivers or OpenGL. It'd be quite embarrassing for Apple if, for example, DOOM3 or Cinebench still ran much faster on Windows than under OSX on the exact same hardware (or as near as).
post #263 of 452
Quote:
Originally posted by melgross
DDR 2 speeds relate to the argument because they point up an area in which Apple could have easily made a necessary improvement, but didn't. There would have been practically zero cost differential. Neither of us knows why they didn't implement this one either.

What I've been arguing all along is that, in the case of the G5 FSB, Apple could not easily have made such an improvement.


Quote:

Of course I read what you wrote. But you are wrong. If you look at APPLE's bus speeds on the G5 PM's over the years (I'm not interested in what Intel has been doing because Apple has a completely different bus and controller design), you will see that they scale perfectly with the CPU speeds.

It's not a matter of "what Intel has been doing", it's a matter of what the trend is for the entire industry -- or at least for those parts of the industry which experienced significant clock rate increases at all.


Quote:

You make the assumption, with no reason to, that at some point there will be some disconnect between bus and CPU speeds. There has been no evidence for that. If anything, the evidence is quite the opposite.

There has been no evidence if you restrict your point of view exclusively to the few iterations of the G5 that have shipped so far (and went from 2 GHz tops to 2.7 GHz tops in the process). What makes you think that Apple / IBM can somehow magically just scale the FSB speed as they please when everybody else is unable to do so?


Quote:

PCI is a 5 volt logic powered board, and PCI X is either a 3.3 or 3.5 (I forget which right now) volt logic powered board. 5 volt boards won't work on PCI X, and 3+ volt boards won't work on PCI. Some boards can use either.

It goes like this:

PCI-X cards and slots use 3.3V signalling.

For plain PCI, cards can use 3.3V signalling, 5V signalling or they can be "universal" (i.e. support both). The variants can easily be distinguished by looking at the position of the notch(es) in the PCI connector. PCI slots, on the other hand, can either support 5V signalling (most PC boards) or 3.3V (newer boards, and also slots running at 66MHz), but not both. The slots are also coded such that only cards that support the signalling voltage used by the slot physically fit.
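If it helps, the keying rules above boil down to something like this (a toy model with my own names, just restating the compatibility matrix):

```python
# Toy model of PCI/PCI-X connector keying: a card physically fits a
# slot only if it supports the slot's signalling voltage.

CARD_VOLTAGES = {
    "5V-only":   {"5V"},
    "3.3V-only": {"3.3V"},
    "universal": {"5V", "3.3V"},   # notched for both
}

def fits(card: str, slot_voltage: str) -> bool:
    """True if the card's supported voltages include the slot's."""
    return slot_voltage in CARD_VOLTAGES[card]

# A 5V-only PCI card won't fit a 3.3V slot (which includes PCI-X
# slots), while a universal card fits both:
print(fits("5V-only", "3.3V"))   # False
print(fits("universal", "5V"))   # True
```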


Quote:

PCI X isn't just about AGP.

Right. In fact, I don't think there are PCI-X graphics cards at all (or at least they're extremely rare).


Quote:
Express serves no purpose for the majority of users either. The 16x slot is needed only by those buying the highest performing boards, and the rest of the slots are at least twice as fast as the slots in PCI X, which in turn are twice as fast as those in PCI.

Actually, compared to the PCI slots used in almost all PC motherboards (32 bit, 33 MHz) or on G4 Macs (64 bit, 33 MHz), PCI-X (64 bit, up to 133 MHz) is four or eight times as fast, respectively. (Of course, that's just comparing theoretical maximum numbers, not real-world throughput.)
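For what it's worth, those ratios fall straight out of width times clock; here's a throwaway Python calculation (theoretical peaks only, no protocol overhead):

```python
def peak_mbs(width_bits: int, clock_mhz: float) -> float:
    # Peak transfer rate in MB/s: width in bytes, times clocks per microsecond.
    return width_bits * clock_mhz / 8

pc_pci   = peak_mbs(32, 33)    # common PC slot, ~133 MB/s
g4_pci   = peak_mbs(64, 33)    # G4 Mac slot, ~266 MB/s
pcix_133 = peak_mbs(64, 133)   # PCI-X at 133 MHz, ~1066 MB/s

print(round(pcix_133 / pc_pci))  # 8 -- eight times the common PC slot
print(round(pcix_133 / g4_pci))  # 4 -- four times the 64-bit/33 MHz slot
```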


Quote:

It's odd that you would say that, because the opinion stated many times over the years leading up to the introduction, was that Express would be more expensive than PCI X. That was one of the reasons PCI X was developed, and one of the reasons given in public statements.

It's more expensive in that, due to lack of physical backwards compatibility, it obsoletes nearly all the expansion cards out there, forcing users to buy new ones. Since this is not something that's going to go over too well, especially with the PC world, you need to provide some PCI slots as well for backwards compatibility, and thus need a bridge chip, additional PCB real estate, etc. -- which is not the case for PCI-X. (Also, note that I was talking about x1 slots, not x4/x8/x16 ones.)


Quote:
Everything electronic gets cheaper over the years, and sometimes by the time something is introduced, the price has dropped past expectations. Even Intel admitted that Express would be more expensive than PCI X. And I'll state again that the reason PCI X wasn't adopted wasn't because of its expense, but because Intel makes at least 50% of the mobo's used by manufacturers, and they were afraid of getting stuck with something that Intel's boards would swamp. It was politics, and the fact that the PCI X manufacturers had no clear upgrade path from version one.

There's nothing that would prevent motherboard manufacturers from offering PCI-X and PCIe slots on the same board (like they do right now with PCIe and plain old PCI) -- well, nothing except there's really not much of a business case for doing so. Also, in the areas that see actual benefit from PCI-X, i.e. the server and high-end workstation market, PCI-X was adopted.
post #264 of 452
Quote:
Originally posted by melgross
Apple just went to DDR2 533. Why? It offers little advantage. Even cheap PC mobo's are using 667 speeds. There are even 800 speed boards coming out.

Quote:
Originally posted by RazzFazz
I don't quite see how this really relates to our argument -- it appears Apple felt it didn't make business sense for them to go with the higher speed DDR2. That, however, doesn't in any way imply that all their decisions are made purely for political / business reasons.

I assume the memory controller cannot handle the 4 DIMMs at the higher speed. In PCs the memory controller can often handle 2 DIMMs at a lower speed and only 1 at a higher speed, or has to use registered DIMMs instead.
(Of course this always refers to the number of DIMMs per memory bank.)
post #265 of 452
Quote:
Originally posted by RazzFazz
There's nothing that would prevent motherboard manufacturers from offering PCI-X and PCIe slots on the same board (like they do right now with PCIe and plain old PCI) -- well, nothing except there's really not much of a business case for doing so. Also, in the areas that see actual benefit from PCI-X, i.e. the server and high-end workstation market, PCI-X was adopted.

Actually, the way I recall it, the MSI Neo4 Platinum board I purchased a few months ago had one PCIe x16, one x4 and one x1 slot, three PCI slots, and finally one PCI-X slot. And this is a fairly mainstream choice and not too costly either. So I think that combining PCI, PCIe and PCI-X is in fact done by quite a few manufacturers.
post #266 of 452
Quote:
Originally posted by RazzFazz
What i've been arguing all along is that, in the case of the G5 FSB, Apple could not easily have made such an improvement.

It's not a matter of "what Intel has been doing", it's a matter of what the trend is for the entire industry -- or at least for those parts of the industry which experienced significant clock rate increases at all.

There has been no evidence if you restrict your point of view exclusively to the few iterations of the G5 that have shipped so far (and went from 2 GHz top to 2.7 GHz tops in the process). What makes you think that Apple / IBM can somehow magically just scale the FSB speed as they please when everybody else is unable to do so?




I think I realise what the problem here is.

You keep talking about multiple-GHz buses. But that's not the question at all.

Ok, I guess we do have to bring Intel into this.

Intel's memory buses have gone from 400 MHz to 1.066 GHz in 2 1/2 years. The processor speeds have gone from slightly under 3 GHz to 3.8 GHz, and back down to 3.4 GHz during that same time period.

So, bus speeds have gone up vastly faster than chip speeds have. The opposite of what you said earlier.

So let's see what the situation really is.

Intel has a bus that is bidirectional, but can only pass a signal one way at a time. But when the signal does pass through, it does so at the full 1.066 GHz speed on their newest bus.

Apple has a bus that at the present time runs at 1.25 GHz, down from 1.35 GHz on the dual 2.7.

BUT, this bus consists of two separate buses. Each one is unidirectional. And each one operates at half the speed of the overall bus speed. The Apple bus at 1.25 GHz can pass more information over the bus at one time than Intel's 1.066 GHz bus can.

But, each bus is only running at 625 MHz, much slower than the 1.066 GHz that the traffic runs at in Intel's bus in either direction.

If Apple went to a 2 GHz bus for the dual-core 2 GHz machine, each unidirectional bus would only be running at 1 GHz. Slightly slower than Intel's.

If Apple used it on the dual-core 2.3 GHz, it would be running at 1.15 GHz, slightly faster than Intel's.

If they ran it at 2.5 GHz on the quad, each side of the bus would be running at 1.25 GHz. That's just under 200 MHz faster than Intel's.

None of Apple's buses would be running at multiples of 1 GHz as you've been saying, and been so concerned about.

So we are not talking about these high speeds that you are thinking about. Just slightly faster than what Intel has now.



Quote:
It goes like this:

PCI-X cards and slots use 3.3V signalling.

For plain PCI, cards can use 3.3V signalling, 5V signalling or they can be "universal" (i.e. support both). The variants can easily be distinguished by looking at the position of the notch(es) in the PCI connector. PCI slots, on the other hand, can either support 5V signalling (most PC boards) or 3.3V (newer boards, and also slots running at 66MHz), but not both. The slots are also coded such that only cards that support the signalling voltage used by the slot physically fit.

Sorry, but PCI boards are 5 volt, and PCI X boards are 3.3 volt.

If a card supports both, then that card can run in either because the card senses which voltage the bus is using.



Quote:
Right. In fact, I don't think there are PCI-X graphics cards at all (or at least they're extremely rare).

I guess it's sad to know that all of the G5 PM's have been running without graphics cards. We should all run out and tell ATI and Nvidia so that they can make us some. Here, you are just guessing. I can't imagine how you came up with this one. We are talking 8x AGP in either PCI or PCI X. Same thing electrically.

Quote:
Actually, compared to the PCI slots used in almost all PC motherboards (32 bit, 33 MHz) or on G4 Macs (64 bit, 33 MHz), PCI-X (64 bit, up to 133 MHz) is four or eight times as fast, respectively. (Of course, that's just comparing theoretical maximum numbers, not real-world throughput.)

Yes, that's what I said, PCI X is up to twice as fast as PCI, and Express is up to twice as fast as PCI X.

Quote:
It's more expensive in that, due to lack of physical backwards compatibility, it obsoletes nearly all the expansion cards out there, forcing users to buy new ones. Since this is not something that's going to go too well especially with the PC world, you need to provide some PCI slots as well for backwards comptibility, and thus need a bridge chip, additional PCB real estate, etc. -- which is not the case for PCI-X. (Also, note that I was talking about x1 slots, not x4/x8/x16 ones.)

Well, that's what I was saying. You were saying that it was LESS expensive to implement Express.

Quote:
There's nothing that would prevent motherboard manufacturers from offering PCI-X and PCIe slots on the same board (like they do right now with PCIe and plain old PCI) -- well, nothing except there's really not much of a business case for doing so. Also, in the areas that see actual benefit from PCI-X, i.e. the server and high-end workstation market, PCI-X was adopted.

That would be a baaddd idea. Keeping ISA, and now PCI, on new machines just holds development back. It was the ISA boards, for the most part, that prevented Plug-n-Play from working.

A couple of quotes from Anand, from Anandtech:

"It is irritating that Apple didn't move to DDR2-667 yet, especially on their highest end configuration (and especially because it can use the bandwidth), but given Apple's relatively conservative nature whenever it comes to memory speeds it isn't a huge surprise."

" In my opinion, the biggest improvement to the new G5s is the move to PCI Express. And here's one thing I really do like about Apple, when they move to a new technology, they really move to it.

There isn't a single parallel PCI slot in the new G5s,..."
post #267 of 452
Quote:
Originally posted by smalM
I assume the memory controller cannot handle the 4 DIMMs at the higher speed. In PCs the memory controller can often handle 2 DIMMs at a lower speed and only 1 at higher speed. Or has to use registered DIMMs instead.
(Of course this is always number of DIMMs per memory bank).

The controller can handle it. Read the quotes at the end of my above post.
post #268 of 452
while we're on the topic.. (wait, what was the topic again??) just a comment on DDR vs DDR2 to muse over (go nuts everyone)

DDR2 for powerbooks and 2006 macintels makes sense as intel is standardising on DDR2 support, and DDR2 in portables shows reduced power consumption (by how much, well, that's a matter for debate).

according to most PC sites, lower-latency but slower-speed DDR amd64s vs faster-speed but higher-latency DDR2 intels results in very neck-and-neck comparisons, wrt memory-specific synthetic benchmarks (sandrasoft)

i wonder in the case of the quad g5, if the memory controller is "teh bomb", then in this case DDR with decently tight timings will do fine, DDR2 may not offer any advantage, we may not see powermac g5s with DDR2 until middle of 2006.
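As a rough illustration of that latency-vs-bandwidth trade (example timings for typical parts of the era, assumed for illustration, not measurements): absolute CAS latency is CL cycles divided by the actual clock, which is half the effective data rate.

```python
def cas_latency_ns(effective_mhz: float, cl_cycles: float) -> float:
    # DDR/DDR2 transfer twice per clock, so the actual clock is half
    # the effective data rate; absolute latency = CL cycles / clock.
    actual_clock_mhz = effective_mhz / 2
    return cl_cycles / actual_clock_mhz * 1000

print(cas_latency_ns(400, 2.5))  # DDR400 at CL2.5  -> 12.5 ns
print(cas_latency_ns(533, 4))    # DDR2-533 at CL4  -> ~15 ns, but more bandwidth
```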
post #269 of 452
Quote:
Originally posted by sunilraman
while we're on the topic.. (wait, what was the topic again??) just a comment on DDR vs DDR2 to muse over (go nuts everyone)

DDR2 for powerbooks and 2006 macintels makes sense as intel is standardising to DDR2 support, and DDR2 in portables show reduced power consumption (by how much, well, that's a matter for debate).

according to most PC sites lower-latency but slower speed DDR amd64s vs faster speed but higher-latency DDR2 intels results in very neck and neck comparisons, wrt to memory specific synthetic benchmarks (sandrasoft)

i wonder in the case of the quad g5, if the memory controller is "teh bomb", then in this case DDR with decently tight timings will do fine, DDR2 may not offer any advantage, we may not see powermac g5s with DDR2 until middle of 2006.

Sunil, I really hate to break this to you, but the new PM G5 Express machines use DDR2 now.
post #270 of 452
Quote:
Originally posted by melgross
Sunil, I really hate to break this to you, but the new PM G5 Express machines use DDR2 now.

Yes that was a good one.

BTW DDR2 has a higher latency than DDR, but the memory bandwidth is slightly superior.
In conjunction with, or because of, the greater L2 cache, the memory performance of the new G5s is superior to the older ones'.
post #271 of 452
Quote:
Originally posted by melgross
Sunil, I really hate to break this to you, but the new PM G5 Express machines use DDR2 now.

lol WTF i am stuck in AMD64 land
post #272 of 452
Quote:
Originally posted by franksargent

Now if I were a l33t piratez, I just might keep my mouth shut UNTIL the production MacTel's are released, then and only then would I get the word out. Basic human nature, people talk, loose lips sink ships!

Meh, at some point you would have to release your work, at which point Apple would change their anti-piracy strategy and we would all be back to square 1.

There's really no point in delaying releases of such hacks. All you do is delay how long people will be able to use the current versions. The next version will always be incompatible and we start the whole process over again.

PSP ROM 2.0 anyone?
post #273 of 452
Quote:
Originally posted by melgross
I think I realise what the problem here is.

You keep talking about multiple-GHz buses. But that's not the question at all.

Ok, I guess we do have to bring Intel into this.

Intel's memory buses have gone from 400 MHz to 1.066 GHz in 2 1/2 years. The processor speeds have gone from slightly under 3 GHz to 3.8 GHz, and back down to 3.4 GHz during that same time period.

I do think I put the "before the great brick wall was hit" qualifier in my statement? I was talking about the long run.

Quote:

So, bus speeds have gone up vastly faster than chip speeds have. The opposite of what you said earlier.

Up to and including the 486DX, processor and FSB were at 1:1. Then came the DX2, DX4, the Socket 7 processors, whatnot. With each generation, the FSB multiplier grew on average. It wasn't until recently that CPU frequency just kinda stopped scaling.


Quote:

So let's see what the situation really is.

Intel has a bus that is bidirectional, but can only pass a signal one way at a time. But when the signal does pass through, it does at the full 1.066GHZ speed on their newest bus.

No, it does not. Intel's FSB is quad-pumped; the actual clock rate is 266 MHz.


Quote:

Apple has a bus that at the present time runs at 1.25GHz, down from 1.35GHz on the dual 2.7.

The G5's EI is double-pumped, so the actual clock rate is half of what you state.


Quote:

BUT, this bus consists of two seperate bus's. Each one is unidirectional. And each one operates at half the speed of the overall bus speed. The Apple bus at 1.25GHz can pass more information over the bus at one time than Intel's 1.066GHz bus can.

But, each bus is only running at 625MHz, much slower than the 1.066GHZ that the traffic runs at in Intel's bus in either direction.

If Apple went to a 2GHz bus for the dual core 2GHZ machine, each unidirectional bus would only be running at 1GHz. Slightly slower than Intel's.

You're really confusing something here. "Operates at half the speed" does not mean "each direction operates at half the clock rate" (that doesn't even make sense). A "2 GHz bus" runs at, well, 2 GHz. Not "1 GHz per direction" or some such. The thing that's halved per-direction is the bus's width.

The math for the current incarnation of the G5's FSB goes like this:

You have two unidirectional 32-bit buses, clocked at 4:1 but double-pumped, so it's effectively 2:1. That gives you about 750 MHz x 2 (DDR) x 32 bits/direction x 2 directions = 12 GB/s aggregate throughput.

Intel's current FSB is a bidirectional 64-bit bus that runs at 266 MHz. It's quad-pumped, so the effective clock rate is 1066 MHz. Thus, you get 266 MHz x 4 (QDR) x 64 bits = 8.5 GB/s aggregate throughput.
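Those two peak figures can be sanity-checked with a little arithmetic; here's a quick Python sketch using the clock rates, pump factors and widths quoted above (theoretical peaks only):

```python
# Peak bus throughput = actual clock x transfers-per-clock x bytes-per-transfer.
def peak_gbps(actual_clock_mhz: float, pump: int, width_bits: int) -> float:
    return actual_clock_mhz * 1e6 * pump * (width_bits / 8) / 1e9

# G5 FSB: two unidirectional 32-bit lanes, each double-pumped at 750 MHz actual.
g5 = peak_gbps(750, 2, 32) * 2      # x2 for the two directions
# Intel FSB: one bidirectional 64-bit bus, quad-pumped at 266 MHz actual.
intel = peak_gbps(266, 4, 64)

print(g5)     # 12.0 GB/s aggregate
print(intel)  # ~8.5 GB/s aggregate
```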


Quote:

If Apple used it on the dual core 2.3GHz, it would be running at 1.15GHz, slightly faster than Intel's.

If they ran it at 2.5 GHz on the quad, each side of the bus would be running at 1.25GHz. That's just under 200MHz faster than Intel's.

You should really drop the whole "each side runs at clock rate x, so the whole bus runs at twice that clock rate" argument. It's the throughput that doubles, not the clock rate. If one lane runs at 1.15GHz, then two such lanes still run at the same clock speed, the whole bus is just wider now.


Quote:

Sorry, but PCI boards are 5 volt, and PCI X boards are 3.3 volt.

If a card supports both, then that card can run in either because the card senses which voltage the bus is using.

This is getting fairly annoying. The PCI spec does allow for 3.3V and 5V slots, and 3.3V, 5V and "universal" cards. Go ask Google, Wikipedia or whatever other source you like.


Quote:

I quess it's sad to know that all of the G5 PM's have been running without graphics cards. We should all run out and tell ATI and Nvidia, so that they can make us some. Here, you are just guessing. I can't imagine how you came up with this one. We are talking 8x AGP in either PCI or PCI X. Same thing electrically.

AGP in PCI/PCI-X? What are you even talking about? So, for you, AGP is just a different physical slot form factor and otherwise just PCI(-X)? If that really is what you're suggesting, I suggest you consult any kind of source you'd like, and you'll see that AGP uses different voltages, uses different signal speeds, doesn't have muxed A/D lines, etc.

Also, make sure you take a good look at the Apple hardware tech docs -- the AGP port is attached directly to U3 (i.e. the north bridge), whereas PCI-X is provided by a HT-to-PCI-X-Tunnel that's attached to the HyperTransport port. There isn't even a direct connection between the two.


Quote:
Well, that's what I was saying. You were saying that it was LESS expensive to implement Express.

I'm saying a PCIe x1 slot is cheaper to implement than a PCI-X slot.


Quote:

That would be a baaddd idea. Keeping ISA, and now PCI on new machines just holds developement back. It was the ISA boards, for the most part that prevented Plug-n-Play from working.

a) I wasn't advocating it, I was just saying board makers wouldn't necessarily have been "stuck" with PCI-X and PCI-X alone.

b) From the software's point of view, PCI-X and PCIe are both just "really fast PCI", so the ISA analogy is flawed.

In any case, this is getting rather tiresome, and I'm about to head off & enjoy my weekend. Guess we can just agree to disagree...
post #274 of 452
I give up as well. Every time I say something, you wander off into another country entirely.

This has mostly been useless, we are obviously talking past each other.

I was just going to bed myself.

Have a nice weekend.
post #275 of 452
Quote:
Originally posted by strobe
Meh, at some point you would have to release your work at which point Apple would change their anti=piracy strategy and we would all be back to square 1.

There's really no point in delaying releases of such hacks. All you do is delay how long people will be able to use the current versions. The next version will always be incompatible and we start the whole process over again.

PSP ROM 2.0 anyone?



EXACTLY!

Apple, of course, knows this (i. e. "release, revise, repeat as necessary"). The hackerz think they're playing their own game, when in fact they're playing right into Apple's game plan! The OSX86 hackerz are really in fact just iToolz. You know, the hackerz should really be doing this on some kinda darknet; maybe they are? But, by the time Apple releases production MacTels, my gut tells me that the hackerz will need to take a soldering gun to a generic PC motherboard to get their OS'z, appz, and warez to work. Yeah, they may get things to work, but will it ever be mainstream? That's my whole point, and I think it will be Apple's also.

In fact, what's to stop Intel from making "slightly tweaked" versions of their x86 CPU's? Intel's making literally hundreds now. Devote a few thousand transistors on these Apple-specific CPU's that REQUIRE an Apple custom IC (they would of course be working closely together). Apple has a sole supplier of said custom IC's (top secret, and with severe penalties for any contract violations), and it's GAME OVER!

And to fuel this futile hackerz discussion further, the SW dev cycle and Apple's HW/SW cycle will go hand-in-hand. Introduce low end HW (i. e. iBook) WITH low end native SW. See what happens. Revise said HW/SW (Mac mini released also), release, repeat as necessary. Release PB several months later with this learning curve incorporated and with a FEW high end apps. See what happens. Several months later, release PM's (with Leopard?), revised high end apps (and perhaps a few more), and see what happens. Now at this stage major 3rd parties begin to release their high end SW. So by 2007 we'll have secure HW and SW and the transition hits critical mass, and in 2008 Microsoft files for Chapter 11 .

In fact, I'm feeling so good about this scenario (except for the very last part) that instead of getting that quad PM, I think I'll invest in AAPL. After all, I'd rather make mo money than that quad PM will ever give me!

Every eye fixed itself upon him; with parted lips and bated breath the audience hung upon his words, taking no note of time, rapt in the ghastly fascinations of the tale. NOT!
Reply
post #276 of 452
Quote:
Originally posted by franksargent
...But, by the time Apple releases production MacTels, my gut tells me that the hackerz will need to take a soldering gun to a generic PC motherboard to get their OS'z, appz, and warez to work. Yeah, they may get things to work, but will it ever be mainstream?...

Wouldn't be the first time it has been done. I remember that you could buy boards to turn an Amiga into a Mac, but you had to provide the ROMs to make it work. With computers at a much lower cost, and refurbs and used computers more prevalent, I could see a day when some company could offer an "altered" motherboard with Mac ROMs (or whatever) presoldered to it, or even make a "card" that holds them. If Apple is slow to adopt faster chips or leaves a product sector blank, then I could definitely see this happening, and upgrade manufacturers have found some very inventive ways to get new business from "non-upgradable" Macs in the past.
post #277 of 452
Quote:
Originally posted by JCG
Wouldn't be the first time it has been done. I remember that you could buy boards to turn an Amiga into a Mac, but you had to provide the ROMs to make it work. With computers at a much lower cost, and refurbs and used computers more prevalent, I could see a day when some company could offer an "altered" motherboard with Mac ROMs (or whatever) presoldered to it, or even make a "card" that holds them. If Apple is slow to adopt faster chips or leaves a product sector blank, then I could definitely see this happening, and upgrade manufacturers have found some very inventive ways to get new business from "non-upgradable" Macs in the past.



Ain't going to happen! Just because something has happened in the past doesn't automatically imply that things will be the same going forward. This is (figuratively) just like starting over, with all of the previous history KNOWN! Now maybe this could happen in, say, the PRC, where this kind of thing is SOP. But Apple legal would be so all over this kind of scenario that it'll never happen here (or in much of the "western" world, for that matter). You know, "the rule of law."

Every eye fixed itself upon him; with parted lips and bated breath the audience hung upon his words, taking no note of time, rapt in the ghastly fascinations of the tale. NOT!
Reply
post #278 of 452
Quote:
Originally posted by RazzFazz
AGP in PCI/PCI-X? What are you even talking about? ...
In any case, this is getting rather tiresome, and I'm about to head off & enjoy my weekend. Guess we can just agree to disagree...

Don't worry, RazzFazz, you're not the first one who's given up arguing with melgross.
post #279 of 452
Quote:
Originally posted by bikertwin
Don't worry, RazzFazz, you're not the first one who's given up arguing with melgross.

Melgross is relentless. He's a messageboard terminator T3000 liquid poly superbit alloy. Better bring your hard hat.

I'm really hoping that an Intel Mac mini is announced. For once in my life I'm willing to buy a 1st gen product. Hell all I need is school stuff. Then I'll be looking at a Powerbook or Powermac next. Cool stuff.
He's a mod so he has a few extra vBulletin privileges. That doesn't mean he should stop posting or should start acting like Digital Jesus.
- SolipsismX
Reply
post #280 of 452
I bought a 1st gen G4 desktop the moment I saw Steve Jobs display it on a pedestal, like some kind of shining crystal. I got the pre-downgrade price on the 450 MHz model. Didn't regret it one bit.

Apple will have less wiggle room for 1st gen disasters given the architecture is basically Intel's. How could they screw that up?