Apple unveils new Mac Pro desktop with up to 12 processing cores


Comments

  • Reply 181 of 210
    hiro Posts: 2,663 member
    Quote:
    Originally Posted by gariba View Post


    no nVidia cards no CUDA GPU processing...

    Bad for imaging...



    Use cross-card OpenCL and avoid being good only on Nvidia; you get ATi for free too.



    You wouldn't believe some of the public beatings the Nvidia guys took over CUDA this past week either. They didn't say they were changing it, but the uncomfortable, mumbled-while-looking-at-their-shoes answer -- something along the lines of ~the CUDA architecture isn't the only way to get more out of our GPU, we are looking at garble/mumble~ -- said CUDA has some issues and Nvidia recognizes it.



    One thing I took away from that talk: nobody was willing to bet on exactly what low-level GPU programming will look like five years from now, but abstraction languages like OpenCL are where the mass of programmers will end up, and whatever low level results will end up supporting the higher-level APIs.
  • Reply 182 of 210
    hiro Posts: 2,663 member
    Quote:
    Originally Posted by DocNo42 View Post


    Unless you use a drive like OWC Mercury Extreme which does the garbage collection internally by utilizing some internal extra capacity, negating the need for hacks such as TRIM.



    http://forums.appleinsider.com/showp...8&postcount=10



    All current SSDs use over-specification; the chips themselves have it built in at the silicon level, and it only helps until the extra blocks are written to. Those blocks also act as fallbacks when blocks die: the controller blacklists the bad block and puts one of the extras into the normal rotation -- much the same as rotating HDs do.



    Those processes have nothing to do with garbage collection, despite what the post you link to says. GC can be done using just the few MB of RAM cache on the controller chip -- no extra hidden flash necessary at all. I do agree that these smarter controllers eliminate the need for TRIM or software-based GC.
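A minimal sketch (my own toy model, not any real controller's firmware) of the point above: garbage collection only needs enough RAM to stage the live pages of one block. The live pages are copied to a RAM buffer, the block is erased, and the live pages are written back -- no hidden spare flash involved.

```python
def collect_block(block):
    """Clean one erase block using only a small RAM buffer.

    `block` is a list of page payloads, with None marking stale pages.
    Returns the cleaned block and the number of pages reclaimed."""
    ram_buffer = [page for page in block if page is not None]  # stage live pages in RAM
    reclaimed = len(block) - len(ram_buffer)
    # The physical erase happens here; afterwards the staged pages are
    # written back and the reclaimed pages are free for new writes.
    cleaned = ram_buffer + [None] * reclaimed
    return cleaned, reclaimed

block = ["a", None, "c", None]            # two live pages, two stale
cleaned, freed = collect_block(block)
print(cleaned, freed)                      # ['a', 'c', None, None] 2
```

Two live pages fit easily in a controller's RAM cache, which is the whole argument: staging space, not spare flash, is what in-place cleaning requires.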
  • Reply 183 of 210
    hiro Posts: 2,663 member
    Quote:
    Originally Posted by melgross View Post


    I read the article, and it's clearly wrong. OS X needs trim just as much as Windows and Linux. There's nothing in OS X that gets around the problem, which is why Apple is working on trim now.



    Nothing in the OS, but the controllers are getting smarter about it, and hiding that from the OS altogether. SandForce calls their implementation Recycler, and it is part of the firmware, not in an OS driver.
  • Reply 184 of 210
    docno42 Posts: 3,755 member
    Quote:
    Originally Posted by Hiro View Post


    All current SSDs use over-specification; the chips themselves have it built in at the silicon level, and it only helps until the extra blocks are written to. Those blocks also act as fallbacks when blocks die: the controller blacklists the bad block and puts one of the extras into the normal rotation -- much the same as rotating HDs do.



    No, there is a definite difference between some drives and others. It costs money to put in enough extra capacity to guarantee that a significant number of totally empty flash cells is always available for fast performance.



    Quote:

    Those processes have nothing to do with Garbage Collection



    You're wrong -- they do garbage collection. The OWC Mercury Extreme drive I am referring to is absolutely doing garbage collection with the extra space it has. Otherwise, why would they go to the expense of including storage that is otherwise useless? Are you saying they are so pessimistic about the flash they use that they need 7%-23% more "just in case"? That makes no sense!



    Quote:

    GC can be done just using the few MB of RAM cache on the controller chip, no extra hidden flash necessary at all. I do agree that these smarter controllers eliminate the need for TRIM or software based GC.



    If you don't have extra capacity and your drive fills, you're not going to sustain speed. Where are the empty free cells going to be? How are you going to ensure you always have totally empty, free cells? Garbage collection with a few megabytes won't get you much sustained speed. The links I provide below demonstrate this pretty effectively.



    It's pretty simple. TRIM exists so that manufacturers don't have to add extra flash that isn't usable as capacity. It's a cost-saving measure and a pretty piss-poor compromise for performance. The amount of over-subscription you are talking about for error correction is not enough to allow maximum performance; it's trivial compared to the extra capacity in drives like the OWC Mercury Extreme.



    You have to specifically get drives that are engineered to do internal garbage collection, and that have the extra over-subscription capacity to do it effectively. Otherwise, on a Mac, you will hit a performance wall as your free cells disappear.
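A toy cost model (my own simplification, with made-up time units, not measured drive behavior) of the "performance wall" argument above: a write into a pre-erased spare block is cheap, while a write that must first clean a block in place pays an erase penalty. Once the spare pool is exhausted, every write pays.

```python
ERASE_COST = 10   # arbitrary time units for an on-the-fly block erase (assumed)
WRITE_COST = 1    # time units for writing into an already-erased block (assumed)

def total_write_time(num_writes, spare_blocks):
    """Total cost of `num_writes` block writes given a pool of pre-erased spares."""
    time = 0
    for _ in range(num_writes):
        if spare_blocks > 0:
            spare_blocks -= 1                 # cheap path: a spare is already erased
            time += WRITE_COST
        else:
            time += ERASE_COST + WRITE_COST   # wall: must erase before writing
    return time

print(total_write_time(100, spare_blocks=100))  # 100: never hits the wall
print(total_write_time(100, spare_blocks=10))   # 1000: 10 cheap writes, then 90 slow ones
```

The ratio of the two results is the whole dispute in miniature: background GC that keeps the spare pool topped up keeps writes on the cheap path.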



    But don't take my word for it -- here is someone who specializes in high-end Photoshop work and has done extensive testing with SSDs:



    http://macperformanceguide.com/SSD-RealWorld.html

    http://macperformanceguide.com/Revie...y_Extreme.html



    Bottom line: even Apple's SSDs are a bad deal at this point in time. You need to make sure you get an SSD that will maintain peak performance -- at least until Mac OS X gains TRIM support. And right now, for me, the only SSD I'll consider is the OWC Mercury Extreme. This is based on my personal experience owning both flavors of SSD and seeing how they run over time.



    And even if Mac OS X does get TRIM, I'll probably stick with SSDs like OWC's because they work with any OS in any circumstance -- no OS-specific dependencies like you see with hacks such as TRIM. And even with TRIM, as your drive fills, your performance is going to go down as the controller on the drive struggles to juggle enough data around to keep totally open cells available to write to. If you can't fill your SSD up and maintain peak performance, the argument over TRIM vs. over-subscription becomes kind of moot.
  • Reply 185 of 210
    docno42 Posts: 3,755 member
    Quote:
    Originally Posted by melgross View Post


    That does not solve the problem, it just makes it less of a problem.



    It does exactly the same thing TRIM does. It's no more or less of a problem than a drive using TRIM. Instead of relying on the OS to tell it which blocks hold data that is OK to destroy, the controller ensures there are plenty of free cells available by keeping enough extra capacity to use for consolidation -- even if the disk is full of data.



    Quote:

    And that method that Sandforce uses is considered to be less safe, as it's easier to lose data.



    It shuffles data around exactly the same way TRIM does, but without relying on the OS to tell it, via TRIM, which blocks to destroy.



    Actually, you are less likely to lose data than with TRIM. Since the drive isn't destroying any data at all, just shuffling it around, you will be able to un-erase a file just like on a magnetic disk. Good luck doing that if you are using TRIM.



    Again, don't just take my word for it: http://techgage.com/article/too_trim...s_impossible/3
  • Reply 186 of 210
    docno42 Posts: 3,755 member
    Look, it really is pretty simple. Let me try an analogy using something that hopefully everyone has played with at one time or another: a sliding number puzzle.



    If you have a sliding number puzzle that is 100% full of tiles and all the numbers are jumbled up, you need a hole to give you space to un-jumble the numbers.



    You can do that by getting rid of a number you no longer need (TRIM) or by making the overall puzzle bigger (over subscription like the OWC Mercury Extreme drives).



    Either method produces a hole so you can shuffle the tiles to order the numbers.



    Not a direct analogy, but hopefully close enough to give people a different way to think about what is going on at a low level in these drives -- below the file system and even below blocks, down at the flash memory cell level.
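The sliding-puzzle analogy above, reduced to code (purely illustrative, nothing drive-specific): with every slot occupied nothing can move; freeing a slot (TRIM discarding a tile) or adding a slot (over-subscription enlarging the board) both produce the hole that makes shuffling possible.

```python
def can_shuffle(slots):
    """A tile can move only if at least one slot is empty (None)."""
    return None in slots

full = [3, 1, 2]           # jammed: no hole anywhere, no move possible
trimmed = [3, None, 2]     # "TRIM": a tile we no longer need was discarded
bigger = [3, 1, 2, None]   # "over-subscription": an extra slot was added

print(can_shuffle(full), can_shuffle(trimmed), can_shuffle(bigger))
# False True True
```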
  • Reply 187 of 210
    docno42 Posts: 3,755 member
    Quote:
    Originally Posted by melgross View Post


    He said "I would milk the Mac for all it's worth, and then go on to the next big thing".



    It sure seems as though that's exactly what he's doing!



    How exactly is Apple "Milking the Macintosh"?



    Sure, they haven't kicked out an update to the Mac Pro in a while, but seriously -- other than graphics cards (which you don't need a whole new computer for, and which Apple is still bad at supplying), what is there to keep updating that delivers earth-shattering performance differences?



    Routine and steady updates have been issued -- but what more do you really want from them? Desktop and laptop computers have been around for decades and are pretty stable at this point. Of course, compared to a new platform with lots of innovation like iOS, the traditional Mac is going to look pretty boring.



    But to construe that into "Apple is milking the Macintosh" is a little silly, IMNSHO.



    And Apple isn't going back to clones any time soon. The whole "secret sauce" to Apple is user experience. And if you don't control 100% of the product chain, you can't control the user experience.



    It's against their DNA. It's not gonna happen. I listened to Alex Lindsey on TWiT or MacBreak (I forget which) go on and on about this too -- and although I really like Alex, he totally glossed over this core principle to get where HE wanted to be. It's not where Apple wants to be, now or ever. You and Alex will be waiting a LONG time...
  • Reply 188 of 210
    melgross Posts: 33,510 member
    Quote:
    Originally Posted by Hiro View Post


    Nothing in the OS, but the controllers are getting smarter about it, and hiding that from the OS altogether. SandForce calls their implementation Recycler, and it is part of the firmware, not in an OS driver.



    That's true, but it's interesting that they have two basic versions of their controller: one for commercial use and one for, I guess we can say, home use. The industrial one requires as much as 25% more memory because of the unreliability of these controllers -- that is, they're moving data around on the drive the way de-fraggers do. There are other potential problems that make their controllers more likely to lose data; that's why so much extra memory (and expense). This doesn't mean that a Sandforce drive is LIKELY to lose data, just that it's more likely than other, more conventional controllers.



    I've seen no evidence that over the long term, these drives won't slow down, though not as much as regular SSDs.
  • Reply 189 of 210
    docno42 Posts: 3,755 member
    Quote:
    Originally Posted by melgross View Post


    I've seen no evidence that over the long term, these drives won't slow down, though not as much as regular SSDs.



    Here's some evidence for you:



    http://macperformanceguide.com/SSD-R...evereDuty.html



    And my experience matches his testing.





    EDIT: I glossed over it in the review, but they are using the SandForce controller. I thought I read another article that linked to a controller from another manufacturer (Samsung?) that does the same thing, but I can't find it now -- drat!
  • Reply 190 of 210
    docno42 Posts: 3,755 member
    Quote:
    Originally Posted by melgross View Post


    This doesn't mean that a Sandforce drive is LIKELY to lose data, just that it's more likely than other, more conventional controllers.



    Wait, I thought all drives shuffled data to support wear leveling?



    If shuffling data is bad, aren't they all just as likely to lose data?



    I must be missing something -- what's special about the sandforce controller that makes it "more likely" to lose data?
  • Reply 191 of 210
    melgross Posts: 33,510 member
    Quote:
    Originally Posted by DocNo42 View Post


    It does exactly the same thing TRIM does. It's no more or less of a problem than a drive using TRIM. Instead of relying on the OS to tell it which blocks hold data that is OK to destroy, the controller ensures there are plenty of free cells available by keeping enough extra capacity to use for consolidation -- even if the disk is full of data.



    Heh! Even your own explanation shows that it's NOT doing the same thing. Sandforce controllers are considered risky for data loss over time, which is why they use the extra memory. They do not do what TRIM does.



    One thing they do is compress the data. That's right: they don't pass all the bits to the flash; the data is compressed first. This whole thing is totally different from TRIM. It's one of the things that gives them improved specs.



    Quote:

    It shuffles data around exactly the same way TRIM does, but without relying on the OS to tell it, via TRIM, which blocks to destroy.



    Actually, you are less likely to lose data than with TRIM. Since the drive isn't destroying any data at all, just shuffling it around, you will be able to un-erase a file just like on a magnetic disk. Good luck doing that if you are using TRIM.



    Again, don't just take my word for it: http://techgage.com/article/too_trim...s_impossible/3



    TRIM doesn't compress data the way Sandforce drives do. If you consider that to be safer, good for you. But performance still falls -- just, as I said, not as much.



    Here's their own site. Remember that it's marketing speak, so they will make it look better than it really is. At the bottom, you can see their (ideal) chart:



    http://www.sandforce.com/index.php?id=146&parentId=34
  • Reply 192 of 210
    melgross Posts: 33,510 member
    Quote:
    Originally Posted by DocNo42 View Post


    How exactly is Apple "Milking the Macintosh"?



    Sure, they haven't kicked out an update to the Mac Pro in a while, but seriously -- other than graphics cards (which you don't need a whole new computer for, and which Apple is still bad at supplying), what is there to keep updating that delivers earth-shattering performance differences?



    Routine and steady updates have been issued -- but what more do you really want from them? Desktop and laptop computers have been around for decades and are pretty stable at this point. Of course, compared to a new platform with lots of innovation like iOS, the traditional Mac is going to look pretty boring.



    But to construe that into "Apple is milking the Macintosh" is a little silly, IMNSHO.



    And Apple isn't going back to clones any time soon. The whole "secret sauce" to Apple is user experience. And if you don't control 100% of the product chain, you can't control the user experience.



    It's against their DNA. It's not gonna happen. I listened to Alex Lindsey on TWiT or MacBreak (I forget which) go on and on about this too -- and although I really like Alex, he totally glossed over this core principle to get where HE wanted to be. It's not where Apple wants to be, now or ever. You and Alex will be waiting a LONG time...



    They're milking it by taking the profits from the Mac and putting them into other areas, which have now exceeded the sales of the computer division itself by a good margin. They could have come out with the less expensive computer systems so many people say they want, and increased sales of them by 100%, possibly a lot more. But that's not what they're doing. Computers are becoming a smaller part of the company as time goes on.



    This expression may be silly to you, but Jobs said it, I'm merely repeating his statement. So if you think he's silly, then go right ahead.



    Go send him an e-mail, and argue with him that he doesn't know how to run Apple, but that you do.



    Here's the exact quote, and the link to where it can be found:



    Quote:

    "If I were running Apple, I would milk the Macintosh for all it's worth -- and get busy on the next great thing. The PC wars are over. Done. Microsoft won a long time ago."

    -- Fortune, Feb. 19, 1996



    Read More http://www.wired.com/gadgets/mac/com...#ixzz0vIEwOcrp



    link to site:



    http://www.wired.com/gadgets/mac/com.../2006/03/70512



    Don't say what's in their DNA, because you have no idea.
  • Reply 193 of 210
    melgross Posts: 33,510 member
    Quote:
    Originally Posted by DocNo42 View Post


    Here's some evidence for you:



    http://macperformanceguide.com/SSD-R...evereDuty.html



    And my experience matches his testing.





    EDIT: I glossed over it in the review, but they are using the sandforce controller. I thought I read another article that linked to another controller from another manufacturer (Samsung?) that does the same thing but I can't find it now - drat!



    The Samsung controllers suck. They are some of the worst-performing controllers around, and Apple has been criticized for using them. But they must be reliable, because big business often specs them.
  • Reply 194 of 210
    melgross Posts: 33,510 member
    Quote:
    Originally Posted by DocNo42 View Post


    Wait, I thought all drives shuffled data to support wear leveling?



    If shuffling data is bad, aren't they all just as likely to lose data?



    I must be missing something -- what's special about the sandforce controller that makes it "more likely" to lose data?



    As I've been saying, it's the compression. While Sandforce likes to talk about the reliability of their controllers -- the number of lost bits over time and operations -- they don't say specifically what effect the compression will have.
  • Reply 195 of 210
    hiro Posts: 2,663 member
    Quote:
    Originally Posted by DocNo42 View Post


    Wait, I thought all drives shuffled data to support wear leveling?



    If shuffling data is bad, aren't they all just as likely to lose data?



    I must be missing something -- what's special about the sandforce controller that makes it "more likely" to lose data?



    Shuffling data is the last thing an SSD controller wants to do; that just uses up lifetime cycles. The controllers don't shuffle the data so much as they shuffle which blocks can be used at any point in time. So a block in use doesn't get shuffled just to wear-level; but when the block needs to have some sectors erased and is written out, it may or may not go back onto the assignable list, depending on where it sits on the wear list.



    I wouldn't say that auto GC is any more likely to lose data than TRIM. Either way you pull a partially deleted block off to DRAM somewhere, slick the old block and mark it writable, update the usage tables, and then write the logically "cleaned" block from DRAM back to an NVRAM block. With SandForce's GC that is all done in the controller; with TRIM the OS is calling the shots. Both ways suffer the same potential problems with power failures in the middle of the block-cleaning process, too.
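The shared mechanics described above can be sketched as one function (an illustrative toy of mine, not SandForce's or anyone's firmware): stage a block in DRAM minus its stale pages, erase, write back. The only difference between the two paths is who supplies the set of stale pages -- the OS via TRIM, or the controller's own usage tables.

```python
def clean_block(block, stale_pages):
    """Erase-and-rewrite one block, dropping the pages reported as stale.

    `stale_pages` is a set of page indices; its *source* is what differs
    between TRIM (OS-reported) and auto GC (controller bookkeeping)."""
    dram = [p for i, p in enumerate(block) if i not in stale_pages]  # stage live data
    # erase happens here; then the staged pages are written back
    return dram + [None] * (len(block) - len(dram))

block = ["f1", "f2", "f3", "f4"]
trim_result = clean_block(block, stale_pages={1, 3})  # OS said pages 1 and 3 are dead
gc_result = clean_block(block, stale_pages={1, 3})    # controller's tables said the same
print(trim_result == gc_result)                       # True: identical outcome
```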



    Ignore DocNo's technical ramblings, he is really off in left field.
  • Reply 196 of 210
    hiro Posts: 2,663 member
    Quote:
    Originally Posted by DocNo42 View Post


    No, there is a definite difference between some drives and others. It costs money to put in enough extra capacity to guarantee that a significant number of totally empty flash cells is always available for fast performance.



    No, you have no idea what you are talking about. SSD NVRAM is fabbed with over-specification built in -- it always has been; it isn't added after the fact. Different drive manufacturers just decide differently how to expose that over-specification. It isn't an extra cost, because you can't get NVRAM for SSDs without it.



    Quote:

    You're wrong -- they do garbage collection. The OWC Mercury Extreme drive I am referring to is absolutely doing garbage collection with the extra space it has. Otherwise, why would they go to the expense of including storage that is otherwise useless? Are you saying they are so pessimistic about the flash they use that they need 7%-23% more "just in case"? That makes no sense!



    Glad you aren't a project engineer then. You really are out of your element here.



    Quote:

    If you don't have extra capacity and your drive fills, you're not going to sustain speed. Where are the empty free cells going to be? How are you going to ensure you always have totally empty, free cells? Garbage collection with a few megabytes won't get you much sustained speed. The links I provide below demonstrate this pretty effectively.



    ZOMG!!! Ohes Noes!! But, but you're all wrong...



    Quote:

    It's pretty simple. TRIM exists so that manufacturers don't have to add extra flash that isn't usable as capacity. It's a cost-saving measure and a pretty piss-poor compromise for performance. The amount of over-subscription you are talking about for error correction is not enough to allow maximum performance; it's trivial compared to the extra capacity in drives like the OWC Mercury Extreme.



    You have to specifically get drives that are engineered to do internal garbage collection, and that have the extra over-subscription capacity to do it effectively. Otherwise, on a Mac, you will hit a performance wall as your free cells disappear.



    Over-subscription has absolutely NOTHING to do with GC. Period. Ever. It serves exactly two engineering purposes: 1) avoiding write-to-block cleaning during a write, because the controller can just select one of the extra unused blocks, and 2) bad-block replacement, which happens to be the primary reason. If you are relying on 1) and happen to have used all the extra unwritten-to blocks, you get the drive slowdowns, because the drive is now forced to write-to-block-clean on the fly.
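The two purposes just listed can be sketched as simple pool bookkeeping (class and method names are mine, chosen for illustration; real firmware obviously does far more):

```python
class SparePool:
    """Toy model of an SSD's over-subscribed spare blocks."""

    def __init__(self, spares):
        self.spares = list(spares)   # pre-erased, never-exposed blocks
        self.blacklist = []          # blocks retired after failing

    def redirect_write(self):
        """Purpose 1: hand out a pre-erased spare so a write needs no
        on-the-fly block cleaning. Returns None when the pool is exhausted,
        which is exactly when slowdowns begin."""
        return self.spares.pop() if self.spares else None

    def replace_bad_block(self, bad):
        """Purpose 2: blacklist a dying block and promote a spare into
        the normal rotation."""
        self.blacklist.append(bad)
        return self.spares.pop() if self.spares else None

pool = SparePool(["s1", "s2"])
print(pool.redirect_write())         # a fast write path, no cleaning needed
print(pool.replace_bad_block("b7"))  # "b7" retired, a spare takes its slot
print(pool.blacklist)                # ['b7']
```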



    Quote:

    But don't take my word for it - here is someone who specializes in high end photoshop work who has done extensive testing with SSDs:



    Trust me, I won't and advise everyone else to steer clear too.





    Reasonable tests, I have no quibble with them at all. They don't support your technical comments in the least though.



    Quote:

    Bottom line: even Apple's SSDs are a bad deal at this point in time. You need to make sure you get an SSD that will maintain peak performance -- at least until Mac OS X gains TRIM support. And right now, for me, the only SSD I'll consider is the OWC Mercury Extreme. This is based on my personal experience owning both flavors of SSD and seeing how they run over time.



    And even if Mac OS X does get TRIM, I'll probably stick with SSDs like OWC's because they work with any OS in any circumstance -- no OS-specific dependencies like you see with hacks such as TRIM. And even with TRIM, as your drive fills, your performance is going to go down as the controller on the drive struggles to juggle enough data around to keep totally open cells available to write to. If you can't fill your SSD up and maintain peak performance, the argument over TRIM vs. over-subscription becomes kind of moot.



    Even my old Crucial SSD, still running FW18xx with no auto GC or TRIM support, runs rings around a rotating drive on read performance; it makes a four-year-old MacBook Pro almost as fast as a 2009 version. It won't win any write races, but until I build something, write performance really doesn't mean anything. I suggest you get a clue before you try to tell the world how this stuff works and what to buy or not buy. Read performance is king unless you are doing some very specific write-throughput computations, and there SSDs generally don't have enough capacity anyway -- go use a good RAID. So even when SSD write performance degrades, the majority of users will still be amazed by how responsive it makes the machine feel.





    Quote:
    Originally Posted by DocNo42 View Post


    Look, it really is pretty simple. Let me try an analogy using something that hopefully everyone has played with at one time or another - a sliding number puzzle



    If you have a sliding number puzzle that is 100% full of tiles and all the numbers are jumbled up, you need a hole to give you space to un-jumble the numbers.



    You can do that by getting rid of a number you no longer need (TRIM) or by making the overall puzzle bigger (over subscription like the OWC Mercury Extreme drives).



    Either method produces a hole so you can shuffle the tiles to order the numbers.



    Not a direct analogy, but hopefully close enough to give people a different way to think about what is going on at a low level, below the file system and even blocks in these drives at the flash memory cell level.



    No. That's so broken an analogy as to be uncorrectable.
  • Reply 197 of 210
    hiro Posts: 2,663 member
    Quote:
    Originally Posted by melgross View Post


    Heh! Even your own explanation shows that it's NOT doing the same thing. Sandforce controllers are considered risky for data loss over time, which is why they use the extra memory. They do not do what TRIM does.



    One thing they do is to compress the data. That's right, they don't pass all the bits to the SSD. It's compressed first. This whole thing is totally different from trim. This is one of the things that gives improved specs.



    The over-specification and the data compression are independent, even though both help overall drive performance. SandForce uses compression so they consume fewer blocks per write, and so get longer life for the same amount of written data coming into the controller. That is completely orthogonal to using the over-specification to avoid write-to-block cleaning.
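The "fewer blocks per write" point can be illustrated with a quick sketch; zlib stands in here for whatever proprietary scheme SandForce actually uses, and the page size is a typical value I'm assuming, not a documented one:

```python
import zlib

PAGE_SIZE = 4096  # bytes per flash page (typical value, assumed)

def pages_needed(data):
    """Number of flash pages a payload consumes (ceiling division)."""
    return -(-len(data) // PAGE_SIZE)

# A compressible host write: repetitive data, as much real-world data is.
host_write = b"log line: all quiet on the western front\n" * 1000

raw_pages = pages_needed(host_write)
compressed_pages = pages_needed(zlib.compress(host_write))

print(raw_pages, compressed_pages)  # compressed form consumes far fewer pages
```

Fewer pages consumed per host write means fewer program/erase cycles burned, which is exactly the longevity argument: the flash wears against the compressed size, not the logical size.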



    I won't argue a whit that they need to prove long term their compression/decompression is perfectly safe.
  • Reply 198 of 210
    melgross Posts: 33,510 member
    Quote:
    Originally Posted by Hiro View Post


    The over specification and the data compression are independent even though both are used for overall drive performance. SandForce uses the compression so they can use fewer blocks per write and so have longer life for the same amount of written data coming into the controller. Completely orthogonal to using the over specification to avoid write-to-block cleaning.



    I didn't say they had anything to do with each other. But they are two methods they use that make their drives different from others.



    Quote:

    I won't argue a whit that they need to prove long term their compression/decompression is perfectly safe.



    No on-the-fly compression/decompression scheme is completely safe.
  • Reply 199 of 210
    hiro Posts: 2,663 member
    Quote:
    Originally Posted by melgross View Post


    I didn't say they had anything to do with each other. But they are two methods they use that makes their drives different from others.







    No on-the-fly compression/decompression scheme is completely safe.



    There are formal mathematical methods that can be used to verify the algorithm, and the result can then be cross-checked against the firmware commands to make sure they match. If that all checks out, the only other thing that could go wrong is the omnipresent cosmic-ray bit flip. And there, if it hits the firmware in the wrong spot, it could create holy havoc -- but even that is more likely to crash the firmware than to let it run incorrectly.



    So IF SandForce has done the formal-methods analysis and mapped it against the firmware, then they should be VERY safe, where VERY means at least as safe as any other computation on the machine. Controllers like this are a far smaller verification problem than a CPU or a run-of-the-mill program, because of their limited code size and restricted functionality.



    I agree it is not completely safe, but no computation in a computer is, because of external factors and unknown bugs. So the operative question becomes: do you think they shipped the controllers without doing the above analysis? Or that they are knowingly shipping incorrect firmware?
  • Reply 200 of 210
    Quote:
    Originally Posted by melgross View Post


    That's an interesting question. Dare we even think they will ever get to 20%, or even close?



    There's one paraphrased quote I like to use from Jobs before he came back to the company, when he was asked what he would do about Apple if he did come back.



    He said "I would milk the Mac for all it's worth, and then go on to the next big thing".



    It sure seems as though that's exactly what he's doing!



    There's something that's heresy, but I believe it actually has a chance now that Apple is such a different company, is doing so well, and is the most valuable brand in the world. If the Mac lines fall below 20% of the company's sales, and their average margins -- particularly on the consumer lines -- are below those of the rest of Apple's main lines, Apple may be interested in doing something long thought impossible: allowing clones again.



    Yes, as I said, it's heresy. But if Apple did it right, it could work. If they allowed just a small number of companies to do this -- say HP, maybe Dell, perhaps one other -- and this time spec'ed carefully what could be done, Apple could do very well.



    Unlike before, when Apple was floundering and clones were thought to be required for businesses to take them up again -- a gamble that wasn't thought out carefully enough -- Apple is now THE hottest brand on the planet. If Apple came up with reference designs these companies were required to follow, and Apple had approval rights, it could work. These companies could manufacture OS X clones that fit into Apple's line: less expensive machines that Apple doesn't want to make, gamer machines perhaps too. If Apple lost 25% of its computer sales, that would be at most 5% of total sales. Not much, and made up for by other faster-selling items.



    But, this could double, and possibly eventually even triple Apple's market share. Who knows, it could go even higher. Then Apple would be selling tens of millions of OS X licenses, and making 80% profit margins off that. It would add to their bottom line greatly. Their overall margins could come to 50%.



    This would solve a lot of problems, and it would let companies add machines from big vendors they know and trust while moving away from MS, which seems to be happening in a small way now. Three or four years ago, 2% of large businesses had 250 or more Macs. Last year, that number rose to 7%. If HP or Dell sold them, it could rise more quickly still.



    If Apple eventually sold 50 million licenses a year at $60 a license -- not far off what MS gets on average, though a bit less -- that would total $3 billion a year, almost all profit. That's far more profit than they could get from selling that much more in computers. Right now, they have to sell $20 billion worth of computers to make $3 billion in profit. Would they trade one for the other? They might.



    What do you think would be the advantage or disadvantage [to Apple] of spinning off the design and manufacture of Macs (and only Macs) to a wholly owned subsidiary? Would there be advantages or disadvantages to Mac buyers if Apple did so?