Apple files hint at re-engineered iMac and Mac Pro models, potentially without optical drives


Comments

  • Reply 161 of 257
    nht Posts: 4,522 member

    Quote:

    Originally Posted by sippincider View Post


    They should’ve put “TV40” in the .plist, just to watch people go crazy.



     


    That would have been hilarious.  TV40, TV55 and TV70.

  • Reply 162 of 257
    vandil Posts: 187 member


    If Apple phases out optical drives for the rest of their hardware lineup, that will certainly shake up the industry.  Of course, people who need optical drives can grab Apple's USB SuperDrive or use one of a wide variety of third-party optical drives for DVD and Blu-ray reading and writing.  It's one heck of a "dongle" to add to a desktop, but it is what it is.


     


    Perhaps Apple will release first-party, user-friendly tools for "converting your DVDs to iTunes" and "converting your data CD/DVDs to disc images" allowing you to rip your optical media into MP4s and ISO/DMGs.


     


    Perhaps Apple will extend the "Remote Disc" functionality seen in MacBook Airs to any PC/Mac running a host application to share its drive.


     


    People peed themselves when Apple got rid of the floppy.  We survived.  There were even third-party USB floppy drives as a temporary solution (a fun "dongle").


     


    We also survived Apple's killing of:


    the ADB interface. (fix: buy a dongle)


    Firewire 400 (and realistically 800, too). (fix: buy a dongle)


    PCMCIA and ExpressCard slots on laptops. (fix: use the SD card slot or a USB-based card reader)


    VGA-out and DVI-out (fix: buy a dongle)


    Modems. (fix: buy a dongle)


    Ethernet on select models (fix: buy a dongle)


     


    We'll survive.

  • Reply 163 of 257
    hmm Posts: 3,405 member

    Quote:

    Originally Posted by Tallest Skil View Post


     


    Really? I love the idea of plug-and-speed computers. Don't have to get rid of your old one when you update, just set it on top of the new one and connect 'em both for an even faster machine.


     


     



    There aren't protocols in place. Find me even one HPC solution that relies on a point-to-point PCI bridge system. Beyond that, this wouldn't solve any issues of obsolescence. Consider that Apple typically drops current OS support for any given machine within 3-5 years, and their vintage policy dictates that all hardware service options cease between 5 and 7 years. On Thunderbolt itself, the cabling and chips used will change over time. There are too many ways this could break in an OS X setting. If they wanted to set something up with a true clustering solution, there are better ways of doing it than Thunderbolt. Given that Apple pronounced XGrid completely dead and did not offer post-mortem options for licensing the source code, I would suggest they've shown little interest in this market. The problem I see here at times is the lack of reconciliation between what can be imagined and what would make a practical working solution.  For anything of that sort, it would be imperative that it's done well. Whenever a project is floundering, it gets little attention from Apple. On that basis I don't think it's a good idea unless they can transform it into something highly profitable in Apple terms. A half-supported solution like this would just be bad.


     


    Quote:

    Originally Posted by vandil View Post


    If Apple phases out optical drives for the rest of their hardware lineup, that will certainly shake up the industry.  Of course, people who need optical drives can grab Apple's USB SuperDrive or use one of a wide variety of third-party optical drives for DVD and Blu-ray reading and writing.  It's one heck of a "dongle" to add to a desktop, but it is what it is.


     


    Perhaps Apple will release first-party, user-friendly tools for "converting your DVDs to iTunes" and "converting your data CD/DVDs to disc images" allowing you to rip your optical media into MP4s and ISO/DMGs.


     


    Perhaps Apple will extend the "Remote Disc" functionality seen in MacBook Airs to any PC/Mac running a host application to share its drive.


     


    People peed themselves when Apple got rid of the floppy.  We survived.  There were even third-party USB floppy drives as a temporary solution (a fun "dongle").


     


    We also survived Apple's killing of:


    the ADB interface. (fix: buy a dongle)


    Firewire 400 (and realistically 800, too). (fix: buy a dongle)


    PCMCIA and ExpressCard slots on laptops. (fix: use the SD card slot or a USB-based card reader)


    VGA-out and DVI-out (fix: buy a dongle)


    Modems. (fix: buy a dongle)


    Ethernet on select models (fix: buy a dongle)


     


    We'll survive.



    You're being overly dramatic. Dongles aren't a good solution; they're a stop-gap at best. ExpressCard slots can actually provide faster transfer rates than TB at lower cost.  In terms of shaking up the industry, Apple removing an optical drive will not have any real effect on the others unless they're looking for an excuse to trim their costs. The silliest of these is the Ethernet dongle concept. You really should look up the purpose and features of Ethernet.

  • Reply 164 of 257
    Marvin Posts: 15,324 moderator
    nht wrote:
    The Mac Pro has never been a personal supercomputer, so how would it be one again?


    [VIDEO]


    http://www.sfgate.com/business/article/Apple-Unveils-Personal-Supercomputer-2909963.php
    nht wrote:
    Fine although MICs may never go anywhere outside of...supercomputers/HPCs.

    "The first Intel MIC products target segments and applications that use highly parallel processing, including:

    High Performance Computing (HPC)
    Workstation
    Data Center"

    http://www.intel.com/content/www/us/en/architecture-and-technology/many-integrated-core/intel-many-integrated-core-architecture.html
    nht wrote:
    No. I could elaborate but we've been through this ad nauseum.

    I know, and I still think it's the way forward. Once you let go of the idea that in order to be a professional you have to own a 40 lb workstation, it should be clear.
    nht wrote:
    Nifty but could be done with mini servers too for an even smaller footprint.

    Yeah, but they wouldn't have 2 TFLOPS of compute power. Performance per dollar would be better with the Mac Pro. Right now, they are about even.
    nht wrote:
    Which nobody really gives a shit about other than some desire for it to be rack mountable without using a hacksaw.

    The size reduction isn't necessarily a motivational element, merely a positive by-product.
    nht wrote:
    most importantly, not **** up the Mac Pro for users that actually need a workstation.

    How does increasing performance by an order of magnitude, reducing the price, making parallel computing simple and doubling the available expansion ports **** up the Mac Pro? Ah, you mean because it's not exactly the same design as last year's model with a fractional improvement in performance. To me, that would be ****ing up the Mac Pro. If you want the Mac Pro to die out, fine: keep hoping for that same design and a 40% performance jump after 3 years.
  • Reply 165 of 257
    Marvin Posts: 15,324 moderator
    hmm wrote:
    There aren't protocols in place. Find me even one HPC solution that relies on a point to point PCI bridge system.

    "Intel® MIC Architecture software environment includes a highly functional, Linux* OS running on the co-processor with:
    – A familiar interactive shell
    – IP Addressability [headless node]
    – A local file system with subdirectories, file reads, writes, etc
    – standard i/o including printf
    – Virtual memory management
    – Process, thread management & scheduling
    – Interrupt and exception handling
    – Semaphores, mutexes, etc...

    What does this mean?
    – A large majority of existing code even with OS oriented calls like
    fork() can port with a simple recompile
    – Intel MIC Architecture natively supports parallel coding models like Intel® CilkTM Plus, Intel® Threading Building Blocks, pThreads*, OpenMP*"

    http://www.olcf.ornl.gov/wp-content/training/electronic-structure-2012/ORNL_Elec_Struct_WS_02062012.pdf

    Apple can allow OS X to be used in the same way over PCI. Even just addressing the MIC would be enough.
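    The "IP-addressable headless node" model in that Intel slide can be sketched in miniature. This is a toy Python sketch, not anything Apple or Intel ships: the one-task JSON "protocol" and the loopback "node" are invented for illustration, but a real MIC card does expose a Linux host you would reach over IP in much the same way.

```python
import json
import socket
import threading

def node_server(sock):
    """Toy 'coprocessor node': accepts one task, runs it, replies."""
    conn, _ = sock.accept()
    with conn:
        task = json.loads(conn.recv(4096).decode())
        # The node does the heavy lifting -- here, just a sum.
        result = sum(task["data"])
        conn.sendall(json.dumps({"result": result}).encode())

def offload(host, port, data):
    """Host side: ship work to the node over IP and wait for the answer."""
    with socket.create_connection((host, port)) as conn:
        conn.sendall(json.dumps({"data": data}).encode())
        return json.loads(conn.recv(4096).decode())["result"]

# Stand the toy node up on a loopback port and offload one task to it.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
t = threading.Thread(target=node_server, args=(srv,))
t.start()
print(offload("127.0.0.1", port, [1, 2, 3, 4]))  # -> 10
t.join()
srv.close()
```

    The point of the sketch is only that "addressing the MIC" needs no new bus protocol from the host's perspective: if the card is a headless IP node, offload looks like ordinary networking.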
    hmm wrote:
    Their vintage policy dictates that all hardware service options cease between 5 and 7 years as per their vintage policy. On thunderbolt itself, the cabling and chips used will change over time. There are too many ways this could break in an OSX setting.

    Ah, the old 'I want to buy one Mac Pro in my lifetime' argument - compelling for Apple to take interest in, I'm sure. Is it really too much to ask that you buy a new computer every 7 years? Anyway, this is just more fearmongering. FireWire has been around for 17 years, so you can't tell what obsolescence will occur in what time-frame.
  • Reply 166 of 257
    tipoo Posts: 1,142 member


    I'm all for ditching optical drives in laptops. In desktops? Meh. 



    I hope both iMac sizes get retina-ized. 

  • Reply 167 of 257
    hmm Posts: 3,405 member

    Quote:

    Originally Posted by Marvin View Post





    "Intel® MIC Architecture software environment includes a highly functional, Linux* OS running on the co-processor with:

    – A familiar interactive shell

    – IP Addressability [headless node]

    – A local file system with subdirectories, file reads, writes, etc

    – standard i/o including printf

    – Virtual memory management

    – Process, thread management & scheduling

    – Interrupt and exception handling

    – Semaphores, mutexes, etc...

    What does this mean?

    – A large majority of existing code even with OS oriented calls like

    fork() can port with a simple recompile

    – Intel MIC Architecture natively supports parallel coding models like Intel® CilkTM Plus, Intel® Threading Building Blocks, pThreads*, OpenMP*"

    http://www.olcf.ornl.gov/wp-content/training/electronic-structure-2012/ORNL_Elec_Struct_WS_02062012.pdf

    Apple can allow OS X to be used in the same way over PCI. Even just addressing the MIC would be enough.

     


     


    I still don't see this going via Thunderbolt, nor do I see Thunderbolt taking that many available lanes, as one chip doesn't support that number of ports. It would be configured to consume the same number of lanes with PCIe 3 whether it takes advantage of the increased bandwidth or not. We have yet to see any chip versions from Intel that support more than two Thunderbolt ports per chip, and I can't find any specs for multiple chips. It's also not designed as an interconnect. I would suggest I'll eventually be right on this one.


    Quote:

    Originally Posted by Marvin View Post







    Ah, the old 'I want to buy one Mac Pro in my lifetime' argument - compelling for Apple to take interest in it I'm sure. Is it really too much to ask that you buy a new computer every 7 years? Anyway, this is just more fearmongering. Firewire has been around for 17 years so you can't tell what obsolescence will occur in what time-frame. 


    Are you even reading what I said? Tallest suggested he wants to be able to add on to his old computer when it goes out of date. I said that won't happen anyway, and per their vintage policy, hardware service would eventually be unattainable regardless. It's not designed to run forever. My entire point was that his suggestion wouldn't work. You really shouldn't assign me an opinion based on that. I'd usually suggest that if something is beyond 3 years old, you generally don't want to run the latest thing. Machines can perform fine if you keep them to the tasks they were designed to run until you're ready to upgrade, and I'd never suggest a Mac Pro based on longevity. I'd suggest it if it's what you need today. If you need a mini, buy a mini. There's no point in buying either for some mythical future-proofing quality. Your prior statement months ago about 1,1 owners upgrading the GPU rather than buying a new machine was merely a suggestion that Apple did little to make the new units compelling relative to their old ones. In some cases this is completely true. If they're going further than that, they're likely on their own at that point with hardware support.


     


    FireWire is a little bleh. It has always been the best of a bad situation: it outperformed USB without USB's CPU overhead. Thunderbolt was never going to kill it; note that it remains present on the Thunderbolt Display. What may finally kill it is USB 3. It's a significant jump, and the FireWire spec hasn't been updated in a very long time. Beyond that, USB 3 retains the same style of connector, unchanged since the 1990s. Overall that's great: you can plug in the same mice and keyboards, which minimizes premature e-waste.

  • Reply 168 of 257
    tallest skil Posts: 43,388 member


    Originally Posted by hmm View Post

    Tallest suggested he wants to be able to add on to his old computer when it goes out of date. I said that won't happen anyway and per their vintage policy, hardware service would eventually be unattainable anyway. It's not designed to run forever. My entire point was that his suggestion wouldn't work.


     


    I'm still not sure why that idea wouldn't work, though I'm sure you're better versed in that area than I am. Why would it be impossible for Apple to create a new framework standard (I'm calling it Core Distribution from here on out) for distributed computing that works over Thunderbolt, push it out to all their Thunderbolt-based computers, and then just tack on new strings (not… plist strings, but whatever it would need to recognize the capabilities of the new hardware and accept its processing) for accepted models as new ones are released? With the core code for Core Distribution in place, isn't that all it would take to add functionality to newer machines while leaving the same capabilities on the old ones? Apple already does this with basically everything else; why not distributed computing? That would ALSO give them a way to make users upgrade anyway!


     


    So say a first-gen Thunderbolt machine can be used in Core Distribution for just CPU processing and a second-gen Thunderbolt machine can be used in Core Distribution for CPU processing and GPGPU processing. And then a sixth-gen Thunderbolt machine can be used at its launch for all facets of both of those, while a second-gen machine can only be used for subsets of what we consider modern CPU and GPU features by then.




    Don't gimp the new machines in their ability to distribute what their processors can do by building too tightly to the old ones, is what I'm saying.
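    A sketch of how that capability negotiation might look. Everything here is hypothetical, matching the poster's invented "Core Distribution" framework: the model identifiers and capability strings are made up, standing in for the per-model plist entries suggested above.

```python
# Hypothetical capability table for the "Core Distribution" idea.
# Model names and capability strings are invented for illustration;
# a real implementation might read them from a per-model plist.
CAPABILITIES = {
    "Mac-TB1": {"cpu"},                      # first-gen Thunderbolt machine
    "Mac-TB2": {"cpu", "gpgpu"},             # second-gen: adds GPGPU offload
    "Mac-TB6": {"cpu", "gpgpu", "vector2"},  # later gen: newer feature set
}

def usable_nodes(task_needs, nodes):
    """Return the attached nodes advertising every capability the task needs."""
    return [n for n in nodes if task_needs <= CAPABILITIES.get(n, set())]

print(usable_nodes({"cpu"}, ["Mac-TB1", "Mac-TB2"]))    # both generations qualify
print(usable_nodes({"gpgpu"}, ["Mac-TB1", "Mac-TB2"]))  # only the newer machine
```

    The old machine keeps contributing whatever it advertised at launch, and new models only need a new table entry, which is the whole of the scheme being proposed.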

  • Reply 169 of 257
    wizard69 Posts: 13,377 member
    Marvin wrote: »
    "Intel® MIC Architecture software environment includes a highly functional, Linux* OS running on the co-processor with:
    – A familiar interactive shell
    – IP Addressability [headless node]
    – A local file system with subdirectories, file reads, writes, etc
    – standard i/o including printf
    – Virtual memory management
    – Process, thread management & scheduling
    – Interrupt and exception handling
    – Semaphores, mutexes, etc...
    When I first read that some time ago, the first thing that came to mind is that there is enough hardware there to run a UNIX-like OS. What immediately followed was that Mac OS is UNIX. So right off the bat one can see multiple ways for Apple to exploit this hardware, one being putting the Mac OS kernel right on it to act as a slave that runs processes and threads. Other options include a microkernel running GCD/OpenCL threads dispatched from the main processor, sort of like the current approach to GPU usage.
    What does this mean?
    – A large majority of existing code even with OS oriented calls like
    fork() can port with a simple recompile
    This is huge; even if Apple used Intel's Linux approach, there is a huge amount of similarity here. Porting efforts could be far simpler than the GPU approach.
    – Intel MIC Architecture natively supports parallel coding models like Intel® CilkTM Plus, Intel® Threading Building Blocks, pThreads*, OpenMP*"
    Which also means that the hardware is capable of supporting blocks, GCD and other common models seen in OS X.
    http://www.olcf.ornl.gov/wp-content/training/electronic-structure-2012/ORNL_Elec_Struct_WS_02062012.pdf
    Apple can allow OS X to be used in the same way over PCI. Even just addressing the MIC would be enough.
    It actually looks like the hardware is flexible enough to support multiple models at once. For example, it may be possible to put whole Mac OS processes over there while at the same time handling code dispatched to the MIC via GCD/OpenCL methods. Such flexibility, if implemented, could be a huge boost for Mac OS. Many background search and indexing tasks could run there without impairing the main processors. Or you could have a transcoding operation run there as a process, significantly reducing main-processor workloads.

    I think the key here, and Apple's probable interest, is the flexibility that MIC offers.
    Ah, the old 'I want to buy one Mac Pro in my lifetime' argument - compelling for Apple to take interest in it I'm sure. Is it really too much to ask that you buy a new computer every 7 years?
    Maybe it is, maybe it isn't. Personally, after years of almost constant upgrading, I've tried to discipline myself and have stuck with the same main computer since 2008. Yes, that computer sucks now. However, if I ever did buy a Mac Pro, that computer would be expected to last even longer than the four years I've already gotten out of the MBP. Seven years may be a long time for a computer, but I don't see it as totally unreasonable. If you are in business, many businesses run their capital equipment into the ground because it makes sense to get as much value out of an investment as possible. Computers are different in that they improve much faster, but every year you can run a machine past its paid-for date is gravy.
    Anyway, this is just more fearmongering. Firewire has been around for 17 years so you can't tell what obsolescence will occur in what time-frame.
    The most interesting thing here is Intel's intention to integrate a clustering standard right into the MIC chips. Frankly I'm drawing a blank here, but I believe it was InfiniBand (as a side note, searching MIC and clustering takes you far away from Intel). In any event, let's say InfiniBand is part of the new Mac Pro; this does not invalidate the usefulness of TB in a Pro machine, it just takes away the need to develop a hardware and software infrastructure around TB to support clustering.

    In any event, Intel is up to something more than has been made public with Xeon Phi. I'm actually expecting a main processor to go with the Phi coprocessor. One feature that processor may have is an extra 16 lanes of PCI Express for the coprocessor. Rumor has it that Intel may do the public release of the line at a supercomputing conference in November, so our wait might not run until next year. In any event, it will be most interesting to see how this all comes together. Much is still unknown at this time, but I still have this feeling that the hardware will be almost ideal for a Mac Pro replacement.
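    The offload idea above, background indexing or transcoding running on the coprocessor while the main CPUs stay free, can be sketched with a worker pool standing in for the MIC card. Assumption: a real implementation would go through GCD/OpenCL rather than Python's multiprocessing, and `transcode` here is a placeholder for the heavy job, not a real codec.

```python
# "Run background work on the coprocessor" in miniature: a process pool
# stands in for the MIC card's cores. The main interpreter stays free to
# do interactive work while the pool chews through the chunks.
from concurrent.futures import ProcessPoolExecutor

def transcode(chunk):
    """Stand-in for a heavy transcoding job on one chunk of media."""
    return sum(b * b for b in chunk)  # placeholder arithmetic

if __name__ == "__main__":
    chunks = [[1, 2, 3], [4, 5], [6]]
    with ProcessPoolExecutor() as pool:   # the "coprocessor" workers
        results = list(pool.map(transcode, chunks))
    print(results)
```

    The design point is that the dispatching side never cares where the workers physically live; swapping the pool for cores on a PCIe card changes the transport, not the program shape.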
  • Reply 170 of 257
    wizard69 Posts: 13,377 member
    It is interesting to consider Apple's timing in dropping XGrid, which I thought was somewhat successful. They did so quietly, which is also interesting in that there has been little in the way of negative comment. Either nobody cares about XGrid or they are expecting something else.
    I'm still not sure why that idea wouldn't work, but I am sure you're probably more well-versed in that area of stuff than I am.
    There is no reason why it wouldn't work. Scientists have been known to build clusters out of used computers if it gets the job done. Also, one has to consider that if you have a machine capable of one teraflop, you really can't just toss it out on a whim. Even five years from now that will still be a very useful computational machine.

    If you think about some of the rumors, Apple could deliver a node in a 12" cube with a bit over one teraflop in it. That would make building a cluster a matter of adding nodes one teraflop at a time. That is just silly, and would have been thought impossible only a year or two ago. Another way to look at this is an office with ten of these computers, one on each engineer's desk: given a supercomputer interconnect between the machines, that is effectively a 10-teraflop computer. Two computers on each desk and you have a 20-teraflop computer.
    Why would it be impossible for Apple to create a new framework standard (I'm calling it Core Distribution from here on out) for distributed computing that works over Thunderbolt, push it out to all their Thunderbolt-based computers,
    It likely would not be Thunderbolt, as such interconnects are still needed for connecting local storage arrays. If you are going to build hardware that makes modular supercomputers possible, your ports should have well-defined uses. I think it would be a mistake to tie up ports like TB for cluster communications when well-defined and frankly sound systems already exist for just that. Beyond that, I see TB as just too limited for that sort of communication.
    and then just tack on new strings (not… plist strings, like… whatever it would need to recognize the capabilities of the new hardware and accept its processing) for accepted models as new ones are released?
    Why not plists? Apple uses them for everything else.
    With the core code for Core Distribution in place, isn't that all it would take to add functionality to newer machines while leaving the same capabilities on the old ones? Apple already does this with basically everything else; why not distributed computing? That would ALSO give them a way to make users upgrade anyway!
    XGrid basically did much of this for you. I'm not sure why it got dropped; maybe XGrid 2 is coming.
    So say a first-gen Thunderbolt machine can be used in Core Distribution for just CPU processing and a second-gen Thunderbolt machine can be used in Core Distribution for CPU processing and GPGPU processing.
    Xeon Phi makes the need to throw GPU code around far less of an issue. With Phi you could just send x86 code around, or at least a proper subset.
    And then a sixth-gen Thunderbolt machine can be used at its launch for all facets of both of those, while a second-gen machine can only be used for subsets of what we consider modern CPU and GPU features by then.
    Apple's approach with LLVM and GCD/OpenCL is most interesting here, as your other option is to send code around the system that gets dynamically compiled to run on each node's local hardware. With that arrangement it wouldn't matter if one node has newer hardware capabilities than an older one.

    Frankly, my programming experience doesn't extend to OpenCL, but my understanding is that they basically do this now to support GPUs. However, GPU support is apparently somewhat hardware-dependent. Maybe there is an OpenCL programmer out there with more specific information. It is interesting, though, that Intel has supported OpenCL on x86 hardware.
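    The ship-source-and-compile-per-node model described above can be shown in miniature. This is purely illustrative Python (`KERNEL_SRC` and `run_on_node` are invented names); OpenCL host code hands kernel source to `clBuildProgram` in much the same spirit, so each device builds a binary matched to its own hardware at dispatch time.

```python
# The host ships *source*, not a binary; each "node" compiles the received
# source against its own local environment, then executes it. Entirely
# illustrative -- a stand-in for per-device kernel compilation.
KERNEL_SRC = """
def kernel(data):
    return [x * 2 for x in data]
"""

def run_on_node(src, data):
    """A 'node' compiles the received source locally, then executes it."""
    namespace = {}
    exec(compile(src, "<kernel>", "exec"), namespace)
    return namespace["kernel"](data)

print(run_on_node(KERNEL_SRC, [1, 2, 3]))  # -> [2, 4, 6]
```

    With this arrangement the host never needs to know which hardware generation a node is; the node's own compiler resolves that, which is exactly why mixed-age clusters stop being a versioning problem.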

    Don't gimp the new machines in their ability to distribute what their processors can do by building to tightly with the old ones, is what I'm saying.

    One thing is for certain: next year will be most interesting when it comes to Macs and high performance. If they don't debut a spectacular Mac Pro replacement, they will likely have a revolt on their hands.

    The other thing is this: if they do leverage Phi (seems likely at this point), how well will it work with existing code? That is, will existing code, via GCD mechanisms, be able to execute its blocks on Phi processors? Will existing threads and processes run there? Will there be an XGrid replacement that leverages new communications technology on the machines? Answers to these questions will tell us much about the immediate success of Phi-based hardware. If existing code can't benefit from all of those processors, it could take years for software to leverage the new hardware.

    Like I said, next year will be very interesting. Either that or the next Pro flops and gets killed.
  • Reply 171 of 257
    ipeg Posts: 2 member



    I just saw that pic on the front page. *Ron Popeil voice* But wait! There's more: ;-)


     


    http://pascaleggert.de/iMacPro.html

  • Reply 172 of 257
    tallest skil Posts: 43,388 member


    Originally Posted by iPeg View Post

    http://pascaleggert.de/iMacPro.html


     


    Few things about that.


     


    First, frigging gorgeous mockup.


    Second, you had the exact same idea I did about how to handle a multitouch desktop, but I still don't think it's the best idea. That thing's gonna be heavy.


    Third, the only things I don't like about it are the idea that Apple would ever let anyone do anything to the inside of their iMac like that (or that such a design is even physically possible this side of 2035), and the curves. But the latter is just my personal preference. 


     


    And why's your website so wide? Having to scroll sideways is problematic.


     


    Again, absolutely gorgeous mockup with some great stuff showcased, but it's impractical.


     


    This:


     


    image


     


    Is a really neat idea. The keys are transparent, you say, and there's a single LCD panel beneath the keyboard? That'd allow for dynamic key labels without the cost, size, and annoyance of the Optimus Maximus! It's a great idea, but how are the key presses supposed to be registered? And tactile keyboards will only have a very small place in the future.

  • Reply 173 of 257
    wizard69 Posts: 13,377 member
    hmm wrote: »
    I still don't see this going via thunderbolt, nor do I see thunderbolt taking many available lanes to include that many as one chip doesn't support that number of ports.
    I'm pretty sure that isn't what he is talking about. One or more of the Phi (MIC) chips is a coprocessor that sits on a sixteen-lane PCI Express bus. The trick is how you get Mac OS and apps to leverage that coprocessor. Right now it appears to map onto OpenCL-type usage really well. However, apps that use blocks and GCD dispatch alone would seem to have problems. It is a very interesting question really: how would you leverage such a coprocessor?
    It would be configured to consume the same number of lanes with PCI 3 whether it takes advantage of the increased bandwidth or not. We have yet to see any chip versions from Intel that support more than 2 thunderbolt ports per chip, and i can't find any specs for multiple chips. It's also not designed as an interconnect. I would suggest I'll eventually be right on this one.
    Actually, I see no reason why one couldn't just map in another TB interface chip. It is more or less a bridge between buses.
    Are you even reading what I said? Tallest suggested he wants to be able to add on to his old computer when it goes out of date. I said that won't happen anyway and per their vintage policy, hardware service would eventually be unattainable anyway. It's not designed to run forever. My entire point was that his suggestion wouldn't work.
    I don't know about that; Apple could be forced to provide support for these machines over seven years or longer. People will look at these machines as capital equipment and thus will want to milk them for all they are worth. Maybe I will be wrong in five years, but I don't see people being happy about tossing out a one-teraflop machine just because Apple doesn't want to support it. If Apple is at all serious about business, they will likely have to look closely at policy here. I wouldn't be surprised to find businesses demanding at least five years of OS support, with 7 to 10 likely.
    You really shouldn't assign me an opinion based on that. I'd usually suggest if something is beyond 3 years old, you generally don't want to run the latest thing. Machines can perform their tasks fine if you keep it to tasks they were designed to run until you're ready to upgrade,
    A given block of computational performance will always have that level of performance. If Apple delivers a dramatically more capable machine that just means it will remain viable as a block of performance much longer.
    and I'd never suggest a mac pro based on longevity. I'd suggest it if it's what you need today. If you need a mini, buy a mini. There's no point in buying either for some mythical future-proofing quality.
    I think you misunderstand: it isn't future-proofing, it is leveraging a capital investment for as long as possible. Nobody here is denying that future hardware will be more powerful; rather, the point is that a high-performance node is nothing to sneeze at, and you won't see people tossing functional nodes simply because this year's machine is 1.5 times faster.

    Your prior statement months ago about 1,1 owners upgrading the gpu rather than buying a new machine was merely a suggestion that Apple did little to make the new units compelling relative to their old ones. In some cases this is completely true. If they're going further than that, they're likely on their own at that point with hardware support.
    This very well may be the case, but frankly I see people and businesses becoming more and more frustrated with that. It is often the case that old machines get handed down to new employees simply because it is economical, especially when Apple has made a very compelling case that a Mac Pro upgrade isn't worth the money.

    I really don't know what the long-term solution will be, but I see it as completely reasonable for users to want well-defined software support commitments from Apple for Pro hardware. Apple has been playing a little too loose with hardware support in Mac OS of late, and it is pissing people off.
    Firewire is a little bleh. It has always been the best of a bad situation. It outperformed usb without the cpu overhead of usb. Thunderbolt was never going to kill it. Note they remain present on the thunderbolt display. What may finally kill it is usb3. It's a significant jump and the firewire spec hasn't been updated in a very long time. Beyond that usb3 retains the same style of connector. It has been the same since the 1990s. Overall that's great. You can plug in the same thing with mice and keyboards, so it minimizes premature e-waste.

    FireWire ought to be buried, but I fear it will be around for a while. By the way, the spec was updated some time ago, but Apple never bought into the much faster versions. It is just as well, as support was far too limited.

    Sadly, we are still waiting for USB 3 on most of Apple's desktop machines. This is just pathetic and frankly frustrates me to no end. I really believe that nobody at Apple gives a damn about the desktop. They couldn't even bother to get a basic Ivy Bridge Mini upgrade out the door; that is really disgusting.
  • Reply 174 of 257
    ipeg Posts: 2 member


     


    Quote:


    Originally Posted by Tallest Skil View Post


     


    Few things about that.


     


    First, frigging gorgeous mockup.


    Second, you had the exact same idea I did about how to handle a multitouch desktop, but I still don't think it's the best idea. That thing's gonna be heavy.


    Third, the only thing I don't like about it is the design and the fact that you think that Apple would ever let anyone do anything to the inside of their iMac like that (or that such a design is even physically possible this side of 2035) and the curves. But the latter is just my personal preference. 


     


    And why's your website so wide? Having to scroll sideways is problematic.


     


    Again, absolutely gorgeous mockup with some great stuff showcased, but it's impractical.


     


    This:


     


    image


     


    Is a really neat idea. The keys are transparent, you say, and there's a single LCD panel beneath the keyboard? That'd allow for dynamic key labels without the cost, size, and annoyance of the Optimus Maximus! It's a great idea, but how are the key presses supposed to be registered? And tactile keyboards will only have a very small place in the future.



     


     


    My website is so wide so that people with big screens can enjoy it. True Apple spirit, never support outdated stuff ;-) Sorry to hear you don't like the lines; my first Apple product ever was an iPod Mini, and I loved every line on it. 


     


    Regarding the keyboard: well, the press registration is quite simple; there are dozens of touch-screen technologies out there. For a little block of glass hitting another glass surface you wouldn't need the capacitive touch technology from an iPad; a classic resistive sensor would work. To make the glass block move up and down, a foldable ring of rubber would do. 

  • Reply 175 of 257
    tallest skil Posts: 43,388, member


    Originally Posted by iPeg View Post

    My website is so wide so that people with big screens can enjoy it. True Apple spirit, never support outdated stuff ;-)


     


    But no one fullscreens their browser… That's crazy! Websites are only ever ~1,000 pixels wide, so I happily keep it like this:


     


    image


     


    It's all whitespace otherwise. 






    Sorry to hear you don't like the lines; my first Apple product ever was an iPod Mini, and I loved every line on it. 



     


    Ah, that's where you got it; thought so. It's also reminiscent of the original aluminum Cinema Displays, but with much more curve. And that's not a bad thing necessarily; I'm just concerned about it staying in place (and also about airflow when in work mode. I assume you consider an equivalent of a "work mode" when it's down in the multitouch way and "play mode" when it's vertical?).





    Regarding the keyboard: well, the press registration is quite simple; there are dozens of touch-screen technologies out there. For a little block of glass hitting another glass surface you wouldn't need the capacitive touch technology from an iPad; a classic resistive sensor would work. To make the glass block move up and down, a foldable ring of rubber would do. 



     


    A touchscreen you never touch. Now that's an idea. Not sure how I feel about it in terms of an actual implementation (cost/benefit), but I love how wild it is.

  • Reply 176 of 257
    sequitur Posts: 1,910, member

    Quote:

    Originally Posted by WelshDog View Post




    Quote:

    Originally Posted by pinkunicorn View Post


    http://store.apple.com/us/browse/home/shop_mac/mac_accessories/storage


     


    The fact that you can already buy a 1TB hard drive for around $100 means that they will soon be $50, and then $25, etc. Also, a hard drive is far smaller than a stack of DVDs (especially if you have cases) and easier to organize. Hello the future... I mean present. 



    The problem with hard drives is that they can go bad sitting on the shelf.  You go to plug one in and - pfftt, nothing.  Not reliable long-term storage.  SSD memory can be damaged by cosmic rays.  Current-day optical media is not very long-lived either.  That leaves LTO tape, which is archival but prohibitively expensive.


     


    Seems like Internet2 and cloud storage is the way to go.


     


     


     


     


    Until the power grid goes down where the storage physically exists.  Oh, the power grid never goes down, does it?


  • Reply 177 of 257
    hmm Posts: 3,405, member

    Quote:

    Originally Posted by Tallest Skil View Post


     


    I'm still not sure why that idea wouldn't work, but I am sure you're probably more well-versed in that area of stuff than I am. Why would it be impossible for Apple to create a new framework standard (I'm calling it Core Distribution from here on out) for distributed computing that works over Thunderbolt, push it out to all their Thunderbolt-based computers, and then just tack on new strings (not… plist strings, like… whatever it would need to recognize the capabilities of the new hardware and accept its processing) for accepted models as new ones are released? With the core code for Core Distribution in place, isn't that all it would take to add functionality to newer machines while leaving the same capabilities on the old ones? Apple already does this with basically everything else; why not distributed computing? That would ALSO give them a way to make users upgrade anyway!


     


    So say a first-gen Thunderbolt machine can be used in Core Distribution for just CPU processing and a second-gen Thunderbolt machine can be used in Core Distribution for CPU processing and GPGPU processing. And then a sixth-gen Thunderbolt machine can be used at its launch for all facets of both of those, while a second-gen machine can only be used for subsets of what we consider modern CPU and GPU features by then.




    Don't gimp the new machines in their ability to distribute what their processors can do by building too tightly to the old ones, is what I'm saying.



    I was saying that Apple is not always fully predictable with later software support, and if this is a slowly growing cluster of aging machines, you may run into issues with lack of support. Marvin was just trolling me with the Sally Struthers argument of, "is it really too much to ask that users buy a new computer at least once every 7 years?"  Apple tries to offer hardware service for five years past the discontinuation of a model. Once it's moved to vintage status, it's a matter of what parts are available, with machines marked obsolete after that and no longer serviced.
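    For what it's worth, the capability-tier scheme being described could be sketched roughly like this. This is purely illustrative; Apple never shipped such a framework, and every model identifier and capability label here is invented. The idea is just that each node advertises what it can do, and the scheduler only hands a job to nodes whose capabilities cover it, so old machines keep taking the work they can handle while new machines get the new features:

```python
# Hypothetical sketch of a capability-tiered cluster scheduler.
# Model names and capability labels are made up for illustration.

NODE_CAPABILITIES = {
    "MacPro5,1": {"cpu"},                  # older machine: CPU work only
    "iMac13,1":  {"cpu", "gpgpu"},         # later machine: adds GPGPU offload
    "MacPro6,1": {"cpu", "gpgpu", "avx"},  # newest machine: everything
}

def eligible_nodes(required):
    """Return the models whose advertised capabilities cover a job's needs."""
    return [model for model, caps in NODE_CAPABILITIES.items()
            if required <= caps]  # set-subset test: job needs within node caps

# A GPGPU job skips the CPU-only node; a plain CPU job can go anywhere.
print(eligible_nodes({"gpgpu"}))
print(eligible_nodes({"cpu"}))
```

    Adding a new hardware generation would then just mean registering its model string with a larger capability set, which matches the "tack on new strings for accepted models" idea above.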


    Quote:

    Originally Posted by wizard69 View Post



    Apple's timing in dropping XGrid, which I thought was somewhat successful, is interesting. They did so quietly, which is also interesting in that there has been little in the way of negative comment. Either nobody cares about XGrid or they are expecting something else.

    There is no reason why it wouldn't work. Scientists have been known to build clusters out of used computers if it gets the job done. Also, one has to consider that if you have a machine capable of one teraflop, you really can't just toss it out on a whim. Even five years from now that will still be a very useful computational machine.

    If you think about some of the rumors, Apple could deliver a node in a 12" cube with a bit over one teraflop of performance. That would make building a cluster a matter of adding in nodes one teraflop at a time, something that would have been thought impossible only a year or two ago. Another way to look at this is an office with one of these computers sitting on each of ten engineers' desks. Given a supercomputer-style interconnect between machines, that effectively makes a 10-teraflop computer. Two computers on each desk and you have a 20-teraflop computer.

     


    I noticed they dropped XGrid. I didn't think they'd do this if they intended to replace it with some other HPC type of solution. If they were going to release something of that sort, I would have guessed they'd drag along support for XGrid until a replacement solution could be announced. I'm aware that these things can lead long lives with multiple updates. I was saying that with Apple hardware you get certain updates for a limited amount of time. These include firmware revisions, general bug fixes, security updates, current OS support (which may or may not be important if running alongside newer machines), etc. Fragmented support can become an issue if you're trying to work with hardware that spans many generations, as Apple may no longer be offering support to help keep these older units running.


    Quote:

    Originally Posted by wizard69 View Post







    I think you misunderstand; it isn't future-proofing, it is leveraging a capital investment for as long as possible. Nobody here is denying that future hardware will be more powerful; rather, the point is that a high-performance node is nothing to sneeze at, and you won't see people tossing functional nodes simply because this year's machine is 1.5 times faster.

     


    I know that. Perhaps I should have said it may not work if the older machine is old enough to no longer be supported. Also note that I needed to have some fun with Marvin's argument of "Please.... help Apple by buying a new mac pro". Hehe... Noting some of Tallest's older posts, he owned a Mac Pro partially for longevity reasons. My point applies if you go far enough back that the hardware will not support the newer features leveraged here, such as trying to hook up a new machine alongside older ones dating back 5+ years. It's not that unrealistic a situation if we examine the dates they went on sale: one Mac Pro was sold new from 2006-2008 (with refurbished units available longer), and the current one potentially 2010-2013, almost identical to the 2009 revision. If this became available next year and you tried to hook up an older 5,1 and 1,1 for distributed computing but found the 1,1 would not work for whatever reason, Apple would not be there to help; that model is off their supported list. Seeing as this is someone who keeps his hardware for some time, it was worth mentioning.

  • Reply 178 of 257
    Marvin Posts: 15,324, moderator
    ipeg wrote:
    I just saw that Pic on the front page. *Ron Popeil voice* But wait! There is more: ;-)

    Some interesting concepts. The folding stand position would need counter-weights to prevent making the machine top-heavy. The power cable and ports also place restrictions on this. It's something Apple has obviously thought about though:


    http://www.macrumors.com/2010/08/23/apple-discloses-methods-for-transitioning-between-mouse-based-and-touch-interfaces/

    I think it has to be as simple as being able to pull the screen down. Lenovo has their own implementation of this:


    [VIDEO]


    The collapsing stand in Apple's patent and Lenovo's model looks like the only way it would be feasible, although the components have to go into the base so that the cables and ports aren't affected by the movement of the screen. I think they'll most likely avoid doing this until they can make it without a major compromise. It would be pretty cool being able to pull the screen down and play a full keyboard in Garageband or have a 27" Wacom.
    wizard69 wrote:
    If they dont debut a spectacular Mac Pro replacement they will likely have a revolt on their hands.

    I don't think so. By the time we get to late 2013, we will have an all-Haswell consumer lineup. They really could drop it without any problems, as evidenced by the reaction to what they did this year with the MP after a two-year wait. It just doesn't affect that many users any more.
    wizard69 wrote:
    The other thing is this, if they do leverage Phi (seems likely at this point)

    I wouldn't say it's likely. It's the only thing that will make the MP worthwhile but it depends on their motivation. If they don't see a long future ahead for it, they simply draw out the updates, make the updates weak and then kill it. This is what happened with their software. Shake had very minor updates over long periods and then one day, they just decided it's not what they wanted to do any more. FCS was similar with minor updates over long periods and then they just dropped it. In the case of the latter they dropped it for a completely rethought replacement. It can really go either way.

    Tim Cook did hint that there would be "something really great". To me, that in no way refers to the same design and an Ivy Bridge Xeon dropped in. So it's either a totally rethought Pro or he just means the Haswell iMac or MBP will be good enough next year to drop it.
  • Reply 179 of 257


    Originally Posted by Marvin View Post

    I think it has to be as simple as being able to pull the screen down. Lenovo has their own implementation of this: [VIDEO]


     


    Geez, Lenovo, get it together! You're not gonna be beating Apple like that! image

  • Reply 180 of 257

    Quote:

    Originally Posted by Tallest Skil View Post


     


    Guess that makes sense. Terrible hardware in a terrible package.



    No, better hardware in a not-as-pretty package that still works fine.

Sign In or Register to comment.