Project "Wolf" and "Presley" in July?


Comments

  • Reply 81 of 185
    bigc Posts: 1,224 member
    Three groups of modelers I know (one in atmospheric research, one in seismic data analysis, and one in fluid flow modeling) have gone to multiprocessor Xeons with Fortran 90. A system that competes on this level would be welcomed (note that all of these researchers were on Macs before, but the lack of good Fortran compilers and poor Mac system performance and support (even in clusters) caused the move over to Intel).
  • Reply 82 of 185
    keyboardf12 Posts: 1,379 member
    bigc



    Are there Unix or open source Fortran compilers for OS X? Just wondering. It would be cool if they had an option to come back.
  • Reply 83 of 185
    bigc Posts: 1,224 member
    [quote]Originally posted by keyboardf12:

    bigc

    Are there Unix or open source Fortran compilers for OS X? Just wondering. It would be cool if they had an option to come back.[/quote]



    Not GNU for Fortran 90 (that I've found); only f77.



    <a href="http://floyd.liunet.edu/" target="_blank">http://floyd.liunet.edu/</a>
  • Reply 84 of 185
    jpf Posts: 167 member
    First-time poster, long-time lurker.



    Well, I must say that this guy has brought me out to become a first-time poster!! I've been watching and reading the boards for a while and never really desired to play until now.



    I honestly don't know if this guy is for real, but I must say that the 4 "rumors" he started are truly cool. They are very original and the names are not bad at all. My bet with Presley is that it's Elvis Presley. The mouse clicks are tilting to the left and right.



    Wolf sounds like Apple; it sounds real enough. I don't know about the underwater camera. Maybe waterproof, but not underwater. The glove, well, that would just rock.



    Just some thoughts







    JPF (this is cool)
  • Reply 85 of 185
    Whoever Allen McJones is, he seems to be creative enough. Maybe we should start a "draft Allen McJones to the Apple Strategic Planning office" petition.
  • Reply 86 of 185
    kormac77 Posts: 197 member
    Think about this!



    Final Cut Pro 4 using clustering technology! :eek:



    Situation:



    You are working for NBC (or CNN, ABC, CBS).





    Your job is editing the HD news program for NBC's news department for the 2006 World Cup in Germany.

    (Do you know what real football is?)



    You have a new desktop Mac with an Xserve-style RAID hard disk of 400+ GB and a 2 Gbit Fibre Channel SAN connection to a 22-terabyte Apple central SAN server.



    Even with the new DVCPRO HD QuickTime codec, HD video is big and needs fast rendering; a Mac with only one, two, or four CPUs is not good enough for a time-critical news program.



    But don't worry.





    A new 256-node (or 1024-node) cluster of Xserves in the render center will render all your effects, titles, compositions, and 3D effects in a few seconds: close to realtime, or even faster than realtime!



    And it will do that for all of the editing systems in the news department!



    Even with a new PowerBook, it will do the same thing. (Check with CNN: what are they using for their news editing in the field?)



    Now, can you guess why Apple bought those companies for video applications like Shake, Rayz, and India Pro (and maybe Maya in the future), and why the owners of those companies were so happy to be bought by Apple?



    Check my old posts and you will see something interesting there.



    See ya!



    [ 06-24-2002: Message edited by: kormac77 ]
  • Reply 87 of 185
    I recall Steve Jobs mentioning that the Xserve would be able to be clustered via a software package that was to be released before year's end. It was just a passing comment in his Xserve introduction keynote.
  • Reply 88 of 185
    bigc Posts: 1,224 member
    That would explain all the fast I/O on the Xserve.
  • Reply 89 of 185
    johnsonwax Posts: 462 member
    [quote]Originally posted by keyboardf12:

    bigc

    Are there Unix or open source Fortran compilers for OS X? Just wondering. It would be cool if they had an option to come back.[/quote]



    Absoft Pro Fortran 7.0 is available for OS X.



    It includes command line and GUI functionality and does Altivec optimization. It also works with UCLA's Appleseed for clustering.



    It's not free, and not even terribly cheap to .edu, but it's got most everything you'd want.
  • Reply 90 of 185
    sizzle chest Posts: 1,133 member
    [quote]Originally posted by Bigc:

    That would explain all the fast I/O on the Xserve.[/quote]



    The 3U RAID would explain it.
  • Reply 91 of 185
    tsukurite Posts: 192 member
    [quote]Originally posted by GardenOfEarthlyDelights:

    *snip*

    (I'm holding off for project Autobahn, where the Mac finally gets enough motor to sit in the left lane, forcing all others out of its way as it barrels down the computing roadway.)[/quote]



    Vroom!
  • Reply 92 of 185
    bigc Posts: 1,224 member
    [quote]Originally posted by johnsonwax:

    Absoft Pro Fortran 7.0 is available for OS X.

    It includes command line and GUI functionality and does Altivec optimization. It also works with UCLA's Appleseed for clustering.

    It's not free, and not even terribly cheap to .edu, but it's got most everything you'd want.[/quote]





    $900 for a couple of programs for my own use is hard to justify, but to some it would be worth it. Budgets are tight these days for .edu and research labs.
  • Reply 93 of 185
    tsukurite Posts: 192 member
    Hmm... I suppose this could exist as a service running underneath any application (hence the kernel hacking). It would intercept computationally intensive calls to the CPU (edit: ignoring things like GUI operations) and break them out into packets to be shipped to other computers on the network.



    Looking at it, it almost seems like a P2P network but for CPU cycles.



    And what about this (forgive the M$ reference): if you could create pseudo-clusters by assigning a "workgroup" name, then jobs would only be farmed out to those specific machines. Conversely, jobs would only be accepted from those same machines as well. That could speak to the security issues, maybe.
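The workgroup idea above boils down to a membership check on both ends. Here is a minimal sketch; the workgroup name, the job format, and the stand-in computation are all invented for illustration:

```python
# Toy sketch of workgroup-scoped job acceptance. The workgroup name,
# job format, and stand-in computation are all invented for illustration.
WORKGROUP = "design-dept"

def accept_job(job):
    """Run a farmed-out job only if it comes from our own pseudo-cluster."""
    if job.get("workgroup") != WORKGROUP:
        return None  # refuse work from machines outside the workgroup
    return sum(job["payload"])  # stand-in for the real computation

print(accept_job({"workgroup": "design-dept", "payload": [1, 2, 3]}))  # 6
print(accept_job({"workgroup": "accounting", "payload": [1, 2, 3]}))   # None
```

The same tag would gate the sending side, so jobs are neither farmed to nor accepted from machines outside the named group.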



    Flame on!







    [ 06-24-2002: Message edited by: tsukurite ]
  • Reply 94 of 185
    naepstn Posts: 78 member
    Regardless of the validity of this rumor, I think that it would have many benefits and, in turn, lead to increased revenue for Apple in terms of hardware sales. If I were designing such a scheme, it would have the following features:

    - Whenever a machine (with "Wolf" enabled) went to screensaver or to sleep (with wake-on-LAN turned on), it would send out a signal (through Rendezvous) that it is available for crunching. (I don't think I'd go with a spare-CPU-cycles scheme, à la @Home etc., mainly due to highly inconsistent and unreliable processing times, but I'll get to that in a moment.)

    - I would think that any MP-coded apps could easily be shared out among different nodes, as if they were additional processors (with some kernel-level changes). This way Photoshop, BLAST, FCP, and most of the high-end, computationally intensive apps would instantly make use of it automatically. In addition, this would simplify the issue of sorting out which processes would be worth farming out, because in general multithreaded apps COULD benefit from this, while non-multithreaded apps wouldn't.

    - Nodes could be benchmarked to get an estimate of how long a particular "vector" takes to process and send back (affected by CPU and network speed, amount of RAM and cache, etc.). This way, the scheduling machine could be made more intelligent about parcelling out the data to minimize task-completion time, and could determine whether it is beneficial to send a "vector" out to a particular machine or just wait for local CPU time (or that of a higher-benchmarked node) to become available. Granted, this could be a lot of processing overhead, but I would think it could be made quite efficient.
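The benchmark-driven scheduling described in that last point can be sketched as a greedy assignment. The node names and per-vector costs below are made up, and a real scheduler would fold in transfer time and re-benchmark continuously:

```python
# Greedy sketch of benchmark-aware scheduling. Node names and per-vector
# costs (seconds, folding in CPU, RAM, and network speed) are made up.
NODE_COST = {"local": 1.0, "imac-lab-3": 2.5, "g4-frontdesk": 4.0}

def schedule(num_vectors, node_cost):
    """Assign each vector to whichever node would finish it soonest."""
    free_at = {node: 0.0 for node in node_cost}  # when each node is next idle
    assignment = []
    for _ in range(num_vectors):
        # a slow node is used only when it still beats waiting for a fast one
        node = min(node_cost, key=lambda n: free_at[n] + node_cost[n])
        free_at[node] += node_cost[node]
        assignment.append((node, free_at[node]))
    # makespan: when the last vector comes back
    return assignment, max(free_at.values())

plan, makespan = schedule(8, NODE_COST)
print(makespan)  # 5.0 -- versus 8.0 if all eight vectors ran on "local" alone
```

Note that the slowest node still gets one vector here, because early on taking it beats queueing behind the fast node, which is exactly the "send it out or wait locally" decision described above.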



    I would think that many companies and schools would opt for Macs if they offered this potential use. There are many people and places that can't justify the cost of a "real" cluster (e.g. if it's just for something that's done once or twice a week but takes a while) or who simply can't afford it. If, however, it is presented to them as just a bit more up-front cost for each machine as it gets replaced, with the added benefit of more efficient use of processing power (when people go away on holiday, are out for lunch, or after hours), then it becomes much more attractive. Imagine the reaction some management types would have to the potential of utilizing the processing power of any machines that aren't being used at a given time, increasing others' efficiency as a result, while not requiring any additional IT support or setup...



    I don't really think that it would deter (or slow) people or companies from upgrading their machines, since this is only going to speed up machines for specific processes, and "most" apps won't get a benefit from the technology. It's not like everyone's machine suddenly gets way faster. Yes, it would slow the upgrade cycle on machines whose primary function is JUST doing these long calculations, renders, etc., but those aren't Macs! Now you're potentially looking at companies not constantly upgrading their Sun, SGI, or PC renderfarms and instead adopting Macs much more widely. It's quite a paradigm shift, but if backed up by solid performance and cost estimates or studies, a realistic one.



    One thing is for sure, allenmcjones can sure stir up interesting debate. Who cares if he's full of **it, as far as I'm concerned!
  • Reply 95 of 185
    [quote]Originally posted by engpjp:

    "You're wrong. Anyone in science that needs a fast cluster doesn't buy Macs. Overpriced and underperforming. Reality sucks, huh?



    Trust me. I work in academia and people just buy what they need. There's no great cry in the night for a super software solution to steal cycles from the secretaries' overpowered G4s."



    Scott...etc,

    I don't need to trust you on this one. As a Dane I was extremely interested to read in today's news that Denmark's foremost technical university, DTU, has set up what they claim is Scandinavia's largest Mac cluster:



    "The Danish Technical University (DTU) has developed Scandinavia's largest dedicated Mac cluster, <strong>consisting of 32 dedicated</strong> Power Mac G4 Dual-800 MHz workstations (equivalent to 200 Gigaflops). The cluster, which is called Velocity-X, will be turned on for use on Monday, June 24th with the presence of Apple Denmark. The dedicated Mac cluster will primarily be used to understand proteins' influence on cancer as well as on larger film and animation projects. Velocity-X can be rented for use through the Internet for approximately 50,000 Danish kroner per week ($1 = 8.5 DKK). " (MacNews.com)



    And you trust me: they have more than enough PC's, Sun's and people to work them, to choose anything other than Macs for such a cluster if it wasn't for the fact that they deemed it the best for the job!



    I know. Several of the professors there are among my personal acquaintances. And I have similar contacts at universities in five other countries - their feedback is that PC clustering is rather more difficult than Mac clustering.



    Please continue with this well-informed, technical discussion. I learn a lot from it, and I enjoy partaking in subjective but well-balanced and well-founded argumentation like this. Kindly keep it up - and thanks.[/quote]



    "Dedicated"? What does that mean? Hmm? I think it means they just bought for their own use what they needed, which is what I've been saying the whole damn time. People who need this power just buy it. They don't dream of skimming cycles off of other people's work computers.
  • Reply 96 of 185
    [quote]Originally posted by frawgz:

    "If Apple were to offer a scalable, high-density hardware solution, I would push hard for a platform switch," said Patrick Gavin of the University of California at Santa Cruz Center for Biomolecular Science and Engineering. "The PowerPC architecture is vastly superior to anything else out there in terms of power consumption versus processing power."

    <a href="http://www.wired.com/news/mac/0,2125,50454,00.html" target="_blank">http://www.wired.com/news/mac/0,2125,50454,00.html</a>[/quote]



    Once again, this is what I've been saying the whole fscking time. If there is a cluster solution then THEY WILL BUY A CLUSTER. NOT look for a way to run on other people's hardware.
  • Reply 97 of 185
    programmer Posts: 3,467 member
    Unfortunately the system can't just "intercept computationally expensive instructions" or "relocate some threads". It's not particular instructions that make a task expensive, it's the number of instructions and how many times they execute. The relocation of threads isn't possible because threads are coded to assume a shared memory model -- something which doesn't exist in a clustered situation. No, the code will need to be redeveloped to work with the new system to achieve clustering... but this level of system support is critical to wide adoption of clustering technology, and to allowing the hardware vendor(s) to provide acceleration.
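The shared-memory point can be shown with a toy sketch (all names invented): the thread-style function silently leans on memory it shares with its caller, while the cluster-style version must be restructured so every task is a self-contained, serializable message.

```python
import pickle

# Shared-memory style: a thread can freely read state owned by its caller.
shared_table = {"scale": 2}

def thread_style(i):
    # implicitly depends on memory shared with the rest of the program
    return shared_table["scale"] * i

# Cluster style: the remote node shares no memory with us, so each task
# must carry everything it needs and survive a trip through serialization.
def make_task(scale, chunk):
    return pickle.dumps({"scale": scale, "chunk": chunk})

def run_task(blob):
    # runs on the remote node: only the message contents exist over there
    task = pickle.loads(blob)
    return [task["scale"] * i for i in task["chunk"]]

# "Ship" each chunk and gather results, as a cluster scheduler would.
chunks = [[0, 1, 2], [3, 4, 5]]
results = [run_task(make_task(2, c)) for c in chunks]
print(results)  # [[0, 2, 4], [6, 8, 10]]
```

Nothing in `run_task` can touch `shared_table`, which is precisely why thread-style code has to be redeveloped before it can be farmed out.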
  • Reply 98 of 185
    [quote]Originally posted by johnsonwax:

    Uh, this I would disagree with. The typical research university in this country has 5,000-10,000 administrative computers. Even with low-end G4s, that's a shitload of CPU cycles sitting around doing nothing 120 hours per week.

    I'm reasonably sure that if Apple shipped a generalized distributed computing solution that was easy to administer and to build solutions into, we could selectively combine administrative computing budgets and funds for computational research systems, and both parties would be better off.

    If presented with the notion that there might be 5,000+ GFlops lying around the place that could be recycled, I'm pretty sure the university would jump at it. After all, last summer a lot of universities in CA were subsidizing LCD purchases to save electricity consumption.

    I think you might be underestimating the ability of other large institutions to lay down a comprehensive computing plan.[/quote]



    For one thing, most of the computers are PeeCees. Take where I am now, U of Michigan. **** load of computers here. There's no chance in hell the hospital side, where I am, is going to allow some physics person to run egs4 on their hospital boxes at any time. No sysadmin will allow other people to run anything they don't know on their boxes. Too much trouble to bother with. Here's another problem: there are a lot of computers here, and a lot of them are old, crappy computers. I just got a handed-down PIII to replace my handed-down PII. Underpowered CPUs with almost enough RAM to run NT. NOT A RESEARCH BOX AT ALL.





    People who need power just buy it, as has been proven by all the people here who tried to prove me wrong.



    If Apple is working on this it's a dumb idea that will be of use for 0.01% of the people out there. It's a waste that is best left to people like Black Lab Linux and others like them.



    [ 06-24-2002: Message edited by: scott_h_phd ]
  • Reply 99 of 185
    johnsonwax Posts: 462 member
    [quote]Originally posted by scott_h_phd:

    Once again, this is what I've been saying the whole fscking time. If there is a cluster solution then THEY WILL BUY A CLUSTER. NOT look for a way to run on other people's hardware.[/quote]



    Have we ever had anything else to look at?



    What I mean is: at what point have we been able to point to a situation where a computing group could combine its need to install a cluster with its need to install a general computing lab or upgrade an administrative office, and realistically have the option of installing largely the same hardware and software in both, with a reasonable expectation that the setup would work?



    I can't see that ever having been presented as an option before. I can't see a time when anyone would have looked at that computing lab or office as possessing enough computing power to care, or as possessing a software setup that would make it work effortlessly. Honestly, I think this is a new idea that, if marketed effectively, could work.



    Win 2k and XP are about the first OS releases that technically could have done it, and I'm pretty sure nobody around here would trust those OSes to pull it off properly without also being a virus conduit. I'm not sure that OS X is sufficiently well proven, but being Unix means that it has a head start with a lot of people. OS X is also not sufficiently entrenched, but campuses, like many private schools, that make comprehensive computing decisions would at least hear the message and take it seriously.
  • Reply 100 of 185
    keyboardf12 Posts: 1,379 member
    I think programmer said it best in this or another thread when he said you can't lump all schools and how they handle tech into one bunch.



    Some will find a use for this tech; others will buy new machines. Those that have extra Macs not doing much will not find this whole idea dumb.



    Just because U of M won't use the tech does not mean it's worthless.



    [No]