How fast is your computer? Synchronous vs. Asynchronous: Computers without Clocks!

Posted in Future Apple Hardware, edited January 2014

Comments

  • Reply 1 of 3
    dmgeist Posts: 153 member
    Here's the link to the article from Scientific American and Sun:

    Scientific American - Computers without Clocks: http://www.sciam.com/article.cfm?articleID=00013F47-37CF-1D2A-97CA809EC588EEDF&pageNumber=1&catID=2



    I think this is a great idea. Currently I'm studying for an A+ certification, and as I learn more and more about the boring world of PCs, I'm coming to the conclusion that the core of modern computers is still using twenty-year-old technology. This is an interesting time for all computer users (all of you), PC and Mac. Chip manufacturers are now building the transistors in microprocessors at the microscopic level. That means, from what I hear, that in less than ten years the transistors will be so small that they (Intel, AMD, IBM, Motorola) won't know where one transistor ends and another begins. I think that when this "brick wall" starts coming we should have an alternative to the outdated clock cycles.



    To those of you who actually care what I'm saying, take a good look at the device you're using right now. Yeah, you know who you are: the ones who follow up on "bottlenecks" and such. Well, there's one bottleneck right there. It's that little timer that dictates how fast your computer is, or how slow it is. Read the article; it's long but true.



    When I hear everyone complaining about clock cycles, bus speeds, and MHz ratings, I cringe, because the only way the chip industry is going to make any progress past "the brick wall" is to change the way the systems work altogether. Yes, of course there is quantum mechanics, but who knows how long we have till we are the machine.



    And to all those who only care about how phat their MHz-phallus machine is: I challenge you! I challenge you to do something useful with that power besides downloading porn, ripping warez from Hotline, and fragging people halfway across the world. There are lots of organizations that need that juice, so try to share a little, huh...



    P.S. This is kind of Future Hardware, kind of outsider stuff, really both, so I went with Future Hardware. Lucky I have asynchronous thinking, or, like the ass standing between two piles of food, I would go hungry...
  • Reply 2 of 3
    mrbilldata Posts: 489 member
    HA HA HA [Laughing] [Laughing] [Laughing]



    That's like getting rid of the distributor in a car and having each spark plug tell the next one to fire.
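    That distributor/spark-plug picture is basically a handshake: each stage fires when the stage before it hands over data and the stage after it is ready to accept, with no central clock in the middle. Here's a rough software-level sketch of that idea (not anything from the article, and obviously not hardware), using Go goroutines with unbuffered channels standing in for the request/acknowledge handshake; the stage functions and names are made up for illustration:

    ```go
    // Handshake-driven "clockless" pipeline sketch: each stage hands its
    // result to the next only when the next stage is ready, with no global
    // clock coordinating them.
    package main

    import "fmt"

    // stage reads a value, applies f, and passes the result downstream.
    // The unbuffered channel acts like a request/acknowledge handshake:
    // the send blocks until the downstream stage actually accepts it.
    func stage(f func(int) int, in <-chan int, out chan<- int) {
        for v := range in {
            out <- f(v)
        }
        close(out)
    }

    func main() {
        a := make(chan int) // unbuffered: sender waits for receiver
        b := make(chan int)
        c := make(chan int)

        go stage(func(x int) int { return x + 1 }, a, b) // made-up "stage one"
        go stage(func(x int) int { return x * 2 }, b, c) // made-up "stage two"

        go func() {
            for i := 0; i < 5; i++ {
                a <- i // each datum flows as fast as the stages allow
            }
            close(a)
        }()

        for r := range c {
            fmt.Println(r)
        }
    }
    ```

    Each send only completes when the downstream stage takes the value, so the whole pipeline runs as fast as its stages allow rather than at a fixed tick.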



    The concept will gain small amounts of time but lose it in the complexity of the circuits needed to accomplish it. The FPU came about for similar reasons: the processor performed the same cycles to do repetitive actions, and keeping them in the CPU killed the cyclic timing, so they were moved out and allowed to run at their own speed.



    The next step in that evolution of thinking is to have all microcode evaluated and segregated. For a long time I had thought that the Apple "Toolbox" of routines ran independently of the processor, allowing the processor to continue while common system tasks took place. Sadly, I have learned otherwise.



    I think there is a need to be able to segregate, and even self-optimize, a system's microcode as the processor sees fit.



    Compilers optimise code for a fixed machine code. Why not have a system optimise its microcode for particular sequences?
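    Just to make the shape of that idea concrete (a toy software analogy, not anything a real CPU's microcode actually does), here's a sketch of a system noticing a "hot" two-operation sequence and rewriting it into one fused operation; the opcodes, the fusion rule, and the trigger are all invented for the example:

    ```go
    // Toy "self-optimizing for particular sequences" sketch: a tiny
    // interpreter counts how often ADD is immediately followed by MUL and,
    // once that pattern is seen, rewrites the pair into one fused MULADD.
    package main

    import "fmt"

    type op struct {
        code string // "ADD", "MUL", or "MULADD" (the fused form)
        a, b int
    }

    // run executes the program once and counts ADD->MUL pairs.
    func run(prog []op, x int) (result, hotPairs int) {
        for i := 0; i < len(prog); i++ {
            switch prog[i].code {
            case "ADD":
                x += prog[i].a
                if i+1 < len(prog) && prog[i+1].code == "MUL" {
                    hotPairs++
                }
            case "MUL":
                x *= prog[i].a
            case "MULADD": // one fused op instead of two
                x = (x + prog[i].a) * prog[i].b
            }
        }
        return x, hotPairs
    }

    // fuse rewrites every ADD->MUL pair into a single MULADD.
    func fuse(prog []op) []op {
        var out []op
        for i := 0; i < len(prog); i++ {
            if prog[i].code == "ADD" && i+1 < len(prog) && prog[i+1].code == "MUL" {
                out = append(out, op{"MULADD", prog[i].a, prog[i+1].a})
                i++ // skip the MUL we just absorbed
                continue
            }
            out = append(out, prog[i])
        }
        return out
    }

    func main() {
        prog := []op{{"ADD", 3, 0}, {"MUL", 2, 0}, {"ADD", 1, 0}}
        before, hot := run(prog, 5)
        if hot > 0 { // the sequence showed up, so specialize for it
            prog = fuse(prog)
        }
        after, _ := run(prog, 5)
        fmt.Println(before, after) // same answer, fewer dispatched ops
    }
    ```

    The point is only the shape of the idea: observe which sequences actually occur, then specialize the execution path for them instead of always dispatching the generic fixed form.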



    Of course, modern PCs have hardwired microcode and are unable to do such a thing... and probably never will be able to.
  • Reply 3 of 3
    programmer Posts: 3,458 member
    The Pentium 4 already clocks its integer units on a double-speed clock, which is a first step toward "clockless" machines. I think this trend will continue -- some circuits want to run at lower clock rates than others, and having to limit a chip based on its slowest part places artificial limits on the designs. System-on-a-chip designs make this even clearer, especially when you consider the need to reduce power consumption and heat.
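    As a small illustration of that point (a software analogy only, not how the Pentium 4 or any real system-on-a-chip is actually built), here's a sketch of a fast domain and a slow domain each running at its own rate and exchanging data through a little FIFO; the tick periods and buffer size are arbitrary:

    ```go
    // Two "clock domains" at different rates, joined by a buffered channel
    // standing in for a synchronizing FIFO at the domain crossing.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        fifo := make(chan int, 4) // crossing between the two domains

        // Fast domain: produces a value every 10ms.
        go func() {
            for i := 0; i < 8; i++ {
                fifo <- i
                time.Sleep(10 * time.Millisecond)
            }
            close(fifo)
        }()

        // Slow domain: drains at its own 30ms pace.
        for v := range fifo {
            fmt.Println("slow domain got", v)
            time.Sleep(30 * time.Millisecond)
        }
    }
    ```

    The buffered channel plays the role of the FIFO you would put at a real clock-domain crossing: neither side has to run at the other's rate until the buffer fills up.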