AppleInsider › Forums › Mac Hardware › Future Apple Hardware › Time to rethink microchip size. ?

Time to rethink microchip size. ?

post #1 of 21
Thread Starter 
Ok, we all love the G5..it's got tons of oomph...
Its processing power is brilliant..

But there is a downside to all of this..and that is the need for 5 fans..just to keep the thing cool.

So that got me thinking.
Why the obsession with chip size ?
Have we reached a point at which the costs (heat wise ) outstrip the benefits ?

Wouldn't it be better to s p r e a d the chip over a relatively larger area so that the heat generated could easily be dissipated ?

Running costs ( electricity ) and general wear & tear on the circuit board would surely be worth it.

Could you imagine a chip spread across the back of a 17" or 21" monitor, making use of all that heat exchange area..?

This miniaturization obsession is not new..the camera industry went through a similar phase back in the '70s. Eventually people realised that the cameras were becoming too dinky to handle..

Apple showed the world how passive cooling could work in iMacs..so why not do it again with a new generation of Macs..?
There are 3 types of people in the world.

Those who count.

&

Those who can't.
Reply
post #2 of 21
Yeah, but for speed you need the components of a processor really close together.

Can't really spread things out if you need them close.
post #3 of 21
Thread Starter 
Quote:
Originally posted by Cake
Yeah, but for speed you need the components of a processor really close together.

Can't really spread things out if you need them close.

True, but then we're only talking microseconds or even a second or two in the worst case..

Besides which..who is going to bust your chops for handing in a piece of work a few seconds later...?
There are 3 types of people in the world.

Those who count.

&

Those who can't.
Reply
post #4 of 21
Many points here.

A larger chip means more silicon, which means more expensive chips.
A smaller fabbing process means a smaller chip for the same amount of processor, and also means higher frequencies (though there are other factors).
The fans in a G5 are not there to cool an especially hot processor, but to keep the noise down.
Heat is becoming an issue and a limiting factor in chips.
There are better solutions for evacuating heat than making spread-out chips: new tech like diamond substrates or microcooling.
More power means new applications for the computer. If the only task of a computer were word processing, then a 68030 would be sufficient for the job.
post #5 of 21
This will never happen, or at least not until proc architecture changes radically (cell or ?.....)

Die size is the single biggest factor in determining price. As pointed out above, bigger == more expensive. That's why POWER4s and Itaniums are so pricey.
post #6 of 21
No, we aren't talking seconds here. The issue lies in the sub-nanosecond range. Processors are so fast nowadays that the time of flight of electron charges has to be taken into account. Since things like lead inductances and lengths impact this, many signals have to be in very close proximity.

Even when the signals go off chip, these issues come into play, and many others besides. That is why Apple's motherboard for the G5 is really an engineering marvel. It takes a lot of experience to get 1 GHz signals to run across a motherboard correctly. It is not likely that we will see vast changes to these motherboards, at least where the high-speed signals are located. You also have to consider this: routing high-speed signals over long distances requires more power, so you lose there also.
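To put a rough number on the time-of-flight point above, here's a back-of-envelope sketch. The propagation speed on a board trace is an assumed rule-of-thumb figure (roughly half the vacuum speed of light on FR-4), not a measured value for the G5 board:

```python
# Rough time-of-flight check for a 1 GHz signal on a PCB trace.
# Assumption: signals on typical board traces propagate at about
# half the speed of light in vacuum (common rule of thumb for FR-4).

C = 3.0e8              # speed of light in vacuum, m/s
TRACE_SPEED = 0.5 * C  # assumed propagation speed on a trace, m/s

clock_hz = 1.0e9            # 1 GHz bus
period_s = 1.0 / clock_hz   # one clock period: 1 ns

# Distance a signal edge covers in one full clock period.
reach_m = TRACE_SPEED * period_s
print(f"one clock period       = {period_s * 1e9:.1f} ns")
print(f"signal reach per period = {reach_m * 100:.1f} cm")
```

About 15 cm per clock period, and a real signal has to settle well within that, which is why trace lengths on a board like this are matched so carefully.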

The issue with high-speed off-chip signals is one of the reasons I believe that Apple & IBM must be working on a higher-integration 970 or derivative. Such a chip would have an onboard memory interface and a HyperTransport interface, and possibly a few other goodies. It is the only practical way I see for us to ever get a low-cost 64-bit machine (iMac, eMac, MacStation or whatever). This leads me to believe that the rumored 980 is more of a 970+ than a POWER5-derived chip.

Of course I could be completely wrong about the 980; it may not be targeted at the low-cost end of the market. But in the end it is industry trends that have to be taken into account: faster machines demand smaller components. Such machines will push us to the smallest processes possible.

In some industries you do get your chops busted, and a lot more, for late work. This is the domain of realtime programming, where mistakes in processing time can cause crashes of a physical kind. Just imagine if the stabilization software on the Space Shuttle or the B-2 were allowed to be a few seconds late from time to time. A few seconds can represent several miles of flight; would you really want to be on any machine that makes flight decisions every couple of miles? It isn't just an issue with high-speed flight, either: would you let a car make driving decisions in rush-hour traffic every two to three seconds? Those are realtime issues, and don't forget there is a huge performance-computing market out there also.

Further, for years the computer industry got its chops busted for the poor performance of user interfaces. Just imagine what any PC would feel like if there were delays of several seconds for every move of the mouse or click on a menu. There was a time when this was exactly the case: hitting the return key on a multiuser system 20 years ago could cause several minutes of delay until the system got back to you. Not ideal at all.

The trends for the future are higher-integration processors and more of them. One of the best things Apple could do for itself is to go SMP across all of its lines. This is very likely to happen when the integration levels hit the right point.


Quote:
Originally posted by Aquafire
True, but then we're only talking microseconds or even a second or two in the worst case..

Besides which..who is going to bust your chops for handing in a piece of work a few seconds later...?
post #7 of 21
Thread Starter 
Powerdoc, Wizard, Jouster, & Cake, I marvel at the depth of your collective knowledge.

I am learning stuff that I wouldn't have otherwise appreciated.

Thanx for your inputs.

Wonder if the quantum computer will be forever just out of reach...
There are 3 types of people in the world.

Those who count.

&

Those who can't.
Reply
post #8 of 21
Quantum computers, like everything imagined, have the potential for realization. Now, does that mean I will see one on my desk in my lifetime? Probably not. Is the quantum computer the pathway to high-performance computing in the future? Hard to say also. It would be interesting to see Babbage and the Countess of Lovelace suddenly appear on Earth and see what passes for computers today.

Think they'd be surprised, or a little underwhelmed? I ask this because it is easy to sit in front of your latest baby and imagine the possibilities. In imagination there are no limits. Would Babbage or Ada have dreamed about GUIs and high-level programming languages? Hard to say again. The good thing is that the realization of some of their dreams resulted in more dreamers improving and expanding on a concept. It didn't hurt that wildfires of invention such as Tesla added to the knowledge base in their own oblique ways.

Tesla points out some very interesting concepts: one is that free access to information does lead to accelerated development of technology. The other is that contributions to a technology can come from strange sources. While quantum computing is all the rage today in our collective imaginations, that does not mean the next big step in computational machinery is coming from that camp. Always be on the lookout for a technological broadside.

By the way, I have to say thanks for the compliment, but to be honest I do not consider myself to have great depth of knowledge in any one area. More of a thin but broad coat. Look at learning as a lifelong pursuit; just make sure you channel your efforts mostly into things that are of interest to you.

Thanks
Dave
Have a Happy Thanksgiving


Quote:
Originally posted by Aquafire
Powerdoc, Wizard, Jouster, & Cake, I marvel at the depth of your collective knowledge.

I am learning stuff that I wouldn't have otherwise appreciated.

Thanx for your inputs.

Wonder if the quantum computer will be forever just out of reach...
post #9 of 21
Bear in mind that my objection was not technical but economic. There is a long thread somewhere here about just the sort of thing you mention. See it for a pretty intense discussion re the sort of thing you suggest.

However, bringing up die size/price limitations is not, as you seem to think, a refusal to accept that certain areas may be fertile fields in the future, but rather an attempt to point out a currently existing (and important) limiting factor.
post #10 of 21
Like other folks said, it's all about timing. Do a quick calculation (smarter people can correct me if I'm off base here). The speed of light is 3*10^8 m/s. A 1GHz chip has exactly one-billionth of a second to complete a cycle (1/10^9 s). Light can cover 0.3 meters (about a foot) in one-billionth of a second. So a chip in which the bits move at the speed of light can be no bigger than a foot across, or the bits couldn't get to where they need to go in time. In reality, electrons in semiconductors move at just a few percent of the speed of light, meaning that chip size is limited to just a few centimeters across. People smarter than me could tell you whether this is a practical limiting factor these days - I suspect it's one of the reasons why shrinking feature size lets you ramp up clock speed.
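The arithmetic above checks out; here it is as a script, for anyone who wants to play with the numbers (the few-percent-of-c figure is the poster's assumption, shown here as 5%):

```python
# Reproduce the back-of-the-envelope die-size limit from the post above.
C = 3.0e8                  # speed of light, m/s
clock_hz = 1.0e9           # 1 GHz clock
period_s = 1.0 / clock_hz  # one-billionth of a second per cycle

# Maximum die size if signals moved at the speed of light:
max_size_light_m = C * period_s  # 0.3 m, about a foot

# Assumption: signals in silicon move at a few percent of c.
SIGNAL_FRACTION = 0.05
max_size_silicon_m = max_size_light_m * SIGNAL_FRACTION

print(f"light-speed limit: {max_size_light_m:.2f} m")
print(f"silicon estimate : {max_size_silicon_m * 100:.1f} cm")
```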

The interesting thing, though, is that here, as elsewhere in the semiconductor business, we seem to be fast approaching the limits of the laws of nature. People are already talking about building chips out of diamond so they can withstand the extreme heat that denser and denser features generate. That famous graph we've all seen shows the core of Intel's chips being as energy-dense as the center of a nuclear reactor by 2006 or so.

So why not shift to micro-distributed processors? It works on a large scale (cf. BigMac), and we know that multiple processors make a big difference even in plain-vanilla OS X. The major problem is that a single thread/task can be accomplished only as fast as one processor can do it, but the solution is just better-designed software that breaks threads down into smaller quanta. This is surely the wave of the future in software design, anyway.
post #11 of 21
Thread Starter 
You raise the speed of light as a limiting factor relative to size. Interestingly, several experiments have been carried out with cesium lasers that seem to break the light-speed barrier.
Whether or not they can transmit information is another thing altogether.
This is an interesting link ( hope it works ).

http://www.sciam.com/article.cfm?art...81809EC588EF21

And I can't do anything about the two pop ups..
There are 3 types of people in the world.

Those who count.

&

Those who can't.
Reply
post #12 of 21
Interesting post Towel.

About distributed computing, I do believe that is the wave of the future. How far in the future is a question not yet answered. This belief, however, is why I can't completely discount the rumored new PowerBook implementation of last week.

What I expect to see in the future is, in effect, distributed SMP machines where each SMP node is either 2X or 4X. With a little engineering effort and low-power processors, such machines could be delivered today. They would be incredibly flexible computing platforms; it would be nothing to fit two computing units (two 2X SMP machines) on an ATX board with today's technology.

Thanks
Dave


Quote:
Originally posted by Towel
Like other folks said, it's all about timing. Do a quick calculation (smarter people can correct me if I'm off base here). The speed of light is 3*10^8 m/s. A 1GHz chip has exactly one-billionth of a second to complete a cycle (1/10^9 s). Light can cover 0.3 meters (about a foot) in one-billionth of a second. So a chip in which the bits move at the speed of light can be no bigger than a foot across, or the bits couldn't get to where they need to go in time. In reality, electrons in semiconductors move at just a few percent of the speed of light, meaning that chip size is limited to just a few centimeters across. People smarter than me could tell you whether this is a practical limiting factor these days - I suspect it's one of the reasons why shrinking feature size lets you ramp up clock speed.

The interesting thing, though, is that here, as elsewhere in the semiconductor business, we seem to be fast approaching the limits of the laws of nature. People are already talking about building chips out of diamond so they can withstand the extreme heat that denser and denser features generate. That famous graph we've all seen shows the core of Intel's chips being as energy-dense as the center of a nuclear reactor by 2006 or so.

So why not shift to micro-distributed processors? It works on a large scale (cf. BigMac), and we know that multiple processors make a big difference even in plain-vanilla OS X. The major problem is that a single thread/task can be accomplished only as fast as one processor can do it, but the solution is just better-designed software that breaks threads down into smaller quanta. This is surely the wave of the future in software design, anyway.
post #13 of 21
Quote:
Originally posted by Towel
Like other folks said, it's all about timing. Do a quick calculation (smarter people can correct me if I'm off base here). Speed of light is 3*10^8 m/s.

In structures of this size, electrons don't travel at lightspeed but rather at 1/30 - 1/10 (iirc) of c, so size and distance matter even more.
post #14 of 21
This is stupid. It's not the speed of electrons that is slowing things down.

It's the capacitances and inductances in the wiring. Besides, most of the delay is in transistor operation anyway.
post #15 of 21
Quote:
Originally posted by Aquafire
Ok, we all love the G5..it's got tons of oomph...
Its processing power is brilliant..

But there is a downside to all of this..and that is the need for 5 fans..just to keep the thing cool.

So that got me thinking.
Why the obsession with chip size ?
Have we reached a point at which the costs (heat wise ) outstrip the benefits ?

Wouldn't it be better to s p r e a d the chip over a relatively larger area so that the heat generated could easily be dissipated ?

Running costs ( electricity ) and general wear & tear on the circuit board would surely be worth it.

Could you imagine a chip spread across the back of a 17" or 21" monitor, making use of all that heat exchange area..?

This miniaturization obsession is not new..the camera industry went through a similar phase back in the '70s. Eventually people realised that the cameras were becoming too dinky to handle..

Apple showed the world how passive cooling could work in iMacs..so why not do it again with a new generation of Macs..?

That is really bad, because lengthening a wire in a chip has a quadratic effect on the propagation delay.
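The quadratic effect falls out of a simple model: a wire's resistance and capacitance each grow linearly with length, and the distributed RC (Elmore) delay is roughly half their product. The per-millimeter values below are purely illustrative, not real process figures:

```python
# Distributed RC wire delay grows with the square of wire length:
# R and C each scale linearly with length, and delay ~ 0.5 * R * C.
R_PER_MM = 100.0    # ohms per mm (illustrative, not a real process value)
C_PER_MM = 0.2e-12  # farads per mm (illustrative)

def wire_delay_s(length_mm: float) -> float:
    """Elmore delay estimate for a distributed RC line."""
    r = R_PER_MM * length_mm
    c = C_PER_MM * length_mm
    return 0.5 * r * c

# Doubling the wire length quadruples the delay.
d1 = wire_delay_s(1.0)
d2 = wire_delay_s(2.0)
print(d2 / d1)  # 4.0
```

So a "spread-out" chip doesn't just add a little latency; every stretched wire pays a squared penalty.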
post #16 of 21
The way to reduce power consumption and/or increase speed is to shrink the chip further. Smaller dies mean smaller capacitances and inductances. Chip designers will generally put as much circuitry on the die as possible, because communication with logic on the chip is much faster than between chips. Also, driving signals off-chip increases power consumption due to the larger trace capacitances on the motherboard. The G5 is architecturally powerful because it contains so much logic on one die. And the folks who design the chips and boards would much rather solve a heat problem (which is relatively simple) than add complexity by partitioning logic among chips.
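The shrink-to-save-power point can be made concrete with the standard dynamic-power relation P = α·C·V²·f (activity factor times switched capacitance times supply voltage squared times frequency). The numbers plugged in below are illustrative, not figures for any real chip:

```python
# Dynamic switching power: P = alpha * C * V^2 * f.
def dynamic_power_w(alpha: float, cap_f: float,
                    vdd: float, freq_hz: float) -> float:
    """alpha: activity factor; cap_f: switched capacitance in farads;
    vdd: supply voltage; freq_hz: clock frequency."""
    return alpha * cap_f * vdd ** 2 * freq_hz

# Illustrative shrink: halve the switched capacitance and drop the
# supply from 1.8 V to 1.3 V, same 1 GHz clock.
before = dynamic_power_w(0.1, 100e-9, 1.8, 1.0e9)  # ~32 W
after = dynamic_power_w(0.1, 50e-9, 1.3, 1.0e9)    # ~8 W
print(before, after, after / before)
```

The V² term is why voltage scaling (which a smaller process enables) buys so much more than frequency tweaks.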
post #17 of 21
Big chip = more resistance = more heat.

The heat output of a larger chip with the same transistor count and size as those of a smaller chip is the same, if the vias are superconductors.
Cat: the other white meat
Reply
post #18 of 21
Propagation delay is already such an issue that the P4 and the 970 both have pipeline stages that do nothing but give the electrical signals an extra tick to reach their destinations!

One solution is indeed to use lots of smaller processors, but this is not the solution because even if you did successfully move tools and programmers and languages over to a parallel mindset, there are still problems that, at least in current understanding, can only be solved by crunching through a single, long, indivisible list of instructions.

As a practical matter, there will continue to be applications that were originally written for a single big CPU and which will continue to function that way. You can be sure that Steve would get an earful from Adobe engineers the day after he released the PowerMac G6 with 32 IBM 760VX processors.

One promising solution I haven't seen mentioned in this thread actually keeps the charge in the CPU rather than grounding it at the end of each computation. Each transistor gets an oscillator that holds the charge until the transistor needs to be "on" again. The result is drastically reduced power consumption, resulting in drastically reduced heat dissipation, as the CPU's power supply would only have to make up for leakage. The cost is that transistors get larger and more complex, but at least you could really pack 'em in without having all that charge heating your CPU to fissile temperatures.
"...within intervention's distance of the embassy." - CvB

Original music:
The Mayflies - Black earth Americana. Now on iTMS!
Becca Sutlive - Iowa Fried Rock 'n Roll - now on iTMS!
Reply
post #19 of 21
Anybody ever wonder: if you do the math using a simple DC electrical analogy (naturally, a silicon chip is far more complicated than a simple resistance circuit), powering these CPUs is very close to your power supply driving a near short? It's pretty ridiculous when you get to the point where 5 volts is enough to drive 10 amps through something and create 50 W of heat. ...or what are they running at these days? 3 volts? That makes things even more extreme. We would be getting down around the 0.5 to 0.18 ohm range, hypothetically. To compare, the DCR rating for a typical loudspeaker would be around 6 ohms; if you were to measure one in the "point-something" range, that would usually indicate a shorted coil.
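The post's arithmetic, spelled out with Ohm's law (same simplified DC analogy, same 50 W figure):

```python
# DC analogy from the post: a CPU dissipating 50 W from a low-voltage
# rail looks, to the power supply, like a very small resistance.
def effective_resistance(volts: float, watts: float) -> float:
    """Effective DC resistance: I = P / V, then R = V / I (= V^2 / P)."""
    amps = watts / volts
    return volts / amps

r_5v = effective_resistance(5.0, 50.0)  # 0.5 ohms at 5 V
r_3v = effective_resistance(3.0, 50.0)  # 0.18 ohms at 3 V
print(r_5v, r_3v)
```

Both values land right in the "point-something" range that would read as a shorted coil on a loudspeaker.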
Lauren Sanchez? That kinda hotness is just plain unnatural.
Reply
post #20 of 21
Intel Scientists are now predicting a "Wall" for Moore's Law

looks like the conference discussing this is still ongoing. more news may follow.

Quote:
Moore's Law, as chip manufacturers generally refer to it today, is coming to an end, according to a recent research paper.

Granted, that end likely won't come for about two decades, but Intel researchers have recently published a paper theorizing that chipmakers will hit a wall when it comes to shrinking the size of transistors, one of the chief methods for making chips that are smaller, more powerful and cheaper than their predecessors.

Manufacturers will be able to produce chips on the 16-nanometer manufacturing process, expected by conservative estimates to arrive in 2018, and maybe one or two manufacturing processes after that, but that's it.

News.context

What's new:
Semiconductor makers won't be able to shrink transistors much, if at all, beyond 2021, according to a new paper from Intel.

Bottom line:
Transistor shrinkage is one of the main drivers of Moore's Law, so chipmakers are going to have to start looking for new ways to make their chips more powerful--and less expensive. Otherwise, the pace of progress in the IT industry could begin to slow.


"This looks like a fundamental limit," said Paolo Gargini, director of technology strategy at Intel and an Intel fellow. The paper, titled "Limits to Binary Logic Switch Scaling--A Gedanken Model," was written by four authors and was published in the Proceedings of the IEEE (Institute of Electrical and Electronics Engineers) in November.


Although it's not unusual for researchers to theorize about the end of transistor scaling, it's an unusual statement for researchers from Intel, and it underscores the difficulties chip designers currently face. The size, energy consumption and performance requirements of today's computers are forcing semiconductor makers to completely rethink how they design their products and are prompting many to pool design with research and development.

Resolving these issues is a major goal for the entire industry. Under Moore's Law, chipmakers can double the number of transistors on a given chip every two years, an exponential growth pattern that has allowed computers to get both cheaper and more powerful at the same time.

Mostly, the trick has been accomplished through shrinking transistors. With shrinkage tapped out, manufacturers will have to find other methods to keep the cycle going.

These issues will likely be widely discussed this week, when the International Technology Roadmap for Semiconductors is unveiled in Taiwan. The ITRS, which is comprised of several organizations, including the Semiconductor Industry Association, outlines the challenges and rough timetable for the industry for 15 years. A new version of the plan will be released in Taiwan on Dec. 2.

Still, Gargini said, researchers are exploring a variety of ideas, such as more efficient use of electrons or simply making bigger chips, to surpass any looming barriers. Other researchers likely will dispute these conclusions.

"We cannot let physics beat us," he said, laughing.

The distinguished circuit

The problem chipmakers face comes down to distinction and control. Transistors are essentially microscopic on/off switches that consist of a source (where electrons come from), a drain (where they go) and a gate that controls the flow of electrons through a channel that connects the source and the drain.

When current flows from the source to the drain, a computer reads this as a "1." When current is not flowing, the transistor is read as a "0." Millions of these actions together produce the data inside PCs. Strict control of the gate and channel region, therefore, are necessary to produce reliable results.

When the length of the gate gets below 5 nanometers, however, tunneling will begin to occur. Electrons will simply pass through the channel on their own, because the source and the drain will be extremely close. (A nanometer is a billionth of a meter.)

Gargini likens the phenomenon to a waterfall in the middle of a trail. If a person can't see through it, they will take a detour around it. If it is only a thin veil of mist, people will push through.

"Where you have a barrier, the electrons penetrate a certain distance," he said. "Once the two regions are close enough, because of tunneling, the charge will go from A to B, even when a voltage is not applied to the gate."

At this point, a transistor becomes unreliable as a source of basic data, because the probability of spontaneous transmission is about 50 percent. In other words, Heisenberg's uncertainty principle is in action, because the location of the electrons can't be accurately predicted.

In chips made on a 16-nanometer technology process, the transistor gate will be about 5 nanometers long.


"At 5-nanometer gate dimension, I would have to agree with them," said Craig Sander, vice president of process technology development for AMD. "I think we will find applications that don't require that we stay on such an aggressive roadmap."

When these chips will start to be produced is a matter of debate. On paper, new manufacturing processes come out every two years. Chips made on the 90-nanometer process, which contain gate lengths of about 37 nanometers, are just starting to be produced. On a two-year cycle, this would mean that 16-nanometer chips would appear in 2013 with the barriers preventing new, smaller chips in 2015.

Manufacturers, however, have had to delay the introduction of new processes recently. Using a three-year calendar, 5-nanometer chips won't hit until 2018 or 2019, putting a barrier generation at about 2021. The ITRS timetable will provide more details about the different manufacturing technologies for a given year.

The tunneling effects, Gargini said, will occur regardless of the chemistry of the transistor materials. Several researchers over the years have predicted the end of Moore's Law but made the mistake of extrapolating on the basis of existing materials.

Designers, however, continually change the materials and structures inside semiconductors. Intel and rival Advanced Micro Devices, for instance, are looking at replacing silicon transistor gates with metallic gates so that chips can be mass-produced with 45-nanometer manufacturing--expected between 2007 and 2009. Gates on this process will be about 18 nanometers, according to the ITRS timetable.

article continues...

fascinating.

IIRC, one of the senior IBM tech guys onstage during the WWDC G5 Stevenote described the current process technology as 4 atoms thick in some places

can't get much thinner in that dimension...
"I do not fear computers. I fear the lack of them" -Isaac Asimov
Reply
post #21 of 21
Quote:
Originally posted by curiousuburb
Intel Scientists are now predicting a "Wall" for Moore's Law

looks like the conference discussing this is still ongoing. more news may follow.



fascinating.

IIRC, one of the senior IBM tech guys onstage during the WWDC G5 Stevenote described the current process technology as 4 atoms thick in some places

can't get much thinner in that dimension...

That's probably the gate oxide. You can probably get around that by using higher-permittivity materials, so you don't need the gate oxide that thin and can still have sufficient gate capacitance with a thicker gate oxide.

There is no "wall" for Moore's Law yet.
There are a lot of technologies that remain uncommercialized, like FinFET transistors, which were demonstrated at gate lengths of 10nm a year ago using present technology.