programmer
About
- Username: programmer
- Joined
- Visits: 51
- Last Active
- Roles: member
- Points: 454
- Badges: 1
- Posts: 3,503
Reactions
Generation gaps: How much faster Apple Silicon gets with each release
MacPro said: "You are aware, I hope, I was referring to the OP's comment about 40 years hence? If you don't think in forty years computing power will be over 1000 times more powerful I am guessing you are young? I started working for Apple in the late 70s so have a long perspective."

Not as old as you, but not far off. And I'm in the industry right now, and have been for decades, with a good view of what is really going on. I'm extremely familiar with how far we've come, and yes, computing is millions of times more powerful than the earliest computers. Could we see a 1000x improvement in the next 40 years? Yes, it's possible.

My point is that we can't take past progress as the metric for future progress. The idea of continuous, steady progress in process improvement is gone, and has been for quite a while. Much of Moore's original paper was about the economics of chip production; performance was almost a side effect. The problem is that each successive improvement costs more and more, delivers less and less, and comes at higher and higher risk. In this situation the economic model could break down and put that 1000x in 40 years in jeopardy. Nobody knows what that's going to look like, because the industry has never been in this position before. It's new territory, and that makes predictions highly suspect.
Generation gaps: How much faster Apple Silicon gets with each release
dope_ahmine said:CPUs and GPUs actually complement each other in AI. CPUs handle tasks with lots of decision-making or data management, while GPUs jump in to power through the raw computation. It’s not just about one or the other; the best results come from using both for what each does best.
As for energy efficiency, GPUs perform many tasks at a much lower power cost than CPUs, which is huge for AI developers who need high-speed processing without the power drain (or cost) that would come from only using CPUs.
And on top of all that, new architectures are even starting to blend CPU and GPU functions. Apple's M-series chips, for example, let the CPU and GPU access the same memory, which cuts down on data-transfer time and saves power. Plus, with popular libraries like PyTorch, CUDA, and TensorFlow, it's easier than ever to optimize code to leverage GPUs, so more developers can get the speed and efficiency benefits without diving deep into complex GPU programming.
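The "use both for what each does best" idea above usually starts with device selection. A minimal sketch of that logic, with `pick_device` as a hypothetical helper (the device names "mps", "cuda", and "cpu" are real PyTorch backend identifiers; the fallback order is an illustrative assumption):

```python
def pick_device(mps_available: bool, cuda_available: bool) -> str:
    """Prefer Apple's Metal backend (unified memory), then CUDA, then CPU."""
    if mps_available:
        return "mps"   # Apple Silicon GPU via Metal Performance Shaders
    if cuda_available:
        return "cuda"  # NVIDIA GPU
    return "cpu"       # fall back to the CPU for everything else

print(pick_device(True, False))  # prefers the Apple Silicon GPU when present
```

In real PyTorch code the availability flags would come from `torch.backends.mps.is_available()` and `torch.cuda.is_available()`, and tensors would be moved with `.to(device)`; the CPU still handles data loading and control flow while the chosen accelerator does the heavy math.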
-
Generation gaps: How much faster Apple Silicon gets with each release
MacPro said:
1der said: "It seems Cook's law is then about 4 years. It's always fun to make lots of assumptions and project into the future. In doing so I imagine in say 40 years what seemingly AI miracles could be accomplished with the machine in your hand being 1000 times as powerful."
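The arithmetic behind the quote checks out: one doubling every 4 years gives ten doublings over 40 years, i.e. roughly the "1000 times as powerful" figure. A quick sketch (the 4-year doubling cadence is the commenter's assumption, not an established law):

```python
doubling_period_years = 4    # commenter's assumed "Cook's law" cadence
horizon_years = 40

doublings = horizon_years // doubling_period_years  # 10 doublings
factor = 2 ** doublings

print(factor)  # 1024 -- roughly the 1000x in the quote
```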
Generation gaps: How much faster Apple Silicon gets with each release
Galfan said: There is an error in this article. It seems that for the M4 Pro GPU graph the editors picked out the OpenCL test report from Geekbench instead of the Metal test.
In Metal, the M4 Pro GPU scores around 113,865.
-
Generation gaps: How much faster Apple Silicon gets with each release
emoeller said: It would be really interesting to know the degree of binning that Apple uses. Generally, the more complex the chip, the higher the failure rate, but a partially defective die could probably still operate with fewer CPU/GPU cores enabled, resulting in a lower-grade chip. Since the chips are tested and sealed with the chip name (M4, M4 Pro, M4 Max, M4 Ultra) on top, we can't really tell what the configuration is, other than that it will be at most the maximum advertised chipset. Hence a binned M4 Max could be sold/used as either an M4 or an M4 Pro.