Generation gaps: How much faster Apple Silicon gets with each release


Comments

  • Reply 21 of 38
    chasm said:
    netrox said:
    Exactly why do we need to keep adding more CPU cores when most creative oriented applications would benefit from having more GPU cores? 
    Not that I’m the last word on this topic, but to put this VERY simply CPUs do math and GPUs take that math and manipulate pixels. Graphics are created through math, so more CPUs enable GPUs to do their job better.

    More GPUs are needed when you have really really large screens/more screens. More CPUs are needed when you need more graphics.
    Sorry, but that is wrong.  GPUs excel at doing math at high memory bandwidths... but they basically need to be able to do the math in parallel, and the application has to be written specifically to use the GPU.  CPUs are the default place for code to run, and are generally better at doing complex logic with lots of decisions, randomly chasing through memory for data, and doing less "orderly" computations.  To leverage multiple CPUs, the application has to be written to do that and it isn't the default.  Code typically starts its existence on a single CPU, then the programmer improves it to take advantage of multiple CPUs, then they might improve it further to either use the CPU SIMD or matrix hardware, or re-write critical pieces to run on the GPU.  These days it is also quite common for application programmers to use libraries (often Apple's) which do things like leverage multiple cores, SIMD, matrix hardware, and GPUs.  Creative oriented applications are often graphics or audio heavy, and those things can usually take advantage of all this potential hardware parallelism as long as they are optimized to do so (and the good ones are).
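
    To make that progression concrete, here is a minimal sketch in Python (the thread references no particular codebase, so everything here is illustrative): the same reduction written as a plain single-core loop, then split across CPU cores by hand, then handed to an optimized library that uses vectorized kernels under the hood.

    ```python
    from concurrent.futures import ProcessPoolExecutor
    import numpy as np

    def sum_of_squares_scalar(xs):
        # 1. Where most code starts: a straightforward loop on one CPU core.
        total = 0.0
        for x in xs:
            total += x * x
        return total

    def sum_of_squares_multicore(xs, workers=4):
        # 2. Hand-parallelized: split the work into chunks, run them on
        #    separate cores, and combine the partial results.
        chunks = np.array_split(xs, workers)
        with ProcessPoolExecutor(max_workers=workers) as pool:
            return sum(pool.map(sum_of_squares_scalar, chunks))

    def sum_of_squares_library(xs):
        # 3. Let an optimized library do it: NumPy's dot dispatches to
        #    vectorized (SIMD) kernels, typically backed by Accelerate on macOS.
        return float(np.dot(xs, xs))

    if __name__ == "__main__":
        data = np.random.rand(1_000_000)
        print(sum_of_squares_scalar(data),
              sum_of_squares_multicore(data),
              sum_of_squares_library(data))
    ```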

    The question of CPUs vs GPUs on the SoC is a complex one.  Many applications don't use the GPU at all, except for the UI (which hardly needs any GPU at all) but are optimized for multiple CPUs... adding more GPU for those applications gets you nothing.  Even GPU-heavy applications can also benefit from more CPUs, in some cases.  Ultimately though, the GPUs tend to be memory bandwidth limited, so scaling up the GPU beyond what the memory bandwidth can support gets us very little.
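
    A rough roofline-style estimate shows why that bandwidth ceiling bites; the numbers below are hypothetical, chosen only to illustrate the shape of the problem, not Apple's published specs.

    ```python
    # Streaming kernel: c[i] = a[i] + b[i] on float32 arrays.
    # Each element moves 12 bytes (read a, read b, write c) per 1 FLOP.
    bytes_per_flop = 12

    memory_bandwidth_gbps = 400.0   # hypothetical unified-memory bandwidth, GB/s
    gpu_peak_gflops = 10_000.0      # hypothetical raw GPU compute, GFLOP/s

    # Most FLOPs the memory system can actually feed per second:
    bandwidth_ceiling_gflops = memory_bandwidth_gbps / bytes_per_flop

    print(f"bandwidth-limited ceiling: ~{bandwidth_ceiling_gflops:.0f} GFLOP/s")
    print(f"GPU peak compute:           {gpu_peak_gflops:.0f} GFLOP/s")
    # ~33 vs 10,000 GFLOP/s: for kernels like this, adding GPU cores buys
    # nothing; only more memory bandwidth (or a less bandwidth-hungry
    # algorithm) moves the needle.
    ```
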
    Glad you stepped in here and corrected the previous two comments, @programmer.


    On top of that, GPUs today are crucial for AI and creative tasks because of their ability to handle tons of calculations in parallel, which is exactly what’s needed for training neural networks and other data-heavy workloads. Modern GPUs even come with specialized hardware, like NVIDIA’s Tensor Cores, that’s purpose-built for the matrix math involved in deep learning. This kind of hardware boost is why GPUs are so valuable for things like image recognition or natural language processing—they’re not just for “manipulating pixels”.
    CPUs and GPUs actually complement each other in AI. CPUs handle tasks with lots of decision-making or data management, while GPUs jump in to power through the raw computation. It’s not just about one or the other; the best results come from using both for what each does best.
    As for energy efficiency, GPUs perform many tasks at a much lower power cost than CPUs, which is huge for AI developers who need high-speed processing without the power drain (or cost) that would come from only using CPUs.
    And on top of all that, new architectures are even starting to blend CPU and GPU functions—like Apple’s M-series chips, which let both CPU and GPU access the same memory to cut down on data transfer times and save power. Plus, with all the popular libraries like PyTorch, CUDA, and TensorFlow, it’s easier than ever to optimize code to leverage GPUs, so more developers can get the speed and efficiency benefits without diving deep into complex GPU programming.
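
    As a hypothetical illustration of how little GPU-specific code those libraries ask for, this is roughly how a PyTorch script picks up Apple's GPU through the Metal (MPS) backend; the model and sizes are placeholders, not anything from the article.

    ```python
    import torch

    # Pick the best available backend: CUDA on NVIDIA hardware, MPS (Metal)
    # on Apple Silicon, otherwise fall back to the CPU.
    if torch.cuda.is_available():
        device = torch.device("cuda")
    elif torch.backends.mps.is_available():
        device = torch.device("mps")
    else:
        device = torch.device("cpu")

    # A throwaway model and batch; the only GPU-specific step is moving the
    # module and tensors to the chosen device.
    model = torch.nn.Sequential(
        torch.nn.Linear(512, 256),
        torch.nn.ReLU(),
        torch.nn.Linear(256, 10),
    ).to(device)

    batch = torch.randn(64, 512, device=device)
    logits = model(batch)
    print(device, logits.shape)
    ```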
  • Reply 22 of 38
    programmer Posts: 3,489 member
    Galfan said:
    There is an error in this article. It seems that for the M4 Pro GPU graph the editors picked out the OpenCL test report from Geekbench instead of the Metal test.
    In Metal, the M4 Pro GPU scores around 113,865.
    Nice catch, thanks!  Their number made no sense, but I didn't know why.

  • Reply 23 of 38
    danox Posts: 3,475 member
    dutchlord said:
    Nobody cares! Even my M1 iMac is still fast enough. Apple's focus on speed has no customer value and is not a trigger to upgrade. 
    With Apple, it is the total package, software and hardware, that triggers an upgrade.
  • Reply 24 of 38
    programmer Posts: 3,489 member

    MacPro said:
    1der said:
    It seems Cook’s law is then about 4 years.  It's always fun to make lots of assumptions and project into the future. In doing so, I imagine what seemingly miraculous AI feats could be accomplished in, say, 40 years with the machine in your hand being 1,000 times as powerful.
    Same here.  However, I bet your 1000-times increase is way short of the mark in terms of performance gain.
    This sort of abstraction is based on a fallacy that future progress will follow the same pattern as past progress.  Moore's "Law" broke down because that no longer holds.  Until about the mid-2000s, we were rapidly and steadily taking advantage of the relatively easy scaling offered by the available EM spectrum for exposing masks.  Since that time, improvement has gotten slower, much harder, and much more expensive because we've reached extreme frequencies which are hard to use, we've hit the power leakage problem at tiny feature sizes, and so many more issues.  Each process node improvement is a slow, expensive victory with ever more diminishing returns.  For a lot of kinds of chips it's not worth the cost of going to a smaller process, and that means there is less demand to drive shrinking to the next node.  So it is not justified to look at the progress over M1 through M4 and extrapolate linearly.  We aren't at the end of the road, but getting to each successive process node is less appealing. 
  • Reply 25 of 38
    danox Posts: 3,475 member

    aderutter said:
    If you look at Adobe apps for example, they benefit more from CPU as long as the GPU is at a certain level. Once a user has a mid-range GPU then they don’t need GPU as much as RAM & CPU (and the real-life RAM requirements have decreased with Apple Silicon). 

    Another example is ZBrush, which is purely CPU based. Even most of the time working in other 3D applications the CPU is more important, as people working in 3D spend more time not-rendering than rendering, and the machine can render while you put the kettle on. 

    It’s gamers and some 3D renderers that use more GPU - but CPU 3D rendering is more accurate, and so CPU rendering (obviously with farms) is the default in Hollywood, whilst we mortals have to just use what’s available on our budgets - typically a desktop GPU rather than a cloud render. The usual options when thinking only of rendering for games or lower-end 3D rendering are GPU (cheap and fast on PC), or CPU (slower, more accurate, and slightly better on Mac generally).  

    When/if Apple releases an M4 Ultra that is twice the GPU performance of the M4 Max, it should be equivalent to an Nvidia 4090 and set the cat amongst the pigeons. 2025 could be the start of Apple desktop disruption.

    Unless Apple stops overcharging for memory and storage, probably not. Current prices: an “Apple M2 Ultra with 24‑core CPU, 76‑core GPU, 32‑core Neural Engine, 64GB unified memory, 2TB SSD storage” comes to $5,399.99, while a prebuilt PC with a 4090 comes to $3,999.99: https://www.bestbuy.com/site/corsair-vengeance-a7400-gaming-desktop-amd-ryzen-9-9900x-64gb-rgb-ddr5-memory-nvidia-geforce-rtx-4090-2tb-ssd-black/6604319.p?skuId=6604319
    Apple is on a different path than those barn-burning PC solutions. Internally, Apple's solution is completely different: high megahertz at 350 watts, or 650 watts across multiple cards, just isn't in the game plan. Apple's progression will get to 4090 level with more memory and bigger bandwidth, and it will go beyond. It just takes time, like the 13 years it took to replace Intel processors in Macs, but when they get there it will be well worth it, and it will trigger a meltdown in the PC world. In fact, the performance of the Apple Silicon M1 has already done that.


    https://www.youtube.com/watch?v=5dhuxRF2c_w (13:05, wattage used). Apple is just on a different path than the PC world.

  • Reply 26 of 38
    Xed said:
    dutchlord said:
    Nobody cares! Even my M1 iMac is still fast enough. Apple's focus on speed has no customer value and is not a trigger to upgrade. 
    I care. I am happy to have these charts to compare. If there were other upgrades, like WiFi 7 and a tandem OLED display, I'd definitely be upgrading this year with how much faster the M4 Max is over my M1 Max.
    OK, fair enough. But was your M1 Max slow and you could not use it anymore? 
  • Reply 27 of 38
    Are the M4 Pro Metal benchmarks wrong? I see over 100,000 and 110,000 in several other places, also on the Geekbench website.
  • Reply 28 of 38
    If Metal is 110,000, I am getting an M4 Pro 14/20 with 48GB... 110,000 in Metal is quite decent. The Max is 190,000, and the Ultra will be in the 380,000 range, but you pay 10x more.

    I think it is likely in the Goldilocks zone for me. Not too shabby: extremely awesome single-core performance, very decent multi-core, medium-high GPU...
  • Reply 29 of 38
    Xed said:
    aderutter said:
    If you look at Adobe apps for example, they benefit more from CPU as long as the GPU is at a certain level. Once a user has a mid-range GPU then they don’t need GPU as much as RAM & CPU (and the real-life RAM requirements have decreased with Apple Silicon). 

    Another example is ZBrush, which is purely CPU based. Even most of the time working in other 3D applications the CPU is more important, as people working in 3D spend more time not-rendering than rendering, and the machine can render while you put the kettle on. 

    It’s gamers and some 3D renderers that use more GPU - but CPU 3D rendering is more accurate, and so CPU rendering (obviously with farms) is the default in Hollywood, whilst we mortals have to just use what’s available on our budgets - typically a desktop GPU rather than a cloud render. The usual options when thinking only of rendering for games or lower-end 3D rendering are GPU (cheap and fast on PC), or CPU (slower, more accurate, and slightly better on Mac generally).  

    When/if Apple releases an M4 Ultra that is twice the GPU performance of the M4 Max, it should be equivalent to an Nvidia 4090 and set the cat amongst the pigeons. 2025 could be the start of Apple desktop disruption.

    Unless Apple stops overcharging for memory and storage, probably not. Current prices: an “Apple M2 Ultra with 24‑core CPU, 76‑core GPU, 32‑core Neural Engine, 64GB unified memory, 2TB SSD storage” comes to $5,399.99, while a prebuilt PC with a 4090 comes to $3,999.99: https://www.bestbuy.com/site/corsair-vengeance-a7400-gaming-desktop-amd-ryzen-9-9900x-64gb-rgb-ddr5-memory-nvidia-geforce-rtx-4090-2tb-ssd-black/6604319.p?skuId=6604319
    That's not a valid argument for a claim of overcharging. You want to try again?
    Apologies, I tend to write during limited windows. The overcharging part comes from things like upgrading from 512GB to 4TB costing around +$1,000, or RAM upgrades tending to come in $200 increments. The comparison I was quickly making was that a prebuilt PC with a 4090 can be significantly cheaper than an M2 Ultra.
  • Reply 30 of 38
    programmer Posts: 3,489 member
    CPUs and GPUs actually complement each other in AI. CPUs handle tasks with lots of decision-making or data management, while GPUs jump in to power through the raw computation. It’s not just about one or the other; the best results come from using both for what each does best.

    As for energy efficiency, GPUs perform many tasks at a much lower power cost than CPUs, which is huge for AI developers who need high-speed processing without the power drain (or cost) that would come from only using CPUs.

    And on top of all that, new architectures are even starting to blend CPU and GPU functions—like Apple’s M-series chips, which let both CPU and GPU access the same memory to cut down on data transfer times and save power. Plus, with all the popular libraries like PyTorch, CUDA, and TensorFlow, it’s easier than ever to optimize code to leverage GPUs, so more developers can get the speed and efficiency benefits without diving deep into complex GPU programming.
    This is what the NPU is all about as well.  It is, at its core, a matrix multiplication unit.  Getting a GPU to multiply large matrices optimally is a tricky piece of code... so having dedicated matrix multiplication hardware which is purpose-built for the task makes a lot of sense, if you're doing a lot of that.  Prior to the heavy adoption of deep learning it was almost unheard of for consumer machines to do large matrix multiplication.  That was usually the purview of high performance computing clusters.  With the advent of LLMs and generative models, however, things have changed and it is definitely worth having this hardware sitting on the SoC with the CPUs and GPUs.  Apple also appears to have added matrix hardware to their CPUs (in addition to conventional SIMD), so there are lots of options in an Apple Silicon SoC for where to do these matrix operations.  The NPU is very likely the most power efficient at that (by far), and may also have the highest throughput.  And if you're also doing graphics or other compute, now you don't have to worry about your GPU and CPUs being tied up with the ML calculations.  And the SoC's unified memory architecture lets all these units share their data very very efficiently.
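
    To give a feel for why purpose-built matrix paths pay off, here is a small, illustrative comparison of the same multiply done as a textbook loop versus through an optimized BLAS (NumPy is commonly backed by Accelerate on macOS); the sizes and timings are arbitrary, not a benchmark of any NPU.

    ```python
    import time
    import numpy as np

    n = 128
    a = np.random.rand(n, n).astype(np.float32)
    b = np.random.rand(n, n).astype(np.float32)

    def matmul_naive(a, b):
        # Textbook triple loop: what the math says, with none of the tiling,
        # SIMD, or dedicated hardware that a tuned library or NPU brings.
        rows, inner, cols = a.shape[0], a.shape[1], b.shape[1]
        c = np.zeros((rows, cols), dtype=np.float32)
        for i in range(rows):
            for j in range(cols):
                s = 0.0
                for k in range(inner):
                    s += a[i, k] * b[k, j]
                c[i, j] = s
        return c

    t0 = time.perf_counter()
    c_naive = matmul_naive(a, b)
    t1 = time.perf_counter()
    c_blas = a @ b
    t2 = time.perf_counter()

    print(f"naive loop: {t1 - t0:.3f} s")
    print(f"optimized:  {t2 - t1:.6f} s")
    print("results agree:", np.allclose(c_naive, c_blas, atol=1e-3))
    ```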

  • Reply 31 of 38
    MacPro said:
    aderutter said:
    If you look at Adobe apps for example, they benefit more from CPU as long as the GPU is at a certain level. Once a user has a mid-range GPU then they don’t need GPU as much as RAM & CPU (and the real-life RAM requirements have decreased with Apple Silicon). 

    Another example is ZBrush, which is purely CPU based. Even most of the time working in other 3D applications the CPU is more important, as people working in 3D spend more time not-rendering than rendering, and the machine can render while you put the kettle on. 

    It’s gamers and some 3D renderers that use more GPU - but CPU 3D rendering is more accurate, and so CPU rendering (obviously with farms) is the default in Hollywood, whilst we mortals have to just use what’s available on our budgets - typically a desktop GPU rather than a cloud render. The usual options when thinking only of rendering for games or lower-end 3D rendering are GPU (cheap and fast on PC), or CPU (slower, more accurate, and slightly better on Mac generally).  

    When/if Apple releases an M4 Ultra that is twice the GPU performance of the M4 Max, it should be equivalent to an Nvidia 4090 and set the cat amongst the pigeons. 2025 could be the start of Apple desktop disruption.

    Unless Apple stops overcharging for memory and storage, probably not. Current prices: an “Apple M2 Ultra with 24‑core CPU, 76‑core GPU, 32‑core Neural Engine, 64GB unified memory, 2TB SSD storage” comes to $5,399.99, while a prebuilt PC with a 4090 comes to $3,999.99: https://www.bestbuy.com/site/corsair-vengeance-a7400-gaming-desktop-amd-ryzen-9-9900x-64gb-rgb-ddr5-memory-nvidia-geforce-rtx-4090-2tb-ssd-black/6604319.p?skuId=6604319
    How do you compare the price of RAM for a PC against RAM for an Apple SoC?  There is no way DDR5 on a motherboard in a PC can transfer data as fast.  Same with the built-in storage on a SoC.  They are different in every way, so trying to compare prices is not possible.  I have a very high-end Corsair Vengeance Gaming PC and the M2 Ultra, so I can compare performance.  The price was not far apart.  I didn't have to spend days in the BIOS to get 64 GB of 7200 MT/s CL34 DDR5 working on my M2 Ultra either.
    I don't think you're considering that the 4090 in this case would have a dedicated 24 GB of GDDR6X memory and a total memory bandwidth of around 1,000 GB/s.  There is a reason that these things are in such high demand for AI (or Nvidia's Blackwell/H200/whatever the most recent chip is in the server market).  Let's be real, an x86 system has an advantage in a lot of scenarios on the high end, because it doesn't have the power/thermal limits that Apple designs around.  But right now in performance per watt Apple is in a league of their own.  And if you look at a mid-tier gaming laptop (4070), the base 14-inch Pro model would give you similar GPU specs, but actually be able to run for more than a few hours untethered when pushing it.
  • Reply 32 of 38
    Xed Posts: 2,917 member
    Xed said:
    aderutter said:
    If you look at Adobe apps for example, they benefit more from CPU as long as the GPU is at a certain level. Once a user has a mid-range GPU then they don’t need GPU as much as RAM & CPU (and the real-life RAM requirements have decreased with Apple Silicon). 

    Another example is ZBrush, which is purely CPU based. Even most of the time working in other 3D applications the CPU is more important, as people working in 3D spend more time not-rendering than rendering, and the machine can render while you put the kettle on. 

    It’s gamers and some 3D renderers that use more GPU - but CPU 3D rendering is more accurate, and so CPU rendering (obviously with farms) is the default in Hollywood, whilst we mortals have to just use what’s available on our budgets - typically a desktop GPU rather than a cloud render. The usual options when thinking only of rendering for games or lower-end 3D rendering are GPU (cheap and fast on PC), or CPU (slower, more accurate, and slightly better on Mac generally).  

    When/if Apple releases an M4 Ultra that is twice the GPU performance of the M4 Max, it should be equivalent to an Nvidia 4090 and set the cat amongst the pigeons. 2025 could be the start of Apple desktop disruption.

    Unless Apple stops overcharging for memory and storage, probably not. Current prices: an “Apple M2 Ultra with 24‑core CPU, 76‑core GPU, 32‑core Neural Engine, 64GB unified memory, 2TB SSD storage” comes to $5,399.99, while a prebuilt PC with a 4090 comes to $3,999.99: https://www.bestbuy.com/site/corsair-vengeance-a7400-gaming-desktop-amd-ryzen-9-9900x-64gb-rgb-ddr5-memory-nvidia-geforce-rtx-4090-2tb-ssd-black/6604319.p?skuId=6604319
    That's not a valid argument for a claim of overcharging. You want to try again?
    Apologies, I tend to write during limited windows. The overcharging part comes from things like upgrading from 512GB to 4TB costing around +$1,000, or RAM upgrades tending to come in $200 increments. The comparison I was quickly making was that a prebuilt PC with a 4090 can be significantly cheaper than an M2 Ultra.
    I don't consider that to be a way to determine if a company is overcharging.  I would say a company is overcharging if they are charging you for products and services you aren't receiving or if what they are charging is beyond the equilibrium price that would then cause the demand to be too low compared to their supply, or more importantly from Apple's POV, their overall profit.

    I've built countless systems in my time but that's not something I want to do again. This is especially true when it comes to a laptop, especially a modern laptop where integrated components allow for a considerably faster system than the limitations of slotted components. Sure, it does kinda suck that I can't remove the SSD for security purposes or upgrade the unified RAM, but I feel the overall benefits are worth it. I also don't balk at the price when a Mac laptop now lasts me 4–6 years, whereas I was getting a new Mac laptop every year back in the PPC days and nearly every year back in the Intel days. Overall I'm saving money while doing a lot more, and having it be a considerably lower percentage of my available income.
  • Reply 33 of 38
    Xed Posts: 2,917 member
    dutchlord said:
    Xed said:
    dutchlord said:
    Nobody cares! Even my M1 iMac is still fast enough. Apple's focus on speed has no customer value and is not a trigger to upgrade. 
    I care. I am happy to have these charts to compare. If there were other upgrades, like WiFi 7 and a tandem OLED display, I'd definitely be upgrading this year with how much faster the M4 Max is over my M1 Max.
    OK, fair enough. But was your M1 Max slow and you could not use it anymore? 
    It's not broken, but I was considering a new MacBook Pro. It's pretty much the same routine with every release: I am a heavy MacBook Pro user, as it's my most commonly used device, so I analyze the pros and cons with each release. That speed not only would be nice, but it would also allow certain tasks to complete faster in ways I would notice, and more importantly it would allow for better battery consumption, since the machine can drop back into a low-power state more quickly, which for me is the important factor.
  • Reply 34 of 38
    sflocal Posts: 6,145 member
    As much as I love my M2 MBP, I'm more a desktop user and prefer serious desktop horsepower.  For the time being, I have a spec'd-out 2020 iMac, and when the day comes in a few more years that I decide to retire it, I'm looking forward to whatever iMac or Mac Studio is available that year.  Could be an M6 or M7, but I'm enjoying watching the advancements that Apple is doing with Apple Silicon.  Quite impressive work!
  • Reply 35 of 38
    MacPro Posts: 19,857 member

    MacPro said:
    1der said:
    It seems Cook’s law is then about 4 years.  It's always fun to make lots of assumptions and project into the future. In doing so, I imagine what seemingly miraculous AI feats could be accomplished in, say, 40 years with the machine in your hand being 1,000 times as powerful.
    Same here.  However, I bet your 1000-times increase is way short of the mark in terms of performance gain.
    This sort of abstraction is based on a fallacy that future progress will follow the same pattern as past progress.  Moore's "Law" broke down because that no longer holds.  Until about the mid-2000s, we were rapidly and steadily taking advantage of the relatively easy scaling offered by the available EM spectrum for exposing masks.  Since that time, improvement has gotten slower, much harder, and much more expensive because we've reached extreme frequencies which are hard to use, we've hit the power leakage problem at tiny feature sizes, and so many more issues.  Each process node improvement is a slow, expensive victory with ever more diminishing returns.  For a lot of kinds of chips it's not worth the cost of going to a smaller process, and that means there is less demand to drive shrinking to the next node.  So it is not justified to look at the progress over M1 through M4 and extrapolate linearly.  We aren't at the end of the road, but getting to each successive process node is less appealing. 
    You are aware, I hope, that I was referring to the OP's comment about 40 years hence?  If you don't think computing power will be over 1,000 times more powerful in forty years, I am guessing you are young?  I started working for Apple in the late '70s, so I have a long perspective.

    Let's compare the Apple ][ to today's Apple Silicon, which I admit is over 40 years but not by so much as to alter my point.

    Key Comparisons
    Clock Speed:
    6502: ~1 MHz
    M3 Max/Ultra: >3 GHz, or over 3000 times faster in clock speed alone.
    Processing Architecture:
    6502: 8-bit, single-core.
    M3 Max/Ultra: 64-bit, with up to 24+ CPU cores and additional GPU cores, enabling it to handle vastly more data in parallel.
    Instructions Per Second:
    6502: Estimated in the range of 500,000 instructions per second.
    M3 Max/Ultra: In the trillions of instructions per second (teraflops in GPU processing).
    Memory:
    6502: Typically paired with 4 KB to 48 KB of RAM.
    M3 Max/Ultra: Supports up to 192 GB of high-speed unified memory, which is both larger and faster by orders of magnitude.
    Power and Applications:
    The 6502 was powerful for basic calculations, gaming, and rudimentary graphics, while today’s Apple Silicon can handle real-time AI processing, complex 3D graphics, and high-resolution video editing, often simultaneously.

    In simple terms, the top Apple Silicon SoC is millions of times more powerful than the 6502 chip in overall computing capability. The difference isn’t just in speed but in the scale of tasks it can handle, moving from basic computation and graphics to complex machine learning and immersive experiences.  So, I stand by my comment. We have zero idea what will come in the future, but I bet you anything you want that, whatever it is, it will be more than 1,000 times faster.  Assuming we still have a planet, that is.
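
    A quick sanity check on those ratios, using the same rough figures as the list above (nothing here is measured):

    ```python
    clock_6502_hz = 1e6        # ~1 MHz
    clock_m_series_hz = 3e9    # >3 GHz

    ips_6502 = 5e5             # ~500,000 instructions per second
    ops_m_series = 1e12        # on the order of a trillion operations per second

    print(f"clock ratio:      ~{clock_m_series_hz / clock_6502_hz:,.0f}x")
    print(f"throughput ratio: ~{ops_m_series / ips_6502:,.0f}x")
    # Roughly 3,000x on clock alone and around 2,000,000x on raw throughput,
    # which is the "millions of times more powerful" ballpark.
    ```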
  • Reply 36 of 38
    programmer Posts: 3,489 member
    MacPro said:
    You are aware, I hope, that I was referring to the OP's comment about 40 years hence?  If you don't think computing power will be over 1,000 times more powerful in forty years, I am guessing you are young?  I started working for Apple in the late '70s, so I have a long perspective.
    Not as old as you, but not far off.  And I’m in the industry right now, and have been for decades with a good view of what is really going on.  I’m extremely familiar with how far we’ve come, and yes, it is millions of times more powerful than the earliest computers.  Could we see 1000x improvement in the next 40 years?  Yes, it’s possible.  

    My point is that we can’t take past progress as the metric for future progress.  This idea of continuous steady progress in process improvement is gone, and has been for quite a while.  Much of Moore’s original paper was about the economics of chip production.  Performance was kind of a side effect.  The problem is that each successive improvement costs more and more, and delivers less and less, and comes at higher and higher risk.  In this situation the economic model could break down, and put that 1000x in 40 years in jeopardy.  Nobody knows what that’s going to look like because the industry has never been in this position before.  New territory.  Makes predictions highly suspect.

  • Reply 37 of 38
    sdw2001 Posts: 18,040 member
    The pace of performance increase is pretty impressive. For example, I have an M2 Max MacBook Pro with 2 TB/32 GB. About 18 months later, I am replacing it with an M4 Max with similar memory specs. Comparing the two chips, it's pretty unbelievable how far they came in about 18 months.
  • Reply 38 of 38
    MacPro Posts: 19,857 member
    MacPro said:
    You are aware, I hope, that I was referring to the OP's comment about 40 years hence?  If you don't think computing power will be over 1,000 times more powerful in forty years, I am guessing you are young?  I started working for Apple in the late '70s, so I have a long perspective.
    Not as old as you, but not far off.  And I’m in the industry right now, and have been for decades with a good view of what is really going on.  I’m extremely familiar with how far we’ve come, and yes, it is millions of times more powerful than the earliest computers.  Could we see 1000x improvement in the next 40 years?  Yes, it’s possible.  

    My point is that we can’t take past progress as the metric for future progress.  This idea of continuous steady progress in process improvement is gone, and has been for quite a while.  Much of Moore’s original paper was about the economics of chip production.  Performance was kind of a side effect.  The problem is that each successive improvement costs more and more, and delivers less and less, and comes at higher and higher risk.  In this situation the economic model could break down, and put that 1000x in 40 years in jeopardy.  Nobody knows what that’s going to look like because the industry has never been in this position before.  New territory.  Makes predictions highly suspect.

    It's a bet I'd happily take ;)