programmer

  • Early M2 Max benchmarks may have just leaked online

    bulk001 said:
    So what’s it been? A year or two and Apple is already basically at the same place Intel is with Incremental updates spread out over a period of years unable to deliver on a predictable timetable. Yes there are some battery life advantages and the initial jump of the v1 chip but it is not very promising moving forward if this is accurate! 
    ... while dealing with the massive disruption brought about by a global pandemic. I don't share your pessimism. Remember that Apple famously ships a great v1.0 and then iterates consistently; some people moan about the lack of regular "OMG" updates but over time the consistency of improvement leads to massive gains.

    Also, tenthousandthings pointed out that in 2014 (a mere 8 years ago!) TSMC was using a 20nm process node. I mentally used a swear word when I read that. Astonishing progress to be shipping at 5nm and imminently 3nm in that timeframe. Well done to everyone at TSMC, that is spectacular!
    A resetting of expectations is also required. The reality of semiconductor fabrication is that since running into the power wall back in the mid-00s, things haven't been scaling smoothly the way they had since humanity started building integrated circuits. The time between nodes has increased, the risk of moving to new nodes has increased, and the cost of new nodes has increased dramatically. The free lunch ended almost two decades ago, and since then the semiconductor industry has been clawing out improvements with higher effort and lower rewards.

    A lot of the gains since then have come from doing things other than simply bumping CPU clock rates and cache sizes. GPU advancements were the first "post-CPU" wave, SoCs happened in the mobile space first and then moved into laptops and desktops, and more recently ML-related hardware has become common. Most of these require software changes just to be used at all, never mind actually optimized for.

    And that is why Apple needed to move to Apple Silicon -- not because they could build a better CPU than Intel, but because they need to build the chip Apple needs. There is far more in the Axx/Mx chips than just the CPUs and GPUs: how they are interconnected, how they share cache/memory resources, the fixed-function hardware units, the mix of devices, how the various accelerators are tuned for the workloads running on Apple systems, and so on. Expect to see variants tuned for specific systems, perhaps variants at the packaging level -- the same cores re-packaged and used in different configurations.

    Just the fact that TSMC has so many variations on the 5nm process node ought to be a clue about how hard getting to the next level has become.  Intel being stuck for a long time at 10nm was a foreshadowing of the future.  And at each stage, the designers are going to have to work harder and innovate more with each process advancement to wring as much value from it as possible... because the next one is going to be even more horrendously expensive and risky (and likely bring diminishing returns, plus "interesting" problems).

  • M2 Pro, M2 Max MacBook Pro models could arrive by the fall

    Fred257 said:
    I’m not getting near any M2 anything. The reviews I have seen show absolutely no consistency in both the CPU and GPU reviews. As a matter of fact I think it’s the worst Apple architecture we will see. Apple, fix your stupid mistakes!! Only one NAND chip on the lower end models is a complete design flaw and ultra cheap on your part. The GPU being slower on the 10 core then the 8 is inexcusable!! Stop and improve these mistakes please. And for Apple fanboys here who complain about me I’ve used Macs since 1989 making music on Cubase before many of you were in your mothers womb 
    I think this is a confluence of a couple of things.  Their low end machines are using a single NAND chip as a cost saving measure, and the higher end machines shouldn't do that.  The GPU being slower on 10 vs 8 cores is likely a software thing (in the driver or Metal) or a benchmarking artifact.

    As usual, you need to assess the machine you're interested in (when that becomes possible), not projections or theorizing about what it'll be.  I think the M2 pro/max/ultra/whatever are going to be pretty impressive chips and Apple will put them in either the same or improved systems compared to last year's models.  Hopefully the long awaited Mac Pro arrives too.

  • Volkswagen CEO isn't sure that Apple wants to build cars

    Beats said:
    DAalseth said:
    I’ve thought the same for a while now. It’s one thing to produce a system that is used in other people’s cars, and making the whole thing yourself. The latter involves setting up a dealer network, service and support systems, dealing with warranty claims and more. The current network of AppleStores would simply not work at all for this. Apple would have to build a new car dealership network from scratch. Apple has experience designing software. It’s another story with seats, air bag systems, wheel bearings, door locks, and all the rest of the stuff that goes into a car. 
    Why reinvent the wheel, when they can make something to go on other people’s products. Far less hassle and far more money to be made in the latter. 

    Imagine if Henry Ford and other manufacturers thought this way…. We would all be riding horses.

    Apple spending years and billions to supply an extension of iPhone only is NOT a good idea. At some point they would need hardware.
    The world is an extremely different place now, and the automotive industry completely different.  Frankly I think Apple would be beyond stupid to start building cars.  IMO, it makes way more sense for them to build the computers that go into cars (and the software for them).  Then sell this to any car maker that wants it.

  • Apple Silicon Mac Pro could combine two M1 Ultra chips for speed

    One thing for sure,  they won't include any drive bays for 5.25" devices or conventional hard drives. There is no advantage to having those devices inside of the case from a power, cooling or speed perspective. I also suspect the use of some kind of proprietary RAM expansion.
    That depends on what their big customers with lots of money want. The advantage is that it's all in one box, rather than having external enclosures, which can be problematic in some situations. I tend to agree that the age of 5.25" form-factor drive bays is gone, but I'm also not the target audience for such a capability.

    And an earlier reply to my comment:
    zimmie said:
    On the topic of RAM, there's nothing inherent in the M1's design which precludes off-package RAM. They have a RAM controller on the chips, and the RAM controller is shared between CPU and GPU cores, but there's no fundamental reason the RAM couldn't be in DIMMs. Apple just hasn't chosen to do that. They might do so with the Mac Pro, or they might not. I don't see them doing a tiered memory structure, though. They just went to significant lengths to do away with NUMA concerns on the M1 Ultra.
    With the introduction of the Mac Studio, I wouldn't be surprised to see the Mac Pro go primarily rackmount. Very few people need more computing power than the Mac Studio offers at their desks. Almost everyone who does need more computing power is in an environment where they can rack the computer in a closet. For example, recording studios, film studios, scientific labs, and so on all have 19" rack space for other equipment, so putting specialist workstations in there isn't a stretch. That said, rackmount would mostly be relevant for a box with several full-height, full-length PCIe slots (e.g., to add hardwired audio and video inputs), and I'm not yet convinced Apple is interested in that at all. They might say the future is a rackmount interface which connects to the system via Thunderbolt. I'd be curious to know what they have seen the current rackmount Mac Pro doing.

    The in-package memory has much tighter timing tolerances because the memory configuration is fixed, the signal driver power levels are lower, and the trace distances are very short. I would imagine their memory controller takes full advantage of those facts and cuts a lot of the complicated corners that dealing with DIMM slots creates. So, I disagree: I do not think their memory controller could support out-of-package memory without some serious work, and it would represent a large power increase and a performance hit.

    What I suggested isn't a software-visible tiered memory structure. It is just an alternative fast backing store for the existing virtual memory system that all software already works with (currently backed by flash). Implementing this would require no changes to the M1 architecture and very little OS change. The Mac Pro has a very small market (especially with the Mac Studio now taking a chunk of it), so custom work to support it doesn't make a lot of sense for Apple. That's a big reason why I think the M1 Ultra is what we will see in the Mac Pro -- and probably just one of them, as going multi-chip means a lot of specialized added hardware design work that I don't think they want to do.

    The M1 Ultra has a pretty amazing amount of compute, after all.  Bump the clock rate a little and you give it a small edge over the Mac Studio.

    A rack mountable full-sized case (but still a desktop workstation) which can hold lots of extra drives, memory, and PCIe cards would differentiate it from the Mac Studio.  

    I doubt they will do this, but one thing they could do fairly easily in such a form factor is put the M1 Ultra motherboard itself on a PCIe card so one case could hold multiple of them (the case becomes just a PCIe backplane then).  How such a machine would be used becomes more challenging though and would take them away from their preferred programming model of a single shared memory space for many CPU/GPU cores.  Without a lot of OS work such a machine would look like several Macs on a high speed network... has some uses, but gets pretty obscure and way out of the consumer space.  Then again, it is the "Mac Pro" so who knows?

  • Apple Silicon Mac Pro could combine two M1 Ultra chips for speed

    My guess is that the Mac Pro will use the same M1 Ultra as the Mac Studio does. The difference will be in the system around the SoC. With a larger form factor, they have more cooling potential and could bump up the clock rates a little... but really, the M1 Ultra is a monster as it is (both in terms of size and performance). I would just take what Ternus said at face value: this is already the last of the M1 series. And I think we will see a Mac Pro that uses it.

    So what could differentiate the Mac Pro?  In a word:  expandability.

    1) PCIe slots.  The M1 Ultra seems to have plenty of I/O, and a fast PCIe bridge chip would easily enable a lot of expansion.

    2) Drive bays.  The Mac Pro would have the same built-in super fast SSD, but in a large case a whole lot of additional storage can be accommodated.

    3) RAM.  This is where it gets tricky. The Apple Silicon approach is to use in-package memory, and there are real constraints on how much can be put into a single package. Some pros just need more than can fit in a single package, or more than is worth building in the TSMC production run. So conventional DIMMs are needed to supplement the super-fast in-package memory.

    The question is, how does macOS use it? Apple seems to want to keep the programming model simple (i.e. CPU/GPU shared memory with a flat/uniform 64-bit virtual address space), so having fast vs. slow regions of memory doesn't seem like the direction they want to go in (although they could, relying on the M1 Ultra's ENORMOUS caches). They are already doing virtual memory paging to flash, however... so why not do virtual memory paging to the DIMMs instead? Big DMA transfers between in-package and on-DIMM memory across very fast PCIe 5.0 lanes would use the available bandwidth as efficiently as possible, and the latency would be masked by the big (page-sized) transfers. A 128GB working memory (the in-package RAM) is huge, so paging out to the expanded pool is not as bad as you might think.

    Such a memory scheme might even sit on PCIe cards, so buyers only pay for DIMM slots if they really need them. Such "RAM disk" cards have been around for ages, but are usually hampered by lack of direct OS support -- an issue Apple could fix easily in their kernel.
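    To get a feel for why paging to DIMMs beats paging to flash, here is a back-of-envelope sketch. All the figures are my assumptions, not Apple specs: ~63 GB/s of usable bandwidth on a PCIe 5.0 x16 link, a 16 KiB page size, and ~80 microseconds for a typical NVMe random read.

    ```python
    # Rough page-fault service times: DIMM-over-PCIe vs. NVMe flash.
    # All numbers are assumptions for illustration, not measured values.
    PAGE_BYTES = 16 * 1024        # assumed 16 KiB page size
    PCIE5_X16_BW = 63e9           # ~63 GB/s usable, PCIe 5.0 x16 (assumed)
    NVME_LATENCY_S = 80e-6        # ~80 us typical NVMe random read (assumed)

    # Pure transfer time for one page across the PCIe link.
    dimm_page_s = PAGE_BYTES / PCIE5_X16_BW

    print(f"page over PCIe 5.0 x16: {dimm_page_s * 1e6:.2f} us")
    print(f"page from NVMe flash:   {NVME_LATENCY_S * 1e6:.0f} us")
    print(f"speedup: ~{NVME_LATENCY_S / dimm_page_s:.0f}x")
    ```

    Even before adding DRAM access latency and controller overhead on the DIMM side, the gap is a couple of orders of magnitude, which is why the "just page to it" scheme is plausible without any software-visible tiering.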
