anonconformist
About
- Username: anonconformist
- Joined:
- Visits: 111
- Last Active:
- Roles: member
- Points: 585
- Badges: 0
- Posts: 202
Reactions
ARM iMac, 13-inch MacBook Pro coming at end of 2020, says Ming-Chi Kuo
rob53 said:
bobolicious said: "Apple would also not be beholden to Intel" - is this ironic in that Apple now seems to make (almost) every move to increase customer dependence on a proprietary Apple?
Apple is not a proprietary computer manufacturer. They have a few Apple-designed components, but the vast majority of components are common. Read through an iFixit or other vendor's teardown and you'll see all kinds of components without Apple's name on them. Even the bulk of macOS has been open-sourced: https://opensource.apple.com.
The only way they could become less constrained by outside issues is to own a foundry for their exclusive use. If we reach a limit on process miniaturization, it might become worthwhile for them to own one; as long as process technology keeps advancing, foundries are very expensive and (currently) become outdated very quickly.
ARM iMac, 13-inch MacBook Pro coming at end of 2020, says Ming-Chi Kuo
jdb8167 said:
ShapeshiftingFish said: Would it be possible to make the current Mac Pro a dual-processor-architecture solution via a plug-in card? So as to have hardware acceleration instead of software emulation? It would be surprising if Apple didn't think of a viable solution in order to not piss off the pros (yet again), given the short timeframe between releasing the new Pro and announcing the major architecture shift, which must have been in planning for a very long time. Yes, I know, Apple this and Apple that about not losing much sleep over radical transitions, but given the cost of the new Pro systems, this would be a huge slap in the face, even by their standards.
But Apple must want to showcase their CPU design chops, and what better place than the top-of-the-line Mac Pro? If they can produce a 64-core Mac Pro that doubles the performance of the current 28-core Xeon, why wouldn't they want to do that? We already know there are companies producing 64-core and larger ARM CPUs, so it is certainly possible for Apple to do the same. For marketing reasons alone, making a new Mac Pro that destroys the current Intel Xeons would be a major win for Apple.
Applications need to be architected differently than most are today before NUMA stops being a performance loss; otherwise, any given application can only use so many cores effectively, while other applications use the remaining cores, with a minimal amount of sharing between them.
What's most probable for heavy CPU-threading scenarios is that they'll benefit far more from fewer, faster cores. Amdahl's Law defines the limit on how much you can scale performance by adding more threads to a problem due to communication and synchronization overhead, and RAM throughput limitations combined with CPU cache coherency quickly make most processing tasks far less efficient. It's even entirely possible, and common, that the more threads and cores you throw at a task, the slower it gets, to the point where total throughput with a larger number of cores is lower than if you just used fewer cores of the same speed.
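As a rough illustration of that scaling limit, here is a minimal sketch of Amdahl's Law in Swift; the parallel fraction and core counts are made-up numbers, and this ignores the coherency and memory-bandwidth overhead described above, which only make the picture worse:

```swift
// A minimal sketch of Amdahl's Law: speedup(n) = 1 / ((1 - p) + p / n), where
// p is the parallelizable fraction of the work and n is the core count.
// The fraction and core counts below are illustrative, not measurements.
func amdahlSpeedup(parallelFraction p: Double, cores n: Double) -> Double {
    1.0 / ((1.0 - p) + p / n)
}

// Even with 95% of the work parallelizable, 64 cores only buy roughly a 15x
// speedup, and real coherency/memory overhead erodes even that.
for cores in [4.0, 8.0, 16.0, 64.0] {
    print("\(Int(cores)) cores: \(amdahlSpeedup(parallelFraction: 0.95, cores: cores))x")
}
```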
Consider it on a more limited basis in smartphones: how much value did octa-core Android devices actually provide? Not much!
Intel aims beyond 5Ghz for future MacBook Pro H-series processors
rob53 said: FUD from Intel to try to slow down Apple's migration to its own chips. Intel won't come out with these any time soon. Single-CPU clock speed still seems to be important because too many applications are not programmed to make use of multiple cores. If developers spent more time getting away from single-threaded programs, they could see their applications run faster by using more cores. Unless something has changed recently, Adobe products continue to use a single core for the majority of their consumer software. Non-Adobe products are using multiple cores, making them run a lot faster, especially on "slower" CPUs.
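A minimal sketch of the kind of change being described, using Swift's DispatchQueue.concurrentPerform to spread an embarrassingly parallel loop across the available cores; the workload here is made up purely for illustration, and most real applications need far more restructuring than this:

```swift
import Dispatch

// Hypothetical, embarrassingly parallel workload: each output element depends
// only on the matching input element, so the loop can be spread across cores.
let inputs = (0..<1_000_000).map(Double.init)
var outputs = [Double](repeating: 0, count: inputs.count)

outputs.withUnsafeMutableBufferPointer { buffer in
    // concurrentPerform splits the iterations across the available cores;
    // each iteration writes a distinct element, so no locking is needed here.
    DispatchQueue.concurrentPerform(iterations: inputs.count) { i in
        buffer[i] = inputs[i].squareRoot()
    }
}
```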
Apple to face class action over MacBook butterfly keyboard
entropys said:
wood1208 said: When Apple agreed to repair and extend the keyboard warranty, a class action lawsuit became immaterial. But if this lawsuit continues, then at most Apple might have to agree to repair beyond the current 4 years from the purchase date. Even the pre-2015 MacBook Pro keyboard has fairly high keys. The butterfly keyboard has very low keys. There are users who like either or don't care. But the Magic Keyboard seems likable by most.
If all you use your laptop for is appearances, then sure, perhaps it's no big deal: you can move over to some arbitrary laptop while your regular laptop is down.
It's true, people aren't absolutely required to buy Apple laptops; they can buy Apple desktop machines, too. You may be thinking to yourself (like an idiot earlier in the thread) that nobody is twisting your arm to buy an Apple laptop, so get a Windows machine. For a lot of things that's valid, if you want a cheaper system you can easily replace with another cheap system.
However, Apple laptops don't cater to the cheap-laptop crowd, and an MBP is certainly not a cheap machine: it's a tradeoff between being portable and powerful, and for those who have a reasonable need for that combination, a good buy, until it starts eating their time.
"Nobody needs a Mac laptop!" someone may say. What if they're doing MacOS/iOS/other Apple platform development? Sure, they can still do it with another form factor Macintosh, but what if there's a valid reason to need it to be portable? Anyone suggesting people carry around a non-laptop and calling it "portable" aren't serious.
I live in the Seattle area and do developer support at a big tech company by day (and when doing on-call, weird hours of the morning on the weekend) and iOS development otherwise. I'm not a graphic designer or anything like that: just a developer with lots of experience. What do you think my time is worth per hour?
I just bought a 2019 16" with a maxed-out GPU, 1 TB SSD, and 64 GB RAM (I run virtual machines) to replace my 2016 15" MBP, largely due to the keyboard issue (I've not had a fatal problem with mine yet, but that's easily foreseeable) and because it doesn't have nearly as much RAM as I'd like; that machine is going away.
If things went perfectly smoothly, I could get in and out of the nearest Apple Store in about an hour to drop off my laptop if it needed to be repaired, but then I'd need to disrupt my schedule to do that after regular work hours not only to drop it off, but to pick it up once it was fixed. That's 2 hours alone right there.
I have Parallels installed on my machine: that requires a per-machine, machine-locked license. If I am forced to spend time taking my laptop in for repair, I also need to get a backup machine and switch the Parallels licensing over for a short time, or do without it. It's common that such per-machine licensing schemes don't allow you to switch back and forth too often or for more than a fixed number of times, and who wants to deal with getting into it with their support? What if you have other applications that are also licensed like this?
What if you have someone's data on your machine that you're not allowed for legal or security reasons to have on a machine when you take it in for repairs? All that costs you time, and time is money, quite a bit of money.
There are legitimate business reasons to have a laptop that's sufficiently powerful, and being a consultant of some form is one of them: time is money. Wasted time is wasted money and lost revenue, because you can never get that time back.
That's why there's a reasonable justification for a lawsuit: these are machines that claim to be for professionals, but history has shown enough people have had them out of commission for extended periods, for a cause that was readily avoidable and that Apple reasonably should have figured out quicker than they did. Sure, Apple has extended the "free repair" program longer than normal AppleCare, but that doesn't give users who otherwise treat their machines perfectly, and still have them go out of commission, their time back.
Putting the 16-inch MacBook Pro's thermal management to the test
prismatics said:
cpsro said: The new thermal design has done nothing to avoid the throttling that occurs when memory larger than the L3 cache is beaten on in a random pattern, as opposed to sequentially the way common benchmarks access it. Multiple recent generations of Apple laptops do this and the latest model is no different. The CPU power consumed in such situations is a small fraction of the TDP, the CPU is driven well below the advertised sustained speed for all cores, and the processor temperature is only mildly warm.
Sorry, random-pattern memory access _never_ happens except if you have hash tables, directed acyclic graphs, or something similar, but you are not using a CPU for that kind of problem; you are using GPUs or, if you have a very special application, Content Addressable Memory (CAM for short), which is almost never used except in core network routers, where you have billions of packets per second to switch.
Besides having been a software developer for decades, I've done developer support for several years, and your claim that random memory access doesn't happen outside the conditions you listed is a bunch of BS. If for nothing else, Address Space Layout Randomization causes allocations to be quite random by design, and this applies to allocating heaps and sub-allocations as well. On top of that, applications regularly do lots of allocations interspersed with freeing memory, then more allocations, and so on, causing fragmentation, and they use a lot of structures whose pointers link to other things and are followed when looking things up in memory.
Most applications don't only allocate and use data in linear arrays; they tend to use memory much more flexibly, even without using hash tables or DAGs. Even a lot of software written with arrays as the abstraction isn't exactly linear in memory throughout the array, due to the underlying implementation, and linked lists are notorious for producing random memory access patterns (the pages a given list touches may stay the same for a while, but they are more often than not not allocated in sequential memory), even if many of the allocations started out as short arrays of list elements.
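A minimal sketch of that pointer-chasing pattern, with hypothetical types; the point is only that summing a linked list issues a dependent load per node from wherever the allocator placed it, while a contiguous array is read in predictable, prefetch-friendly strides:

```swift
// Hypothetical linked-list node scattered across the heap by the allocator.
final class Node {
    var value: Int
    var next: Node?
    init(value: Int, next: Node? = nil) {
        self.value = value
        self.next = next
    }
}

func sumList(_ head: Node?) -> Int {
    var total = 0
    var node = head
    while let current = node {
        total &+= current.value   // must finish loading this node before the
        node = current.next       // address of the next one is even known
    }
    return total
}

func sumArray(_ values: [Int]) -> Int {
    var total = 0
    for value in values {         // linear scan over contiguous memory
        total &+= value
    }
    return total
}
```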
Another big cause of random memory access patterns, even if you store your objects in fixed arrays, is object-oriented data structures and code. Unless you only ever put objects of precisely one class in an array (not several classes sharing the same base class), those objects often differ in size and memory layout, and they often contain pointers to other objects, recursively, of many class types, so the memory layout of the data alone (and quite often the resulting object graph will NOT be a DAG) produces random memory access patterns while processing what appears to be a linear array of objects.
Once you have more than one type of class instance in an array, unless they're all sorted by type, you'll call virtual methods on the subclasses (or protocols/interfaces, depending on the language), which means you're loading even more code into the CPU cache for each object in the array being processed, worst case when the cache is very full and busy. This is more common than your stated assumptions take into account: why do you think hyperthreading has so much hardware thrown at it? It's not just for program data, it's also for program code, to keep the CPU fed as much as possible. Short of all data being laid out linearly in RAM, the CPU is evicting cache lines in LRU order (unless special instructions direct otherwise) and refilling them from random spots in the logical address space. Mind you, that's just logical address space: one logical page often ends up nowhere near another logical page in physical address space, and you really, really don't want to consider what happens when your process calls into the OS and drivers, which it must do every so often, or your process isn't worth much in practice.
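A minimal sketch of that mixed-class case, again with hypothetical types: the array itself is a linear block of references, but each element may be a different subclass, so every iteration can dispatch to different method code and touch an object with a different size and layout somewhere else on the heap.

```swift
class Shape {
    func area() -> Double { 0 }
}

final class Circle: Shape {
    let radius: Double
    init(radius: Double) { self.radius = radius }
    override func area() -> Double { Double.pi * radius * radius }
}

final class Box: Shape {
    let width: Double
    let height: Double
    init(width: Double, height: Double) {
        self.width = width
        self.height = height
    }
    override func area() -> Double { width * height }
}

func totalArea(of shapes: [Shape]) -> Double {
    var total = 0.0
    for shape in shapes {
        total += shape.area()   // dynamic dispatch per element; the objects it
    }                           // reaches are scattered across the heap
    return total
}
```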
While modern CPUs are waiting for main memory, even with hyperthreading, the core is left idle a fair amount of the time, and the more cores you have, and the slower your cache is relative to them, the more severe that gets. Why do you think NUMA (Non-Uniform Memory Access) machines exist? They exist because beyond a certain point it's all too easy to saturate the memory bus and drown in CPU cache coherency overhead. When CPU cores are waiting on memory, the memory interface logic is using full power, but the core is power-gated and down-clocked, so it isn't using max power. If systems actually had main RAM that was fast enough, with no latency (and reading/writing main RAM on modern CPUs is done a whole cache line at a time, typically 32 bytes or more, in some power of 2), they'd scale linearly in performance with CPU clock speed. But that isn't what we have: we have CPUs that extremely rarely run at peak efficiency, and almost always only on artificial benchmarks. Those CPUs see randomized memory access most of the time, and they're left running at lower core speeds and temperatures because they're twiddling their thumbs waiting for main memory.
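A rough sketch of the access-pattern difference being argued about here: walking a buffer much larger than any L3 cache sequentially versus in a shuffled order. The buffer size and the crude wall-clock timing are illustrative only, not a benchmark methodology anyone in the thread endorsed.

```swift
import Foundation

let count = 16 * 1024 * 1024                   // 16M Ints (~128 MB on 64-bit),
let data = [Int](repeating: 1, count: count)   // well beyond typical L3 sizes

let sequentialOrder = Array(0..<count)
let shuffledOrder = sequentialOrder.shuffled() // same indices, cache-hostile order

func sum(_ data: [Int], visiting order: [Int]) -> Int {
    var total = 0
    for index in order {
        total &+= data[index]
    }
    return total
}

let start1 = Date()
_ = sum(data, visiting: sequentialOrder)
let start2 = Date()
_ = sum(data, visiting: shuffledOrder)
let end = Date()
print("sequential:", start2.timeIntervalSince(start1),
      "shuffled:", end.timeIntervalSince(start2))
```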