JustSomeGuy1
About
- Banned
- Username: JustSomeGuy1
- Joined
- Visits: 60
- Last Active
- Roles: member
- Points: 1,172
- Badges: 1
- Posts: 330

Reactions
Report suggests Apple's A15 Bionic lacks significant CPU upgrades due to chip team brain d...
(Posted this on another article, but it belongs here.)

[...] it seems likely that CPU performance is not significantly improved. Lacking more facts, my money is on Andrei's analysis (in AnandTech): maybe 5-6%. But there are two wildly divergent ways to look at this.

It is possible that this is simply the result of a brain drain. That's a popular take in the press right now. It's not clear how that analysis lines up with other known facts, like the massively improved NPU.

There is another possibility, though. Apple is now designing a pair of cores for use not just in the phone, but also in the Mac. What are the needs of those two devices?

- For the phone, the biggest need is NOT more CPU performance. It's lower power use, which leads to greater sustained performance or longer battery life.
- For the Mac, it *is* more performance. But Macs are very different from phones, even the laptops. They can afford to burn more power on increased clock speed, unlike phones... IF the chip has the ability to run at higher clocks. It seems likely that the A14/M1 does NOT have that ability, simply based on the MBPs not clocking past 3.2GHz even when on wall current. (This is normal - every chip design has a maximum beyond which it can't go, no matter how much power you throw at it.)

The A-series chips have sped up from ~2.3GHz to ~3GHz over the last five years, since the iPhone 7, but most of the performance has come from widening the cores. This leaves a ton of performance on the table - they should be able to get at least 4GHz, and possibly close to 5GHz, out of the process node they're using now, with a newer design. (Power requirements prevent that in the phone, of course.)

Now... what would such a redesign look like? Really, you'd want to try to preserve the IPC of your existing design while allowing for higher clocks. And you'd probably also want to increase your caches to compensate for the fact that every cache miss is going to cost more cycles (as each cycle is quicker). Once that design is done, if you don't need the max performance out of that chip in one situation, you'd run it slower and pocket the power savings.

This looks like it might be what Apple has done. They're claiming better battery life, despite a high-refresh-rate screen, a brighter screen, and a doubled system cache. And oh yeah, that doubled cache seems telling.

So, I think we can't really know what's going on at Apple until the new Macs ship. And maybe not until a new desktop (27" iMac and/or Pro, not so much the mini) ships. If my guess is right, what we're seeing is Apple being very smart about maximizing the RoI on a single pair of core designs (high-perf & high-efficiency). They get better power efficiency in the A15, which is their primary design goal this time around, while being able to drive the cores much faster (4-4.5GHz, maybe?) in the M2 Macs. That would give the cores +25% to +40% performance PER CORE from clock speed alone. You'd lose some performance to longer pipelines, cache misses, etc., probably made up for by the larger cache (which might be where the +5-6% is coming from in the A15).

Next month will be *fascinating*.
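A quick sanity check on that clock-speed arithmetic (a minimal Python sketch; the 3.2GHz baseline and the 4-4.5GHz targets are the guesses from the post above, not confirmed specs):

```python
# Back-of-envelope check on clock-speed scaling. The 3.2GHz baseline (A14/M1
# at the wall) and the 4.0-4.5GHz targets are the post's assumptions.
base_clock = 3.2  # GHz

for target in (4.0, 4.5):
    gain = (target / base_clock - 1) * 100
    print(f"{base_clock} GHz -> {target} GHz: +{gain:.0f}% per core (ignoring IPC losses)")

# 3.2 GHz -> 4.0 GHz: +25% per core (ignoring IPC losses)
# 3.2 GHz -> 4.5 GHz: +41% per core (ignoring IPC losses)
```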
Compared: 2021 New 16-inch MacBook Pro vs. 2019 16-inch MacBook Pro
You've really missed the entire point of what they did with the camera. There is no notch.

That is, the old laptop has a 16:10 screen. The new one has a 16:10 screen (with no notch). And then, on top of that, they stuck a camera and some extra screen space, which can be used to hold a menu bar. I expect that some enterprising developer will shortly release a little utility that blacks out that extra space, moving the menu bar down, and then you can just use it like the older version, with a perfectly rectangular display area.

Put another way, this model doesn't come with a notch cut out of the screen. It comes with tabs *added* to the screen, on either side of the camera.

(Oh, and if you're wondering, full-screen apps display with no cutout because they exist entirely below the camera. Like I said, it's not a notch, it's bonus tabs.)
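For anyone who wants to check the geometry, here's a rough sketch in Python. The panel resolutions (3072x1920 for the 2019 model, 3456x2234 for the 2021 model) are taken from published spec sheets, so treat them as assumptions:

```python
# "Bonus tabs" geometry check. Resolutions are assumed from published specs:
# 2019 16" MBP = 3072x1920, 2021 16" MBP = 3456x2234.
old_w, old_h = 3072, 1920
new_w, new_h = 3456, 2234

print(old_w / old_h)            # 1.6 -> the old panel is exactly 16:10
rows_16x10 = new_w / 1.6        # height of a pure 16:10 area at the new width
print(rows_16x10)               # 2160.0
print(new_h - rows_16x10)       # 74.0 extra rows on top, flanking the camera
```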
Questions raised about M1 Mac SSD longevity, based on incomplete data
Mostly decent article, but some of your important numbers are WAY off. You wrote:

The cells in an SSD are durable for at least 5,000 read and write cycles in the cheapest Triple Level Cell-based flash memory chips, though more typically around 10,000 cycles for mid-range Multi Level Cell-based chips. Even at the low end of the scale, that still equates to over a decade of usage based on the one complete drive-write per day rating.

TLC can be 3,000 cycles at the high end, less at the lower end, with 1,000 cycles typical today. MLC is irrelevant, as there are no MLC consumer drives, nor have there been for quite a while. (Maybe Samsung still sells one? They're going to be rare and costly, if they exist at all.)

Typical warrantied DWPD on consumer TLC drives is 0.3. Some go lower. 1 DWPD is firmly within the domain of data center SSDs, and not even all of those (though some will go to 3 DWPD or more). All your math after that is wrong because the base numbers are wrong.

I've got one of these Macs and I have zero worries about this. Apple uses more flash than anyone else in the world and they're not going to screw this up. They couldn't, really - that much bad flash isn't available. I mean, if they wanted to cheap out they could have gone to QLC, and they didn't.

OTOH, it's possible that the 8GB machines are so fast even when paging that users don't realize that they're thrashing their disk. That's unlikely to be a real problem for most people, but I guess time will tell. If you are a heavy-duty user and you buy an 8GB machine... you were just asking for trouble. But I still doubt you'll find it.
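To make the endurance point concrete, here's a rough sketch of the math using the figures above. The drive capacity, warranty length, and daily write volume are hypothetical examples, not measurements:

```python
# Rough endurance math. The 0.3 DWPD rating is the post's figure; the drive
# capacity, warranty length, and daily write volume are hypothetical examples.
capacity_gb = 512
dwpd = 0.3                 # drive writes per day, warrantied
warranty_years = 5

tbw = capacity_gb * dwpd * 365 * warranty_years / 1000    # terabytes written
print(f"~{tbw:.0f} TBW implied by the warranty")           # ~280 TBW

daily_writes_gb = 50       # assumed heavy paging workload
years = tbw * 1000 / daily_writes_gb / 365
print(f"~{years:.0f} years at {daily_writes_gb} GB/day")   # ~15 years
```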
M2 and beyond: What to expect from the M2 Pro, M2 Max, and M2 Ultra
I'm sorry, but this article is a serious failure due to ignorance of some of the basic underlying technologies. For example, the guesses about memory are completely off base. There is literally no chance at all that they're even close, based on the article's assumptions.

The M1 has a bandwidth of ~68GB/s because it has a 128-bit memory bus and uses LPDDR4 memory at 4.266GT/s. The M1 Pro has higher bandwidth of ~200GB/s because it uses LPDDR5 memory at 6.4GT/s, and ALSO because it uses a double-wide bus (256 bits).

The M2 has the same memory bus width (128 bits) as the M1, but it's already using LPDDR5 at 6.4GT/s. If there's an M2 Pro based on the same doubling as the M1 Pro, it won't get any further benefit from the LPDDR5 (since the M2 already has that). It will have the same ~200GB/s bandwidth as the M1 Pro.

Of course this all depends on timing - if the M2 Pro came out a year from now, higher-performance LPDDR5 might be common/cheap enough for the M2 Pro to use it, in which case you'd see additional benefits from that. But it DEFINITELY wouldn't get you to 300GB/s. LPDDR5 will never be that fast (that would require 9.6GT/s, which is not happening in the DDR5 timeframe - unless DDR6 is horribly delayed, years from now).

You're also assuming Apple won't go with HBM, which is not at all a safe assumption. If they do, they might well do better than 300GB/s for the "M2 Pro", if such a thing were built.

Your entire article could have been written something like this: M1 Ultra = 2x M1 Max = 4x M1 Pro ~= 6x-8x M1, so expect the same with the M2 series. It's a really bad bet, though.

There are much more interesting things to speculate about! What are they doing for an interconnect between CPU cores, GPU cores, Neural Engine, etc.? Improvements there are *critical* to better performance - the Pro, Max, and Ultra are great at some things but extremely disappointing at others, and that's mostly down to the interconnect - though software may also play some part in it (especially with the GPU).

Similarly, the chip-to-chip interconnect for the Ultra is a *huge* advance in the state of the art, unmatched by any other vendor right now... and yet it's not delivering the expected performance in some (many) cases. What have they learned from this, what can they do better, and when will they do it?

(Edit to add) Most of all, will desktop versions of the M2 run at significantly higher clocks? I speculated about this here when the A15 came out - that core looked a lot like something built to run at higher clocks than earlier Ax cores. I'd like to think that I was right, and that that's been their game all along. But... Apple's performance chart (from the keynote) for the M2, if accurate, suggests that I was wrong and that they don't scale clocks any better than the M1 did. That might still be down to the interconnect, though it seems unlikely. It's also possible that they're holding back on purpose, underestimating performance at the highest clocks, though that too seems unlikely (why would they?).

For this reason, I suspect that the M2 is a short-lived interim architecture, as someone else already guessed. Though in terms of branding, they may retain the "M2" name even if they improve the cores further for the "M2 Pro" or whatever. That would go against all past behavior, but they don't seem terribly bound by tradition.
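The bandwidth figures above fall straight out of bus width times transfer rate. A minimal sketch, using the bus widths and data rates from the post:

```python
# bandwidth (GB/s) = bus width in bytes * transfer rate in GT/s
def bandwidth_gbs(bus_bits: int, gts: float) -> float:
    return bus_bits / 8 * gts

print(bandwidth_gbs(128, 4.266))   # M1:     ~68 GB/s  (128-bit LPDDR4-4266)
print(bandwidth_gbs(128, 6.4))     # M2:     ~102 GB/s (128-bit LPDDR5-6400)
print(bandwidth_gbs(256, 6.4))     # M1 Pro: ~205 GB/s (256-bit LPDDR5-6400)
print(bandwidth_gbs(256, 9.6))     # ~307 GB/s - the 9.6GT/s needed to reach "300GB/s"
```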
Design failure in Apple's Time Capsule leads to data loss
AppleInsider said:
The "parking ramp" is the part of the HDD that connects the drive to the external enclosure. Unfortunately, as the poorly-ventilated Time Capsule heats up, the two materials heat at different rates, leading to eventual wear and destruction of the parking ramp.

This is embarrassing. Stop writing about tech you don't understand. You have at least two quite clueful writers on staff, why not run this stuff by them?

The parking area is INSIDE the drive itself. It provides a place where the heads can stay when the drive is sleeping/off (or idle for long enough, depending on drive and OS). This protects the media against damage if the drive sustains mechanical shocks (g-forces, not electricity) while the heads are parked. It improves survivability especially when moving or shipping drives.
Inside Apple's fantastically fast new Mac Pro
hmurchison said:
sdw2001 said:
I can't get over what a monster this thing is. Apple's "pro" machines have always been more marketed to prosumers/power users rather than true workstation users. This machine changes everything. I'm not up on PC workstation class machines, so a question for someone who is: Is there anything even close to this?
The obvious question is who needs this bandwidth? AI, 8K and VR encoding, Machine Learning, Databases as always, and even gaming. I think Apple has designed a workstation form factor that is going to be able to scale with the insane amount of bandwidth increases we have coming in just 4-5 years.

You need to do a lot more reading, and you can start with the link you provided. You are confused about the relationship between CXL and PCIe5. PCIe5 does not use any aspect of CXL. Rather, CXL leverages PCIe5, with its major selling point being memory coherency between CPUs and attached coprocessors of various sorts (GPUs, FPGAs, NNPs, etc.). It's roughly similar to CCIX, and somewhat similar to GenZ (though CXL and CCIX use PCIe5 mechanicals, whereas GenZ doesn't).

You are similarly confused about the state of play in NVMe SSDs. Not one person who understands the technology thinks that "existing PCI-E limitations limit the benefits of that extra speed". The limitation comes from the only PCIe4-capable controller that's on the market currently (the Phison PS5016-E16), which was a quick patch job on a previous controller to add PCIe4 compatibility. Better controllers are coming soon, and they should be able to push read and write speeds up to ~7GB/sec... that is, if you're moving bulk data. For most people, random I/O is more important, as is latency, and the PCIe version doesn't make any difference for that - it's the flash and the controller.

As for 100Gbps Ethernet, a single PCIe3 x16 slot is almost but not quite adequate to saturate the link. A PCIe4 x16 could handle a dual-port card. By the time you get to PCIe5, a single x4 slot could handle a single port. PCIe6's bandwidth will obviously be welcome for anyone using 100Gbps Ethernet, but it's far from necessary.
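A rough sketch of the PCIe-versus-100GbE arithmetic above. Per-lane rates are the standard PCIe figures; the 20% protocol-overhead allowance is an assumption, since real-world usable bandwidth varies by card and workload:

```python
# PCIe link bandwidth vs. 100GbE. Per-lane GT/s are the standard PCIe rates;
# the 0.8 usable fraction is an assumed allowance for protocol overhead.
pcie_gts = {"PCIe3": 8.0, "PCIe4": 16.0, "PCIe5": 32.0}
overhead = 0.80
eth_100g_gbs = 100 / 8          # 12.5 GB/s per 100GbE port

for gen, gts in pcie_gts.items():
    for lanes in (4, 16):
        usable = gts * lanes / 8 * overhead
        print(f"{gen} x{lanes}: ~{usable:.1f} GB/s usable, "
              f"~{usable / eth_100g_gbs:.1f} ports' worth of 100GbE")
```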
Apple's new diversity exec hails from Bank of America
I'll add that those who call people racists for attempting to provide some minimal redress for centuries of wrongdoing are following a time-honored playbook. By labeling others with descriptions that apply to themselves, they (often successfully) seek to confuse the issue and avoid the blame they deserve.

"When everyone's a nazi, nobody's a nazi!"

It can be hard to distinguish the truth in such cases. And it's getting harder by the hour.
Apple's new Mac Pro internal components - answers and lingering questions [u]
melgross said:
The one thing I'm disappointed about is that it's PCI-E 3. Not 4. As PCI-E 4 motherboards are coming out, and both AMD and Intel announced support for some of their latest high end chips, I would have thought that this would be that. I really want to buy this, this year. I don't want to buy another one next year, or the one after that. This will have to last me four years. So even if it's possible to upgrade the CPU, and the GPU(s), I'd hate to be thinking that a double speed bus will be out a year after I bought this, with all the major performance, security and feature enhancements that 4 will bring. Then, PCI-E 5 is expected for 2021. So round we go again.

This was never going to happen. Next-gen AMD CPUs (Zen 2/Ryzen 3xxx) and their chipset (X570) have announced support for PCIe4, but they are not shipping until 7/7. Intel has not announced *any* PCIe4 platforms/chips for this year. The earliest CPU with PCIe4 support will be "Ice Lake" Xeons sometime next year... assuming they can ship it on time. It's on the "10nm+" process, and I have to say, the first 2019 10nm chips are disappointing. We don't really know if they have finally beaten their yield problems either - though if they haven't, their top executives will wind up in court (and possibly jail), given the things they've been saying to investors.

The only chance we had was if Apple announced that they'd be shipping Ryzens (or next-gen TRs/EPYCs). I would have loved to see that - they're dramatically superior to the current Intels in pretty much every way - but I understand that they're especially risk-averse at the moment. Maybe next year (would love that, but doubt it).

melgross said:
Later this year, both AMD and Intel will have some CPUs ready for 4, but not right now. From what I remember, Intel will have some Xeons. But it's possible that it's too late. 4 is just becoming available in some mobocracy. But even there it's not all 4, but a hybrid of 3 and 4. 5 isn't expected until 2021. The problem we're seeing is that 3 has been around too long.

Intel will not have any this year. AMD's are shipping 7/7. I'm not sure what you mean by "mobocracy", but all the X570 chipset mobos for AMD that I've seen (a couple dozen at least) are pure PCIe4.
Apple's new diversity exec hails from Bank of America
40domi said:
AppleZulu said:
Came here to see the knee-jerk semi-racist and pretty-much-racist responses to anything ever posted here related to this subject. I was unsurprised to find it stacking up as expected.
It's funny when people object to intentional efforts for diversity and inclusion as a pearl-clutching affront to "merit-based" hiring, as if merit-based hiring has ever been a thing. When non-male, non-white people are excluded and steered away from a field like coding or engineering at every turn, starting in early childhood, those who win the resultant competition among the folks who make it to the point where they can even apply for the job cannot then be called winners in a merit-based system. If your competition has been repeatedly kneecapped before they ever make it to the starting line, getting to the finish line first does not make you a merit-based winner. If half your competition has never had a chance to get to the race, even as you "win," you should know that you've never actually been tested in a merit-based system.

Systemic racism is overwhelmingly well documented. Unfortunately, evidence ("substantiation") is not actually of interest to people who foam at the mouth the moment the topic comes up. However, for the benefit of those who are not in that group, here are a couple of examples pulled at random from a quick Google search. I'm sure someone who works in this field can easily provide better examples.

It's easy to mess with data, or the analysis of it, to produce confusing or deceptive results. For example, this article contains lots of data, much of which supports the concepts of systemic and institutional racism, but it's consistently presented badly, with correct but deceptive charts that overstate the problem by, for example, using a nonzero baseline. This is a peculiar choice by the authors, since you don't need to be deceptive to support their position. Of course, many people who have a political stance against those concepts (like the posters here, but less obviously frothing) engage in the same behavior, when they bother to look at the data at all - which they mostly don't, since it's blindingly clear they're wrong.

If you are curious about why and how things got so bad, one of the best examples, and best documented, is probably the history of redlining in this country. This practice alone shifted vast amounts of wealth out of the hands of minorities, blacks especially. Googling "redlining in the US" will probably give you a decent start.
Apple's new diversity exec hails from Bank of America
9secondkox2 said:
DEI is a sham and waste of money. Hire the most qualified applicants and reap the rewards. It's really that simple.