JustSomeGuy1

About

Banned
Username: JustSomeGuy1
Joined:
Visits: 60
Last Active:
Roles: member
Points: 1,172
Badges: 1
Posts: 330
  • Early M2 benchmarks show clear CPU, GPU performance gains over M1

    In single-core Geekbench 5 testing, the M2 chip received a score of 1919. The same chip got a multi-core Geekbench score of 8928. For comparison, a late 2020 MacBook Pro with an M1 chip received a single-core score of 1749 and a multi-core score of 7719.
    [...]
    Per the Geekbench 5 comparison, the M2 chip is 11.6% faster than the M1 in single-core scoring and 19.5% faster in multi-core scoring.
    Math fail: 1919/1749 = 1.097; 8928/7719 = 1.1566.

    So by the numbers provided in the article, Geekbench (obviously not definitive, but a useful benchmark) says the M2 in the 13" Pro chassis is a hair under 10% faster in single-core and about 16% faster in multi-core.
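
    For anyone checking, the speedups are just ratios of the scores. A quick sketch in Python, using only the numbers quoted from the article:

        # Geekbench 5 scores as quoted in the article
        m1_single, m1_multi = 1749, 7719
        m2_single, m2_multi = 1919, 8928

        # Relative speedup = (new / old - 1) * 100
        print(f"single-core: {(m2_single / m1_single - 1) * 100:.1f}% faster")  # ~9.7%
        print(f"multi-core:  {(m2_multi / m1_multi - 1) * 100:.1f}% faster")    # ~15.7%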
  • Foxconn chairman accuses other Apple suppliers of poaching workers in Vietnam

    Boo hoo.

    This is how workers earn more. By being in demand. Don't like it? Too bad, it's just like having to pay more for strategic minerals or any other commodity that's in demand. Especially where unions aren't practical (lack of political cover), it's the only shot they have at improving their living conditions. We're supposed to feel bad about this??
  • Apple passkey feature will be our first taste of a truly password-less future

    The issue for me is trusting a dialog box on a website that asks for my Mac password. The Apple UI doesn’t make it clear that this info stays on my device and isn’t sent to the server the way a password normally is. This passwordless stuff will suffer the same issue, and I will struggle to trust where my data goes.
    This is an excellent point, and one I'm really surprised Apple hasn't addressed yet. But it's difficult - not technically, but in terms of training users how to behave.

    So far the only entities I've seen addressing this issue are some banks, and not even most of them - I think they feel the ROI isn't worth it.

    The obvious way to do it is to allow the user to select an image which is proof that the system is talking, and not the app. The image is protected and inaccessible to all apps. Then when you get a dialog asking for a password or other sensitive info, the system displays this image along with the request. The presence of the image authenticates the request.

    There are other similar schemes (text or sound instead of an image). In general, you have to have a token signifying legitimacy (not a physical one, unless you intend to put a little LED on the phone just to signal "system interaction", and Apple would never do something so ugly). Implementing this is not in the least bit challenging.
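
    To make the scheme concrete, here's a minimal sketch. Every name in it is hypothetical - this is not a real iOS/macOS API, just the shape of the idea. The one property that matters is that the enrolled image lives in storage only the OS's trusted UI can read:

        # Hypothetical sketch of the "security image" pattern.
        class TrustedUI:
            def __init__(self):
                self._protected = {}  # stands in for OS-only storage; apps can't read it

            def enroll_image(self, image_id: str):
                # Chosen once by the user at device setup.
                self._protected["security_image"] = image_id

            def ask_secret(self, prompt: str) -> str:
                # Every genuine OS dialog shows the enrolled image alongside the
                # prompt. A spoofing app can't, because it can't read the store.
                print(f"[{self._protected['security_image']}] {prompt}")
                return input("> ")

        ui = TrustedUI()
        ui.enroll_image("purple-otter.png")  # the user's chosen secret image
        # The user's rule: never type a password unless the purple otter is shown.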

    The big problem is teaching users to pay attention and understand the significance of the token. People understand the idea of "password" - it means "way to prove I'm really me". They *don't* generally understand the concept of "way for the OS to prove it's really the OS (and not malware spoofing the OS)", and they have no simple word for that like "password". This won't be an easy battle to fight and I guess Apple isn't willing to take it on yet. :-(
    On reflection, I think Apple has already done this. Visit, say, iCloud.com: the modal dialog that asks for my Mac password disables the window’s close button. I have to use the awkward Cancel button to get to the regular authentication that, ironically, I trust. I guess this is the ‘image’ you mention…
    No, not at all.

    Your confusion (and that of a couple of other follow-ups) unfortunately demonstrates my point - people don't really understand yet that real security demands *two-way* authentication. Any time you have to provide a password (or other authentication), the authenticator also needs to prove to *you* that it's the right thing to be asking. And this authentication exists in multiple contexts.

    First of all, there's you authenticating to your local system. With biometrics, authenticating the system to you is implicit (which of course trains people not to think about this, unfortunately): if the biometric hardware is being accessed, you know the real system is doing it, and that's enough authentication (at least in the context of an iPhone - on other systems that may not be true). However, if you're putting in a password, the problem remains: how do you know the thing asking for your password is the iPhone OS, and not an evil app pretending to be the OS?

    The only good answer to that, if you don't have another piece of hardware helping you out, is to have a secret that you know, and that the OS knows, but that the evil app can't know. Then if a password request shows you the secret, you can be confident that it's really from the OS and not from an evil app. That secret must be a pre-agreed-upon image, or sound, or text, that is carefully guarded so evil apps can never get at it. The iCloud thing you mentioned above is nothing like that.

    The second context is between you and an off-device server. The same fundamental idea applies - you need to know that the server is really the server, and not supply secrets to an impostor. But the threat environment and solutions are completely different. Once you have a secure local device, you can use zero-knowledge techniques to authenticate to a remote server without ever revealing your shared secret, if you have one, or (much better) use public-key crypto techniques, which by design never reveal or share any secrets. Either way, a local agent on your device, under control of the OS, can make the authentication happen - which reduces the problem to the case of local authentication, you to the agent. And then we're back to case #1, and we can use the same solution - a local shared secret, probably an image agreed upon by you and the OS when you set up your device.
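
    As a concrete illustration of the public-key case, here's a sketch of the challenge-response pattern using the Python `cryptography` package. This is the general WebAuthn-style idea, not Apple's actual passkey implementation:

        import os
        from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

        # Enrollment: the device generates a key pair; only the public half
        # ever leaves the device.
        device_key = Ed25519PrivateKey.generate()
        server_copy = device_key.public_key()  # the server stores this

        # Login: the server sends a random challenge; the device signs it.
        challenge = os.urandom(32)
        signature = device_key.sign(challenge)

        # The server verifies the signature - no secret was ever transmitted.
        server_copy.verify(signature, challenge)  # raises InvalidSignature if forged
        print("authenticated")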

    AI should probably write an article about this. Or pay me to clean this all up and post that...
  • M2 and beyond: What to expect from the M2 Pro, M2 Max, and M2 Ultra

    I'm sorry, but this article is a serious failure due to ignorance of some of the basic underlying technologies.

    For example, the guesses about memory are completely off base. There is literally no chance at all that they're even close, based on the article's assumptions.

    The M1 has a bandwidth of ~68GB/s because it has a 128-bit memory bus and uses LPDDR4X memory at 4.266GT/s. The M1 Pro has higher bandwidth of ~200GB/s because it uses LPDDR5 memory at 6.4GT/s, and ALSO because it uses a double-wide bus (256 bits).

    The M2 has the same memory bus size (128 bits) as the M1, but it's already using LPDDR5 at 6.4GT/s. If there's an M2 Pro based on the same doubling as the M1 Pro was, it won't get any further benefit from the LPDDR5 (since the M2 already has that). It will have the same ~200GB/s bandwidth as the M1 Pro.

    Of course this all depends on timing - if the M2 Pro came out a year from now, higher-performance LPDDR5 might be common/cheap enough for the M2 Pro to use it, in which case you'd see additional benefits from that. But it DEFINITELY wouldn't get you to 300GB/s. LPDDR5 will never be that fast (that would require 9.6GT/s, which is not happening in the DDR5 timeframe - unless DDR6 is horribly delayed, years from now).
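
    The arithmetic behind all of those numbers is just bus width in bytes times transfer rate. A quick check (configurations as stated above; the 9.6GT/s line is the hypothetical rate that 300GB/s would require):

        def bandwidth_gb_s(bus_bits: int, gt_per_s: float) -> float:
            # Peak bandwidth = (bus width in bytes) * (transfers per second)
            return bus_bits / 8 * gt_per_s

        print(bandwidth_gb_s(128, 4.266))  # M1:     ~68 GB/s  (LPDDR4X)
        print(bandwidth_gb_s(256, 6.4))    # M1 Pro: ~205 GB/s (LPDDR5)
        print(bandwidth_gb_s(128, 6.4))    # M2:     ~102 GB/s (LPDDR5)
        print(bandwidth_gb_s(256, 9.6))    # ~307 GB/s - not happening with LPDDR5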

    You're also assuming Apple won't go with HBM, which is not at all a safe assumption. If they do they might well do better than 300GB/s for the "M2 Pro", if such a thing were built.

    Your entire article could have been written something like this:
    M1 Ultra = 2x M1 Max = 4x M1 Pro ~= 6x-8x M1, so expect the same with the M2 series.

    It's a really bad bet though.

    There are much more interesting things to speculate about! What are they doing for an interconnect between CPU cores, GPU cores, Neural Engine, etc.? Improvements there are *critical* to better performance - the Pro, Max, and Ultra are great at some things but extremely disappointing at others, and that's mostly down to the interconnect - though software may also play some part in it (especially with the GPU).

    Similarly, the chip-to-chip interconnect for the Ultra is a *huge* advance in the state of the art, unmatched by any other vendor right now... and yet it's not delivering the expected performance in some (many) cases. What have they learned from this, what can they do better, and when will they do it?

    (Edit to add) Most of all, will desktop versions of the M2 run at significantly higher clocks? I speculated about this here when the A15 came out - that core looked a lot like something built to run at higher clocks than earlier Ax cores. I'd like to think that I was right, and that that's been their game all along. But... Apple's performance chart (from the keynote) for the M2, if accurate, suggests that I was wrong and that they don't scale clocks any better than the M1 did. That might still be down to the interconnect, though it seems unlikely. It's also possible that they're holding back on purpose, underestimating performance at the highest clocks, though that too seems unlikely (why would they?).

    For this reason, I suspect that the M2 is a short-lived interim architecture, as someone else already guessed. Though in terms of branding, they may retain the "M2" name even if they improve the cores further for the "M2 Pro" or whatever. That would go against all past behavior, but they don't seem terribly bound by tradition.
    This comment contains many of my thoughts as well!
    If you were to “extrapolate”, that’s pretty boring actually. A linear-scaled guess is barely an article - you could do this for the M4, M5, M10, etc. It’s kind of useless, and the article didn’t even go that far.

    At the same time, the article ignores the most exciting questions, like those mentioned above. What can Apple and TSMC do to improve bandwidth on Armv9? Fabric stacking? HBM over LPDDR5?

    Also, at 3nm the next Pro and Max chips are not the linear improvement the article suggests. Why was it written like that?
    I concur the memory bandwidth projection is a blunder, and the article uses a lot of words to explain what could be explained more concisely, but the speculation that Apple may do things with their desktop-class chips to further differentiate them from mobile chips is just that - pure speculation. The near future is likely a lot more boring, with M2 Max and M2 Ultra chips being largely as described in the article. Apple probably won’t sink that level of silicon engineering investment into such a small market. Taking existing IP developed for iPhones, bumping the clock a little, and adding cores, chiplets, and a couple of extra features is unfortunately the name of the game.

    The next Pro and Max chips will not be 3nm - there is no evidence for that - but there is a chance Apple incorporates 5nm++ and A16 cores in the M2 Pro and Max after the iPhone 14 comes out. Even if that’s the case, A16 cores are likely to bring the smallest performance jump ever, so the general scaling idea won’t be that far off. The only major thing really wide open for speculation is the next Mac Pro, but that’s going to start at $$$$ and very few people will need/buy it. The M2 series will roll out over the next year at least; no M3-series chips until the end of 2023, and those will include 3nm.
    You're right that discussion of desktop-class chips is speculation, but so is "likely a lot more boring". If you think anything at all about chip development in the next couple of years is boring, you're really not paying attention.

    The chiplet revolution is well underway, but there's a LOT more runway yet, as Apple has shown with the interconnect on the Ultra - which is both an amazing achievement and a (probably) mediocre first effort. It's amazing in terms of raw bandwidth, blowing past anything else ever done. On the other hand, the end result is not very good for some types of workloads.

    So there's a LOT more to do there in terms of raw design, and if you think Apple won't put in the effort to get it right, you're wrong. That kind of performance will be *table stakes* for advanced VR. Everyone's going to need it, and it's going to be everywhere, not just in some high-end desktops like Studios and Pros.

    BTW, this explains Apple's insistence on efficiency - if the real end-game is VR everywhere, they're going to need good efficiency for mobile solutions, starting with the rumored headset but not ending there. Whatever we get for the Mac Pro will be a great workstation, but you can count on most of the tech in there being eventually useful in high-powered mobile devices.

    It may be that, as you say, next-gen cores will have very little performance improvement - though I wouldn't count on it! There's still a ton of room for improving the interconnects, which are a huge part of efficiency and overall performance. Apple's shown they can do brute force better than anyone. Now they have to add finesse, and there's *so much* that can be done here.