JustSomeGuy1

About

Banned
Username: JustSomeGuy1
Joined
Visits: 60
Last Active
Roles: member
Points: 1,172
Badges: 1
Posts: 330
  • Russia tried to hijack some of Apple's internet traffic for 12 hours

    Less ignorance, more facts.
    DAalseth said:
    Apple needs to sever all ties with Russia. Cut them off cold from updates, services, iCloud, Apple Music, everything. Flip the switch without warning. You live in Russia, your device is bricked and you are SOL. I know that Apple keeps talking about trying to protect their customers. It's too late for that. Until the general populace starts feeling the pain from Putin's war, they won't put an end to it. Remember, that's what brought down the Tsar. The people got fed up with paying in blood and treasure for the Tsar's adventure in WWI. It's time for another revolution, and Apple needs to step up and do their part.
    This would have zero impact on the situation described in the article. Had they already done so, nothing would have changed. Rostelecom could still have announced Apple's route(s) - which is an entire /8!!! - and everything would have played out exactly the same way.
    ... is this a good reminder of the potential vulnerability of (especially large, high value) cloud services with so many potential attack vectors ...?

    ... is it the opposite of the concept of the internet in terms of communication reliability of multiple web connections ...?
    No, to both questions. This has nothing to do with attacks on cloud services. It's fundamental to all traffic on the internet. And the problem is exactly the multiple possible connections, in that the lack of a central authority for the net means there's no single source of truth for who is allowed to announce which routes. There has been an answer to that problem for over two decades, but it's not used everywhere, much to everyone's detriment. See http://irr.net, or google "radb". If the entire world used and enforced registration of routes in a route database like the RADB, this attack could not have any effect outside of Rostelecom's own customers.
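    To make the route-database idea concrete, here's a minimal sketch (in Python, with an invented toy database) of the kind of origin validation an IRR like the RADB enables: an announcement is accepted only if the announced prefix falls inside space registered to the announcing AS. The prefixes and policy here are deliberately simplified; real IRR filtering (and RPKI, its cryptographic successor) is much richer.

    ```python
    import ipaddress

    # Toy route database, mimicking IRR "route" objects:
    # each registered prefix maps to the AS authorized to originate it.
    # Entries are illustrative (17.0.0.0/8 is the Apple /8 from the article).
    ROUTE_DB = {
        "17.0.0.0/8": 714,
        "198.51.100.0/24": 64500,
    }

    def announcement_is_valid(prefix: str, origin_asn: int) -> bool:
        """Accept an announcement only if the prefix falls inside a
        registered prefix AND the origin AS matches the registration."""
        announced = ipaddress.ip_network(prefix)
        for registered, asn in ROUTE_DB.items():
            net = ipaddress.ip_network(registered)
            if announced.version == net.version and announced.subnet_of(net):
                return asn == origin_asn
        return False  # unregistered space: reject

    # A more-specific announcement of Apple space from the wrong AS is dropped:
    assert not announcement_is_valid("17.1.0.0/16", 12389)  # hijack filtered
    assert announcement_is_valid("17.1.0.0/16", 714)        # legitimate origin
    ```

    With universal filtering like this at every provider's edge, a bogus more-specific announcement never propagates beyond the announcing network's own customers.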
    Very serious and calculated move by Russia. 
    Also finding vulnerabilities in the routing infrastructure. 

    Russia's war isn't going to stop with Ukraine. That's a strategic move to gain a massive nuclear power plant while advancing its dominance agenda. They've already threatened their own surrounding countries as well as the USA. And China is right behind with its unprecedented disrespect toward and threats against the USA, as it seeks to devour Taiwan, one of the most prolific product economies, ahead of its 2049 buildup goal. 

    Though Apple was vigilant, there is no doubt that some data was stolen. You have to wonder what kind of blackmail is planned for any incriminating info discovered, especially where Apple-using politicians, media, and big tech folks are concerned.
    This is extra ignorant. Just stop.
    1) This isn't a "vulnerability in the routing infrastructure". It is, unfortunately, a designed-in feature. It will continue to be the case until use of route databases is universally enforced.
    2) I have a LOT of doubt that any user data was stolen. In fact it's virtually certain that no data was stolen, as all of it was likely encrypted, though they certainly would be able to capture some metadata - for example, who was connecting to Apple services, and when. The scenario you envision is not the problem. It is conceivable that the metadata alone could matter in a specific case involving a high-value target, however. That's a reasonably plausible explanation for the whole event, in fact, though we'll likely never know.
    3) Off topic, but the notion that Russia invaded Ukraine just to get control of one aging nuclear plant is ludicrous.
  • Apple passkey feature will be our first taste of a truly password-less future

    The issue for me is trusting a dialog box on a website that asks for my Mac password. The Apple UI doesn't make it clear that this info stays on my device and isn't sent to the server, as would normally be the case. This passwordless stuff will suffer the same issue, and I will struggle to trust where my data goes.
    This is an excellent point, and one I'm really surprised Apple hasn't addressed yet. But it's difficult - not technically, but in terms of training users how to behave.

    So far the only entities I've seen addressing this issue are some banks, and not even most of them - I think they feel the ROI isn't worth it.

    The obvious way to do it is to allow the user to select an image which is proof that the system is talking, and not the app. The image is protected and inaccessible to all apps. Then when you get a dialog asking for a password or other sensitive info, the system displays this image along with the request. The presence of the image authenticates the request.

    There are other similar schemes (text or sound instead of an image). In general, you have to have a token signifying legitimacy (not a physical one, unless you intend to put a little LED on the phone just to signal "system interaction", and Apple would never do something so ugly). Implementing this is not in the least bit challenging.
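    The security-image scheme above can be modeled in a few lines. This is a toy sketch, not any real OS API: `TrustedUI` stands in for an OS-level secure prompt that holds the user's secret phrase in storage no app can read, so only genuine system dialogs can display it.

    ```python
    class TrustedUI:
        """Toy model of an OS secure prompt: the user's secret token is held
        by the system and (in a real OS) would be inaccessible to all apps."""
        def __init__(self, user_secret: str):
            self._secret = user_secret  # chosen by the user at device setup

        def request_password(self, reason: str) -> str:
            # A genuine system prompt always displays the user's secret token.
            return f"[{self._secret}] {reason}: enter password"

    def spoofed_prompt(reason: str) -> str:
        # Malware can imitate the dialog chrome, but cannot read the secret.
        return f"[????] {reason}: enter password"

    ui = TrustedUI("purple giraffe")
    print(ui.request_password("iCloud sign-in"))  # shows the secret: trust it
    print(spoofed_prompt("iCloud sign-in"))       # no secret: don't type anything
    ```

    The whole scheme rests on the user actually checking for the token before typing, which is exactly the training problem described below.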

    The big problem is teaching users to pay attention and understand the significance of the token. People understand the idea of "password" - it means "way to prove I'm really me". They *don't* generally understand the concept of "way for the OS to prove it's really the OS (and not malware spoofing the OS)", and they have no simple word for that like "password". This won't be an easy battle to fight and I guess Apple isn't willing to take it on yet. :-(
    On reflection, I think Apple has already done this. Visit, say, iCloud.com, and the modal dialog that asks for my Mac password disables the window's close button. I have to use the awkward Cancel button to get the regular authentication that, ironically, I trust. I guess this is the 'image' you mention…
    No, not at all.

    Your confusion (and that of a couple of other followups) unfortunately demonstrates my point - people don't really understand yet that real security demands *two-way* authentication. Any time you have to provide a password (or other authentication), the authenticator also needs to prove to *you* that it is the right thing to ask for authentication. And this authentication exists in multiple contexts.

    First of all, there's you authenticating to your local system. With biometrics, authenticating the system to you is implicit (which of course trains people not to think about this, unfortunately): if the biometric hardware is being accessed, you know the real system is doing it, and that's enough authentication (at least on an iPhone; on other systems that may not be true). However, if you're typing in a password, the problem remains: how do you know the thing asking for your password is the iPhone OS, and not an evil app pretending to be the OS?

    The only good answer to that, if you don't have another piece of hardware helping you out, is to have a secret that you know, and that the OS knows, but that the evil app can't know. Then if a password request offers you the secret, you can be confident that it's really from the OS and not from an evil app. That secret must be a pre-agreed-upon image, or sound, or text, that is carefully guarded so evil apps can never get at it. The icloud thing you mentioned above is nothing like that.

    The second context is between you and an off-device server. The same fundamental idea applies - you need to know that the server is really the server, and not supply secrets to an impostor. But the threat environment and solutions are completely different. Once you have a secure local device, you can use zero-knowledge techniques to authenticate to a remote server without ever revealing your shared secret, if you have one, or (much better) use public-key crypto techniques, which by design never reveal any secrets and also don't share any secrets. Either way a local agent on your device, under control of the OS, can make the authentication happen - which then reduces the problem down to the case of local authentication, you to the agent. And then we're back to case #1, and we can use the same solution - a local shared secret, probably an image that's agreed upon by you and the OS when you set up your device.
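    The "authenticate without ever transmitting the secret" idea can be sketched with a simple HMAC challenge-response. This is a hedged illustration, not how passkeys actually work (passkeys use public-key signatures, which avoid even sharing a secret with the server), but the key property is the same: nothing secret ever crosses the wire.

    ```python
    import hashlib
    import hmac
    import os

    # Known to the device's local agent and the server only; never transmitted.
    SHARED_SECRET = b"enrolled-at-setup"  # illustrative value

    def server_issue_challenge() -> bytes:
        # A fresh random challenge defeats replay of old responses.
        return os.urandom(32)

    def device_respond(challenge: bytes) -> bytes:
        # Runs in the local agent, after the user authenticates to the device.
        return hmac.new(SHARED_SECRET, challenge, hashlib.sha256).digest()

    def server_verify(challenge: bytes, response: bytes) -> bool:
        expected = hmac.new(SHARED_SECRET, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

    ch = server_issue_challenge()
    assert server_verify(ch, device_respond(ch))                 # legitimate
    assert not server_verify(ch, device_respond(os.urandom(32))) # mismatched
    ```

    An eavesdropper who captures the challenge and the response learns nothing useful: without the secret, they can't answer the *next* challenge.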

    AI should probably write an article about this. Or pay me to clean this all up and post that...
  • M2 and beyond: What to expect from the M2 Pro, M2 Max, and M2 Ultra

    I'm sorry, but this article is a serious failure due to ignorance of some of the basic underlying technologies.

    For example, the guesses about memory are completely off base. There is literally no chance at all that they're even close, based on the article's assumptions.

    The M1 has a bandwidth of ~68GB/s because it has a 128-bit memory bus and uses LPDDR4 memory at 4.266GT/s. The M1 Pro has higher bandwidth of ~200GB/s because it uses LPDDR5 memory at 6.4GT/s, and ALSO because it uses a double-wide bus (256 bits).

    The M2 has the same memory bus size (128 bits) as the M1, but it's already using LPDDR5 at 6.4GT/s. If there's an M2 Pro based on the same doubling as the M1 Pro was, it won't get any further benefit from the LPDDR5 (since the M2 already has that). It will have the same ~200GB/s bandwidth as the M1 Pro.

    Of course this all depends on timing - if the M2 Pro came out a year from now, higher-performance LPDDR5 might be common/cheap enough for the M2 Pro to use it, in which case you'd see additional benefits from that. But it DEFINITELY wouldn't get you to 300GB/s. LPDDR5 will never be that fast (that would require 9.6GT/s, which is not happening in the DDR5 timeframe - unless DDR6 is horribly delayed, years from now).
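    The bandwidth figures above are just bus width times transfer rate, which a two-line calculation makes explicit:

    ```python
    def bandwidth_gbs(bus_bits: int, gt_per_s: float) -> float:
        """Peak memory bandwidth in GB/s: (bus width in bytes) x (transfers/s)."""
        return (bus_bits / 8) * gt_per_s

    print(bandwidth_gbs(128, 4.266))  # M1:     ~68.3 GB/s  (128-bit LPDDR4)
    print(bandwidth_gbs(256, 6.4))    # M1 Pro: ~204.8 GB/s (256-bit LPDDR5)
    print(bandwidth_gbs(128, 6.4))    # M2:     ~102.4 GB/s (128-bit LPDDR5)
    ```

    Doubling the M2's 128-bit bus, as the M1 Pro doubled the M1's, lands at the same ~204.8 GB/s the M1 Pro already has; hitting 300 GB/s on a 256-bit bus would require ~9.4 GT/s memory, which no LPDDR5 roadmap delivers.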

    You're also assuming Apple won't go with HBM, which is not at all a safe assumption. If they do they might well do better than 300GB/s for the "M2 Pro", if such a thing were built.

    Your entire article could have been written something like this:
    M1 Ultra = 2x M1 Max = 4x M1 Pro ~= 6x-8x M1, so expect the same with the M2 series.

    It's a really bad bet though.

    There are much more interesting things to speculate about! What are they doing for an interconnect between CPU cores, GPU cores, Neural Engine, etc.? Improvements there are *critical* to better performance - the Pro, Max, and Ultra are great at some things but extremely disappointing at others, and that's mostly down to the interconnect - though software may also play some part in it (especially with the GPU).

    Similarly, the chip-to-chip interconnect for the Ultra is a *huge* advance in the state of the art, unmatched by any other vendor right now... and yet it's not delivering the expected performance in some (many) cases. What have they learned from this, what can they do better, and when will they do it?

    (Edit to add) Most of all, will desktop versions of the M2 run at significantly higher clocks? I speculated about this here when the A15 came out - that core looked a lot like something built to run at higher clocks than earlier Ax cores. I'd like to think that I was right, and that that's been their game all along. But... Apple's performance chart (from the keynote) for the M2, if accurate, suggests that I was wrong and that they don't scale clocks any better than the M1 did. That might still be down to the interconnect, though it seems unlikely. It's also possible that they're holding back on purpose, underestimating performance at the highest clocks, though that too seems unlikely (why would they?).

    For this reason, I suspect that the M2 is a short-lived interim architecture, as someone else already guessed. Though in terms of branding, they may retain the "M2" name even if they improve the cores further for the "M2 Pro" or whatever. That would go against all past behavior, but they don't seem terribly bound by tradition.
  • AltStore allows limited sideloading of iPhone apps Apple doesn't approve

    rob53 said:
    So AI is using an app that violates Apple's rules and actually telling people about it. I can't wait for Apple to find out about it. The developer should lose the license/developer certificate, while Apple can immediately cancel all apps using that certificate. I also wonder if Apple could go after the users, including AI. Be careful about removing my comment, because all I'm doing is commenting on illegal activity, both the developer's and the users' violations. I'm talking about the misuse of the obvious Xcode development program. This is not for limited testing; the developer is getting paid, which is against Apple's rules. 
    Lol, "illegal activity"! You are extremely confused about the difference between criminal and civil law. I'd suggest googling that, there's a ton of material available.

    In short, no possible Apple ToS, T&C, click-through agreement, etc., could possibly make any action by this or any other person or company illegal. And no, there is no world where Apple could go after end users. There's also no world where Apple is stupid enough to try, even if they could.

    Now... can Apple cancel this developer's cert? Sure. I'm a little surprised they haven't.
  • Apple, Google, Microsoft announce commitment to 'passwordless' future


    It's not immediately clear how falling back to a device PIN would be more secure than a properly configured password, however.

    The reason it’s more secure is because there are multiple factors - the device (something you have), and the PIN (something you know) or biometric - face, or finger (something you are).  No one is suggesting we replace passwords with PINs, they’re saying a device AND a PIN - or some other factor. 
    Is that true? I don't know if you will need to enroll a new device before using it. But either way, this is a dramatic improvement over passwords because it will prevent them from ever being transmitted. Like Kerberos or public-key SSH, no secret will be transmitted between server and client.

    Among other benefits, that means that there will be no compromises due to shared/reused passwords exposed by compromised sites. We see customers fall victim to that every week.
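    The "device AND a PIN" point from the reply above can be sketched as well. This is a toy model with invented names: the per-site key (something you have) never leaves the device, and it's unusable without the correct PIN (something you know). HMAC stands in for the real public-key signature a passkey would produce.

    ```python
    import hashlib
    import hmac
    import os

    class Device:
        """Toy passkey holder: possession of the device alone is not enough;
        the key only signs after the second factor (the PIN) checks out."""
        def __init__(self, pin: str):
            self._pin_hash = hashlib.sha256(pin.encode()).digest()
            self._site_key = os.urandom(32)  # "something you have"

        def sign_login(self, pin: str, challenge: bytes):
            if hashlib.sha256(pin.encode()).digest() != self._pin_hash:
                return None  # wrong PIN: a stolen device can't log in
            return hmac.new(self._site_key, challenge, hashlib.sha256).digest()

    device = Device(pin="4821")
    challenge = os.urandom(16)
    assert device.sign_login("4821", challenge) is not None  # both factors
    assert device.sign_login("0000", challenge) is None      # possession alone fails
    ```

    That's why a PIN here is not "basically a password": it never leaves the device, and it's worthless to an attacker who doesn't also hold the hardware.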
    A PIN is basically a password. 

    Everything else is tying your activities directly to you. 

    Don’t like where this is going. 
    Yeah... no. I sympathize with your distaste for tracking, but you need to learn a lot more if you're going to have a meaningful opinion. As I said above, device PINs (while they have issues as well) are NOT the same as passwords.