longpath

About

Username: longpath
Joined
Visits: 151
Last Active
Roles: member
Points: 1,018
Badges: 1
Posts: 419
  • iPhone belonging to Spanish prime minister was hit with Pegasus spyware

    I wonder at what point high-ranking political targets, and the entities charged with protecting their secrets, will have more juice than the collective desires of investigators (whether law enforcement, counter-espionage, etc.), who always seek to be able to open anything. Given that the US government is willing to criminalize journalism, it seems this issue is close to reaching critical mass.
  • iCloud outages resolved after nearly every Apple service went down globally

    Strangely, Apple's status page shows all greens, but that's clearly not the case. Emails I am trying to send disappear from the Outbox but never get to Sent, and attempting to log into iCloud.com on my laptop is failing miserably.
  • Mergers over $5 billion would be prohibited if new bill becomes law

    hexclock said:
    So they pick a number out of thin air, as usual. I love how they try to blame rising prices on these companies, what a nice touch. Warren’s third grade idea of how the economy works should be a real big clue that this bill is a bad idea. 
    Agreed. The rapid and reckless expansion of the M1 money supply is completely ignored, as is Congress's irresponsible spending and its expansion of programs, departments, and agencies whose mere existence is an affront to the 10th Amendment; that is the heart and soul of the issue.

    It’s also incredibly hypocritical for any member of Congress to malign the tech industry on the one hand, and on the other to give a billion dollars in advertising funds to push an agenda.

    It’s also a massive conflict of interest when so many in Congress profit handsomely from big tech: https://www.bloomberg.com/news/articles/2022-01-27/congress-s-big-tech-stock-trading-makes-antitrust-regulation-awkward
  • Australian man alleges all of his iOS and macOS devices have been persistently hacked

    Assuming the gentleman isn’t suffering from mental health issues, I would think he would have to be repeatedly engaging in unsafe online activity, such that he keeps reinfecting/compromising his gear, and that substantial amounts of identifying information are already available to one or more malicious actors.
  • 'Apple Car' needs Machine Learning to make driving decisions fast enough

    sflagel said:
    When a car is able to successfully navigate the Elephant and Castle Roundabout in London, at night, during rush hour, in December, in the rain, then it shall be deemed safe.
    Why set the bar there? ... we already let humans attempt that... and they're obviously not 100% capable of it.
    I suppose the question becomes: "just how often does it need to do this successfully? What error rate are we willing to accept?" I would think that as long as it's a single tick better than the human rate, it should be allowed on the roads. (Else, we should quit letting humans drive as well.)
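    The "single tick better" standard above can be made concrete with a toy calculation. All rates here are invented purely for illustration, not real crash statistics:

    ```python
    # Hypothetical failure rates per attempted maneuver (invented numbers).
    human_error_rate = 0.002    # assumed: 1 failure per 500 human attempts
    ai_error_rate = 0.0015      # assumed: 1 failure per ~667 AI attempts

    def acceptable(ai_rate: float, human_rate: float) -> bool:
        """Under the 'single tick better' standard, the AI is road-worthy
        as soon as its error rate is strictly below the human baseline."""
        return ai_rate < human_rate

    print(acceptable(ai_error_rate, human_error_rate))  # True under these numbers
    ```

    The liability argument that follows amounts to raising the human_rate threshold the comparison is made against, not changing the comparison itself.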

    The question of where to set the bar ultimately comes down to where responsibility, and therefore liability, is ascribed. A self-driving vehicle is, ultimately, a robot. If the AI controlling that robot is not recognized as a self-owned entity, then responsibility for the decisions it makes comes down either to who or what owns the robot, or to who designed and built it. Only once that distinction is clear will a consumer be able to make an informed decision as to whether his, her, or its risk aversion permits delegating decisions to the AI or not.

    If you hire a chauffeur and that chauffeur makes a decision that jeopardizes persons or property, the chauffeur is ultimately the bearer of responsibility. As long as the AI is not widely, if not universally, recognized as its own entity, however, either the robot's owner or the robot's designer and builder will shoulder the liability for harm alleged to be caused by the AI. The very lack of consensus over where liability should lie creates uncertainty for potential consumers, and that uncertainty drives the acceptable error level upward, as a hedge: the less clear who or what bears responsibility for harm to persons or property, the more certainty consumers will demand that there won't be a need to identify who or what is liable.

    For example, if I, as a consumer, am considering a roving robot to transport goods or passengers for my business, and the manufacturer agrees to fully indemnify me and my business against all possible liability, then my risk is lower and I can be more tolerant of uncertainty about the capabilities of the AI. If, on the other hand, I agree to bear full risk for the decisions of the AI, then I want to be as close to absolutely certain as possible that nothing will go wrong, no matter how bizarre or challenging the situation. If I don't know where along the continuum between those two extremes liability and responsibility falls, that uncertainty may make me even less tolerant of risk, to the point where I want something with slick-weather handling skills that would put a Group B rally driver and ice racer to shame, regardless of how absurdly high and unreasonable that standard may be.

    This isn't merely a question of the capabilities of an AI & its underlying hardware and software.
