AppleZulu

About

Username: AppleZulu
Joined:
Visits: 261
Last Active:
Roles: member
Points: 9,258
Badges: 2
Posts: 2,575
  • A pre-WWDC message to Apple retail staff does not mean new hardware

    It feels like we're working with a triple negative, here. Apple doesn't tip off internal or third-party retailers of new hardware releases, and Apple sent out a message that doesn't tip off retailers of new hardware releases, therefore "Tim Cook is still more likely to come on stage empty handed at WWDC."

    Wouldn't it be more correct to just say that the message reveals nothing either way?

    Logically, the only potential new hardware that's been discussed recently that would be likely to be announced at WWDC is the rumored home hub device. If Apple is going to introduce HomeOS and/or much in the way of new home-device features or support, it would be hard to talk all the way around a looming new HomeOS or home hub without actually mentioning either one. Since they've already mentioned that AI-powered Siri is supposed to be coming, it would also be pretty hard for them to bypass that subject entirely, even if the full feature set has been delayed. Then of course, if there is a HomeOS along with the potential for native HomeOS applications, the whole point of WWDC is to provide the info and tools developers need to get started coding those apps.

    Now, none of that has anything to do with the non-announcement announcement to retailers, because it was a non-announcement, right? 
  • Jony Ive and Laurene Powell Jobs say 'humanity deserves better' from technology

    Bottom line: unless AI programs are only using public domain material and/or material with the appropriate rights permissions secured then it's nothing more than theft. The LLMs themselves have no value whatsoever without the database used for training. It's basically a gigantic torrent with some techno jargon slathered on top as a smokescreen. 
    Does Apple have this database used for training? (Serious question).
    Undoubtedly they do. 

    Here's the thing. Current LLM AI appears to be combining two relatively recent technological developments to essentially do what Eliza did a half century ago. Massive computational speed combined with terabytes of data is used to brute-force probabilistic mimicry of human language and thought. What's got everyone fooled is the fact that the mimicry is pretty good. What comes out of that process is something that looks like something a human thought and said or wrote. There is, however, no thought. It's an illusion. It is a much larger database doing Eliza's trick of coyly responding with "How does [regurgitate user input] make you feel?"
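    For reference, Eliza's trick amounts to a few lines of pattern matching and pronoun-swapped regurgitation. Here's a minimal sketch (hypothetical code in the spirit of Eliza, not the actual 1966 program):

```python
import re

# Words swapped so the echoed input reads as a reply, not a repeat.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your",
    "am": "are", "you": "I", "your": "my",
}

def reflect(text: str) -> str:
    """Swap first/second-person words so the echo reads naturally."""
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def respond(user_input: str) -> str:
    # Look for "I feel X" / "I am X" and echo X back as a question.
    m = re.search(r"\bi (?:feel|am) (.+)", user_input, re.IGNORECASE)
    if m:
        return f"How does being {m.group(1).rstrip('.!?')} make you feel?"
    # Fallback: regurgitate the whole input as a question.
    return f"Why do you say that {reflect(user_input.rstrip('.!?'))}?"
```

    No model of meaning anywhere in there, yet `respond("I am worried about AI")` produces a plausibly "thoughtful" reply. Scale the lookup table up by a few hundred billion parameters and the trick gets much harder to spot.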

    The tell, of course, is the fact that, despite the use of all the internet's data and massive computational power, these things still produce output that includes "hallucinations" that an average human with nominal subject-matter knowledge can spot.

    Look at it another way. A reasonably intelligent high school student, trained on a tiny fraction of the data used to train LLM AI, and with access to an equally tiny fraction of subject-matter data, is perfectly capable of writing a better, more accurate research essay than these massive AI models can create. It'll take the human more time, but with very little experience and only a few (compared to LLM training) very low-bandwidth lessons from their teachers, the human kid is capable of sorting good sources of information from bad, and writing a simple research paper with no made-up content and no plagiarism. 

    Likewise, in creative arts like writing and musical composition and performance, most of the best start producing original and inspiring work in their teens and early twenties, when their "training data" is, once again, a tiny fraction of what's used for AI. While lots of young people show their influences and create derivative work, those who advance to originality are not unicorns by any stretch. Meanwhile, despite consuming all the world's data in violation of IP rights, AI still can't produce original work, only sliced-and-diced collages of the training data.

    Until AI can be trained on relatively small amounts of data, you can be confident that it's all just advanced mimicry. There's no evidence that Apple is doing otherwise, and it's abundantly clear that the other well-known AI projects (including whatever it is Ive and Altman are up to) are still just brute-force mimicry. There's utility and a place for that, but none of it is really the paradigm shift or the impending robot overlords that the hype says is already here.
  • Millions of apps were denied by Apple in 2024 amid fraud crackdown

    nubus said:
    MisterKit said:
    Not to worry. Open market app stores can't wait to host the fraudsters.
    First DOGE those that could deal with fraudsters and then demand a monopoly due to the world being scary? Crony capitalism. It will help Apple until Apple has been standing still for too long.

    Go to any DMV "service location" to bask in the experience of no free markets. Visit any USPS office to enjoy the short queues and customer focused products delivered (or not). Having no competition is great if you're too lazy to compete. I say we're ready for a Family Store for apps, an American Apps store, and even The Sleazy Store.
    The DMV has to serve every person who wants a driver’s license, from the conscientiously prepared to the idiot who shows up reeking of weed. You want better, faster service and shorter lines? Quit voting for politicians who promise lower taxes and fewer government employees. 

    The USPS does have competition, like UPS and FedEx. Interestingly, USPS is the only one that will deliver a letter or package to any address in the country for the same price. Everyone complains about the cost of postage, but have you ever tried sending a letter using one of the other services? It’s orders of magnitude more expensive, and gets even more so the farther you go, especially into rural, red-state flyover country.

    Of course none of that is actually comparable to Apple’s App Store. That has competition, too, on other devices using other platforms. People buy iPhones because they just work, and they just work because Apple uses the App Store to do things like filter out fraudulent developers. This benefits not only Apple and its customers, but honest app developers as well. This is because iPhone users can be more confident that risk is minimized when downloading an app from an unknown developer. As a result, new and small developers don’t have to spend resources simply trying to convince potential customers that they’re not a crook before they’ll consider trying out the developer’s app. 
  • Siri Chatbot prototype nears ChatGPT quality, but hallucinates more than Apple wants

    In a nutshell, this explains why Apple is “behind” with AI, but actually isn’t. 

    It’s remarkable the consistency with which this pattern repeats, yet even people who consider themselves Apple enthusiasts don’t see it. Tech competitors “race ahead” with an iteration of some technology, while Apple seemingly languishes. Apple is doomed. Then Apple comes out “late” with their version of it, and the initial peanut gallery reception pronounces it too little, too late.

    Then within a couple of years, Apple’s version is the gold standard and the others (those cutting-edge innovators) race to catch up, because “first” is often also “half-baked.”

    In the news this week, it was exposed that RFK Jr’s “Make America Healthy Again” report was evidently an AI-produced document, replete with hallucinations, most notably in the bibliography, and of course it was. This is what happens when the current cohort of AI models is used uncritically to produce a desired result, without any understanding of how profoundly bad these models are.

    When I read about this in the news, I decided to experiment with it myself. Using MS Copilot (in commercial release as part of MS Word), I picked a subject and asked for a report taking a specific, dubious position on it, with citations and a bibliography. After it dutifully produced the report, I started checking the bibliography and, one after another, failed to find the research papers that Copilot used to back the position taken. I didn’t check all the references, so it’s possible some citations were real, but finding several that weren’t was sufficient to bin the whole thing. It’s bad enough when humans intentionally produce false and misleading information, but when a stock office product will do it for you with no disclaimers or warnings, should that product really be on the market?

    I also once asked ChatGPT to write a story about green eggs and ham, in the style of Dr. Seuss. It then plagiarized the actual Seuss story, almost verbatim, in a clear violation of copyright. This is the stuff that Apple is supposedly trailing behind.

    So the report here that Apple is developing AI but, unlike their “cutting edge” competitors, not releasing something that produces unreliable garbage, suggests that no, they’re not behind. They’re just repeating the same pattern again of carefully producing something of high quality and reliability, and in a form that is intuitively useful, rather than a gimmicky demonstration that they can do a thing, whether it’s useful or not. Eventually they’ll release something that consistently produces reliable information, and likely does so while respecting copyright and other intellectual property rights. The test will be that not only will it be unlikely to hallucinate in ways that mislead or embarrass its honest users, it will actually disappoint those with more nefarious intent. When asked to produce a report with dubious or false conclusions, it won’t comply like a sociopathic sycophant. It will respond by telling the user that the reliable data not only doesn’t support the requested position, but actually refutes it. Hopefully this will be a feature that Apple uses to market their AI when it’s released.

    P.S. As a corollary, the other thing that Apple is likely concerned with (perhaps uniquely so) is AI model collapse. This is the feedback loop where AI training data is scooped up from sources that include AI-produced hallucinations, not only increasing the likelihood that the bad data will be repeated, but reducing any ability for the AI model to discern good data from bad. Collapse occurs when the model is so poisoned with bad data that even superficial users find the model to be consistently wrong and useless. Effectively every query becomes an unamusing version of that game where you playfully ask for “wrong answers only.” Presumably the best way to combat that is to train the AI as you would a human student: start by giving it information sources known to be reliable, and eventually train it to discern those sources on its own. That takes more time. You can’t just dump the entire internet into it and tell it that the patterns repeated the most are most likely correct. 
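    The feedback loop above can be sketched as a toy calculation. All the numbers here are invented for illustration, not measured from any real model: each retraining "generation" draws part of its corpus from the previous model's own output, some fraction of which is hallucinated.

```python
# Toy model of training-data poisoning across retraining generations.
# Real documents stay at a fixed accuracy; synthetic documents inherit
# the previous generation's accuracy, degraded by the hallucination rate.
def simulate_collapse(generations: int,
                      synthetic_fraction: float,
                      hallucination_rate: float,
                      initial_accuracy: float = 0.95) -> list[float]:
    """Return the share of 'correct' facts in the corpus per generation."""
    accuracy = initial_accuracy
    history = [accuracy]
    for _ in range(generations):
        # Model output is only as good as its last corpus, minus new errors.
        synthetic_accuracy = accuracy * (1 - hallucination_rate)
        # New corpus mixes fresh human data with recycled model output.
        accuracy = ((1 - synthetic_fraction) * initial_accuracy
                    + synthetic_fraction * synthetic_accuracy)
        history.append(accuracy)
    return history
```

    With half the corpus synthetic and a 10% hallucination rate, accuracy falls every generation until it settles well below where it started; the more synthetic data in the mix, the lower the floor. Crude as it is, the sketch shows why curating known-good sources up front matters more than corpus size.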

    P.P.S. I just repeated the above experiment in Pages, using Apple’s link to ChatGPT. It also produced hallucinated references. I chased down the first citation in the bibliography it created. Searching for the cited article didn’t turn up anything. I did find the cited journal, went to it, and searched for the cited title: nothing. Searched for the authors: nothing. Finally, I browsed to find the issue supposedly containing the referenced article, and that article does not exist. So Apple gets demerits for subbing in ChatGPT in their uncharacteristic worry about being perceived as “late.” This part does not fit their usual pattern, with the exception perhaps of their hastened switch to Apple Maps, which at first was based largely on third-party map data. In the long run, the divorce from Google Maps was important, as location services were rapidly becoming a core OS function, not just a sat-nav driving convenience that could adequately be left to third-party apps. The race to use AI is perhaps analogous, but the hopefully temporary inclusion of ChatGPT’s garbage should be as embarrassing as those early Apple Maps with bridges that went underwater, etc.
  • Trump 'Liberation Day' tariffs blocked by U.S. trade court

    Apparently an appeals court has lifted the first court’s block on the tariffs, but a second court has placed a new block.
    Hey, great. More instability. Just what every American business needs to thrive. This administration's Big Plan is really working!

    Wait. Did I say "business?" I meant to say "short seller" and "inside trader."