mrstep
About
- Username: mrstep
- Joined:
- Visits: 49
- Last Active:
- Roles: member
- Points: 308
- Badges: 0
- Posts: 542
Reactions
- Apple details user privacy, security features built into its CSAM scanning system
newisneverenough said: Apple is becoming pathetic in their deafness to criticism. This is not a feature. It’s an invasion of privacy and the ‘back door’ they said they wouldn’t build. I don’t buy their devices to become a part of the surveillance state. Get the h*** out of my data, Apple. Serve the customer. iCloud is supposed to helpfully sync across devices and helpfully provide off-device storage. Now, suddenly, they are proud to announce that it’s become a tool for surveillance.
- Netgear has a new $1,500 Wi-Fi 6e mesh router
TripleTrouble said: My Orbi Router set (RBR50s) is the worst piece of kit I've ever owned (at least since the Lowe's Iris smart home fiasco). Bad build, bad user experience, bad networking, bad app, bad customer service, what's there to like?
People seem to like the Google stuff, Deco has been good for my house, and I really wish Apple had stayed in the router game... But I'll never get near an Orbi again.
- A clever hack fixes the new Mac mini power button's awkward location
> If you're the sort who leaves their Mac on all the time, then you won't be concerned about the power button. And you're also not concerned about the planet.
If you really care about the planet, you longingly gaze at your unpowered Mac mini when stepping out of the commune-tent to eat a bug snack. Otherwise you're just an eco-poser.
- Meta wants to upload every photo you have to its cloud to give you AI suggestions
- FCC votes to restore net neutrality protections in the United States
neillwd said: More government, more censorship, less freedom.
- New 15.5-inch MacBook Air rumored to arrive in early 2023
dewme said: On the naming front, I'm in total agreement with those who don't see a reason for dragging along the "Air" moniker any more when simply calling it a MacBook is totally sufficient. Same thing on the iPad side of things. The early versions of the Air-named products were somewhat niche because you knew you were giving up a little something just to put maximum focus on lightness, portability, and minimalism.
I loved my 12" MacBook as a portable - just toss it in a bag without worrying about whether it's worth bringing along - and with an M2 it would have been a killer replacement and machine in general. 🤷‍♂️ (I really like my 14" MBP too, fantastic machine, but as a portable the 12" MacBook was just perfection.)
- DoJ's Apple App Store probe is 'firing on all cylinders'
teejay2012 said: Apple has grown too big and does not provide enough payola to Washington. 'That's a nice tech company you have there, Tim... would be a shame if somethin' was to happen to it...'
- Apple's upcoming low-cost MacBook: Colorful and affordable
greg.edwards69 said: So it's a large iPad with a fixed keyboard, no touch screen, and it runs macOS. It will sell by the bucketload.
Personally, I'd love to see Apple revisit the 12" MacBook with this processor instead. It was a brilliant design, just hampered by dreadful performance.
- John Giannandrea out as Siri chief, Apple Vision Pro lead in
jas99 said: mpantone said: gatorguy said: mpantone said: As far as I can tell, consumer-facing AI isn't improving in leaps and bounds anymore, and probably hasn't for about a year or 18 months.
A couple of weeks before the Super Bowl I asked half a dozen LLM-powered AI assistant chatbots when the Super Bowl kickoff was scheduled. Not a single chatbot got it right.
Earlier today I asked several chatbots to fill out a 2025 NCAA men's basketball tournament bracket. They all failed miserably. Not a single chatbot could even identify the four #1 seeds. Only Houston was identified as a #1 seed by more than one chatbot, probably because of their performance in the 2024 tournament.
I think Grok filled out a fictitious bracket with zero upsets. There has never been any sort of major athletic tournament that didn't have at least one upset. And yet Grok is too stupid to understand this. It's just a dumb probability calculator that uses way too much electricity.
Context, situational awareness, common sense, good taste, humility. Those are all things that AI engineers have not yet programmed into consumer-facing LLMs.
An AI assistant really needs to be accurate 99.8% of the time (or possibly more) to be useful and trustworthy. Getting just one of the four #1 seeds correct (they're published on multiple websites) is appallingly poor. If it can't even identify the 68 actual teams involved in the competition, what good is an AI assistant? Why would you trust it to do anything else? Something more important, like scheduling an oil change for your car? Keeping your medical information private?
As I said a year ago, all consumer-facing AI is still alpha software. It is nowhere close to being ready for primetime. In several cases there appears to be some serious regression.
25% right isn't good enough. Neither is 80%. If a human assistant failed 3 out of 4 tasks and you told them so, they would be embarrassed and probably afraid that they would be fired. And yes, I would fire them.
Apple senior management is probably coming to grips with this. If they put out an AI-powered Siri that frequently bungles requests, that's no better than the feeble Siri they have now. And worse, it'll probably erode customer trust.
"Fake it until you make it" is not a valid business model. That's something Elizabeth Holmes would do. And she's in prison.
Your comment brings up an important illustrative point. No one has the time to dork around with 7-8 AI chatbots to find one (or more) that gives the correct answer for each question. That's not a sustainable approach.
There's probably some AI chatbot that might get the right answer to a simple question. The problem is no AI chatbot is reliably accurate enough to instill trust and confidence. I can't ask ten questions to 8 chatbots and wade through the responses. In the same way, having ten human personal assistants isn't a worthwhile approach.
Let's say Grok has a 20% accuracy score and Gemini is 40%. That's double the accuracy for Gemini but it still is way too low to be trusted and deemed reliable.
Like I said, I think Apple's senior management is understanding this, which is why they've postponed the AI-enabled Siri. Even if it were 60-80% accurate, that's still too low to be useful. You really don't want a personal assistant -- human or AI -- that makes so many messes that you have to go clean up after them or find an alternate personal assistant that might do some of those failed tasks better. In the end, for many tasks right now, you are better off using your own brain (and maybe a search engine) to figure many things out because AI chatbots will unapologetically make stuff up.
All of these AI assistants will flub some questions. The problem is you don't know which one will fail which question at any given moment. That's a waste of time. I think the technology will eventually get there, but I'm much more pessimistic about the timeline today compared to a year ago because improvements in these LLMs seem to have stalled or even regressed. I don't know why that is, but it doesn't matter to Joe Consumer. It just needs to work. Basically all the time. And right now none of them do.
For sure I am not the first person to ask an AI assistant about the Super Bowl and March Madness. And yet these AI ASSistants have zero motivation to improve accuracy even if they are caught fibbing or screwing up an answer.
I've used all of the major AI assistants and they all routinely muck up. The fact that I did not try Gemini is simply because I got far enough by the third AI chatbot to deem this a waste of time. I can't keep jumping from one AI chatbot to another until I find one that gives an acceptable result.
In most cases, the AI assistant doesn't know it is wrong. You have to tell the developer (there's often a thumbs up or thumbs down for the answer). And that doesn't make the AI assistant get it right for the next person who asks the same question. Maybe enough people asked Gemini the question and the programmers fixed Gemini to give the proper response. One thing is for sure: AI assistants don't have any common sense whatsoever. Hell, a lot of humans don't either, so if LLMs are modeled after humans, it's no wonder that AI assistants are so feeble.
Here in March 2025 all consumer-facing AI assistants are silly toys that wastefully use up way too much electricity and water. I still use some AI-assisted photo editing tools because my Photoshop skills are abysmal. But for a lot of other things, I'll wait for AI assistants to mature. A lot more mature.
Apple has to develop its own neural network machine learning instead of this LLM snake oil.
I’m just sorry Apple committed itself to being a purveyor of a useful system called Apple Intelligence when, apparently, it’s based on snake oil LLMs.
There are absolutely great things it can do - identify things in photos, drive things like 'smart fill' and 'smart mask' effects in photos and videos that can save tons of time - and then it drops to giving semi-functional code snippets, generating massive quantities of vapid text, and just getting things wrong. Unfortunately for consumers, spotting 7-finger hands, code that doesn't work, and data that's just wrong is easier for people than it is for the models. Not a dig at Apple; I hope they get it working in a way that also protects our data & privacy.
- Tantalizing details of Jony Ive's AI device leak after OpenAI meeting
avon b7 said: My guess is a thin (very thin?) puck-like device with a thin (very thin?) LED strip around the base.
Brushed metal (caressible) or glass (lickable).
Internal wireless connectivity (no ports) and using mics/speakers from other devices.
Cloud processing plus subscription. Bingo!
Honestly, the 'sci-fi dream' is what it's always been: AI with an interface of some kind. Ideally voice and a screen. That's it.
Would a humanoid robot be a plus? Yes for some situations, but if it's going to sell in hundreds of millions, I'll rule that option out.