gatorguy
About
- Username
- gatorguy
- Joined
- Visits
- 574
- Last Active
- Roles
- member
- Points
- 18,919
- Badges
- 3
- Posts
- 24,772
Reactions
How and where Trump's new tariffs affect Apple
9secondkox2 said:
AppleZulu said:
A tariff is a sales tax levied on goods as they enter the country. The buyer pays the tax, not the seller. On finished products, that tax will be passed on directly to the consumer. On parts, that tax will be incorporated into the price and passed on to the consumer. The talking point that Apple or "China" or anyone on the supply side will pay the tax for any length of time is ridiculous. Also remember that sales taxes are regressive: the more of your income that goes directly to buying things, the greater the percentage of your income that goes directly to paying the tax. This is going to be a disaster.
Actually, it depends on the seller whether or not they will absorb the cost. Ford Motor Company announced today that it will be selling its vehicles under invoice from today through early June. Ford has enough inventory to help offset the loss somewhat, but it's an example of how sellers get creative during a temporary reset of trade relations.
Trump's 'Liberation Day' tariffs hit every one of Apple's international manufacturing part...
nubus said:- Uninhabited islands hit by 29%. Those penguins and seals are so mean and unfair to U.S. workers.
- Taiwan - not recognized by the U.S. as a nation - got a tariff of its own. China is furious, and U.S. products could become socially unacceptable. Hard to beat China on nationalism.
- Israel got hit by 17%. Probably a bit unexpected to see Israel treated as an enemy of the U.S.
The future of internet liability is uncertain as congress targets Section 230
tundraboy said:
anonymouse said:
Yes, the curators won't be muted. But now they will be legally responsible for the information that they put on their sites, just like any publisher. Because if you are curating a site, then you are a de facto publisher.
I don't know how you have formed your ideas about this, but you still have it completely backwards. Repealing Section 230 isn't going to stop, to use your word, "curation" of content by FB and others; it's just going to mean that they delete a lot of content that they now leave up. Individual users would still be "on the hook for their own posts," but anything the "curators" deem likely to put them on the hook as well, like false and defamatory posts from political operatives, will be removed. Without Section 230 a lot of individual "voices" are going to be muted, but the so-called "curators" aren't going to be muted.
That's all that repeal advocates want. You should be legally answerable for what you allow to be printed on your newspaper, broadcasted on your TV network, displayed on your highway billboard, and now, posted on your website.
Seriously, what is wrong or bad about that?
John Giannandrea out as Siri chief, Apple Vision Pro lead in
mpantone said:
gatorguy said:
mpantone said:
gatorguy said:
mpantone said:
As far as I can tell, consumer-facing AI isn't improving in leaps and bounds anymore, and probably hasn't for about a year or 18 months.
A couple of weeks before the Super Bowl, I asked half a dozen LLM-powered AI assistant chatbots when the Super Bowl kickoff was scheduled. Not a single chatbot got it right.
Earlier today I asked several chatbots to fill out a 2025 NCAA men's basketball tournament bracket. They all failed miserably. Not a single chatbot could even identify the four #1 seeds. Only Houston was identified as a #1 seed by more than one chatbot, probably because of its performance in the 2024 tournament.
I think Grok filled out a fictitious bracket with zero upsets. There has never been any sort of major athletic tournament that didn't have at least one upset. And yet Grok is too stupid to understand this. It's just a dumb probability calculator that uses way too much electricity.
Context, situational awareness, common sense, good taste, humility. Those are all things that AI engineers have not programmed yet into consumer facing LLMs.
An AI assistant really needs to be accurate 99.8% of the time (or possibly more) to be useful and trustworthy. Getting one of the four #1 seeds correct (published on multiple websites) is appallingly poor. If it can't even identify the 68 actual teams involved in the competition, what good is an AI assistant? Why would you trust it to do anything else? Something more important, like scheduling an oil change for your car? Keeping your medical information private?
As I said a year ago, all consumer facing AI is still alpha software. It is nowhere close to being ready for primetime. In several cases there appears to be some serious regression.
25% right isn't good enough. Neither is 80%. If a human assistant failed 3 out of 4 tasks and you told them so, they would be embarrassed and probably afraid that they would be fired. And yes, I would fire them.
Apple senior management is probably coming to grips with this. If they put out an AI-powered Siri that frequently bungles requests, that's no better than the feeble Siri they have now. And worse, it'll probably erode customer trust.
"Fake it until you make it" is not a valid business model. That's something Elizabeth Holmes would do. And she's in prison.
Your comment brings up an important illustrative point. No one has the time to dork around with 7-8 AI chatbots to find one (or more) that gives the correct answer for each question. That's not a sustainable approach.
There's probably some AI chatbot that might get the right answer to a simple question. The problem is that no AI chatbot is reliably accurate enough to instill trust and confidence. I can't ask ten questions to 8 chatbots and wade through the responses. In the same way, having ten human personal assistants isn't a worthwhile approach.
Let's say Grok has a 20% accuracy score and Gemini is 40%. That's double the accuracy for Gemini but it still is way too low to be trusted and deemed reliable.
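The arithmetic behind that point can be sketched in a few lines of Python. The 20%, 40%, and 80% figures here are purely illustrative, as in the comment above, not real benchmark scores; the sketch just shows why even a decent-sounding per-task accuracy collapses over a realistic workload, assuming tasks succeed independently:

```python
# Illustrative sketch: if each task succeeds independently with
# probability p, the chance an assistant completes all n tasks
# without a single failure is p ** n.

def flawless_batch_probability(p: float, n: int) -> float:
    """Probability that all n independent tasks succeed at per-task accuracy p."""
    return p ** n

# Hypothetical per-task accuracy scores from the comment above.
for p in (0.20, 0.40, 0.80, 0.998):
    print(f"p = {p:.3f}: chance of a flawless 10-task day = "
          f"{flawless_batch_probability(p, 10):.4f}")

# Even at 80% per task, a flawless 10-task run happens only
# about 11% of the time (0.8 ** 10 ≈ 0.107); at 99.8% it is
# still routine (≈ 0.980), which is roughly the threshold the
# comment argues for.
```

Doubling a low accuracy (20% to 40%) barely moves the batch result, which is why the comparison in the post concludes both are untrustworthy.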
Gemini is untrustworthy. All AI ASSistants are untrustworthy.
80% is still way too low. I would fire any human personal assistant if that was their success rate. Your standards suck.
You might read my post again, that one where I agreed with you that 80% isn't good enough.
That said, wouldn't it be great if each of us here could be right at least 80% of the time? Instead, we all have those members we generally trust but still verify, the news articles we generally trust but still verify, and our time-tested research sources that we generally trust but still verify.
Besides your partner, is there any source you have that's "right" (which itself needs defining) 100% of the time?