larryjw
About
- Username
- larryjw
- Joined
- Visits
- 199
- Last Active
- Roles
- member
- Points
- 3,338
- Badges
- 2
- Posts
- 1,040
Reactions
TSMC sends more Taiwanese workers to finish Arizona plant
JP234 said: Previously, TSMC engineers in Taiwan have complained that Americans don't work hard enough, and "are the most difficult to manage."
So did anyone else see the documentary film "American Factory"? Same scenario, except the company is run by a PRC businessman who buys a closed GM plant in Ohio to make auto glass. He hires Americans laid off by GM, saving the town from economic death, but at half the wages they earned at GM (though by then they had been earning nothing). He had the same complaints, plus friction over unionization efforts when American workers were expected to work extra hours without pay. And of course the bosses are all Chinese, with the same racist tendencies as the Americans who work for them.
But apparently they've reached accord, or at least detente, since Fuyao Glass America is still operating there.
Wouldn't it be nice, and surprising, if managers actually had some basic knowledge? -
Apple Vision Pro could help surgeons see vital data during operations
FrancescoB said:This place is cringe when bulls*t articles like this appear. This tech already exists, has existed for years, and is in use in countless hospitals around the globe.
Apple didn't invent anything new. -
Apple Vision Pro could help surgeons see vital data during operations
I know neurosurgeons who perform robotic surgery; one of them is my doctor. It's quite complex. The equipment they use contains cameras that peer into the brain, eyepieces that enlarge the camera images, robotic arms that hold and manipulate the scalpels, and foot pedals and mouthpieces controlling different aspects of the robotic assembly.
I had no idea until my doctor and several of his colleagues gave lectures, with surgical videos, to a broad audience of patients and their families.
Can the Vision Pro support such activities? I'll let the medical technologists make that decision. It's certainly not beyond possibility. -
ChatGPT might quit the EU rather than comply with regulations
zimmie said: larryjw said: When first installed, the ChatGPT app highlights the warning that it may return false information.
The May 10 Lancet published the following article, along with a detailed supplemental addendum showing the interaction with ChatGPT, which anyone can reproduce to verify the results.
The dangers of using large language models for peer review
ChatGPT responded with totally made-up material, sounding quite authoritative.
Banning ChatGPT would seem a good idea. It would give alternative AI systems that can actually tell the truth an opportunity to be developed -- if that can be done.
Large Language Models like GPT-4 are inherently lying machines.
This behavior becomes much more obvious when you realize GPT (and Bard, and so on) will believe anything you tell it is at the other end of something which looks like a URL. The domain doesn't have to resolve, and the path doesn't have to be valid. An expert would load the page and potentially respond with new information based on what they read there. GPT can't actually load a URL or read, but it can fake a response which looks like what an expert who did all that might say.
People have been doing things like telling Bard that Google discontinued Bard months ago, and it responds with something like "Oh. I'm sorry, according to the link you provided, you're correct. Bard was discontinued in March."
GPT-4 (or 3, or 3.5) doesn't believe anything. It just learns an enormous set of statistical word associations (loosely, like a vastly scaled-up n-gram model), assigning a conditional probability to each candidate next word given the previous words. It then samples the next word, with a parameter called the temperature adding randomness so it doesn't always choose the most probable option. -
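The sampling step described above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual implementation: the function name and the plain-list "logits" input are assumptions for the example. Dividing the scores by the temperature before the softmax flattens the distribution (temperature > 1) or sharpens it (temperature < 1, approaching greedy argmax as it nears 0).

```python
import math
import random

def sample_next_word(logits, temperature=1.0, rng=None):
    """Sample a word index from temperature-scaled scores.

    `logits` is a list of unnormalized scores, one per candidate word
    (a toy stand-in for a language model's output layer).
    """
    rng = rng or random.Random()
    scaled = [score / temperature for score in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]  # softmax over scaled scores
    # Draw from the resulting categorical distribution.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r <= cum:
            return i
    return len(probs) - 1  # guard against floating-point rounding
```

At a very low temperature this almost always returns the highest-scoring index; at temperature 1 it samples in proportion to the softmax probabilities, which is the randomness the post refers to.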
ChatGPT might quit the EU rather than comply with regulations
When first installed, the ChatGPT app highlights the warning that it may return false information.
The May 10 Lancet published the following article, along with a detailed supplemental addendum showing the interaction with ChatGPT, which anyone can reproduce to verify the results.
The dangers of using large language models for peer review
ChatGPT responded with totally made-up material, sounding quite authoritative.
Banning ChatGPT would seem a good idea. It would give alternative AI systems that can actually tell the truth an opportunity to be developed -- if that can be done.
Large Language Models like GPT-4 are inherently lying machines.