larryjw

About

Username: larryjw
Joined:
Visits: 199
Last Active:
Roles: member
Points: 3,338
Badges: 2
Posts: 1,040
  • TSMC sends more Taiwanese workers to finish Arizona plant

    JP234 said:
    Previously, TSMC engineers in Taiwan have complained that Americans don't work hard enough, and "are the most difficult to manage."

    So did anyone else see the documentary film "American Factory"? Same scenario, except the company is run by a PRC businessman who buys a closed GM plant in Ohio and makes auto glass. He hires Americans laid off by GM, saving the town from economic death, but wages are half what they made before -- though before, they were making nothing. He had the same complaints, plus unionization efforts when American workers were expected to work extra hours without pay. And of course the bosses are all Chinese and have the same racist tendencies as the Americans who work for them.

    But apparently they've reached accord, or at least detente, since Fuyao Glass America is still operating there.
    The movie was interesting. As regards racist tendencies, I noticed typical management ignorance. The Americans had a much harder time working near the furnaces. That's not laziness but basic physics/biology. The Americans are substantially larger than their Chinese counterparts, and large animals cannot dissipate heat as efficiently as small animals -- elephants have big ears for a reason. The larger American men working near the ovens simply cannot shed the heat of the furnaces as well as the Chinese workers can.

    Wouldn't it be nice, and surprising, if managers actually had some basic knowledge?
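
    To put a rough number on the square-cube point above, here is a toy back-of-envelope sketch. The isometric scaling is the usual idealization, and the 10% figure is invented for illustration, not measured data:

        # Heat produced scales roughly with body volume (~L^3),
        # while passive heat dissipation scales with surface area (~L^2).
        def relative_heat_load(scale):
            volume = scale ** 3      # heat generated ~ volume
            surface = scale ** 2     # heat shed ~ surface area
            return volume / surface  # simplifies to `scale`

        print(relative_heat_load(1.0))  # 1.0, baseline worker
        print(relative_heat_load(1.1))  # 1.1, ~10% more heat per unit of skin

    So a worker with 10% larger linear dimensions carries roughly 10% more heat per unit of dissipating surface, before the furnace adds anything.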
  • Apple Vision Pro could help surgeons see vital data during operations

    This place is cringe when bulls*t articles like this appear. This tech already exists, has existed for years, and is in use in countless hospitals around the globe.
    Apple didn't invent anything new. 
    I'm sure Apple has many patents on the Vision Pro. That means they DID invent something new, according to the patent office.
  • Apple Vision Pro could help surgeons see vital data during operations

    I know neurosurgeons who perform robotic surgery. One of them is my doctor. Quite complex. The equipment they use contains cameras that peer into the brain, eyepieces that enlarge the camera images, robotic arms that hold and manipulate the scalpels, and foot pedals and a mouthpiece controlling different aspects of the robotic assembly.

    I had no idea until my doctor and some of his colleagues gave lectures, with surgical videos, to a broad audience of patients and their families.

    Can the Vision Pro support such activities? I'll let the medical technologists make that decision. It's certainly not beyond possibility.
  • ChatGPT might quit the EU rather than comply with regulations

    zimmie said:
    larryjw said:
    When first installed, the ChatGPT app highlights the warning that it may return false information.
    The May 10 Lancet published the following article, along with a detailed supplemental addendum showing the interaction with ChatGPT, which anyone can reproduce to verify the results.
    The dangers of using large language models for peer review

    ChatGPT responded with totally made-up material, sounding quite authoritative.

    Banning ChatGPT would seem a good idea. It would give alternative AI systems that can actually tell the truth an opportunity to be developed -- if that can be done.

    Large Language Models like GPT-4 are inherently lying machines. 
    GPT isn't strictly lying; you just aren't asking it the question you think you're asking. People think they're asking the question they put in the prompt. What GPT actually does is produce something which looks like what an expert would say in response to the prompt. For example, experts cite sources when they're asked about legal or medical matters, so the response should have citations -- and if there are no high-quality real citations to include, it makes some up which look plausible.

    This behavior becomes much more obvious when you realize GPT (and Bard, and so on) will believe anything you tell it is at the other end of something which looks like a URL. The domain doesn't have to resolve, and the path doesn't have to be valid. An expert would load the page and potentially respond with new information based on what they read there. GPT can't actually load a URL or read, but it can fake a response which looks like what an expert who did all that might say.

    People have been doing things like telling Bard that Google discontinued Bard months ago, and it responds with something like "Oh. I'm sorry, according to the link you provided, you're correct. Bard was discontinued in March."
    It isn't strictly lying because lying requires intent, and a neural network neither thinks nor has intent. It doesn't know anything, and it hasn't been designed to know when it doesn't know; it just generates, in general, the most likely set of words that follows the previous set of words it's been working with. GPT-4 is a neural network with about 90 layers and about a trillion parameters. (Galileo only needed to estimate 2 parameters to determine that the force of gravity causes a falling body to trace a parabola.)

    GPT-4 (or 3, or 3.5) doesn't believe anything. It just constructs an enormous set of word associations (n-grams), assigning conditional probabilities to a word given the previous words, and then generates the next word, subject to a tunable parameter called the temperature that adds some randomness so it doesn't always choose the most probable word.
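
    For concreteness, here is a minimal sketch of what "generate the next word subject to a temperature" means. This is not GPT's actual code, and the vocabulary and scores are invented for illustration; it's just the standard softmax-with-temperature sampling idea:

        import math
        import random

        def sample_next(scores, temperature):
            # Softmax with temperature: low values sharpen the distribution
            # (nearly always the top word); high values flatten it.
            weights = {w: math.exp(s / temperature) for w, s in scores.items()}
            total = sum(weights.values())
            r = random.uniform(0.0, total)
            for word, weight in weights.items():
                r -= weight
                if r <= 0:
                    return word
            return word  # guard against floating-point rounding

        # Hypothetical next-word scores after "a falling body traces a ..."
        scores = {"parabola": 2.0, "circle": 0.3, "banana": -1.5}
        print(sample_next(scores, temperature=0.2))  # almost always "parabola"
        print(sample_next(scores, temperature=2.0))  # alternatives appear often

    At a low temperature the model nearly always emits its most probable word; at a high one it wanders, which is part of why the same prompt can yield different answers.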
  • ChatGPT might quit the EU rather than comply with regulations

    When first installed, the ChatGPT app highlights the warning that it may return false information.
    The May 10 Lancet published the following article, along with a detailed supplemental addendum showing the interaction with ChatGPT, which anyone can reproduce to verify the results.
    The dangers of using large language models for peer review

    ChatGPT responded with totally made-up material, sounding quite authoritative.

    Banning ChatGPT would seem a good idea. It would give alternative AI systems that can actually tell the truth an opportunity to be developed -- if that can be done.

    Large Language Models like GPT-4 are inherently lying machines. 