larryjw

About

Username: larryjw
Joined:
Visits: 199
Last Active:
Roles: member
Points: 3,338
Badges: 2
Posts: 1,040
  • ChatGPT might quit the EU rather than comply with regulations

    zimmie said:
    larryjw said:
    When first installed, the ChatGPT app highlights the warning that it may return false information.
    The May 10 Lancet published the following article, along with a detailed supplemental addendum showing the interaction with ChatGPT, which anyone can then reproduce to verify the results.
    The dangers of using large language models for peer review

    ChatGPT responded with totally made up material, sounding quite authoritative. 

    Banning ChatGPT would seem a good idea. It would give alternative AI systems which can actually tell the truth an opportunity to be developed -- if that can be done. 

    Large Language Models like GPT-4 are inherently lying machines. 
    GPT isn't strictly lying; you just aren't asking it the question you think you're asking. People think they're asking the question they put in the prompt. What GPT actually does is produce something which looks like what an expert would say in response to the prompt. For example, experts cite stuff when they're asked about legal or medical matters, so the response should have citations. If there are no high-quality real citations to include, it makes some up which look plausible.

    This behavior becomes much more obvious when you realize GPT (and Bard, and so on) will believe anything you tell it is at the other end of something which looks like a URL. The domain doesn't have to resolve, and the path doesn't have to be valid. An expert would load the page and potentially respond with new information based on what they read there. GPT can't actually load a URL or read, but it can fake a response which looks like what an expert who did all that might say.

    People have been doing things like telling Bard that Google discontinued Bard months ago, and it responds with something like "Oh. I'm sorry, according to the link you provided, you're correct. Bard was discontinued in March."
    It isn't strictly lying, because lying requires intent, and a neural network neither thinks nor has intent. It doesn't know anything, and it hasn't been designed to know when it doesn't know; it just generates the most likely set of words that follows the previous set of words it has been working with. GPT-4 is a neural network with about 90 layers and about a trillion parameters. (Galileo only needed to estimate 2 parameters to determine that the force of gravity causes a falling body to trace a parabola.)
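
    One way to make that Galileo aside concrete (my own sketch of the standard projectile derivation, not something from the original post): with a horizontal launch speed v and gravitational acceleration g, the trajectory is x = v t and y = -(1/2) g t^2, so y = -(g / (2 v^2)) x^2 -- a parabola determined entirely by the two parameters v and g.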

    GPT-4 (or 3, or 3.5) doesn't believe anything. It just constructs an enormous set of word associations (n-grams), assigning conditional probabilities to a word given the previous words, and then generates the next word subject to a tunable parameter called the temperature, which adds some randomness so it doesn't always choose the most probable word. 
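
    A minimal sketch of that sampling step (my own toy illustration with made-up words and probabilities, not anything from OpenAI): given conditional probabilities for the next word, a temperature near zero makes the generator almost always pick the most probable word, while a higher temperature flattens the distribution and adds randomness.

        import math
        import random

        # Toy conditional distribution over the next word given some context.
        # The words and probabilities are invented purely for illustration.
        next_word_probs = {
            "next": 0.55,
            "most": 0.25,
            "same": 0.15,
            "wrong": 0.05,
        }

        def sample_next_word(probs, temperature=1.0):
            """Pick a next word after rescaling the distribution by temperature.

            temperature -> 0 approaches greedy decoding (always the most
            probable word); temperature > 1 flattens the distribution.
            """
            words = list(probs)
            # Divide the log-probabilities by the temperature, then
            # renormalize with a softmax.
            logits = [math.log(probs[w]) / temperature for w in words]
            peak = max(logits)
            weights = [math.exp(l - peak) for l in logits]
            total = sum(weights)
            return random.choices(words, weights=[w / total for w in weights], k=1)[0]

        # Near-greedy sampling versus noticeably random sampling.
        print(sample_next_word(next_word_probs, temperature=0.1))
        print(sample_next_word(next_word_probs, temperature=2.0))

    At temperature 0.1 the sampler is effectively greedy; at 2.0 even the least likely word gets picked fairly often, which is the deliberate randomness described above.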
  • Biden administration: Apple & Broadcom should quit stalling and pay Caltech $1.1 billion i...

    What the heck is up with this administration???

    This is a PRIVATE SECTOR DISPUTE! 

    Let the justice system play out. It’s not like the Biden family hasn’t been doing much stalling of their own. 

    There is an order and process to things. 

    The Biden admin needs to remember that this is America and not some communist dictatorship. 
    There is nothing private about this case. It's about the validity of patents, which I might remind you is a wholly Federal issue, enshrined in the Constitution itself: Article I, Section 8.
  • ChatGPT might quit the EU rather than comply with regulations

    When first installed, the ChatGPT app highlights the warning that it may return false information.
    The May 10 Lancet published the following article, along with a detailed supplemental addendum showing the interaction with ChatGPT, which anyone can then reproduce to verify the results.
    The dangers of using large language models for peer review

    ChatGPT responded with totally made up material, sounding quite authoritative. 

    Banning ChatGPT would seem a good idea. It would give alternative AI systems which can actually tell the truth an opportunity to be developed -- if that can be done. 

    Large Language Models like GPT-4 are inherently lying machines. 
  • Biden administration: Apple & Broadcom should quit stalling and pay Caltech $1.1 billion i...

    Accepting the above at face value, the validity of the Caltech patents has not been adjudicated. The US and Caltech argument is purely procedural and not substantive. 

    I, for one, don't want patents to exist which violate basic patent law concepts such as prior art. 
  • EU's drafted AI policies aim to keep Big Tech in check

    Unlike the US, EU law doesn't treat money as equivalent to speech. Does that stifle innovation? Perhaps. But perhaps they're unwilling to accept innovation just for its own sake. 