larryjw

About
- Username: larryjw
- Joined:
- Visits: 191
- Last Active:
- Roles: member
- Points: 3,299
- Badges: 2
- Posts: 1,031

Reactions
Apple's headset will need killer apps & services to be successful
I have no imagination whatsoever. But I've seen quite imaginative AR apps for the iPhone and iPad.
Games: Erect an AR scene on a real table and show "people and jets" landing. Walk around the scene, seeing these objects in 3D.
Astronomy: Render the nearby galaxies viewable in the night sky, adjusted for wavelength and exposure. I've seen renderings of a galaxy compared to the full moon -- equal in size.
Hand lens: I have a high-quality 10X Belomo loupe that lets me see the details of living plants and insects. Based on university lectures.
It takes more expertise than imagination. There is plenty of expertise available.
ChatGPT might quit the EU rather than comply with regulations
danox said: 1. Aside from Germany and Holland, all practical industrial/tech things will originate and be built somewhere outside the EU.
2. Silicon Valley doesn't have to worry about anything competitive in tech (AI) coming from Europe because of interference from the EU.
ChatGPT might quit the EU rather than comply with regulations
zimmie said: larryjw said: When first installed, the ChatGPT app highlights the warning that it may return false information.
The May 10 issue of The Lancet published the following article, along with a detailed supplemental addendum showing the interaction with ChatGPT, which anyone can reproduce to verify the results.
The dangers of using large language models for peer review
ChatGPT responded with totally made up material, sounding quite authoritative.
Banning ChatGPT would seem a good idea. It would give alternative AI systems that can actually tell the truth an opportunity to be developed -- if that can be done.
Large Language Models like GPT-4 are inherently lying machines.
This behavior becomes much more obvious when you realize GPT (and Bard, and so on) will believe anything you tell it is at the other end of something which looks like a URL. The domain doesn't have to resolve, and the path doesn't have to be valid. An expert would load the page and potentially respond with new information based on what they read there. GPT can't actually load a URL or read, but it can fake a response which looks like what an expert who did all that might say.
People have been doing things like telling Bard that Google discontinued Bard months ago, and it responds with something like "Oh. I'm sorry, according to the link you provided, you're correct. Bard was discontinued in March."
GPT-4 (or 3 or 3.5) doesn't believe anything. It just constructs an enormous set of word associations (n-grams), assigning conditional probabilities to a word given the previous words, and then generates the next word subject to an arbitrary parameter called the temperature, which adds some randomness so it doesn't always choose the most probable word.
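The mechanism described above -- conditional next-word probabilities plus temperature-controlled sampling -- can be sketched with a toy bigram model. This is an illustrative simplification only: real models like GPT use learned neural networks over subword tokens, not raw bigram counts, and the corpus here is invented for the example.

```python
import math
import random
from collections import defaultdict

# Toy corpus; a bigram model counts how often each word follows another.
corpus = "the cat sat on the mat and the cat ran".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev, temperature=1.0):
    """Sample the next word given the previous one.

    Temperature rescales the conditional probabilities: values below 1.0
    sharpen the distribution toward the most likely word, values above
    1.0 flatten it, adding randomness.
    """
    words = list(counts[prev])
    probs = [counts[prev][w] / sum(counts[prev].values()) for w in words]
    # Apply temperature in log space, then renormalize.
    scaled = [math.exp(math.log(p) / temperature) for p in probs]
    z = sum(scaled)
    weights = [s / z for s in scaled]
    return random.choices(words, weights=weights)[0]

# Near-zero temperature makes the most frequent continuation ("cat",
# which follows "the" twice out of three times) almost certain.
print(next_word("the", temperature=0.01))
```

At temperature 1.0 the model samples in proportion to the observed frequencies, so after "cat" it picks "sat" or "ran" with equal probability; nothing in the procedure checks whether the generated sequence is true, which is the point the post is making.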
Biden administration: Apple & Broadcom should quit stalling and pay Caltech $1.1 billion i...
9secondkox2 said: What the heck is up with this administration???
This is a PRIVATE SECTOR DISPUTE! Let the justice system play out. It’s not like the Biden family hasn’t been doing much stalling of their own. There is an order and process to things. The Biden admin needs to remember that this is America and not some communist dictatorship.
ChatGPT might quit the EU rather than comply with regulations
When first installed, the ChatGPT app highlights the warning that it may return false information.
The May 10 issue of The Lancet published the following article, along with a detailed supplemental addendum showing the interaction with ChatGPT, which anyone can reproduce to verify the results.
The dangers of using large language models for peer review
ChatGPT responded with totally made up material, sounding quite authoritative.
Banning ChatGPT would seem a good idea. It would give alternative AI systems that can actually tell the truth an opportunity to be developed -- if that can be done.
Large Language Models like GPT-4 are inherently lying machines.