ChatGPT might quit the EU rather than comply with regulations
The CEO of ChatGPT developer OpenAI says the firm may pull out of the European Union if current draft legislation is not toned down.

ChatGPT
ChatGPT seems to have become instantly omnipresent in 2023, but it may not stick around in Europe. According to Reuters, CEO Sam Altman says the company does not plan to leave, but may need to.
"The current draft of the EU AI Act would be over-regulating, but we have heard it's going to get pulled back," Altman told Reuters. "They are still talking about it."
EU-wide AI legislation has been in development for years. In 2020, representatives from Apple, Google, and Facebook lobbied the EU over its AI regulation plans.
Speaking about the latest proposals at an industry event in London, Altman said that OpenAI would try to comply with regulation if it can. But as currently drafted, the planned legislation presents greater hurdles for what are called general purpose AI systems -- such as ChatGPT.
"There's so much they could do like changing the definition of general purpose AI systems," Altman told Reuters. "There's a lot of things that could be done."
Even before the current concerns about ChatGPT -- concerns that prompted Apple to ban its use on company devices -- the EU had been ahead of the game in looking to ensure that AI can be trusted.
"On artificial intelligence, trust is a must, not a nice-to-have," said Margrethe Vestager, the commission's digital chief, in 2021. "With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted."
A ChatGPT app for iPhone has recently been released, and is now available in an increasing number of countries and territories.

Comments
It's far better to try and lay foundational controls to guide progress than have a free-for-all and try to clean up the mess later.
As it is, this is still draft legislation and will see modifications as it progresses.
The May 10 issue of The Lancet published the following article, along with a detailed supplemental addendum showing the interaction with ChatGPT, which anyone can reproduce to verify the results.
The dangers of using large language models for peer review
ChatGPT responded with totally made-up material, sounding quite authoritative.
Banning ChatGPT would seem a good idea. It would give alternative AI systems which actually can tell the truth an opportunity to be developed -- if that can be done.
Large Language Models like GPT-4 are inherently lying machines.
2. Silicon Valley doesn’t have to worry about anything competitive in tech (AI) coming from Europe because of interference from the EU.
This behavior becomes much more obvious when you realize GPT (and Bard, and so on) will "believe" anything you tell it is at the other end of something which looks like a URL. The domain doesn't have to resolve, and the path doesn't have to be valid. An expert would load the page and potentially respond with new information based on what they read there. GPT can't actually load a URL or read, but it can fake a response which looks like what an expert who did all that might say.
People have been doing things like telling Bard that Google discontinued Bard months ago, and it responds with something like "Oh. I'm sorry, according to the link you provided, you're correct. Bard was discontinued in March."
Edit: Put "believe" in quotes. As Larryjw pointed out, the model doesn't actually know anything, so it doesn't actually believe anything. Ultimately my point is people think of GPT as being built to produce correct output, when it's actually built to produce plausibly formatted output.
Don't you even try telling me
That you don't want me to end this way
From the song 'Go Now' by the Moody Blues.
Go ChatGPT, you won't be missed. Most people don't know who you are and don't care.
GPT-4 (or 3 or 3.5) doesn't believe anything. It just constructs an enormous set of word associations (n-grams), assigning conditional probabilities to each word given the previous words, and then generates the next word subject to an arbitrary parameter called the temperature, which adds some randomness so it doesn't always choose the most probable word.
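For anyone curious what that temperature parameter does, here is a minimal sketch in Python. The vocabulary, scores, and function name are made-up assumptions for illustration only -- this is not OpenAI's actual code -- but it shows the general idea of rescaling scores before sampling the next word.

# Minimal sketch of temperature-based next-word sampling over a toy,
# hand-made score table (not a real language model).
import numpy as np

def sample_next_word(logits, temperature=1.0, rng=None):
    # Low temperature favors the highest-scored word; higher temperature
    # flattens the distribution and adds randomness.
    rng = rng or np.random.default_rng()
    scaled = np.array(logits, dtype=float) / max(temperature, 1e-6)
    probs = np.exp(scaled - scaled.max())   # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Made-up "conditional probabilities" for the word following "the EU":
vocab  = ["regulates", "bans", "welcomes", "ignores"]
logits = [2.0, 1.0, 0.5, -1.0]              # illustrative scores

for t in (0.2, 1.0, 2.0):
    picks = [vocab[sample_next_word(logits, t)] for _ in range(10)]
    print(f"temperature={t}: {picks}")

At a low temperature the sampler nearly always picks the highest-scored word; raising the temperature makes the less probable words show up more often, which is the randomness described above.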
Seems like the motivation is always more related to anti-Americanism than clear dispassionate thinking.
Any technology whose foundation is based on theft of intellectual property should be vehemently boycotted, as I am boycotting it.
Please LET the door hit you on the way out of the EU!