Paid Apple Intelligence features won't come until at least 2027


Comments

  • Reply 21 of 39
eightzero Posts: 3,128 member
    dutchlord said:
    DAalseth said:
    My only fear with this is that Apple might say Pages and Numbers are now enhanced with AI features and so make them subscription as well. MS has made a lot of money with Office365, why not do the same with iWork. 
    Who uses iWork anyway? Totally unusable in a corporate environment. 
    What a painfully myopic comment. 

There are millions of consumers who buy Macs and iPads for personal use and don't need Office. There are millions of students with Apple products who really don't need Office.

Also, Apple uses Keynote for their keynote presentations. And Apple is... checks notes... a corporation. So there is at least one corporate environment that uses iWork, and it doesn't seem to hold them back in terms of success.
    I left the corporate work world some time back, and my last day as an employee was the very last time in my life I will have ever used MS Office. Not a day went by that I didn't swear at Word. What a POS. I was forced to use it by my employer's IT department. All the executives, top paid people, however, had Macs. They really didn't need IT support.

I am a completely Microsoft-free household. That makes me very, very happy. I find Numbers and Pages (heck, often TextEdit) very much sufficient. They would have been for my employment too, but...well...we peasants got the cheapest thing they could find.

    Riddance.
  • Reply 22 of 39
jimh2 Posts: 654 member
2 years is an eternity in the computer world. There is no way this is accurate, as every other company will be looking at Apple in their rearview mirror and be on the 2nd or 3rd generation of their AI offering.
  • Reply 23 of 39
jdw Posts: 1,408 member
    blastdoor said:
    jdw said:
    ChatGPT4o, is...
     TL;DR
    I often forget Gen-Z has that handicap. :smiley: 

    For my fellow Gen-Xers out there, that cryptic acronym today's youth enjoy using (because their fingers lack the dexterity to type out a complete sentence) is:

    Too Long; Didn't Read

I muse over the choice of a semi-colon there. But hey... is the use of "TL;DR" really the best choice in a forum that caters to people who know how to both read and write more than a single paragraph? (The answer is "no" for our younger readers who may not know what a rhetorical question is.)

    If Gen-Z refuses to read, God help so-called "Gen-Alpha" and beyond.  

    Or maybe it will be the man-made digital god that will help them.

It very well could be that Gen-Alpha won't even need to read or write because the AI of the future will be able to read-then-summarize all text for them, injecting those magical acronym-laced bits of brevity straight into their brains, then reading their minds to know what they want. Eventually, AI won't even need to read the contents of the ever-contracting human mind, opting instead to read facial expressions. One brief smirk tells AI everything it needs!

    The future looks bright, folks!
  • Reply 24 of 39
mattinoz Posts: 2,440 member
jdw said:
...
Oh, I think that's a bit unfair. I mean, in Gen-X we sure have our share of "those people".
     
  • Reply 25 of 39
jdw said:
...
Gen X here. I also passed on your initial post due to the length. I started to read it but realized its length was more about you feeling self-important than any actual substance. Your condescending follow-up with sweeping generalizations just proves the point. So many words and no substance.
  • Reply 26 of 39
blastdoor Posts: 3,520 member
jdw said:
...
    Gen X here. I also passed on your initial post due to the length. I started to read it but realized its length was more about you feeling self important than any actual substance. Your condescending follow up with sweeping generalizations just really proves the point. So many words and no substance.
    Bingo. I'm also Gen X. And TL;DR has been around since the beginning of this century, probably coined by a Gen Xer. 

    The irony is that if an LLM spewed out such an excessively verbose, substance-lite post it would be rightly and widely criticized for it. 
  • Reply 27 of 39
Xed Posts: 2,800 member
    DAalseth said:
    My only fear with this is that Apple might say Pages and Numbers are now enhanced with AI features and so make them subscription as well. MS has made a lot of money with Office365, why not do the same with iWork. 
I doubt this will happen. Pages, Keynote, Numbers, iMovie, Photos and GarageBand are all part of Apple's whole-widget strategy. You can buy a Mac and be productive right out of the box. No need to buy anything else. And if you need more advanced features, there are other apps you can buy. They might come up with AI features that you have to subscribe to get, but charging for the actual app? Totally not going to happen.
    I agree with you. It would be folly for Apple to charge for “iWork” apps. While I prefer to use Numbers and Pages over Excel and Word, Apple simply doesn’t have any power behind those apps to charge for them outright. They tried that. It didn’t work. They’d have to invest a lot more time and money to make them more competitive if they wanted to try that. Honestly, I wish they would make them more competitive but I don’t think it’s a focus for the company any more than Preview is at this point.
  • Reply 28 of 39
Xed Posts: 2,800 member
    dewme said:
    eightzero said:
    I see we now have entered the future where we predict the price of a product before anyone knows what it does or what it is for. 
    And of course condemn one way or the other like the prediction is reality.
Spot on from both of you. I'm still at a place where I don't truly understand what Apple Intelligence really is on a personal level. Why speculate about subtle nuances like how it may possibly be monetized by Apple until we truly understand what it is or what value it actually brings to our personal lives?

    It may turn out to be wildly life changing, a total flop, or something in-between. I won’t know until I’ve tried it. There have been plenty of “amazing new features” launched by Apple over the years that turned out to be totally meh. 

    I think Apple has pretty much refined all of the basic features of all of their premier products to the point where they mostly “just work.” They still have work to do in some areas. Some things have turned out to be dangerously too good, e.g., as evidenced by a large proportion of smartphone users being psychologically addicted to their smartphones at the expense of properly dealing with the physical things and people in their non-virtual lives. At this point the added value of new features is very incremental despite all of the hype behind them. 

    Will Apple Intelligence fundamentally change the way we use Apple products or will it be just another layer of features on top of an already tall stack of features? I have no idea. So far I’ve been impressed with ChatGPT being a “better search and query” tool that may occasionally return bogus answers. 

Would I pay to use ChatGPT in its current form? Probably not, if only because there are free tools that can provide most of the things I use ChatGPT for. Convenient? Yes. Essential? Not yet. I still have a reasonably functioning brain and can generate most of the things I need to generate using the grey blob that sits between my ears.

    This article may be wildly speculative, but it does illustrate a real concern about how AI in general will pay for itself. Will it be used to raise the quest for answers and knowledge for everyone or will it follow the prevailing capitalist tendencies in countries like the US to only provide the very best benefits to those who can afford to pay, in a similar manner to US healthcare?

    It will be very interesting to see which direction Apple goes with Apple Intelligence. Raise all boats or pay to play?
I have access to Apple Intelligence on my Mac and I have yet to use it, TBH. I do see that Mail is summarizing articles, but the couple of times I've clicked it, it summarized the wrong part of the email.

Maybe if I were writing one I'd see about it cleaning it up or changing the tone for me, but with so many decades of practice doing it on my own I'm not sure how useful that will be, even if I think it's a neat feature.

I have used ChatGPT a lot since it came out. When trying to find answers to odd tech issues I do find that it zeroes in on the resolution faster than if I just searched MS' support files or scoured tech forums. Plus, it keeps a nice running history of the conversation in a format that is better than a browser history, so going back to an earlier question and answer is a nicer experience than guessing which website I was on to re-read an answer.
  • Reply 29 of 39
    jdw said:
ChatGPT4o is the most brain-dead stupid thing to come along in quite a while. Sure, it does a few things surprisingly well, but it makes enormous mistakes of epic proportions every single time I use it. And when I attempt to correct it, it foolishly apologizes and then rephrases its stupid errors. When I tell it repeatedly that it is in error, it repeatedly apologizes but never learns.

    When I use ChatGPT via Bing (which I often do after my freeloader time limit for GPT4o runs out), it too makes mistakes. And when I get pissed off at it and give it a piece of my mind, silly Bing suggests I change the topic and a new session begins.  Totally hilarious that Bing is protecting ChatGPT!

    Every single time I use ChatGPT4o in Chrome (which is the most reliable browser for it), at times when it gives me a clickable link, the link will never open when I click on it.  That forces me to tell it to give me links in plain text.  Why plain text?  So I can copy/paste it into a browser's address bar myself.  That's a work-around the "can't click the text link" bug, but quite nearly 100% of the time, when I finally am able to open the link, it yields a 404 or totally unrelated info.  It's maddening!

I am also not inclined to trust ChatGPT because it lies. Because of its blatant lies, I almost always ask it for links to its source info. And that is how I know the links are bogus. I sometimes believe what it tells me is true, but without a link to source info, I cannot trust it 100%. I become especially doubtful when it gives me a link that leads to totally unrelated topics.

I mainly want to use ChatGPT to search the web faster and easier than I can Google something, asking it to check multiple sources. Sadly, most of the time GPT4o misses info I know exists. I often test ChatGPT based on what I know is out there on the web and is easily found by Googling. Oftentimes it misses that info when it does its own searches, and then comes back and tells me something wrong or incomplete.

    I truly hate ChatGPT a lot of the time because it lies so often.  For example, I've recently been comparing OLED TVs and soundbars.  It tells me certain specifications that simply are not true.  When I point out its error, it apologizes and deletes the single error line, and then just repeats the rest it told me before.

    Why do I even torture myself by continuing to use the stupid thing?  Because it sometimes does a decent job in very specific situations like summarizing text or rephrasing.  I prefer that use case because I am good enough in the English language to know if what it tells me is good or not.  But in other cases where I am looking for facts I don't know to be true, it often lies; and even when it doesn't lie, I try to get links to source info, but those links never work.

    So it's a real love-hate relationship, but more hate than love.

    It is totally and utterly laughable that governments and people around the world are afraid of AI.  Yeah right!  Maybe 100, 200 or perhaps 300 years from now it might be worthy of such fears, but at the moment it's not that far ahead of brain dead Siri.  It's no different than the fake promise of "self driving cars." That's not going to happen in my lifetime.  Truly autonomous driving means no human intervention is EVER needed, and you can drive in any situation, like mountain driving in the snow with sunlight reflections hitting the snow and blinding you occasionally.  Or driving on dark roads that aren't marked with paint.  Or driving off-road in the dirt.  Nope.  No matter what these car companies say, what they have now is little more than a joke.  It gives you a great first impression, but using the tech for a while shows how ridiculous it is.  

The PROMISE of AI is great, but it's far, far in the future before we can sit back and experience true "intelligence" that's "artificial." What we have now in the world of AI is barely usable. It's more entertainment than anything else.

I write all this to say that if Apple CHARGES MONEY for such digital stupidity, I certainly will not be lining up for a subscription. They have a LONG way to go before it's worthy of dedicated fees and special charges. We're very much in the R&D stage right now. I say this in the hope of something far superior to what we have now, while at the same time, I am a realist too. It's not great now, and it probably still won't be that great 10 to 20 years hence. We need a real technological breakthrough to make a huge leap in usability.

    It doesn't have to be correct to be dangerous.

    It doesn't have to be correct for corporations to replace humans with LLMs.  It just has to be cheaper.

    It doesn't have to be correct to be used for election interference.  It just has to repeat the lies the politicians want.

Was your rant out of an LLM? It's only sort of triggering my "machine generated" sense, but there are a few bits that just seem a bit... off for a human.
  • Reply 30 of 39
jdw said:
...
Wow, 3 different posters (who do not belong to the "defend Apple at any cost" or "ultra-aggressive" or even "aggressive" groups in this forum) attacking @jdw for sharing his experience with ChatGPT in what is apparently a very lengthy post. Just unbelievable. If it is too long for you to read, why can't you guys just ignore it and move on, instead of posting TL;DR and further attacking him for his response?
  • Reply 31 of 39
blastdoor Posts: 3,520 member
jdw said:
...
    Wow, 3 different posters (who do not belong to the "defend Apple at any cost" or "ultra-aggressive" or even "aggressive" groups in this forum) attacking @jdw for sharing his experince with ChatGPT in what is apparently a very lengthy post. Just unbelievable. If it is too long for you to read, why can't you guys just ignore it and move on, instead of posting TL;DR and further attacking him for his response?
     :'( 
  • Reply 32 of 39
jdw Posts: 1,408 member
Wow, 3 different posters... attacking @jdw for sharing his experience with ChatGPT... why can't you guys just ignore it and move on, instead of posting TL;DR and further attacking him for his response?
    I sincerely appreciate your very kind words.  Thank you.

    But there is a bright side to everything.

    First, the replies enable me to identify a few more Gen-Xers in this forum, by their own admission.  It should be even more interesting to read their future compositions in this forum as a result of that newfound knowledge.

    Second, I agree with mattinoz who said this...
    mattinoz said:
Oh, I think that's a bit unfair. I mean, in Gen-X we sure have our share of "those people".
    His observation of me being "a bit unfair" is sound.  I was being deliberately provocative.  Why?  Because my doing so tends to result in one of two things:

    1. Nobody says anything at all in response. (i.e., general apathy, a revelation that nobody cares)
    2. Key players get feathers ruffled and then suddenly READ what I wrote in order to formulate a response.

It seems that (2) occurred, which has me rather pleased. No doubt LeVar Burton would be pleased as well. :smiley: (Yes, that is a bit cryptic, but people my age should know what that means. This is for the rest of you who may not understand.)

    The point here is that although younger generations have a greater problem with reading more than one sentence compared to older generations, I know full well from my elder cousins that the reading handicap can apply to Gen-X too.  Modern ways of reading and writing can corrupt any of us, regardless of age.

Even so, I can't help but take note of the fact we all are commenting under articles that are quite lengthy. A good number of AI articles tend to be longer than what I write, and yet, I can only assume that most people here do read the articles. However, we don't see "TL;DR" remarks under every article, seeking to chastise the article for its length. That means people are being SELECTIVE about what they read, which also means their use of TL;DR isn't correct. It's probably more like WA;DR (Wrong Author; Didn't Read). Quite hilarious, really.

Anyway, the most intriguing response of all came from Victor...
Was your rant out of an LLM? It's only sort of triggering my "machine generated" sense, but there are a few bits that just seem a bit... off for a human.

I actually take that as a compliment. Thank you, Victor.

I have an older female cousin who has nicknamed me "Mr. Data" since the '80s. I always thought that to be a bit amusing because I rather took to Spock more than Data. But having a logical mind is always the goal. A lot of the time, our greatest enemy is emotion, and you get to see a lot of that in forums like this.

    All said, great conversation, folks.  I wish all of you a great week!
    edited August 13 muthuk_vanalingam
  • Reply 33 of 39
    blastdoor Posts: 3,520 member
    Was your rant out of a LLM?  It's only sort of triggering my "machine generated" sense, but there's a few bits that just seem a bit... off for a human.
    I wondered about this too. So I asked Microsoft Copilot (aka GPT4) and it had this to say: 

    Thank you for sharing this passage. Based on the content and style of the text, it appears to be written by a human rather than an AI. Here are a few reasons why:

    1. Personal Experience: The text contains personal experiences and emotions, which are typically difficult for an AI to generate authentically. For example, the author expresses frustration, mentions specific use cases, and refers to their own actions and reactions.

    2. Consistent Topic: The text maintains a consistent topic throughout, discussing the author’s experiences and opinions about AI. This is in contrast to AI-generated text, which can often drift off-topic.

    3. Specific References: The text contains specific references to products, companies, and technologies. While an AI could generate these references, the way they’re used here - to support the author’s argument and share personal experiences - is more characteristic of human writing.

    4. Opinions and Predictions: The text contains opinions and predictions about the future of AI. These show a level of subjective reasoning and foresight that current AI models typically do not possess.

    5. Complex Sentiment: The text expresses a complex sentiment towards AI - a mix of frustration, disappointment, but also recognition of its potential. This nuanced view is more typical of human writing.

    Remember, while these points suggest the text is likely human-written, it’s not a definitive conclusion. AI is continually evolving and improving, and the lines between human and AI-generated text can sometimes blur. But based on current AI capabilities, this text appears to be human-written. I hope this analysis helps!




    edited August 14 avon b7
  • Reply 34 of 39
    Reply 33 was a very good argument in ChatGPT-4's favor. That is definitely not the response that you would get from either Bing or Google Search.
  • Reply 35 of 39
    jdw said:
    blastdoor said:
    jdw said:
    ChatGPT4o, is...
     TL;DR
    I often forget Gen-Z has that handicap. :smiley: 

    For my fellow Gen-Xers out there, that cryptic acronym today's youth enjoy using (because their fingers lack the dexterity to type out a complete sentence) is:

    Too Long; Didn't Read

    I muse over the choice of a semicolon there.  But hey... is the use of "TL;DR" really the best choice in a forum that caters to people who know how to both read and write more than a single paragraph?  (The answer is "no," for our younger readers who may not know what a rhetorical question is.)

    If Gen-Z refuses to read, God help so-called "Gen-Alpha" and beyond.  

    Or maybe it will be the man-made digital god that will help them.

    It very well could be that Gen-Alpha won't even need to read or write because the AI of the future will be able to read-then-summarize all text for them, injecting those magical acronym-laced bits of brevity straight into their brains, then reading their minds to know what they want. Eventually, AI won't even need to read the contents of the ever-contracting human mind, opting instead to read facial expressions.  One brief smirk tells AI everything you need!

    The future looks bright, folks!
    Thank you, thank you, thank you, thank you. I had no fucking clue whatsoever what TL;DR meant. One poster mentioned that it’s been around for decades, but it is certainly not even close to as common as lol. I never use abbreviations in text messages and I hate reading them. I was also wondering about the semicolon, but from your post, it seems the semicolon is not standard practice.

    Even though I thought this post was spot on, I definitely didn’t agree with your original post. It looks like you use ChatGPT-4 in a different way than I do. I would never use ChatGPT for specs on a TV, for example. I would either go to the manufacturer’s website or Wikipedia. I find Wikipedia to be extremely useful, and use it on a daily basis.

    Also, you say that you hate ChatGPT-4, but it is absolutely undeniable that ChatGPT gets it right far more often than a normal search engine while also getting you those results faster. It is certainly a step in the right direction. I just feel like you are asking it questions that are better answered by other means. It sounds like you want to use ChatGPT for everything, when it’s not really designed for everything.
    williamlondon
  • Reply 36 of 39
    blastdoor said:
    Was your rant out of a LLM?  It's only sort of triggering my "machine generated" sense, but there's a few bits that just seem a bit... off for a human.
    I wondered about this too. So I asked Microsoft Copilot (aka GPT4) and it had this to say: 

    …
    I just tried the same experiment. I have actually never tried Copilot before, so I decided to give it a shot by asking a very complicated question that would be impossible for either Bing or Google search to answer. My question was: "Why is it that the F1 through F12 function keys on the Keychron keyboard function just like the Apple keyboard, but the fn key on the Keychron keyboard does not function like the fn key on the Apple keyboard?" The response I got from Copilot was "Attempting to reconnect." Now I'm starting to understand jdw's hate.

    So then I asked the same question in the ChatGPT 3.5 app. The answer I received was very complicated, but I wouldn't call it lies. Nevertheless, I was not satisfied with the answer, so I asked it a different way: "Why does Apple not allow the fn key on 3rd party keyboards to function exactly like it does on Apple designed keyboards?" Well, I didn't like that answer either. Again, I wouldn't call the answer lies, but it was another complicated answer and basically a bunch of guesses. ChatGPT thinks it is mostly because Apple wants customers to buy their specific keyboards.

    Well, that answer did fill me with rage. If that is really true, then that would absolutely be monopolistic behavior. I don't know if it's true or not, because ChatGPT was only guessing, but I wonder if the fn key on the MX Keys for Mac functions exactly the same as on an Apple keyboard. There is no reason to be angry at ChatGPT; probably only Apple would know the answer to that question. Apple should be giving out the API for the fn key functionality. I have always defended Apple against the antitrust and monopoly lawsuit stuff, but there is no excuse for Apple not to allow full functionality in 3rd party keyboards. 
  • Reply 37 of 39
    Xed Posts: 2,800 member
    blastdoor said:
    Was your rant out of a LLM?  It's only sort of triggering my "machine generated" sense, but there's a few bits that just seem a bit... off for a human.
    I wondered about this too. So I asked Microsoft Copilot (aka GPT4) and it had this to say: 

    …

    I just tried the same experiment. I have actually never tried Copilot before … There is no excuse for Apple to not allow full functionality in 3rd party keyboards. 
    On the one hand, you're right, these are all guesses, but all answers from these AI assistants can be classified as such, because they're simply doing their best to guess what you mean and are spitting out the info they've been trained on, which also includes how they've been trained to search for how others have answered the same or similar questions.

    Personally, I have been loving ChatGPT for dealing with Windows Server issues. First, it seems impossible to get anyone from MS on the phone to help you, even if you are willing to pay for technical support. Second, these are questions that have probably been asked a million times and are likely answered thoroughly in MS's tech documents. I could scour the internet reading those, but ChatGPT has been pretty great at helping me cut through a great deal of unnecessary chatter to get to a solution quickly.

    What you're asking is a very complicated question, which you mention, and it may not be something that's asked often, so it can be misunderstood; being frustrated by the answer seems a bit excessive to me. I also want to mention the first part of your sentence, about the F1 to F12 keys working fine. For a human that reference would be fine, even required for context, but for ChatGPT it just seems like noise, though I could be wrong. 

    Here's how I'd reword your query and this is what I get from ChatGPT:
    "Why doesn't the 'fn' key on Apple keyboards work on 3rd-party keyboards?"

    ChatGPT said:

    The 'fn' key on Apple keyboards is specifically designed to interact with macOS and Apple's hardware, enabling functions like adjusting brightness, volume, and accessing special features like Mission Control or the Launchpad. When it comes to 3rd-party keyboards, the 'fn' key might not work the same way for several reasons:

    Key Mapping Differences: Apple uses a specific key mapping for its keyboards, including how the 'fn' key interacts with function keys (F1-F12). Third-party keyboards may not follow the same mapping, meaning the 'fn' key might not send the correct signals to macOS.

    Lack of macOS Integration: The firmware and software on 3rd-party keyboards may not be optimized for macOS. Apple's keyboards have firmware that interacts directly with macOS to control certain functions. Third-party keyboards might lack this level of integration, causing the 'fn' key to either not work or function differently.

    No Dedicated Support: Many 3rd-party keyboards are designed with a focus on Windows or generic systems. As a result, the 'fn' key may be designed to work with Windows-specific functions, leading to incompatibility or different behavior on macOS.

    Custom Software or Drivers: Some 3rd-party keyboards may require custom drivers or software to properly map the 'fn' key for macOS functions. Without these drivers, the 'fn' key may not work as intended.
    To potentially resolve this, you can try:
    • Remapping Keys: Use software like Karabiner-Elements to remap keys on your 3rd-party keyboard to mimic Apple's functionality.
    • Check for macOS Compatibility: Some 3rd-party keyboards offer macOS-specific layouts or drivers, which may improve 'fn' key functionality
    Even then, that's not how I usually "talk" to ChatGPT. If you ask it why questions it will focus on answering the why of it (or the reason), but if you ask it how questions it will focus on answering the how of it (or the manner in which it may occur).

    "How to make the 'fn' key on Apple keyboards work on 3rd-party keyboards?"

    ChatGPT said:

    To make the 'fn' key on an Apple keyboard work properly on a 3rd-party keyboard, you may need to adjust settings or use software to remap keys. Here's how you can approach this:

    1. Check for Compatibility and Firmware Updates
    • 3rd-Party Keyboard Firmware: Ensure your 3rd-party keyboard has the latest firmware installed. Some keyboards may offer macOS-specific updates or configurations that improve compatibility with macOS features, including the 'fn' key.
    • MacOS Settings: Go to System Settings > Keyboard > Keyboard Shortcuts and check if any function keys are customizable for your specific keyboard.
    2. Use Karabiner-Elements to Remap the Keys
    • Install Karabiner-Elements: This is a powerful tool that allows you to remap keys on your keyboard, including the 'fn' key.
    • Create a Remap: You can set up a custom profile to remap the 'fn' key to behave as it does on an Apple keyboard.
    1. Open Karabiner-Elements.
    2. Go to the “Simple Modifications” tab.
    3. Select your 3rd-party keyboard from the list.
    4. Add a new item to remap the 'fn' key to the desired function key or action.
    3. Adjust Function Key Behavior in System Preferences
    • Function Keys as Standard Function Keys: In System Settings > Keyboard, you can enable the option to "Use F1, F2, etc. keys as standard function keys." This might allow the 'fn' key to work more predictably with a 3rd-party keyboard.
    • Modifier Keys: Under the same menu, you can remap modifier keys (like Control, Option, Command) to better match your Apple keyboard’s layout.
    4. Test with Different USB Ports or Bluetooth Settings
    • USB Ports: Try plugging the keyboard into a different USB port or use a powered USB hub. Sometimes, power delivery or USB communication issues can affect key functionality.
    • Bluetooth Settings: If you're using a Bluetooth keyboard, unpair and re-pair it with your Mac to see if this resolves any connectivity or mapping issues.
    5. Third-Party Software or Drivers
    • Keyboard Specific Drivers: Some 3rd-party keyboards offer their own software or drivers for macOS that can allow you to remap the 'fn' key and other keys for better functionality.
    • Explore Third-Party Software: In addition to Karabiner-Elements, other key remapping tools or automation apps might be able to address the issue.
    By taking these steps, you should be able to get the 'fn' key on your Apple keyboard to work properly with a 3rd-party keyboard on macOS.
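    For those who'd rather edit the config file than click through the UI: Karabiner-Elements stores these mappings in ~/.config/karabiner/karabiner.json. Here's a minimal sketch that generates one "simple_modifications" entry of the kind step 2 above creates. Treat the schema and the key names ("right_option", "fn") as assumptions based on recent Karabiner versions; verify them against the file your installed copy writes, and use Karabiner's bundled EventViewer to see the real key names for your keyboard.

```python
import json

def simple_modification(from_key: str, to_key: str) -> dict:
    """Build one Karabiner-Elements 'simple_modifications' entry.

    The shape here (from/to, with 'to' as a list) follows recent
    karabiner.json files, but it is a sketch: check it against the
    file your installed Karabiner version actually produces.
    """
    return {"from": {"key_code": from_key}, "to": [{"key_code": to_key}]}

# Example: send 'fn' when the right Option key is pressed. Both key
# names are assumptions; confirm them in Karabiner's EventViewer.
entry = simple_modification("right_option", "fn")
print(json.dumps(entry, indent=2))
```

    The generated entry then goes inside the active profile's "simple_modifications" array in karabiner.json.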
  • Reply 38 of 39
    Yeah, I got pretty much the same answers as you listed but just didn’t copy them into this forum. It sounds like Apple isn’t making it very easy for 3rd party developers to duplicate the functionality of Apple keyboards. It really seems odd to me, though, because there are a lot of Apple specific keys that do work. F3 is Mission Control, for example, and F4 is Launchpad. The Option and Command keys work as normal.

    The fn key isn’t too critical, so I won’t worry about it. It mainly just makes it easier to bring up Emoji and Special Characters, but I can just use ctrl-cmd-space. Other than that, I really love my keyboard. I would have gotten the Banana switches, but they were out of stock, so I got the Baby Kangaroo 2 switches instead. I think it’s a wonderful experience. 
    williamlondon
  • Reply 39 of 39
    jdw Posts: 1,408 member
    I had no fucking clue whatsoever what TL;DR meant.
    I suspect you aren't the only one.  But outside common acronyms like FBI, CIA, and a handful of others that most English speakers (even outside the USA) know quite well, I really dislike acronyms in general, because people who use them foolishly assume everybody reading what they write is as hip as they are and knows the meaning.  I personally prefer to help people by NOT using that foolishness.  I want to write in a way that doesn't force people to Google my words because they are too cryptic for most people to know.

    I would never use ChatGPT for specs on a TV for example. I would either go to the manufacturer’s website or Wikipedia. 

    I actually did visit dozens of manufacturers' websites, and review sites, and forums, and Reddit, and Wikipedia, and countless other places.  But when you are comparing a lot of different TVs and Soundbars and seeking first-hand user reports, Googling it all yourself is very time consuming.

    The promise of AI like ChatGPT is that anyone could possibly use it to research things faster and more accurately than you could do yourself via Google.  But that promise doesn't live up to the hype in my own experience, and that's pretty much what my earlier posts were saying.  That goes far beyond a mere search for the right TV though.  

    I use ChatGPT4o daily and want to throw it out the window daily.  I see no improvements to it over time.  Every single time I ask for a link, or it simply provides me with one, I don't know if it's the Chrome browser or ChatGPT, but the link gets rendered as linked text (not a plain-text URL, in other words); and whenever I click the text link it provides, nothing opens.  It's infuriating!  That forces me to request links in plain text.  It complies, but then when I copy/paste into a browser, the link is 100% wrong every single time.  The domain is correct, but it leads me to either a 404 (not found) or to an article talking about something different than it should.  That is maddening!

    Why ask for links?  Because I know the info it gives me is filled with errors most of the time, so I demand it give me its source info so I can check that out myself.  But unless it gives me a usable link to that source info, my intent to verify the accuracy of the ChatGPT4o info becomes impossible.
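    One small way to take the pain out of that verification loop is to run the batch of links an assistant returns through a quick sanity check before clicking anything. This is just an illustrative sketch (the function names are my own invention); it only validates URL structure offline, so a link that passes can still lead to a 404 or the wrong article, and you'd still have to fetch each survivor to confirm it.

```python
from urllib.parse import urlparse

def looks_like_url(text: str) -> bool:
    """Cheap offline check that a string is a plausible http(s) URL."""
    parts = urlparse(text.strip())
    return parts.scheme in ("http", "https") and bool(parts.netloc)

def triage_links(candidates):
    """Split candidate links into plausible ones and obvious garbage."""
    plausible, garbage = [], []
    for c in candidates:
        (plausible if looks_like_url(c) else garbage).append(c)
    return plausible, garbage

plausible, garbage = triage_links([
    "https://appleinsider.com/articles/",       # well-formed URL
    "See the article at appleinsider dot com",  # chat filler, not a URL
])
```

    That only filters out the responses that aren't links at all; the wrong-article failure mode described above still requires a human (or a fetch) to catch.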

    On top of that, whenever it tells me bogus info that's located at a particular site it told me about, I then tell ChatGPT it is wrong and explain where it failed.  More often than not, it will apologize but then give me the same link again!  How stupid is that?  I've even tried to correct it up to 5 times, and it gives me the same incorrect info 5 times!  To me, that is brain dead.  I'm not seeing any intelligence at all.

    Oh sure, it will work with text blocks fabulously, shortening and summarizing.  But when you want to rely on it for web research assistance, it fails utterly.

    When I say I use it daily, I guess I can say that it's largely for my own personal research into the sheer stupidity of AI today.  I am fascinated about how many things it does so badly.  I guess I am sadomasochistic in that I keep coming back, hoping it will be improved over time and finally be able to help me.  But so far, that hasn't been true.

    I've even tried using it to modify or generate PIC assembly code (which I can write, being an electrical engineer). As you know, that has nothing whatsoever to do with researching TVs.  Even then it gets it wrong, making suggestions that are not true.

    I've asked it about historical matters pertaining to vintage Macs, and it gets a fair amount wrong.  I know, because I've been a Mac fan since my 128K in 1984, when I was 13, and I've had a Mac magazine subscription since the 80's, starting out with MacUser, which was later folded into Macworld, which I continue to subscribe to, to this day.  Obviously, I read AppleInsider too.  I have most books on Steve Jobs and Apple, and I enjoy reading (unlike some in this forum).  So I know my stuff, basically, and I know when ChatGPT4o is telling me historically inaccurate info.  What's sad is that I can Google something in a single search and get the right answer, yet ChatGPT failed to get that simple thing right.

    I could go on and on and on about all the topics I've tested using GPT4o, but some people would be bothered by that and write a reply back to me saying "TL;DR" or similar acronyms that only increase my blood pressure levels.

    All said, I am not impressed even though ChatGPT4o can do some things like text manipulations and translations quite brilliantly.  

    Let it also be known that I do NOT subscribe to the "you're holding it wrong" sentiment.  ChatGPT4o should NOT be useful only for highly restricted use cases.  It should be smarter than that.  But for now, it isn't.  Will it someday be more useful and accurate?  I sure hope so.  And I've written to OpenAI to give them my suggestions too.  I told them flat out that we should be allowed to contribute to its training.  There really can be a way to help train it while at the same time filtering out all the nuts who may try to trick and deceive it.  That's where "intelligence" comes into play.

    While my time with ChatGPT4o has largely been a frustrating experience, I am hopeful that Apple Intelligence will actually be useful and largely error free.  That won't be true when using the ChatGPT integration, but for everything else, I'm pretty excited about what I see.  I hope the reality lives up to the hype.