TheX_fr

About

Username: TheX_fr
Joined:
Visits: 1
Last Active:
Roles: member
Points: 26
Badges: 0
Posts: 2
  • M4 iMac vs 2019 Intel iMac compared: Five years makes a big difference

    I'm still stuck at home with three 27" iMacs, precisely because of that 27" screen… And several friends I know are in the exact same situation… We've been waiting almost four years for a 27" (or slightly larger, but not 32") version. We were all hoping to see it arrive a few months after the initial 24" release, and then waited desperately from keynote to keynote for the announcement.
    If I count all the businesses I know that are in the exact same situation, there are hundreds of them waiting for a replacement. To the point that it's still the iMac we see most often on TV news when they interview people at work…
    Come on, Apple! I think all 27" iMac owners still feel orphaned.
  • Apple's study proves that LLM-based AI models are flawed because they cannot reason

    iOSDevSWE said:
    The article lacks fact checking and details, like when the tests on OpenAI's models were conducted and which model was used. When I perform the request I get the following answer from ChatGPT-4o:

    Question: "Oliver picks 44 kiwis on Friday. Then he picks 58 kiwis on Saturday. On Sunday, he picks double the number of kiwis he did on Friday. Of the kiwis picked on Sunday, five of them were a bit smaller than average. How many kiwis does Oliver have?"

    Answer: "Let's break this down:

    • On Friday, Oliver picks 44 kiwis.
    • On Saturday, he picks 58 kiwis.
    • On Sunday, he picks double the number of kiwis he did on Friday, so he picks 44 × 2 = 88 kiwis.

    The total number of kiwis he picks is:

    44 (Friday) + 58 (Saturday) + 88 (Sunday) = 190 kiwis.

    So, Oliver has 190 kiwis in total. The fact that five of the kiwis picked on Sunday are smaller doesn't affect the total number."

    Perfect answer!
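
    As a sanity check, here is a minimal Python sketch of the same arithmetic (the detail about the five smaller kiwis is a distractor and simply has to be ignored, which is exactly what the quoted answer does):

    friday = 44
    saturday = 58
    sunday = 2 * friday      # double what he picked on Friday
    smaller_on_sunday = 5    # irrelevant: smaller kiwis still count

    total = friday + saturday + sunday
    print(total)             # 190
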
    I won't elaborate on our own research on LLM models, and I'm not criticizing them as such, since they already provide great help in day-to-day usage. So, sorry to the strong believers in this forum and everywhere else, but I can confirm that LLMs are just big statistical models, about as dumb as a sparse matrix with 300 billion columns.

    No need to go very far into scientific research and analysis to observe this. We asked ChatGPT-4o, the fully omniscient and omnipotent version according to OpenAI and Saint Altman, a very simple question that requires both logical reasoning and real knowledge of language… Here's the answer:

    "Sure! Here are seven popular expressions with the first word having 7 letters and the second word having 5 letters:

    1. Lucky break
    2. Future plans
    3. Secret code
    4. Change form
    5. Digital age
    6. Genuine love
    7. Creative mind

    If you need more or something different, just let me know!"

    For sure we will let it know…
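
    It takes one loop to see why that answer fails the stated constraint; a minimal Python sketch, with the expression list copied from the quoted reply:

    # Check each suggestion against the constraint:
    # first word must have 7 letters, second word must have 5.
    expressions = [
        "Lucky break", "Future plans", "Secret code", "Change form",
        "Digital age", "Genuine love", "Creative mind",
    ]

    for expr in expressions:
        first, second = expr.split()
        ok = len(first) == 7 and len(second) == 5
        print(f"{expr:15} -> {len(first)}+{len(second)} letters, {'OK' if ok else 'fails'}")

    # None of the seven suggestions satisfies the 7-letter/5-letter requirement.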
