FileMakerFeller

About

Username: FileMakerFeller
Joined:
Visits: 69
Last Active:
Roles: member
Points: 2,434
Badges: 1
Posts: 1,546
  • Cook wanted Apple and Google to be 'deep, deep partners'

    avon b7 said:
    auxio said:
    avon b7 said:
    As an aside, I'm slowly seeing more (and longer) YouTube ads and constant nagging to upgrade to a paid subscription. I won't. I will simply stop using the service. Amazon Prime is going the same way. An irritating (and loud) ad every time I use a Fire Stick. More paid content mingling in with Prime Video subscription content. Price hikes across the board. WhatsApp is about to get its backups integrated into Google Drive user storage. 

    I think we're not far off a tipping point and a potential unsubscription wave. 
    And there's the expectation that advertising-driven products have set up: everything should be free.

    The reality is that it costs money to produce content: movies, TV shows, music, news, etc. So how do the people creating that content get paid? Is what they're doing not valuable (especially good quality news IMO)?

    Back when the internet was first becoming a feasible way to distribute content (1990s and early 2000s), you had clever college kids creating technology companies that mass-distributed content for free (à la Napster). Essentially an online version of CD/DVD bootlegging, which made them quite rich at the expense of the people who created that content. And people not connected with those industries were happy because they could get things for free.

    Eventually law enforcement and legitimate digital storefronts (like iTunes) were set up to ensure that content creators could have a source of income from online distribution. However, fast forward to advertising-funded digital streaming (where content is no longer purchased) and you have a similar problem: content creators are paid fractions of fractions of pennies per view, at rates negotiated down as far as possible by the streaming companies and propped up by early investors willing to take a short-term hit to grow the services for the eventual payoff on their shares. Now that the industry has come of age and the share gains aren't as big, those investors are cashing out, and all the money that was propping up those services has to come from the content itself (or ads). Hence what we're seeing today.

    And now there's a backlash because the expectations have been set. This is exactly what I've been saying about the problems with advertising funded products.

    There is zero issue with getting something for 'free' in exchange for my 'data' and ads. 

    The issue is the amount of forced ad content. 
    In Australia there have been regulations for decades that limit the time a TV broadcaster can devote to advertising. Streaming needs something similar, and "free to play" games on mobile really need to be brought to heel - sometimes an ad is shown every 30 seconds, and the advertisement runs for more than a minute!
  • How Apple is already using machine learning and AI in iOS

    tundraboy said:
    mayfly said:

    The forefront of public perception regarding AI in 2023 is occupied by Microsoft's AI-powered Bing and Google's Bard.

    My experience is that ChatGPT is recognised and mentioned much more widely than these two.

    The hype about AI is overblown. When I first learned about decision trees (make a list of possible outcomes, assign them a probability and a cost, then calculate to find the "best" option) I was taught that the trick is to get the estimated probability as accurate as possible - that with experience your estimates will get more accurate. The current buzz is happening because people have figured out a way to analyse huge amounts of data and build a bunch of probability lookup tables in a short enough period of time to be feasible and at a low enough cost to be justifiable.
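    For the curious, the expected-value calculation behind a decision tree like the one described above can be sketched in a few lines of Python. The options and numbers here are invented purely for illustration:

```python
# A minimal sketch of the decision-tree calculation described above:
# list possible outcomes per option, assign each a probability and a
# cost, then compute the expected cost. All figures are hypothetical.

def expected_cost(outcomes):
    """outcomes: list of (probability, cost) pairs for one option."""
    return sum(p * c for p, c in outcomes)

options = {
    # option name -> list of (estimated probability, cost) outcomes
    "ship now": [(0.7, 100), (0.3, 900)],  # 30% chance of a costly failure
    "delay":    [(0.9, 300), (0.1, 600)],  # safer, but pricier either way
}

for name, outs in options.items():
    print(name, expected_cost(outs))

# the "best" option is the one with the lowest expected cost
best = min(options, key=lambda name: expected_cost(options[name]))
print("best:", best)
```

    As the comment notes, the whole exercise hinges on the accuracy of those probability estimates - garbage in, garbage out.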

    At the end of the day, it's all just computation. The algorithms are not too complicated but the steps are computationally intensive and to understand how it works you need to be comfortable with matrix multiplication and statistics (e.g. this YouTube Video). The thing I really struggle to wrap my head around is why the machine doesn't have to show how it arrived at an answer - all of my teachers were very particular about that part of the process.
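    To make the "it's all just computation" point concrete: a single neural-network layer is essentially one matrix multiplication followed by a simple nonlinearity. A toy sketch, with made-up weights and inputs:

```python
# Sketch: one neural-network layer is a matrix multiplication plus a
# nonlinearity (here ReLU). Weights and inputs are invented numbers,
# just to show the arithmetic involved.

def matmul(A, B):
    # zip(*B) iterates over the columns of B
    return [[sum(a * b for a, b in zip(row, col))
             for col in zip(*B)] for row in A]

def relu(M):
    return [[max(0.0, x) for x in row] for row in M]

x = [[1.0, 2.0]]                 # one input sample with two features
W = [[0.5, -1.0, 0.25],
     [0.1,  0.3, -0.5]]          # 2x3 weight matrix
h = relu(matmul(x, W))           # layer output: a 1x3 matrix
print(h)
```

    Real systems just do this at enormous scale, with the weights learned from data rather than written by hand.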
    The hype may be overblown, but you have to examine the underlying reason for that hype. It's not based on what we call artificial intelligence today. It's based on projections of the possibilities and consequences if a true artificial intelligence emerges. That would be a recursive software system that could change its own programming in order to improve itself. That would mean the program has a will of its own, and the means to impose that will. That's what the hype is about. And the implications are both inspiring and deeply concerning. Sure, guardrails can be put in place. But that requires the builders of those guardrails to think of literally every possible scenario. You don't just have to be right most of the time. One oversight, just one, in such a recursive model, leaves it open to possibilities we can't even imagine.

    The great futurist Isaac Asimov proposed the famous "Three Laws of Robotics," and countless writers and movie makers have demonstrated how ineffective those guardrails can be against an advanced artificial intelligence.
     "That would be a recursive software system that could change its own programming in order to improve itself."

    Search on the term "ouroboros".  That creature is the apt metaphor for what you are describing, which is a logical impossibility masquerading as a logical paradox.  There are certain self-referential systems or operations that are just plain impossible.  For example, an ER doc suturing a laceration on his thigh is possible.  But a heart transplant surgeon performing transplant surgery on himself is not.
    There was research into software systems that could change their own code back in the 1980s and 90s. "Evolutionary programming" was one of the catch-phrases, IIRC. Point the system at a data source of algorithms (or, usually, pre-coded functions in various libraries), let it spin for a few million iterations, and look at the code that was generated once optimum performance was attained.

    The thing is, though, the definition of "optimum performance" was never decided by the software - it was simply following rules devised by the humans who set up the experiments. There is no agency, no creativity, just brute force iteration over every possible combination of pre-existing ideas. This is useful, but it is not intelligence.
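    The loop described above - a human-defined fitness function, brute-force mutation, and selection of the best candidates - can be sketched in a few lines. Everything here (the target, the mutation steps, the population size) is invented for illustration:

```python
# Sketch of the "evolutionary programming" loop described above. Note
# that "optimum performance" (the fitness function) is fixed by the
# experimenter, not chosen by the system. All specifics are invented.
import random

random.seed(0)  # make the run repeatable

TARGET = 42  # the human-chosen definition of success

def fitness(candidate):
    return -abs(candidate - TARGET)  # higher is better

def evolve(generations=200, pop_size=20):
    population = [random.randint(0, 100) for _ in range(pop_size)]
    for _ in range(generations):
        # keep the fitter half, refill with mutated copies of them
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        mutants = [s + random.choice([-2, -1, 1, 2]) for s in survivors]
        population = survivors + mutants
    return max(population, key=fitness)

best = evolve()
print(best)
```

    The system "finds" the target, but only because a human wrote the fitness function that defines what finding it means - which is exactly the point about agency made above.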

    The chance of true intelligence emerging from the approaches taken thus far is zero. All we are building is a mimicry apparatus with a dogged persistence and speed that is beyond human capacity.

    The hype is completely unfounded.
  • Nothing kills iMessage bridge because it profoundly violated user privacy & security

    Moral qualms aside, the idea of logging all messages as errors through Sentry is a clever hack. Not sure how that can be closed off.
  • iPhone 16 to use graphene heat sink to solve overheating issues

    Could be interesting if Apple goes down this path for the desktop Macs - the Studio might be able to drop some height.
  • Driver of flipped Jeep saved by iPhone 14 Emergency SOS via Satellite

    I can't tell you how much I want the passenger's name to be Ken.