Marvin

About

Username
Marvin
Joined
Visits
123
Last Active
Roles
moderator
Points
6,717
Badges
2
Posts
15,487
  • Apple cancels California DMV permit for self-driving car testing

    MisterKit said:
    I don't see how a self-driving vehicle could ever interact safely with the idiot drivers already on the roads. A lost cause.
    One goal of self-driving vehicles is to take all the idiot drivers off the roads entirely, along with elderly, distracted, and inexperienced drivers, so nobody has to interact with them.

    An Apple software engineer was killed in a Tesla running on Autopilot in 2018:

    https://www.theverge.com/2024/4/8/24124744/tesla-autopilot-lawsuit-settlement-huang-death

    The cost of even a single person dying as a result of a product mistake must weigh heavily on the people making self-driving vehicles.

    It's a worthy cause: transport would be vastly safer, cheaper, and more efficient with self-driving vehicles making the majority of journeys, and they would be very useful for elderly and disabled people. They just need to be implemented exceptionally well; anything less will cost lives, even if proportionally fewer than human drivers do.
  • Cheaper Apple Vision headset rumored to cost $2000, arriving in 2026

    DAalseth said:
    Dropping EyeSight is more than the screen on the outside. It's the cameras that looked at the wearer's face, and all of the processing overhead to assemble and 'undistort' the eyes into the image on the front. All of this was extra cost and processing overhead that did not add to the user's experience. This is a very good first step.
    I wonder if some of those cameras might be needed for the digital avatar feature, but a lot of people would probably be willing to give that up too if it meant a lighter and less expensive device. And some of the hardware will probably just cost less over time, so they may not need to make too many sacrifices to produce a cheaper model.
    As others have said elsewhere, it may not make sense to go to a “less powerful chip”. Now that the M4 is out, the M2 IS the less powerful chip. 
    If they are happy with M2-level performance, the iPhone chips will reach this level soon and cost less.

    The A18 Pro is just behind the M2, and the A19 Pro on 2nm in 2026 will get even closer.

    This could save them $150. To hit a $2k price point, they need to get $1700 costs down to around $1000. Cutting the EyeSight feature will save around $100, maybe more.

    The number of AVP units sold is likely below 300,000.
    At a $2000 price point, they can sell 3m units, which is $6b.
    At a $1500 price point, they can sell 5m units, which is $7.5b. If they hit $2000, there will eventually be units available at $1500.
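
    The figures above can be sanity-checked with a quick sketch. All inputs are the post's own estimates, not Apple data, and the $150 and $100 savings come from the earlier paragraph:

```python
# Back-of-envelope check of the cost and revenue estimates in the post.
# These are the post's own guesses, not Apple figures.

def revenue(units_millions: float, price: int) -> float:
    """Revenue in billions of dollars for a given unit volume and price."""
    return units_millions * price / 1000

current_cost = 1700                               # estimated AVP build cost
savings = {"cheaper chip": 150, "drop EyeSight": 100}
target_cost = current_cost - sum(savings.values())
print(target_cost)      # 1450 -- still well above the ~$1000 target

print(revenue(3, 2000))  # 6.0 -> $6B at a $2,000 price point
print(revenue(5, 1500))  # 7.5 -> $7.5B at a $1,500 price point
```

    As the numbers show, the two savings identified only get the estimated cost to $1450, so further cuts would be needed to reach the ~$1000 cost implied by a $2k retail price.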

    Meta has a VR install base of over 20 million, with an active user base of around a third of that.

    Within 2 years of having a more affordable headset, Apple could become the most widely used platform. This would need a focus on comfort and usefulness. Movie content is the most widely appealing use case.

    There's an 8K headset (dual 4K, same as AVP) that was announced recently (though it may not ship), priced at $1899, which uses a headband for comfort like the PSVR and HoloLens.

    Distributing the weight across the top and sides of the head and away from the eyes and nose in a compact design would make it more widely appealing to wear on a regular basis.

    The ones in the video below weigh a third as much as the AVP; Apple can get to this kind of form factor with the 2nd model.

  • Apple's study proves that LLM-based AI models are flawed because they cannot reason

    LLMs aren’t sentient. They look for patterns in the query and then apply algorithms to those patterns to identify details that then are used to search databases or perform functions. LLMs can’t learn. If the data they search contains errors, they will report wrong answers. Essentially they are speech recognition engines paired with limited data retrieval and language generation capabilities.
    Apart from not being able to learn in real-time, this describes what people do too. At any given point in time, without new information, the training available to an AI is similar in nature to a person's.

    Reasoning skills don't necessarily require real-time learning, that can be another pre-trained model (or code) that reformats queries before the LLM processes them.

    The paper suggests moving beyond pattern matching to achieve this, but understanding varied queries is still pattern matching.

    The image generators have the same problem: very small changes in tokens can produce very different outputs. This makes them difficult to use for artwork that reuses the same designs, like an illustrated book, because the same character looks different on each page.

    This can be improved using a ControlNet, which places constraints on the generation process. Video generators need to be stable from one frame to the next; there's a recent video showing an old video game converted to photoreal video.

    For understanding language queries, people understand that a phrase like 'girls college' has a different meaning from 'college girls' because of training on word association, not through any mystical reasoning capability.

    Apple's paper doesn't define what they mean by formal reasoning beyond stating that it differs from probabilistic pattern-matching. We know that brains are made of connections of neurons, around 100 trillion connections in some kind of structure, and AI is trying to reverse-engineer what that structure is doing.

    Recreating what a brain does requires massive computational power and data, well beyond personal computer performance. Server clusters can get closer, but finding the right models that work well in every scenario is going to take some trial and error. Humans have had a 50,000+ year head start.

    Modern AI is doing pretty well for being under 8 years old, certainly more capable than a human 8-year-old.

    The main things an AI lacks vs a human are survival instinct, motivations, and massive real-time data input and processing; the rest can be simulated with patterns and algorithms, and some of the former can be too. Some of the discussions around AI border on religious arguments in assuming there's a limit to how well a machine can simulate a human, but there would be no such assumption if an AI were to simulate a more primitive mammal, which humans evolved from.
  • Russian YouTubers keep showing off alleged M4 MacBook Pros in historic Apple leak

    rjharlan said:
    I can’t help but believe that this is a complete fake. There’s no way that two Russian influencers got their hands on two Macs that haven’t even been introduced to the market! Even the store associates haven’t been introduced to them yet. I think it’s much more likely that they’re searching for eyes on their channels.
    There's a report that hundreds were being sold privately from a warehouse:

    https://www.tomshardware.com/laptops/apple-macbook-pro-m4-leakage-gets-serious-with-200-units-reportedly-up-for-sale-on-social-media

    Apple sometimes stocks products in store before launch so that they are available for order the same day as the announcement.

    If it's real, it's not like this would be a huge leak for them; it's a minor refresh. The M4 chip is already available in the iPad, and most of the rest of the Mac is unchanged vs the M3 model, other than the extra Thunderbolt port on the 14" model, matching the 16" model.

    If it was the Pro/Max models then it would be more interesting.

    This may also suggest an event soon; I doubt they'd ship boxes 3 weeks in advance, but this could vary by country. Last year, they announced the event for the 30th of October on the 24th. They usually hold events on a Monday or Tuesday, which leaves the 14th, 15th, 21st, 22nd, 28th, or 29th of October. The earliest announcement would be today or tomorrow; the latest would be the 23rd, within 2 weeks.
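
    The candidate dates above can be checked with a short sketch, assuming (as the post does) a Monday or Tuesday event in October 2024; the post then excludes the dates that are too soon after any announcement:

```python
from datetime import date, timedelta

# List every Monday and Tuesday in October 2024, the usual days
# for an Apple event according to the post.
d = date(2024, 10, 1)
candidates = []
while d.month == 10:
    if d.weekday() in (0, 1):       # 0 = Monday, 1 = Tuesday
        candidates.append(d.day)
    d += timedelta(days=1)

print(candidates)  # [1, 7, 8, 14, 15, 21, 22, 28, 29]
```

    Dropping the dates that have already passed or leave no room for an announcement gives the 14th, 15th, 21st, 22nd, 28th, and 29th, matching the post.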
  • Commemorating Steve Jobs and his continuing influence on technology

    nubus said:
    charlesn said:
    As for Vision Pro being a dud--that's hilarious for a product that has been in the hands of consumers for all of 32 weeks. Please cite fact-based sources for its "failure" that include Apple's actual internal projection for sales and how actual sales have fallen short of that number. 
    AVP was presented in June 2023 and 16 months later all we get are quarterly dino videos. The current hardware / pricing / positioning is a failure. Most developers and customers have long forgotten AVP and so have most of us at AI. AVP is what happens when a non-product, non-founder CEO decides to invent the future of computing. We have been here before.
    There are Apple patents for this product from 2006.

    The images in the patent look almost exactly like the AVP.

    The problem is that this technology is still at the large-helmet stage, even nearly 20 years after this patent was filed and the video above was made.

    What it really needs is a major technology breakthrough, like the way the multi-touch glass interface enabled the iPhone. There's a missing piece for AR, and they are stuck trying to shoehorn currently available technology into making it work.