Apple confirms WWDC 2024 keynote timing, but offers no more AI hints
Apple has confirmed it will be holding its usual keynote at WWDC on June 10. Here's what to expect from the company during that packed week.
Apple's 2024 WWDC is June 10
Apple previously announced that WWDC will begin on June 10, and has strongly hinted that it will be an AI-centric event. Now, weeks away from the conference itself, Apple has outlined its schedule.
According to a Tuesday announcement, the keynote will take place at 10 a.m. PDT on June 10 and will detail the "groundbreaking updates" arriving on Apple's platforms. The address will be streamable via the Apple Developer app, the Apple TV app, Apple.com, and YouTube.
The Platforms State of the Union will happen a few hours later at 1 p.m. PDT. It will take a deeper dive into the changes in iOS, iPadOS, macOS, tvOS, visionOS, and watchOS.
Apple Developer Program and Apple Developer Enterprise Program members will be able to take part in online labs and live forums throughout the event.
There will be over 100 technical sessions across the week, covering new technologies and frameworks.
Despite the new scheduling information, Apple offers no fresh hints about content in the announcement. Then again, it has probably already said enough.
Operating Systems and Software
Apple's chief announcements will be its operating system updates, including iOS 18, iPadOS 18, watchOS 11, tvOS 18, and the next edition of macOS. With the release of the Apple Vision Pro, WWDC can also show off Apple's first major feature updates for visionOS.
The updates to iOS 18 will arguably be the main focus for users and developers alike. Following on from iOS 17, iOS 18 is rumored to include a big redesign of various UI elements, though it's unlikely that the Apple Vision Pro and visionOS will influence design alterations.
AI will apparently be a big element for WWDC 2024
AI improvements to Siri and iOS in general are expected to be a big feature, with the iPhone maker continuing to pour resources into generative AI development. Aside from the expected Siri improvements, the AI changes could also feed into other apps, such as by providing automated summarization features.
This goes far beyond consumer usage, too, as Apple is rumored to be making changes to its development tools. Test versions of a future Xcode release apparently include AI tools, such as automated prediction and completion of blocks of code, and automated code generation for testing applications.
While little beyond the AI enhancements is reliably rumored, Apple does seem to be trying hard to minimize initial beta bugs.
In November, development of new features for iOS 18, iPadOS 18, and macOS 15 was paused in favor of a week of bug stomping. The releases may be more crucial to Apple than usual, given a general need to catch up with others in the generative AI space.
Hardware
New devices almost always play second fiddle to software at WWDC. Following the M4 iPad Pro's release in May, it's not at all clear whether the Mac will get an upgrade.
The MacBook Pro and iMac were last updated with M3 in October 2023, with the M3 Air arriving in early 2024. That leaves the Mac Studio, Mac Pro, and Mac mini.
Recent rumors suggest that the pro-level hardware won't see an update until M4 Max and Ultra are released. The current timing on those is the fall, at the earliest, with 2025 perhaps being more likely.
The most likely candidate for an M4 update is the Mac mini, though rumors there still point to the fall as well.
Also long-rumored but still absent are USB-C versions of the Magic Keyboard, Magic Mouse, Magic Trackpad, and AirPods Max. Timing is unclear on the release of those, though, and they don't seem like keynote-worthy products.
Comments
Clearly, ChatGPT doesn’t control other iOS apps like HomeKit, Apple Music, or Apple Maps (or any other app), but that is simply an implementation issue. Siri “works better” for that because it has this integration implemented.
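What "has this integration implemented" actually means: apps declare actions to Siri through Apple frameworks like SiriKit and App Intents, and Siri can only drive apps that declare such hooks. A rough sketch of a declaration (PlaySongIntent and MusicPlayer are made-up names for illustration, not any real app's code):

    import AppIntents

    // Hypothetical stand-in for an app's own playback logic.
    enum MusicPlayer {
        static func play(song: String) {
            print("Now playing: \(song)")
        }
    }

    // An action the app explicitly exposes to Siri via the App Intents
    // framework. ChatGPT has no equivalent hooks into iOS apps, which is
    // the implementation gap described above.
    struct PlaySongIntent: AppIntent {
        static var title: LocalizedStringResource = "Play Song"

        @Parameter(title: "Song Name")
        var songName: String

        func perform() async throws -> some IntentResult & ProvidesDialog {
            MusicPlayer.play(song: songName)
            return .result(dialog: "Playing \(songName).")
        }
    }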
Aside from implementation with apps and iOS, there is no comparison at all between Siri and ChatGPT.
Siri is like the cute little sister that was dropped on her head three too many times as a baby. Apple was able to train her to do some really basic tasks, but she will never be too smart without a brain transplant.
ChatGPT has the brain that Apple wants to transplant. Unlike Siri, who has basically learned some phrases and extremely elementary rules of language parsing, ChatGPT understands and can generate language better than the current US President.
I didn't mean to insult you, ChatGPT! You actually know language extremely well.
When Apple takes the apparent understanding of ChatGPT and replaces Siri’s Abby-normal brain with the nitrous-fueled brain from OpenAI, while leaving intact and improving all of Siri’s communication with the OS and various apps, well then. We’ve got something to talk about.
If you aren’t aware of OpenAI’s current level of achievement with ChatGPT, watch the following video. Not only does it understand you, but it can see as well. The text, voice input, voice output, and camera input are all on the same stack, so it’s incredibly fast and lifelike (I don’t know if graphic output is on the same stack or not as I didn’t see this specifically addressed).
https://vimeo.com/945586717?signup=true
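To make the "same stack" point concrete, here's a rough Swift sketch of a single multimodal request: one call carries both text and a camera frame, and one model answers. The endpoint and the "gpt-4o" model name match OpenAI's public chat completions API as I understand it, but treat the details as illustrative rather than authoritative:

    import Foundation

    // One request carries text plus an image; one model answers both.
    let apiKey = ProcessInfo.processInfo.environment["OPENAI_API_KEY"] ?? ""
    let imageBase64 = "..." // base64-encoded JPEG of a camera frame

    let content: [[String: Any]] = [
        ["type": "text", "text": "What am I looking at?"],
        ["type": "image_url",
         "image_url": ["url": "data:image/jpeg;base64,\(imageBase64)"]]
    ]
    let message: [String: Any] = ["role": "user", "content": content]
    let body: [String: Any] = ["model": "gpt-4o", "messages": [message]]

    var request = URLRequest(url: URL(string: "https://api.openai.com/v1/chat/completions")!)
    request.httpMethod = "POST"
    request.addValue("Bearer \(apiKey)", forHTTPHeaderField: "Authorization")
    request.addValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try JSONSerialization.data(withJSONObject: body)

    let (data, _) = try await URLSession.shared.data(for: request)
    print(String(data: data, encoding: .utf8) ?? "no response")

The point isn't the plumbing; it's that vision and language run through one model, so there's no hand-off between separate recognizers to add latency.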
Although I do have to admit that even if Apple chooses OpenAI’s ChatGPT over Google’s Gemini (which is trash), I don’t see how Apple is going to be able to implement this as an on-device solution. If they try to keep it all on-device, I think it will be crippled.
Though this is purely speculative, perhaps the direction they go is to implement ChatGPT's tech, reminiscent of Samantha in the movie Her, as the conversational assistant while online (and enabled), and leave the fully on-device (crippled) version for those who don't have a connection, or who don't wish to use the online one for data usage or privacy reasons.
Self-driving cars, chatbots that can teach you physics like a professor, incredible AI art that you can create from a description. I thought to myself, "the world is changing!"
After trying all the above myself, my thinking became, “No… the world has already changed!”
I’m excited to see how this turns out!
So both will play a role. I, for one, am happy that Apple isn't shoveling billions of dollars to Nvidia and is taking a more cautious approach while waiting for the AI-hype bubble to burst.
But yeah, Siri needs a radical rethink. If Steve Jobs were alive, he would set up a competitor to Siri in-house and have them duke it out.
Companies that employ GenAI will certainly want to find ways to monetize it for their own financial benefit. The hope is that they'll only use it to enhance their product's functionality and user experience, and not for more unsavoury things like deeper profiling of users for marketing purposes, or steering users towards answers and recommendations that are self-serving or business-driven, like Google does with its current search model. Can a Google GenAI, an Amazon GenAI, or a Meta GenAI really be trusted to always put their users' best interests above all else without colouring the results in ways that feed into their corporate profitability?
Regarding Siri, it seems like a lot of people are unhappy with its current performance. I know what I don't like about Siri: sketchy voice recognition and weakness in establishing context beyond a narrow range. But it would be interesting to know where the pain points are across a wider range of users. As for Apple running an internal bake-off competition, which was common practice inside Sun Microsystems, all they have to do as a baby step is compare Siri against competitors like Alexa, which I'm most certain they've already done. When evaluated across a broader scope, Alexa clearly kicks Siri to the curb. That means there's no value in a bake-off against the current Siri, because it's not a worthy benchmark. If Apple's truly doing its best work here, even the current Alexa isn't a good enough benchmark; something moderately better than today's Alexa isn't good enough.
After many years of using Siri, I think its primary role was voice-based device control, especially on HomePod, AirPods, Home, and Apple TV. On iPhone and Mac, Siri also gives us a taste of search and general Q&A. When Siri first arrived on the iPhone 4s, it was largely a novelty that did a few parlour tricks hinting at its future direction. In terms of AI and general Q&A, Siri never seemed to progress far enough, even though it was fine for HomePod and AirPods control. Alexa evolved a lot further than Siri did, both in voice recognition and general Q&A, and is not bad at all for things like search and home automation control.
However, the reality was that Amazon invested a boatload of money in fleshing out a quite useful amount of capability that provided very little value to their bottom line. The whole purpose of building Alexa out to the extent they did was driven by hopes of harvesting value back through its involvement with Amazon's primary business, the store. At least Apple didn't spend that kind of money or raise false hope that Siri would be a force for driving profit. As long as Siri provided enough positive value without degrading the products that it was designed to support, it was not a bad investment and didn't have to go toe-to-toe with its more generally capable competitors.
As you've said, if Siri is the feature name given to Apple's next-generation embedded AI capability, it does require a radical rethink, not because of its current weaknesses, but because its role and purpose within Apple's ecosystem have radically changed.