Google's Pixel 8 series offers extended software support & AI camera features
Google has finally lifted the curtain on its much-anticipated Pixel 8 and Pixel 8 Pro smartphones and the next generation of its wearable device, the Pixel Watch 2. Here's a look at what these new flagships bring to the table.
At Google's live event on Wednesday, the tech giant unveiled its latest flagship smartphones -- the Pixel 8 and Pixel 8 Pro -- along with the Pixel Watch 2. From extended software support to new camera technology and health features, Google promises that the Pixel 8 series will redefine expectations for a mobile device.
Google Pixel 8 and Pixel 8 Pro
The Pixel 8 and Pixel 8 Pro feature a refreshed design with curvier edges, similar to the iPhone 15 lineup. Both opt for classic glass and aluminum, rather than the titanium that had been rumored for the Pixel 8 Pro.
The Pixel 8 Pro has a 6.7-inch Super Actua display, while the Pixel 8 has a slightly smaller 6.2-inch Actua display that Google says is 42% brighter than the Pixel 7's screen. Under the hood, both phones run on Google's new Tensor G3 chipset, which may address the heat issues that plagued previous models.
The camera setup is a significant upgrade, with the Pixel 8 Pro featuring a 50-megapixel wide camera, a 48MP ultrawide sensor, a 48MP telephoto sensor, and a 10.5MP front camera. A unique feature is the built-in thermometer in the Pixel 8 Pro, which lets users measure the temperature of themselves and of nearby objects.
In comparison, the regular Pixel 8 features a 50MP wide camera, a 12MP ultrawide camera, and a 10.5MP front camera.
Both phones will be the first to ship with Android 14. Google also plans to extend software support for the Pixel 8 series, aiming for up to seven years of updates.
Pixel Watch 2
Meanwhile, Google upgraded the new Pixel Watch 2 with the Qualcomm Snapdragon W5 chipset. Qualcomm's most recent wearable system-on-chip (SoC) is manufactured using Samsung's 4nm process technology and features four Cortex-A53 cores with a clock speed of 1.7GHz.
Google's new Pixel Watch 2
The new chip is expected to offer a significant boost in both performance and energy efficiency when compared to the first Pixel Watch's Exynos 9110 chip, which was built on a 10nm process and has only two Cortex-A53 cores operating at 1.15GHz.
The Pixel Watch 2 also has a larger 306mAh battery, a roughly 4% increase over the first version's 294mAh cell. Although the improvement is minor, the new Snapdragon chip and the bigger battery together ensure the wearable can go over 24 hours on a single charge.
Google is leaning heavily into health insights with the new Pixel Watch 2. Three new sensors have been added, as well as Fitbit's Body Response feature with a new continuous electrodermal activity sensor.
Pixel Buds Pro
Finally, Google is updating its Pixel Buds Pro headphones with features that include conversation detection, which is similar to the conversation awareness option on Apple AirPods. It can detect when a person is speaking and automatically lowers the audio playback volume.
The latest update for Pixel Buds Pro adds support for Bluetooth Super Wideband. According to Google, when these earbuds are paired with a Pixel phone, the enhanced bandwidth improves the voice quality on calls for the listener.
Google has updated the Pixel Buds Pro
Google is introducing a low-latency gaming feature that reduces audio delay by 50% for specific games. Users can also check out "listening stats" so they know how loud their music has been playing with suggestions to lower the volume for hearing health.
Pricing & Availability
The Pixel 8 starts at $699 and is set to ship on October 12, coming in Rose, Hazel, and Obsidian colors. The Pixel 8 Pro starts at $999 in Porcelain, Bay, and Obsidian colors.
Meanwhile, the Pixel Watch 2 starts at $349, the same price as the first model, with an LTE version available for $399. It also ships on October 12.
Pixel Buds Pro are available for $199.99 in Bay, Porcelain, Charcoal, Fog, Lemongrass, and Coral colors.
Read on AppleInsider
Comments
The whole presentation was filled with contrived examples of how someone might use AI, like asking for recipes, moving a tent in a photo, or zooming in. People can already do AI photo editing with Photoshop.
I'm guessing some of the lack of interest is because this is all about Pixel products, which only a small fraction (<2%?) of the Android audience owns.
https://www.trustedreviews.com/news/googles-total-pixel-sales-barely-compare-to-what-samsung-sells-in-a-year-4272550
"Google would need 60 years to sell as many phones as Samsung sells in one."
Meh features that few people will use on devices few people will own. Yawn.
Most Android users are still left with poor software updates, poor hardware integration, and a lack of access to features across the whole ecosystem. Still, it's good to know that Apple's dominance in the industry still bothers Android people enough to post on an Apple forum to try and hype up these underwhelming improvements.
Have to laugh at 'years ahead' when it comes to AI. Everybody and their dog is incorporating AI models into their software every 5 minutes.
It's not changing the Android world, just the Pixel 8, it's a software guarantee for that hardware, which hardly anybody buys.
Their video enhancement feature also looks promising, and may even finally be able to compete with the iPhone's video.
https://www.androidpolice.com/google-pixel-8-pro-best-video-feature-magically-processed-cloud/
I wouldn't trust sending private videos to any tech company, certainly not an ad company known for multiple privacy violations. The bandwidth requirements to upload 4K footage mean either the footage is heavily compressed to begin with or it's not feasible to upload/download over cellular. People will most likely be stuck with a loading bar for an hour, get fed up waiting, and never use the feature again.
This won't compete with recording ProRes HDR to an SSD and being able to do high-end color grading on it. Apple gives people practical features they will use, companies like Google just need bullet points for their marketing pages and more ways to collect personal data.
So as far as what quality you get back from an uploaded video, no one knows. It may be amazing, crappy, or somewhere in between. In between is my expectation, which would still be a no-work, no-skill-needed improvement over the original. What I suspect is the end game is eventually moving this on-device, but to begin with, testing various algorithms and encouraging user feedback. Google seems to be all-in on on-device AI processing, and I expect this one will make its way there at some point.
The question was whether this would be comparable to iPhone video like ProRes log HDR, which runs at over 1,000Mbps bitrate. The answer is no, due to the math.
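To put rough numbers on that claim, here's a quick back-of-the-envelope calculation. The 1,000Mbps bitrate comes from the comment above; the 50Mbps cellular uplink is an optimistic assumption, not a measured figure.

```python
# Why >1000 Mbps ProRes footage is awkward to send to the cloud.
# Inputs are illustrative assumptions, not measured figures.

PRORES_BITRATE_MBPS = 1000   # claimed ProRes log HDR bitrate
CELL_UPLINK_MBPS = 50        # optimistic cellular upload speed (assumption)

def upload_seconds_per_minute_of_footage(bitrate_mbps: float, uplink_mbps: float) -> float:
    """Seconds needed to upload one minute of footage at the given bitrate."""
    megabits_per_minute = bitrate_mbps * 60
    return megabits_per_minute / uplink_mbps

t = upload_seconds_per_minute_of_footage(PRORES_BITRATE_MBPS, CELL_UPLINK_MBPS)
print(f"{t / 60:.0f} minutes of upload per minute of footage")  # prints "20 minutes ..."
```

Twenty minutes of uploading per minute of footage over cellular is why any cloud pipeline realistically starts from much more heavily compressed source video.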
Privacy is covered in their terms:
https://policies.google.com/terms#toc-permission
"This license allows Google to:
They are an ad company, and these are standard terms. Some people would be OK uploading private images and videos to an ad company; I'd rather keep it offline.
It would be better to have an app on the desktop to do this processing with more control over the output, no wasted bandwidth, no privacy issues.
The Google feature is not meant to be a tool for professionals. I would have thought that to be obvious. Google is not competing with you as an experienced video editor or you as a movie-maker. We already have dozens of professional desktop tools to do this, and no need for yet another one. Why should we have to learn "all about the light" and the tools and the menus, and the settings, and what the acronyms mean to get better results? Make it easy for Grandma and Grandpa. Isn't that supposed to be one of the goals of Apple software, "keep it simple, stupid"?
For the quick stuff I've been using Topaz Video AI which does a very acceptable job for unpaid work, but I have to download the video to my desktop (takes time), have a basic understanding of the settings (more time), and some knowledge of color-grading (more reading and research), then do some trial and error (more time) for the best results. It's still time-consuming even with somewhat automated software.
What Google is doing AFAICT is creating a convenient way for non-professional people to give themselves a better quality video than they captured with little to no effort or PP education required. Sounds like something a lot of consumers would appreciate access to, particularly after this initial testing period. Trying to compare this to professional tools is way off the mark, Marvin. It's not supposed to be for professionals who already know what they're doing and where they want to end up, nor does Google imply it is.
EDIT: You portray a 1.5GB file upload as being a major roadblock to testing this out. At least in my case, with a 500Mbps fiber connection, I could do so in under 30 seconds. No biggie here, and I have zero doubt the videos will only be uploaded when the user has a Wi-Fi connection.
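The under-30-seconds figure checks out arithmetically, assuming the full 500Mbps is available for the upload (a best-case assumption; real uplinks carry other traffic and protocol overhead):

```python
# Sanity check on the upload-time claim: 1.5 GB over a 500 Mbps fiber uplink.
# Assumes the whole link is available for the upload (best case).

FILE_GB = 1.5
UPLINK_MBPS = 500

file_megabits = FILE_GB * 8 * 1000      # 1 GB = 8000 megabits (decimal units)
seconds = file_megabits / UPLINK_MBPS
print(f"{seconds:.0f} seconds")          # prints "24 seconds"
```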
My recollection is that Apple has never offloaded imaging or video processing to the cloud, so in some ways this was advantageous to Android OS early on, and later to the Pixel. Siri was certainly an exception, as were other Apple services, and yet Apple didn't seem to suffer from adding features only when the silicon was there to allow it, presumably giving consumers the ability to extrapolate Apple's advantage in silicon.
One of the constant talking points of the Android world is how Android almost always has features before the iPhone does, and that is certainly true. But it is also true that Apple, with a limited number of models, stresses the supply chain even to add a folded lens to its current iPhone 15 Pro Max, which will sell in the tens of millions of units, sales that no other flagship phone model would ever see.
While I'm happy to see Google's success with the Pixel, it's still apparent to me that there is notable differentiation between iPhone's video capability, and Pixel Phone's, a great example of consumer choice at work.
I feel sorry for the people who bought a now-all-but-abandoned Pixel 7. Shoulda held out, suckers!
Their official company statement isn't confusing and doesn't sound like a goal: "We’re COMMITTING to providing seven years of software support for the Pixel 8 and Pixel 8 Pro, including the latest Android operating system, security updates, and ongoing Feature Drops." ...
...and yes, they are also committing to repair parts availability for seven years.
How does that compare to the number of years Apple guarantees software updates?
As for the Pixel 7, you're closer (not close) to being correct, just with less hand-wringing, as it's hardly abandoned. It gets its last guaranteed security updates in 2027 and full operating system revisions through 2025, unless Google decides to up it.
https://support.google.com/pixelphone/answer/4457705?visit_id=638321965455210411-1447137010&p=pixel_android_updates&rd=1#zippy=,pixel-pro
https://blog.google/
https://browser.geekbench.com/mobile-benchmarks
What it explains is why Google is sending the information to the cloud and then ricocheting it back to the phone: their current processor is actually way behind the chip in Apple's iPhone 11 Pro, which, depending on how you count, is a four- to five-year-old processor. Google certainly doesn't match Apple when it comes to OS integration with software, and they certainly don't out-design Apple in hardware.
Well, they're not even doing their own hardware; they're just pretending. And that's not even taking into consideration third-party programs like Blackmagic Camera, DaVinci Resolve, Final Cut Pro, or any other third-party software in the Apple ecosystem.
Since Google is four to five years behind, do they even have LiDAR in their smartphone? By the way, the 11 Pro does not have LiDAR; it wasn't added until the iPhone 12, probably something to do with processor speed. Every iPhone Pro model since has LiDAR, and all the iPad Pros from 2020 on also have it built in. LiDAR helps in portrait mode when taking still pictures by allowing precise distance measurement without guesswork, which the Samsung phones and the Pixel phones do not have; they just guess at the distance and use AI to make up the difference. Apple's inclusion of LiDAR will also play a big part in spatial video recording, which will be very important with the Apple Vision Pro next year. Both the 15 Pro and 15 Pro Max will be capable of recording spatial video, coming in a software upgrade before the Vision Pro's introduction next year.
Before looking at the numbers, with all the hype about the Pixel and Samsung smartphones, I thought they were closer, but they're not; they appear to be at least four or five years behind Apple. The numbers don't even equal the iPhone 11 Pro's, and that includes Samsung, Google, and every other Android smartphone.
I think Apple's path is pursuing on-phone, local computing with their mobile devices, not "ET phone home," burn-battery-life-and-hope-you-have-a-signal computing, but that only works if you can get Meta to relinquish CPU cycles and not burn up the phone first.
https://gizmodo.com/what-is-lidar-and-why-would-you-want-it-on-your-phone-1843162463 The comment section is priceless, the usual "What do you need this for?"
Did Google decide to take a different path, like Elon, who also doesn't like LiDAR? (Could Google's path be different because their processors don't have the power?) Google decided to roll their own version, ARCore, but their current processors don't appear to be fast enough; remember, they don't even have iPhone 11 Pro capability yet, and Apple didn't put LiDAR in until the iPhone 12.
https://blog.google/products/pixel/google-pixel-8-pro/ The Pixel 8 apparently doesn't have LiDAR either; in fact, it's a pretty pedestrian smartphone upgrade.
But it does have Video Boost coming: send your footage to Google's servers to process, and then you get it back.
https://www.techradar.com/news/6-ways-android-phones-could-use-the-iphone-12-pros-lidar-scanner-tech
Other Android phone makers have tried, but their implementation (single pulse) is bad compared to Apple's (a multiple-pulse scanner). Their current processors apparently don't have the power either, and do they have the software chops to make it happen on a consistent basis with all the many other processes going on in the background? Sounds like a job for Apple.