ericthehalfbee said:
gatorguy said:
ericthehalfbee said:
This is a very basic example of what the multiple tweeters in the HomePod can do to create a wider soundstage and to eliminate problems with phase (and it will look familiar to anyone who watched the HomePod video).
When sound is reflected off the rear walls, it has a longer path to the listener than sound coming directly from the speaker. In my example above, the total time for sound to reach the listener is 2.5ms from the front driver and a total of 8.0ms from the rear drivers. When the sound reaches your ears it could be perfectly in phase, completely out of phase, or, most likely, somewhere in between.

Phase can be adjusted mechanically (physically changing the position of a driver or speaker) or electrically (through digital time delay). Obviously, the HomePod uses digital time delay to adjust phase.
In the example above, the sound to the rear tweeters would be sent out normally. However, the sound to the front tweeter would be delayed by 5.5ms. This delay allows the rear sound to "catch up" to the direct sound such that by the time it's reflected off the rear walls and starts moving forward it will end up being in phase with the sound from the front tweeter. All the possible issues that can arise with sounds being out of phase are thus eliminated. Since the HomePod also has 6 microphones, calculating this delay time would be fairly straightforward. A few clicks or other test tones played through the individual tweeters can be measured by the microphones so the HomePod can determine exactly how far away it is from any walls and set the appropriate delay time accordingly.
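As a rough sketch of the arithmetic involved (my own illustration, not Apple's actual implementation), the delay is just the difference between the reflected and direct path times, and a wall distance can be estimated from a test click's round-trip time as heard by the microphones:

```python
# Illustrative sketch only; the path times and the 343 m/s speed of
# sound are assumptions, not anything Apple has published.

SPEED_OF_SOUND = 343.0  # m/s at room temperature

def front_delay_ms(direct_ms, reflected_ms):
    """Delay to apply to the front tweeter so its direct sound
    arrives in phase with the wall-reflected rear sound."""
    return reflected_ms - direct_ms

def wall_distance_m(round_trip_ms):
    """Estimate the distance to a wall from a test click's
    round-trip time (out to the wall and back to the mic)."""
    return SPEED_OF_SOUND * (round_trip_ms / 1000.0) / 2.0

# Using the numbers from the example: 2.5 ms direct, 8.0 ms reflected.
print(front_delay_ms(2.5, 8.0))          # 5.5 (ms), matching the text
# A hypothetical 6.0 ms click round trip puts the wall about 1 m away.
print(round(wall_distance_m(6.0), 3))    # 1.029 (m)
```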
It's important to note that what I described above isn't actually beamforming. It's a well established method of using time delay to control phase and improve sound quality. Beamforming is more complex but still relies on the same basic principles (precisely controlling phase to multiple drivers to direct where sound goes).
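The same principle, scaled up across an array, is the classic delay-and-sum beamformer: each driver gets its own small delay so the wavefronts add constructively in a chosen direction. Here is a hedged sketch for a hypothetical linear array; the HomePod's seven tweeters actually sit in a ring, so its real geometry and DSP are considerably more involved:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s

def steering_delays(spacing_m, num_drivers, angle_deg):
    """Per-driver delays (seconds) for a linear array to steer its
    main lobe angle_deg off broadside, using the textbook
    delay-and-sum formula: delay_n = n * d * sin(theta) / c."""
    theta = math.radians(angle_deg)
    return [n * spacing_m * math.sin(theta) / SPEED_OF_SOUND
            for n in range(num_drivers)]

# Seven tweeters 5 cm apart (made-up geometry), steered 30 degrees:
for n, d in enumerate(steering_delays(0.05, 7, 30.0)):
    print(f"driver {n}: {d * 1e6:.1f} us")
```

The delays are tiny (tens of microseconds between adjacent drivers), which is why this has to be done digitally rather than mechanically.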
When Apple says beamforming, it's not a marketing term. The HomePod has the proper hardware and layout to enable beamforming. All the rest (Sonos, Google, Amazon) don't.
Now, having said all that, I personally expect the Google Home Max to be noticeably bass-heavy, just as the original 2016 Google Home was. I'm not a fan of boom-sound myself and I've no doubt many others would agree. Fortunately for those owners there is now an included equalizer. It's certainly possible the Home Max may suffer some muddiness.
Apple will of course have a well-thought-out HomePod with very good sound, well matched to other Apple products. It may well bring in significantly more revenue than other mid-range smart speakers, which at the end of the day is what this is all about: more profits.
The Sonos One with Alexa (other voice assistant support coming soon) is what I would expect to have the better features for most folks, and at least comparable overall sound to the HomePod if not a bit fuller, particularly at a price that significantly undercuts both Apple and Google. That's before the expected Play:5 replacement next year, which will likely support Amazon Alexa, Google Assistant and Apple's Siri at launch.
Another plus for some HomePod competitors is the much better cross-platform and third-party support: services like Spotify, Pandora, iHeartRadio, Amazon Music, TuneIn and others that won't be supported by Apple. IMHO, if you're not already deep into the Apple ecosystem there are better-featured products for you than the HomePod, but for dedicated Apple fans the HomePod might be (and probably will be) the best choice among the three, as long as your music comes only from Apple Music/iTunes or your personal library.
And again, this is just personal opinion, but at some point Apple's HomePod will have to support some third-party streaming. As is, it's too limited IMO, assuming of course that the product they eventually ship is the same as the one they demo'd and spec'd back in the summer.
Everything you said with regard to phase and beamforming is, again, completely false.
My only problem is trying to explain complex ideas so the layperson can understand. Your comments about phase make this abundantly clear.
It baffles me how you're rude and insulting to almost everyone you disagree with. You'll catch more flies with honey than with vinegar.
False. Smart Sound is Google's brand for adjusting volume to ambient sound. It and Sonos Trueplay adapt sound to the room, but neither is on the same level as what Apple is doing with its A8-powered hardware. But really, if Google could sell hardware it would be, wouldn't it? Or is it just "showing other companies how to do things" again? Really impressive how you run those goalposts around.

The Google Home Max includes a new feature dubbed Smart Sound, which taps into Google's machine learning expertise to tailor the audio experience to your tastes, and even your physical surroundings. Smart Sound will dynamically adjust the Google Home Max's audio output based on its physical surroundings; if you move it from a crowded corner spot to an open kitchen counter, the speaker will automatically change how it sounds for optimal audio quality. Over time, Google's AI will learn context about your home and adjust sound based on that, too. Examples given onstage included raising the volume if the Google Home Max detects your dishwasher running in the background, or lowering it during those bleary morning hours. Google claims the voice recognition will discern the different people talking to it, and thus create custom playlists tailored to particular music tastes.
Do you mind clarifying what Apple is doing different and/or better?
VRing said:
muthuk_vanalingam said:
lkrupp said:
gatorguy said:
They'll spin it into "it doesn't matter". As the AI author alluded to, both phones are so capable no one will notice in actual use.

The "slightly ahead" is in fact a statistical tie considering sampling; I just hate false facts. Those few benchmarks rely on cores, and the Note still has more than the X. In fact, these benchmarks are probably the least relevant of all to an actual user.
3DMark uses the same physics engine as popular games, so it's actually quite relevant to real world performance.
Futuremark (the makers of the 3DMark benchmark) list the following scores:
- Sling Shot Extreme: 3602
- Sling Shot Extreme Unlimited: 3975
- Sling Shot Extreme: 3595
- Sling Shot Extreme Unlimited: 3969
- Sling Shot Extreme: 3577
- Sling Shot Extreme Unlimited: Not Tested
- Sling Shot Extreme: Not Tested
- Sling Shot Extreme Unlimited: 4068
As a number of sites use the "Unlimited" benchmark, I also posted those results to show consistency with the results on Futuremark's official page. Additionally, here are some numbers to compare for the iPhone X in this benchmark.
Futuremark's iPhone X scores:
- Sling Shot Extreme: 2681
- Sling Shot Extreme Unlimited: 3175
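For a quick sense of scale, here is the relative gap between the first Sling Shot Extreme figure listed above and Futuremark's iPhone X figure (just arithmetic on the quoted numbers, nothing more):

```python
def pct_higher(a, b):
    """How much higher score a is than score b, as a percentage."""
    return (a - b) / b * 100.0

# 3602 (first Sling Shot Extreme score above) vs 2681 (iPhone X):
print(round(pct_higher(3602, 2681), 1))  # 34.4 (% higher)
```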
I'm curious to see what the long-term performance is. In other words, does it heavily throttle after a couple of minutes of use? Would the benchmark scores change drastically on the 2nd or 3rd run? It's hard to say, as some benchmarks, such as Geekbench, pause between runs to prevent thermal throttling. Are the numbers the Galaxy and iPhone are putting out even sustainable beyond a couple of minutes of real-world use?
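One simple way to probe sustained performance (a generic sketch, not any particular benchmark's methodology) is to time an identical workload back to back and watch whether the later passes slow down:

```python
import time

def workload():
    """Stand-in for one benchmark pass: a fixed CPU-bound loop."""
    total = 0
    for i in range(500_000):
        total += i * i
    return total

def sustained_runs(passes=5):
    """Time the same workload repeatedly. On a device that thermally
    throttles, the later passes take noticeably longer than the first."""
    times = []
    for _ in range(passes):
        start = time.perf_counter()
        workload()
        times.append(time.perf_counter() - start)
    return times

times = sustained_runs()
print(f"last pass / first pass: {times[-1] / times[0]:.2f}x")
```

A ratio well above 1.0x across runs is the signature of throttling; benchmarks that insert cooldown pauses between runs hide exactly this effect.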
chasm said:
Apple’s use of Metal (which nearly all developers are going to take advantage of, giving Apple a significant win now and going forward).

I think you're forgetting about Vulkan, which launched a year or two back on Android. Most major engines already support it (Unreal, Unity, CryEngine, Source 2, etc.).
sflocal said:
It's an absolute embarrassment that Samsung with more cores does much worse than an iPhone with what is technically a "slower" chip.

Since when did having more cores make something "technically" faster? Your logic doesn't make any sense.