Apple's Siri integrates sound clips including animals, musical instruments and vehicles
Apple on Monday introduced a new Siri feature, released in tandem with iOS 14.3, that enables the virtual assistant to play back short audio clips when queried about specific sounds.

Spotted by CNBC, the new capability is somewhat limited, though that could change if Apple continues to build out Siri's sound library. Apple says there are hundreds of options, but finding what audio is supported boils down to trial and error.
As of this writing, Siri can reproduce animal noises, sounds from musical instruments, and audio recordings of vehicles. The publication found these questions trigger the new audio clip feature:
- "Hey Siri, what does a humpback whale sound like?"
- "Hey Siri, what does a toy poodle sound like?"
- "Hey Siri, what does a harp sound like?"
- "Hey Siri, what do firetrucks sound like?"
The feature appears to be an extension of Siri's Knowledge base and is presented as "Knowledge" in iOS 14.3. Queries bring up a pop-up pane containing a brief Wikipedia summary of the requested item alongside an audio clip that plays automatically. Users can listen to the audio again by tapping the play icon.
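For a rough sense of that interaction pattern, here is a minimal Swift sketch, purely for illustration: a lookup yields a name, a one-line summary, and a clip that plays as soon as the pane appears, with a tap on the play icon replaying it. The `SoundEntry` and `SoundClipPlayer` types are hypothetical; `AVAudioPlayer` is the only real API used, and none of this is Apple's actual implementation.

```swift
import AVFoundation

// Hypothetical model of a Knowledge result: a name, a short
// Wikipedia-style summary, and a local or cached audio clip.
struct SoundEntry {
    let name: String
    let summary: String
    let clipURL: URL
}

final class SoundClipPlayer {
    private var player: AVAudioPlayer?

    // Showing an entry surfaces the summary and auto-plays the clip,
    // mirroring the pane's behavior as described above.
    func present(_ entry: SoundEntry) {
        print("\(entry.name): \(entry.summary)")
        play(entry)
    }

    // Tapping the play icon replays the same clip.
    func play(_ entry: SoundEntry) {
        player = try? AVAudioPlayer(contentsOf: entry.clipURL)
        player?.play()
    }
}
```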

Comments
Siri has terrible understanding, just processing each word at a very basic level rather than the sentence as a whole. Asking it, for example, to "Add chicken and mushroom pies to my shopping list" usually (but not always) results in "OK, I've added those two things". But then when I want it to add more than one separate item, for example "add chicken, mushrooms and milk to my shopping list", it adds them as one item. Its understanding really has been in alpha since day one. Considering Apple's smarts in AI elsewhere, it's weird that Siri is the worst performer of the class. Another example: today I tried to get my HomePod to FaceTime Audio with my dad, and it was insistent it should play some obscure song instead. Four attempts and it finally called him. Why?!
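To see why those two requests are genuinely hard to tell apart, consider a toy splitter that treats every comma and "and" as an item boundary. This is emphatically not Siri's actual parser, just a sketch of the ambiguity: surface punctuation alone cannot distinguish "mushroom pies" (one product) from "mushrooms and milk" (two products).

```swift
import Foundation

// Toy model only: split a spoken phrase into list items on commas
// and the word "and". Not how Siri actually parses requests.
func naiveItems(from phrase: String) -> [String] {
    phrase
        .replacingOccurrences(of: ", ", with: "|")
        .replacingOccurrences(of: " and ", with: "|")
        .split(separator: "|")
        .map { $0.trimmingCharacters(in: .whitespaces) }
}

print(naiveItems(from: "chicken and mushroom pies"))
// ["chicken", "mushroom pies"] -- one product wrongly split in two
print(naiveItems(from: "chicken, mushrooms and milk"))
// ["chicken", "mushrooms", "milk"] -- right here, but only by luck
```

Siri apparently errs in the opposite direction, keeping the whole phrase as one item; either way, getting both phrases right takes knowledge of the products themselves, not just the words.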
I've had a fun morning with this. I didn't have a shopping list, so I created one and started playing.
I tried:
"Add chicken and mushroom pies to my shopping list"
and it worked.
So then I tried:
"Add chicken, mushrooms and milk to my shopping list"
and it added it as a single item.
Mmm. Then it occurred to me that the problem might be that I didn't have a list called "my shopping list", so I deleted the entry and tried again with this:
"Add chicken, mushrooms and milk to shopping list"
Note that I left out the "my" from the phrase, because that's not the name of the shopping list.
And it worked, correctly adding 3 separate items to the list.
So I wiped the items and tried again with:
"Add chicken, mushrooms and milk to my shopping list"
And it didn't work. I had a single item on the list.
So I tried again with:
"Add chicken, mushrooms and milk to shopping list"
And it worked! Three items on the list.
So, I wiped and tried again with:
"Add chicken, mushrooms and milk to my shopping list"
I expected it to fail, but it worked. And I haven't managed to get it to fail since. Not only that, but it worked the first time when I tried it on my other devices.
My best guess is that after a couple of tries, it eventually worked out what I was getting at, and then learned to accept both variations of the phrase.
Pity I don't need a shopping list after all that!
My results with Siri have been getting better the more I use it. I suspect the problem might be that Siri learns better from some people than it does from others. Not sure why. It could be that, having worked in software for many years, I have a better intuitive understanding of what's going on behind the scenes. Interesting that I didn't try using "my shopping list" when I knew that wasn't the name of the list, though adding the (needless) extra word is probably standard human behaviour.
Still, Siri learned what was going on after a couple of tries.
Fascinating.
Good thinking.
Mrs Rayz2016 reckons this will be a godsend for homework assignments.
As you say, this is leading to something much bigger.
Let's look at the real reason that Apple coughed up some serious coin for Shazam. As well as being able to identify songs by listening to them, Shazam was also working on a similar visual search technology when Apple bought them out.
Furthermore, a couple of sites have noted an increase in activity from Apple's web crawlers, which they believe is being used to build up Apple's own search index.
So my own guess is that Apple is looking to bring something else in-house: a Siri-based search engine that can work with the camera and the microphone, as well as with text and multimedia elements on screen, and without sending your personal details to folk you don't know.
Maybe that's where Shazam tech is going? Hopefully.
Of course we shouldn't have to say things in a contrived, robotic way; otherwise it's no better than the classic Mac's Speakable Items.