Apple sponsors Dublin conference on speech processing


Apple is a main sponsor for the 24th Interspeech Conference, dedicated to spoken language computing, and will present proposals for activating Siri without a spoken command, plus better recognition for people with severe speech impairments.

Dublin Convention Center (Source: official site)



Even as Apple drops the requirement to say "Hey" before verbally invoking Siri, its Machine Learning researchers are already looking to go further. At the Interspeech Conference, which runs in Dublin from August 20 to August 24, 2023, Apple staff will be taking part in panels including:


  • Trigger-less voice assistants

  • Spatial Audio learning

  • Voice assistants for people with Dysarthric speech



Trigger-less voice assistants sound like the idea behind some Apple patents on having Siri respond when it is looked at. But in its conference proposal, Apple argues that Siri can already respond without a "Siri" or "Hey, Siri" spoken command.

"[For example, smartwatches] have now incorporated trigger-less methods of invoking VAs, such as Raise To Speak (RTS), where the user raises their watch and speaks to VAs without an explicit trigger," says Apple. "Current state-of-the-art RTS systems rely on heuristics and engineered Finite State Machines to fuse gesture and audio data for multimodal decision-making."

"However, these methods have limitations, including limited adaptability, scalability, and induced human biases," continues Apple. "In this work, we propose a neural network based audio-gesture multimodal fusion system that... better understands temporal correlation between audio and gesture data."
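Apple's proposal does not include code, but the quoted description suggests a model that embeds each modality, fuses them over time, and outputs an invoke/don't-invoke decision. The following is a minimal sketch of that general pattern in plain numpy; the function names, feature shapes, and untrained random weights are all hypothetical illustrations, not Apple's system.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_and_score(audio_feats, gesture_feats, W_a, W_g, W_out):
    """Project each modality, fuse per time step, and score.

    audio_feats:   (T, Da) frame-level audio features
    gesture_feats: (T, Dg) frame-level accelerometer/gyro features
    Returns a single invoke/don't-invoke probability.
    """
    h_a = np.tanh(audio_feats @ W_a)        # (T, H) audio embedding
    h_g = np.tanh(gesture_feats @ W_g)      # (T, H) gesture embedding
    h = np.concatenate([h_a, h_g], axis=1)  # (T, 2H) joint representation
    pooled = h.mean(axis=0)                 # average over time
    logit = pooled @ W_out
    return 1.0 / (1.0 + np.exp(-logit))     # sigmoid -> invocation probability

# Toy example with random, untrained weights (illustration only)
T, Da, Dg, H = 50, 16, 6, 8
W_a = rng.normal(size=(Da, H))
W_g = rng.normal(size=(Dg, H))
W_out = rng.normal(size=(2 * H,))
p = fuse_and_score(rng.normal(size=(T, Da)), rng.normal(size=(T, Dg)),
                   W_a, W_g, W_out)
```

The point of learning the fusion jointly, rather than with hand-written heuristics or a finite state machine, is that the model itself can discover how the audio and gesture streams correlate in time.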

Dysarthric speech



Dysarthria is a condition in which the muscles used for speech are weakened by any of a number of underlying conditions, making it significantly harder to speak.

"We propose a query-by-example-based personalized phrase recognition system that is trained using small amounts of speech, is language agnostic, does not assume a traditional pronunciation lexicon, and generalizes well across speech difference severities," says Apple in the proposal that will be presented at the conference.

"On an internal dataset collected from 32 people with dysarthria, this approach works regardless of severity," continues Apple, "and shows a 60% improvement in recall relative to a commercial speech recognition system."
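A query-by-example system like the one Apple describes matches a new utterance against a small set of enrolled examples per phrase, rather than transcribing it against a pronunciation lexicon. Here is a minimal nearest-neighbor sketch of that idea using cosine similarity over embedding vectors; the toy phrases, vectors, and threshold are hypothetical stand-ins for a real speech encoder's output.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def recognize(utterance_emb, enrolled, threshold=0.7):
    """Return the best-matching phrase label, or None if nothing is close enough.

    enrolled: dict mapping phrase label -> list of example embeddings,
    collected from a small amount of the user's own speech.
    """
    best_label, best_score = None, threshold
    for label, examples in enrolled.items():
        for emb in examples:
            score = cosine(utterance_emb, emb)
            if score > best_score:
                best_label, best_score = label, score
    return best_label

# Toy enrollment: each phrase represented by a couple of example vectors
enrolled = {
    "call home": [np.array([1.0, 0.1, 0.0]), np.array([0.9, 0.2, 0.1])],
    "play music": [np.array([0.0, 1.0, 0.1]), np.array([0.1, 0.9, 0.0])],
}
match = recognize(np.array([0.95, 0.15, 0.05]), enrolled)  # near "call home"
no_match = recognize(np.array([0.0, 0.0, 1.0]), enrolled)  # below threshold
```

Because matching is against the user's own recorded examples, the approach needs no pronunciation lexicon and is language agnostic, which is what lets it generalize across speech severities.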

Before the conference



Apple is also one of many companies sponsoring The Young Female Researchers in Speech Workshop, which runs a day before the conference on Saturday, August 19, 2023.

"The workshop aims to promote interest in research in our field among women who have not yet decided to pursue a PhD in speech science or technology," says the official site, "but who have already gained research experience at their universities through individual or group projects."

This workshop will take place at Dublin's prestigious Trinity College. The 24th Interspeech Conference is being held at the Convention Center, Dublin.


FileMakerFeller

Comments

  • Reply 1 of 1
    That part about dysarthric speech is interesting; I'd like to see a larger sample size to be sure it's a meaningful result, but given the possible applications to stroke survivors, I'm excited by this development.