Watch: Rumors of a Siri home speaker heat up as Apple heads toward WWDC

Comments

  • Reply 21 of 29
Marvin Posts: 15,322 moderator
    volcan said:
This doesn't really surprise me -- Google owns a massive, searchable database of web metadata. That's its core business. Siri doesn't sit atop that data.
It's not as if it's an obscure historical data point where Google would have an advantage. This is stuff that was posted last month or last week. Why isn't Siri aware of it?
Siri queries multiple information sources for results, e.g. Yelp for local businesses and ESPN for some sports scores:

    http://appleinsider.com/articles/12/03/26/espn_collaborating_with_apple_appears_ready_to_add_sports_scores_to_siri

Apple would need to partner with a source for local events, and they'd need to protect location info in some cases. They probably try to avoid passing too much info to third-party search engines. The more sources they add, of course, the more Siri has to decide which to use in any given scenario, which can cause its own problems, like numbers being misrouted and sent to Wolfram Alpha as dates for events or as calculations.

In the case of the nearby events above, Siri should have been able to determine that events so far away were not nearby. If it didn't find anything within, say, a 10-mile radius, it should have said it didn't find anything.
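To sketch what that check could look like (a generic haversine distance filter; the 10-mile cutoff is just the example above, nothing Siri actually does):

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_MILES = 3959.0

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in miles."""
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_MILES * asin(sqrt(a))

def nearby_events(events, user_lat, user_lon, radius_miles=10.0):
    """Keep only events inside the radius; an empty result should produce
    an honest "nothing found" reply instead of far-away listings."""
    return [e for e in events
            if haversine_miles(user_lat, user_lon, e["lat"], e["lon"]) <= radius_miles]
```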
Siri has issues that need to be addressed before it's ready to launch as a separate product category.
    Fixing nonsensical responses would be an improvement. That's highlighted in the following video:

[embedded video]
    "Siri, how big is the Serengeti"
    "No problem, show me pictures of Spaghetti"
    "How big is the Serengeti!"
    "More pictures of spaghetti, more pictures of spaghetti!"
    "Sorry I don't see spaghetti in your contacts"

There should be a way to determine whether a reply is nonsensical based on the phrases that are typically used. If Siri constructs a response that isn't commonly used, e.g. 'spaghetti' in contacts, it could use a weighting system to cull responses that fall below a given threshold. Locally held information would help too: if someone is in the kitchen with the phone, certain questions are more likely to be asked than in the car. It's less likely that someone would ask for recipes in the car, so if there's confusion over a term, Siri can use that extra info to help. It can also use info entered elsewhere on the device, like in Safari, and keep all the data local. Maybe Apple can hire some language experts to come up with a weighting system to assess the plausibility that a response is going to be acceptable.
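To make the weighting idea concrete, here's a minimal sketch; every score, prior, and threshold is invented for illustration, not anything Apple has described:

```python
# Each candidate reply gets a plausibility score from (a) how common its
# phrasing is and (b) a prior based on device context; low scorers are culled.

CONTEXT_PRIORS = {
    # P(topic | context): in the kitchen, food queries are more likely
    # than in the car, so mishearings toward food terms get a boost there.
    "kitchen": {"recipes": 0.6, "contacts": 0.2, "geography": 0.2},
    "car":     {"recipes": 0.1, "contacts": 0.5, "geography": 0.4},
}

PHRASE_FREQUENCY = {
    # Stand-in for corpus statistics: how often a reply template actually
    # occurs in real usage. "spaghetti in your contacts" would score ~0.
    "how big is the serengeti": 0.8,
    "show me pictures of spaghetti": 0.3,
    "spaghetti in your contacts": 0.01,
}

def plausibility(reply: str, topic: str, context: str) -> float:
    """Combine phrase frequency with the context prior."""
    freq = PHRASE_FREQUENCY.get(reply.lower(), 0.05)
    prior = CONTEXT_PRIORS.get(context, {}).get(topic, 0.33)
    return freq * prior

def cull(candidates: list[tuple[str, str]], context: str,
         threshold: float = 0.05) -> list[str]:
    """Drop candidate replies whose weighted score falls below threshold."""
    kept = [reply for reply, topic in candidates
            if plausibility(reply, topic, context) >= threshold]
    # If everything is culled, admit uncertainty instead of guessing.
    return kept or ["Sorry, I didn't understand that."]

candidates = [
    ("how big is the serengeti", "geography"),
    ("spaghetti in your contacts", "contacts"),
]
print(cull(candidates, context="kitchen"))  # the spaghetti reply is culled
```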
  • Reply 22 of 29
Soli Posts: 10,035 member
    Marvin said:
    "Siri, how big is the Serengeti"
    "No problem, show me pictures of Spaghetti"
    "How big is the Serengeti!"
    "More pictures of spaghetti, more pictures of spaghetti!"
    "Sorry I don't see spaghetti in your contacts"

    There should be a way to determine if a reply is nonsensical based on typical phrases that are used. If Siri constructs a response that isn't commonly used e.g spaghetti in contacts, it can have a weighting system to cull responses that fall below a given threshold. Locally held information would help too like if someone is in the kitchen with the phone, it would be more likely that certain questions would be asked compared to being in the car. It's less likely that someone would ask for recipes in the car so if there's confusion over a term, it can use that extra info to help. It can use info entered elsewhere on the device like in Safari and keep all the data locally. Maybe Apple can hire some language experts to come up with a weighting system to assess the plausibility that a response is going to be acceptable.
This is one of the great things Amazon did out of the gate with Alexa. You can see and hear every request you've made using the app or web portal, along with Alexa's reply. You can then tell Amazon whether the reply was correct or incorrect, and even comment on it.
  • Reply 23 of 29
k2kw Posts: 2,075 member
    Marvin said:
[…]
    Heard about this CNBC report on the local radio.  
    http://www.cnbc.com/2017/05/20/siri-vs-google-assistant-on-iphone.html

Interesting comparison between Siri and Google Assistant using the same hardware.
  • Reply 24 of 29
Soli Posts: 10,035 member
    k2kw said:
Marvin said:
[…]
    Heard about this CNBC report on the local radio.  
    http://www.cnbc.com/2017/05/20/siri-vs-google-assistant-on-iphone.html

Interesting comparison between Siri and Google Assistant using the same hardware.
I haven't analyzed all the results, but this one stuck out as misleading. Depending on where they get the data, the value could be off; a lot of factors can make whatever database they source inaccurate. This is likely not a fail by either company.
    "What's Tesla's market cap?"
    Winner: Double fail
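For context, market cap is just share price × shares outstanding, so two providers with slightly different quotes or share counts will disagree. A toy illustration (the figures are hypothetical, only roughly in the ballpark of Tesla in May 2017):

```python
# Illustrative only: market cap = share price x shares outstanding.
# Both inputs vary by provider (delayed quotes, basic vs. diluted share
# counts, stale filings), so two "correct" answers can differ by billions.
def market_cap(price: float, shares_outstanding: float) -> float:
    return price * shares_outstanding

provider_a = market_cap(price=310.00, shares_outstanding=164_000_000)
provider_b = market_cap(price=302.50, shares_outstanding=161_000_000)
print(f"Spread from data sourcing alone: ${abs(provider_a - provider_b) / 1e9:.1f}B")
```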


  • Reply 25 of 29
    Marvin said:
[…] Locally held information would help too: if someone is in the kitchen with the phone, certain questions are more likely to be asked than in the car. […]
Context is key. Oddly enough, I don't remember Siri being this bad when it launched on the iPhone 4S and received improvements on the 5, 5c, and 5s. Perhaps with all the fuss surrounding voice assistants these days, it will get the fixes it needs to stay relevant.
  • Reply 26 of 29
Marvin Posts: 15,322 moderator
    Soli said:
This is one of the great things Amazon did out of the gate with Alexa. You can see and hear every request you've made using the app or web portal, along with Alexa's reply. You can then tell Amazon whether the reply was correct or incorrect, and even comment on it.
Apple has a similar reporting feature in Maps for incorrect data, so it makes sense to have something like it for Siri. If someone manually flags an incorrect response, Apple can have people listen to the audio sample, determine what it should have been interpreted as, and then figure out how to adjust Siri to produce that output. They might notice a lot of complaints from certain regions whose accents Siri isn't picking up properly. The test AppleInsider did in an earlier article shows a very obvious incorrect interpretation:

    http://appleinsider.com/articles/17/05/18/google-assistant-arrives-on-the-iphone-but-isnt-going-to-replace-siri-soon
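To sketch what a flag-for-review record could look like (every field here is hypothetical; this isn't Apple's actual pipeline):

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class SiriFeedbackReport:
    """One user-flagged misinterpretation queued for human review,
    analogous to the "report a problem" flow in Maps."""
    audio_sample_id: str    # reference to the stored utterance
    heard: str              # what Siri transcribed
    user_correction: str    # what the user says they actually said
    region: str             # for spotting accent-specific clusters
    flagged_at: datetime = field(default_factory=datetime.now)

report = SiriFeedbackReport(
    audio_sample_id="utt-0001",
    heard="Los Angeles class faster text Mikey",
    user_correction="Los Angeles class fast-attack submarine",
    region="en-GB",
)
```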

[screenshot: Siri's transcription beside Google Assistant's results]
Siri picked up "Los Angeles class faster text Mikey" instead of "Los Angeles class fast-attack submarine". The sentence doesn't make any sense at all. Google may have picked it up wrongly too if the audio was bad, but their search engine has the "Did you mean" feature, which searches for what it thinks you meant and gets it right most of the time (the video above parodies this with the Surfbort query vs. surfboard). Bing has done the same in the image, as the results don't match the query, and if Google had been the selected search engine, it might have come up with the same results as Google Assistant on the right. But Siri should have done that correction first, because it could mean the difference between sending a query to a search engine or to a more specialized source of information.
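A crude sketch of that kind of pre-search correction, matching a raw transcript against a toy phrase list (the vocabulary and cutoff are invented):

```python
import difflib

# Toy vocabulary standing in for a query log / entity index.
KNOWN_PHRASES = [
    "los angeles class fast-attack submarine",
    "los angeles lakers",
    "surfboard",
]

def did_you_mean(transcript, cutoff=0.6):
    """Return the closest known phrase when the raw transcript is a near
    miss, mirroring a search engine's "Did you mean" correction."""
    matches = difflib.get_close_matches(transcript.lower(), KNOWN_PHRASES,
                                        n=1, cutoff=cutoff)
    return matches[0] if matches else None

print(did_you_mean("Los Angeles class faster text Mikey"))
# -> "los angeles class fast-attack submarine"
```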

Fast edits would help too: rather than having to delete letter by letter, you could just swipe the incorrect words and repeat them more clearly, so it knows what was intended in that portion of the phrase.
Context is key. Oddly enough, I don't remember Siri being this bad when it launched on the iPhone 4S and received improvements on the 5, 5c, and 5s. Perhaps with all the fuss surrounding voice assistants these days, it will get the fixes it needs to stay relevant.
Personal training could be at fault in some cases with Siri. Over time it adjusts to the user, but it may pick up bad habits in the process. It can be reset:

    https://www.macobserver.com/tmo/article/ios-deleting-siris-training-data

That could make things worse for some people, as years of training would be removed, but if it's bad data, it's making Siri perform worse than starting over would.
  • Reply 27 of 29
tallest skil Posts: 43,388 member
    Speaking of improving search functionality, Spotlight can’t search inside RTFD documents. I mean, what the fucking hell is THAT about?!
  • Reply 28 of 29
    Marvin said:
Personal training could be at fault in some cases with Siri. Over time it adjusts to the user, but it may pick up bad habits in the process. It can be reset:

    https://www.macobserver.com/tmo/article/ios-deleting-siris-training-data

That could make things worse for some people, as years of training would be removed, but if it's bad data, it's making Siri perform worse than starting over would.
    I've never heard of this. This is pretty informative. Thanks for letting me know!
  • Reply 29 of 29
MacPro Posts: 19,727 member
    larrya said:
    mkrewson said:
    I'm sorry, I stopped watching when he mispronounced iOS... 😳
    Yeah, what the hell. Who says that?
    LOL - to be fair, we never pronounced DOS as D-OS
No, in the UK we pronounced it D-R-OSS ;)