Former Apple employee claims Steve Jobs would have 'lost his mind' over Siri


Comments

  • Reply 121 of 189
    tallest skiltallest skil Posts: 43,388member

    Quote:

    Originally Posted by ascii View Post

    I would have delayed the release of Siri until it could run purely client side.


    You'd have had them hold off until 2018?

  • Reply 122 of 189
    irelandireland Posts: 17,799member

    Quote:

    Originally Posted by tikiman View Post


    I guess there's little point in mentioning that 95% of the time Siri works just fine for me...



    For you.

  • Reply 123 of 189
    ani4aniani4ani Posts: 4member


    So for those who have stopped using Siri, the reason for the 4S is?

  • Reply 124 of 189
    asciiascii Posts: 5,936member

    Quote:

    Originally Posted by Tallest Skil View Post


    You'd have had them hold off until 2018?



    I believe it already does the speech recog on the client side, that is surely the most CPU intensive part? Extracting the meaning from the words would be more data intensive, but would that DB be more than a few gigs? I am not so sure client side would be 4 years away.


    Of course if the specific query requires network access to answer (e.g. weather or map server) then there's no way to avoid the network, and perhaps that was Apple's thinking, that most queries would be like that, so might as well shift some of the processing to server side too. Or perhaps with such a new technology they wanted the freedom to make quick changes for the early versions, and once the kinks are ironed out they will go full client side. But that doesn't change the fact that delays are more noticeable with speech interfaces than GUIs. Then again they did call it a beta.
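
    Just to make the idea concrete, here's a rough sketch of that kind of split (entirely hypothetical, not how Apple actually routes anything; the intent names are made up):

    ```swift
    // Hypothetical routing: answer locally what we can, and only hit the network
    // for queries that need live data (weather, maps) anyway.
    enum QueryRoute {
        case onDevice   // e.g. "set a timer", "remind me in an hour"
        case server     // e.g. "what's the weather", "find a pizza place nearby"
    }

    func route(intent: String) -> QueryRoute {
        // Assumed set of intents that inherently need a network service to answer.
        let networkBound: Set<String> = ["weather", "maps", "web_search", "sports"]
        return networkBound.contains(intent) ? .server : .onDevice
    }

    print(route(intent: "weather"))    // server: no way to avoid the network
    print(route(intent: "set_alarm"))  // onDevice: could be handled client side
    ```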

  • Reply 125 of 189
    jragostajragosta Posts: 10,473member
    ireland wrote: »
    For you.

    And for lots of people.

    Why don't you show the percentage of people who are having problems with Siri?

    No one said it was perfect. But all the people whining that it's 'defective' need to show that it fails a very large percentage of the time - and no one has done that.
  • Reply 126 of 189
    tallest skiltallest skil Posts: 43,388member

    Quote:

    Originally Posted by ani4ani View Post

    So for those who have stopped using Siri, the reason for the 4S is?


    Are you insinuating that a phone with a brand new processor, brand new graphics, brand new cellular telephony, brand new microphone hardware, and brand new antennas has ABSOLUTELY NOTHING to offer but STT/TTS software?

  • Reply 127 of 189
    rogifanrogifan Posts: 10,669member
    charlituna wrote: »
    Indeed. Almost everything folks are saying he wouldn't have done was started in the thick of his time at Apple. Things they said he'd freak about were his pet projects.
    Steve started the whole Siri thing. And probably oversaw every step, because when he trusted MobileMe to the team, they failed. He understood that you can't train a voice recognition system without billions of samples of all kinds of voices. That's why they need users. But on the flip side, that also means issues with overloaded servers etc. Same as you can be standing basically under a cell tower with five glowing bars, but if that tower can only handle 1000 calls at a time and you are 1001, you won't be able to make a call. As the man once said, you can't change the laws of gravity, even if you are Apple. But unlike these naysayers, Apple and Steve understand such things.
    Yeah, I'm getting so sick of the "Steve would never have let this happen" or "if Steve were around, Apple wouldn't be doing X" stuff. There are things Steve got credit for that probably weren't his idea, and screw-ups that happened on his watch that people now seem to forget about. The man was amazing, a genius. But he wasn't perfect - no one is.
  • Reply 128 of 189
    desuserigndesuserign Posts: 1,316member

    Quote:

    Originally Posted by ascii View Post


    I believe it already does the speech recog on the client side, that is surely the most CPU intensive part? Extracting the meaning from the words would be more data intensive, but would that DB be more than a few gigs? I am not so sure client side would be 4 years away.


    Where did you read that?


    My understanding is that there is some pre-processing done on the client side (on the 4S's specifically provisioned processor) so that the data sent to Apple's servers is smaller and more manageable. But it was my understanding that most of the actual analysis was done at the server (since it's beta, it can be optimized as it improves). Do you know for sure?

  • Reply 129 of 189
    I was able to get Siri to hear "remind me to put the gazpacho on ice in an hour" after the third attempt.
  • Reply 130 of 189
    solipsismxsolipsismx Posts: 19,566member
    desuserign wrote: »
    Where did you read that?
    My understanding is that there is some pre-processing done on the client side (on the 4S's specifically provisioned processor) so that the data sent to Apple's servers is smaller and more manageable. But it was my understanding that most of the actual analysis was done at the server (since it's beta, it can be optimized as it improves). Do you know for sure?

    The only tech in the 4S that relates to Siri, that I'm aware of, is from EarSmart. It's noise cancellation tech.

    Remember that Siri needs to convert speech to text with first-tier contextual comprehension (e.g. knowing if you mean to, two, or too) via their Dragon Dictation license that runs on their servers before the second-tier contextual comprehension in the Siri "brain" can process the text and determine what course of action it's to take based on that text.
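
    Roughly, that two-tier flow could be sketched like this (the function names and logic here are mine, purely illustrative; they're not Apple's or Nuance's actual API):

    ```swift
    import Foundation

    // Tier 1: speech-to-text. In Siri's case this step runs on Apple's servers
    // (the Dragon Dictation license mentioned above); this stub just stands in for it.
    func transcribe(_ audio: Data) -> String {
        return "remind me to put the gazpacho on ice in an hour"
    }

    // Tier 2: the Siri "brain" takes the transcript and decides what action to take.
    struct Intent {
        let action: String
        let detail: String
    }

    func parse(_ transcript: String) -> Intent {
        let prefix = "remind me to "
        if transcript.hasPrefix(prefix) {
            return Intent(action: "create_reminder",
                          detail: String(transcript.dropFirst(prefix.count)))
        }
        return Intent(action: "web_search", detail: transcript)
    }

    let intent = parse(transcribe(Data()))
    print(intent.action, intent.detail)   // create_reminder put the gazpacho on ice in an hour
    ```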

    If you have no internet connection you can't even place a call by voice or ask what song is playing, which are functions that previous iPhones handled locally. You can enable that base functionality on the 4S, but you will have to completely disable Siri to get it to kick in.

    Unfortunately their system, for better or worse, is not able to make first-tier contextual determinations that would either process the request locally or send it to Apple's servers for analysis. As much as I would like this to happen, I see issues with it, so I'm not expecting it to occur in the future.



    PS: One thing I'd like Apple to work with linguists to implement is a speech matrix that has the user read a carefully worded paragraph, to let the user's Siri profile get a leg up on the way they likely pronounce various words.

    I'd also like Siri to use the SOUND field option in vCard. No, I don't mean the phonetic name (X-PHONETIC-FIRST-NAME, X-PHONETIC-LAST-NAME) option that allows a user to type a phonetic spelling of a name, which is useful for various languages. This feature would show up when you press Edit and would simply record the name of your contact as you say it and then send that waveform to your user file on Siri's servers.
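
    For what it's worth, vCard 3.0 (RFC 2426) already defines a SOUND property that can carry inline base64 audio or a URI, alongside the X-PHONETIC-* fields. A made-up card just to show the shape (the name and the truncated audio data are purely illustrative):

    ```
    BEGIN:VCARD
    VERSION:3.0
    N:Nguyen;Thu;;;
    FN:Thu Nguyen
    X-PHONETIC-FIRST-NAME:TOO
    X-PHONETIC-LAST-NAME:win
    SOUND;TYPE=BASIC;ENCODING=b:UklGRiQAAABXQVZF...
    END:VCARD
    ```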
  • Reply 131 of 189
    asciiascii Posts: 5,936member

    Quote:

    Originally Posted by DESuserIGN View Post


    Where did you read that?


    My understanding is that there is some pre-processing done on the client side (on the 4S's specifically provisioned processor) so that the data sent to Apple's servers is smaller and more manageable. But it was my understanding that most of the actual analysis was done at the server (since it's beta, it can be optimized as it improves). Do you know for sure?



    You are right, the audio is simply compressed with Speex on the phone and then sent to the server which does everything. Sorry, I could have sworn I read somewhere that the speech is processed on the device.

  • Reply 132 of 189
    rigelianrigelian Posts: 44member


    Seriously.  Those hour reminders are important for me.  When I'm working and get completely locked into what I'm doing, getting a reminder to go to a meeting is absolutely useful.  My sense of time literally stops when I'm in a working groove.

  • Reply 133 of 189
    tallest skiltallest skil Posts: 43,388member

    Quote:

    Originally Posted by ascii View Post

    You are right, the audio is simply compressed with Speex on the phone and then sent to the server which does everything. Sorry, I could have sworn I read somewhere that the speech is processed on the device.


    The big question… and we can't know unless Apple does it… is Siri on OS X.


    If they drop a bombshell this WWDC and give us Siri in Mountain Lion, we'll be able to see if the processing is done entirely on the computer and stuff is just sent to the servers to get proper… whatever done on the back end.


    If that's the case, then that will give us a metric for when iOS Siri on-device processing will be feasible, as well as when it won't require an Internet connection at all.

  • Reply 134 of 189
    lamewinglamewing Posts: 742member

    Quote:

    Originally Posted by tikiman View Post


    I guess there's little point in mentioning that 95% of the time Siri works just fine for me, but I don't ask it to find "funky" names of locations, songs, etc. that I know it would be hard to understand. I must be in the minority. 



    It is rather embarrassing when Siri cannot do exactly what is shown in Apple's TV advertisements.

  • Reply 135 of 189
    lamewinglamewing Posts: 742member

    Quote:

    Originally Posted by tokenuser View Post


    Siri is beta? How long was GMail "beta"? (Don't bother asking Siri - I'll give you the answer ... over 5 years).


    Beta doesn't mean bad. It just means not final.


    If it was an Alpha release ... then there'd be issues.


    Apple and its supporters have made derogatory comments about Google providing beta software, so why is it okay for Apple to do the very same thing?

  • Reply 136 of 189
    solipsismxsolipsismx Posts: 19,566member
    rigelian wrote: »
    Seriously.  Those hour reminders are important for me.  When I'm working and get completely locked into what I'm doing, getting a reminder to go to a meeting is absolutely useful.  My sense of time literally stops when I'm in a working groove.
    Before Siri that capability was there, but it was too inconvenient to use despite the ability to set it up in under 30 seconds. That doesn't seem like a big deal, but with Siri, as you know, you can do it while walking down a hallway, without looking up except to verify the request. For something that was already simple, Siri has removed a barrier, making it considerably more convenient.

    The big question… and we can't know unless Apple does it… is Siri on OS X.

    If they drop a bombshell this WWDC and give us Siri in Mountain Lion, we'll be able to see if the processing is done entirely on the computer and stuff is just sent to the servers to get proper… whatever done on the back end.

    If that's the case, then that will give us a metric for when iOS Siri on-device processing will be feasible, as well as when it won't require an Internet connection at all.
    A few avenues to look for. Will Apple add Siri's Dragon Dictation backend access to Mountain Lion? If so, will it be available to all machines running ML or just certain machines, like new Macs that have the EarSmart (or some other HW) tech in them? If the latter, have these machines been silently updated with the needed HW for this service?

    I'm not so sure Siri will come to ML if they haven't added it to the iPad (3). I'm only expecting the voice dictation for ML and that it will come to all Macs that can run ML.

    Whilst all Macs have substantially more processing power than an iDevice, I don't think it's enough to properly run Apple's backend. Plus, we're not talking about all of Siri, just the Dragon Dictation speech-to-text processing.

    On top of that, the EarSmart HW isn't needed, as a 'PC' isn't likely to face the same interference as a smartphone. I can think of a dozen very valid scenarios where background noise would be an issue for a Mac, but I don't think the need is the same, so I'm guessing it likely won't have that HW even though I hope it does.
  • Reply 137 of 189
    tallest skiltallest skil Posts: 43,388member

    Quote:

    Originally Posted by SolipsismX View Post

    I'm not so sure Siri will come to ML if they haven't added it to the iPad (3). I'm only expecting the voice dictation for ML and that it will come to all Macs that can run ML.


    Computers are more stable. Desktops in particular, but also laptops.


    The iPhone gets Siri because it's forced to have data everywhere. OS X could get Siri because it's assumed to have Internet everywhere, either as an immobile desktop or as a laptop opened at home, at work, or in between.


    The iPad isn't guaranteed Internet access, but it's as mobile as the iPhone. People don't crack their MacBooks open in the middle of a store and try to balance and use them, but they would with an iPad.


    It's terrible, but until we can get full system-side processing, I think the iPad will be left high and dry. It could be full system-side on the Mac, full server-side on the iPhone… and nothing on the iPad.

  • Reply 138 of 189
    povilaspovilas Posts: 473member


    Siri is basically a bunch of "if this, then that, or else that" rules. The more you use it, the better it becomes, at least in theory. So unless we get some real AI, this is the best you can get right now on any mobile OS. No one said it’s perfection, but it’s a step forward from what you had before Siri.
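
    In its crudest form that kind of "if this then that" lookup is just a pattern table; a toy sketch (nothing like the real thing, obviously):

    ```swift
    import Foundation

    // Toy "if this then that" matcher: the first rule whose pattern appears in the
    // utterance wins, otherwise fall back to a web search.
    let rules: [(pattern: String, action: String)] = [
        ("remind me", "create_reminder"),
        ("weather", "show_forecast"),
        ("call", "place_call"),
    ]

    func respond(to utterance: String) -> String {
        let lowered = utterance.lowercased()
        for rule in rules where lowered.contains(rule.pattern) {
            return rule.action
        }
        return "web_search"
    }

    print(respond(to: "Remind me to put the gazpacho on ice"))  // create_reminder
    ```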

  • Reply 139 of 189
    povilaspovilas Posts: 473member

    Quote:

    Originally Posted by Ireland View Post


    For you.



    You don’t expect it to understand micks, do you? Because they don’t understand each other.

  • Reply 140 of 189
    drdoppiodrdoppio Posts: 1,132member

    Quote:

    Originally Posted by Povilas View Post


    You don’t expect it to understand micks, do you? Because they don’t understand each other.


    That's not a very nice thing to say, and is incorrect as far as I have witnessed... but then again, there's this:

