Apple's first beta of iOS 6.1.1 aims to strengthen Maps in Japan

Comments

  • Reply 21 of 29
vorsos Posts: 302 member


    Either way, it's being synthesized from instructions on the device, not over the network. High quality voices in OS X are around 1GB each.
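
    As a rough sanity check on that ~1GB figure, here is a back-of-envelope sketch (all numbers are assumptions for illustration, not Apple's actual voice format): a concatenative voice built from a few hours of uncompressed studio speech lands in that ballpark.

    ```python
    # Back-of-envelope estimate of a unit-selection voice's audio footprint.
    # All numbers here are assumptions for illustration, not Apple's specs.
    HOURS_RECORDED = 6          # assumed hours of studio speech in the voice database
    SAMPLE_RATE_HZ = 22_050     # a common speech-synthesis sample rate
    BYTES_PER_SAMPLE = 2        # 16-bit mono PCM

    total_bytes = HOURS_RECORDED * 3600 * SAMPLE_RATE_HZ * BYTES_PER_SAMPLE
    print(f"{total_bytes / 1e9:.2f} GB")  # prints 0.95 GB -- the scale mentioned above
    ```

    Compression and the unit-selection index change the exact number, but the order of magnitude is set by how much recorded audio the voice carries.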

  • Reply 22 of 29
    vorsos wrote: »
    Either way, it's being synthesized from instructions on the device, not over the network. High quality voices in OS X are around 1GB each.

    Not on my OS X:
    [screenshot of voice sizes]
  • Reply 23 of 29

    Quote:

    Originally Posted by jragosta View Post

    Quote:

    Originally Posted by SolipsismX View Post

    I might have to revise my previous statement about Apple Maps being the best app and the server-side data being where it's lacking, as I'd have thought many of those items could have been updated on the server at any time.

    It sounds like a combination of things. I think you're right that the better data could have been fixed from the server side. OTOH, one of the things they mention is better pronunciation, which would probably require an iOS update.

    I just started playing with the new (beta) Maps release on my iPad 3 -- my iPad 4 has the current released version. It is odd that there are other places with better maps (including flyovers) than just Tokyo.

    AFAIK, there is no compelling reason the current maps could not use the same data as the beta maps. In fact, it would appear that Apple is maintaining at least two sets of [some] data and returning whichever matches the iOS release.

    This is kind of surprising -- there is 3D flyover for a town with a population of 45,882 (2011).

  • Reply 24 of 29
    solipsismx Posts: 19,566 member
    I've just read an entire semester's worth of computational linguistics yet I still can't figure out exactly which speech synthesis type is used in the iPhone.

    Here are some good jumping off points for those interested:


    • http://en.wikipedia.org/wiki/Computational_linguistics
    • http://en.wikipedia.org/wiki/Speech_synthesis
    • http://en.wikipedia.org/wiki/English_phonology#Phonotactics
    • http://en.wikipedia.org/wiki/Phonetics


    One interesting note for those who have seen me request that Siri establish how each user says a phoneme for various parts of speech by having the user read a carefully crafted paragraph into the mic: I found this site (http://www.auburn.edu/academic/education/reading_genie/phoncount.html). Of the ~42 phonemes in the English language, the writer states he says 28 of them in about 3 seconds with this sentence: "He stuck in his thumb, and pulled out a plum."
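
    The counting idea can be sketched in a few lines. This uses a small hand-written ARPAbet dictionary with one plausible General American pronunciation per word; it is a toy illustration, and real counts vary with accent and with the dictionary used.

    ```python
    # Toy phoneme tally for the sentence quoted above, using a small
    # hand-written ARPAbet dictionary (one pronunciation per word).
    ARPABET = {
        "he": ["HH", "IY"], "stuck": ["S", "T", "AH", "K"],
        "in": ["IH", "N"], "his": ["HH", "IH", "Z"],
        "thumb": ["TH", "AH", "M"], "and": ["AE", "N", "D"],
        "pulled": ["P", "UH", "L", "D"], "out": ["AW", "T"],
        "a": ["AH"], "plum": ["P", "L", "AH", "M"],
    }

    sentence = "He stuck in his thumb, and pulled out a plum."
    words = [w.strip(".,").lower() for w in sentence.split()]
    distinct = {p for w in words for p in ARPABET[w]}
    print(len(distinct))  # 17 distinct phonemes with this toy dictionary
    ```

    The gap between 17 here and the writer's 28 shows how much the tally depends on the pronunciations you assume.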

    Now Siri's job would be more difficult because it's more than just getting you to record each phoneme; it's also recording the various differences in how we say different words. One obvious example would be how Americans and the British say the letter 'Z'. Siri needs to know what you mean by what you say. As expected, American English Siri will print out "Zed 10" when I say |zɛd|, but when I switch Siri to British English it will print out "Z 10".
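
    That locale-dependent interpretation amounts to keying the transcription on (locale, spoken token). A minimal sketch, with a hypothetical lookup table rather than Siri's actual implementation:

    ```python
    # Sketch of locale-dependent interpretation: the same spoken token maps
    # to different text depending on the active locale. Hypothetical table,
    # not Siri's actual implementation.
    INTERPRET = {
        ("en_US", "zed"): "Zed",  # American English: "zed" is heard as a word/name
        ("en_GB", "zed"): "Z",    # British English: "zed" is the letter Z
        ("en_US", "zee"): "Z",    # American English name for the letter
    }

    def transcribe(locale: str, token: str) -> str:
        """Return the text for a spoken token, falling back to the token itself."""
        return INTERPRET.get((locale, token), token)

    print(transcribe("en_US", "zed"))  # Zed
    print(transcribe("en_GB", "zed"))  # Z
    ```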
  • Reply 25 of 29

    Quote:

    Originally Posted by SolipsismX View Post

    I've just read an entire semester's worth of computational linguistics yet I still can't figure out exactly which speech synthesis type is used in the iPhone. [...]


    Siri:  "Sol... Now, what a good boy are you!"

  • Reply 26 of 29
    philboogie Posts: 7,438 member
    solipsismx wrote: »
    One obvious example would be how Americans and the British say the letter 'Z'

    [Video]
  • Reply 27 of 29
    vorsos Posts: 302 member

    Quote:

    Originally Posted by PhilBoogie View Post

    Quote:

    Originally Posted by Vorsos View Post

    Either way, it's being synthesized from instructions on the device, not over the network. High quality voices in OS X are around 1GB each.

    Not on my OS X


    Those are stock voices. Alex is larger; IIRC it was new in OS X 10.5. But there have been higher-quality voices released since.

  • Reply 28 of 29
    philboogie Posts: 7,438 member
    vorsos wrote: »
    Those are stock voices. Alex is larger since IIRC it was new for OS X 10.5. But there have been higher quality voices released since.

    Wow. I could drink a case of Korsakov and wouldn't remain on my feet; I totally forgot about that, even though it's only 6-month-old news. Thanks for replying, thanks for the link.
  • Reply 29 of 29
    It doesn't look like this was part of the most recent update. Can anyone confirm one way or another?