So how is this any different from today's voice actions, which can already perform Google search queries?
Just a guess, but I think that they will add natural language capabilities. The hardest part about using a search engine is formulating the correct query. You can get wonderful results or poor results depending on some subtle aspects of what you ask for.
I can imagine an unsophisticated user telling the Google search engine that he is hungry, and having a list of nearby restaurants returned to him. "I need to eat" might produce the same list. If you were to type any of that, the results would not be what the user intended. Parsing language for intent is a big step up.
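To make "parsing for intent" concrete, here's a deliberately toy keyword-based sketch — the intent names and keyword lists are invented for illustration, and a real system would use trained language models rather than keyword matching. The point is just that two very different phrasings land on the same action, where a literal query would not:

```python
# Toy sketch of intent parsing: map different phrasings to one intent.
# Intent names and keyword sets are made up for illustration only.

INTENT_KEYWORDS = {
    "find_restaurants": {"hungry", "eat", "food", "restaurant", "lunch", "dinner"},
    "get_weather": {"weather", "rain", "umbrella", "forecast"},
}

def parse_intent(utterance: str) -> str:
    words = set(utterance.lower().replace("'", "").split())
    best, best_overlap = "web_search", 0  # fall back to a plain search
    for intent, keywords in INTENT_KEYWORDS.items():
        overlap = len(words & keywords)
        if overlap > best_overlap:
            best, best_overlap = intent, overlap
    return best

print(parse_intent("I'm hungry"))            # → find_restaurants
print(parse_intent("I need to eat"))         # → find_restaurants
print(parse_intent("cheap flights to NYC"))  # → web_search
```

Of course, keyword matching breaks down fast ("I could eat my hat"), which is exactly why intent parsing at Google scale is the hard, interesting part.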