Apple patents 3D gesture UI for iOS based on proximity sensor input
The U.S. Patent and Trademark Office on Tuesday published an Apple patent for a method of generating and manipulating a three-dimensional object on a computing device, with the process controlled by special gestures made above a touchscreen's surface.

Source: USPTO
Apple's U.S. Patent No. 8,514,221, titled "Working with 3D objects," describes a graphical interface that enables a user to "generate and manipulate 3D objects using 3D gesture inputs." More specifically, the interface can be a computer-aided design (CAD) application running on a device with a touch-sensitive surface, such as an iPad.
The document refers to a device that can detect the location of fingers with a combination of capacitive touch sensors and proximity sensors embedded in the display. These two components can be separate, or the capacitive sensors themselves can act as proximity sensors by measuring the capacitance of a nearby finger.
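Since iOS exposes no public per-finger proximity API, the sensing layer can only be sketched. Below is a minimal Swift illustration, with hypothetical TouchReading, ProximityReading and FingerPosition types standing in for the patent's sensor stack, of how the two inputs might be fused into one set of 3D finger positions: on-screen contacts sit at zero height, while hovering fingers carry the height estimated by the proximity sensors.

```swift
// Hypothetical raw readings; these types stand in for the patent's sensor layer.
struct TouchReading { let x: Double; let y: Double }                          // capacitive contact
struct ProximityReading { let x: Double; let y: Double; let height: Double }  // hovering finger

// A fused 3D finger position: contacts sit at z = 0, hovering fingers
// carry the height estimated by the proximity sensors.
struct FingerPosition { let x: Double; let y: Double; let z: Double }

func fuse(touches: [TouchReading], hovers: [ProximityReading]) -> [FingerPosition] {
    let contacts = touches.map { FingerPosition(x: $0.x, y: $0.y, z: 0) }
    let hovering = hovers.map { FingerPosition(x: $0.x, y: $0.y, z: $0.height) }
    return contacts + hovering
}
```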
The patent deals with the creation of 3D objects "extruded" from 2D images. To generate the 3D renderings, a user manipulates a 2D object with conventional touchscreen gestures, such as pinching. Rather than stopping at established iOS gestures, however, the invention introduces a third axis of control.
For example, to extrude a triangular prism out of a triangle, a user can touch the two-dimensional object in three places and "pull" or "lift" up, away from the screen's surface. Here, a user can hover for a predetermined amount of time to trigger the extrusion, which will generate a triangular prism with a cross section corresponding to the 2D triangle and a height proportional to how far a user "pulled" their fingers from the screen.
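As a rough illustration of that dwell-then-pull behavior, here is a minimal Swift sketch. The dwell threshold and the points-per-millimeter scale are invented values, and every name is hypothetical; the patent describes behavior, not an API.

```swift
struct Point2D { let x: Double; let y: Double }
struct Prism { let base: [Point2D]; let height: Double }

// The hover dwell "arms" the gesture; only then does the lift distance
// translate into a prism whose height is proportional to the pull.
func extrude(base: [Point2D],
             hoverSeconds: Double,
             liftMillimeters: Double,
             dwellThreshold: Double = 0.5,
             pointsPerMM: Double = 10) -> Prism? {
    guard hoverSeconds >= dwellThreshold else { return nil }  // not armed yet
    return Prism(base: base, height: liftMillimeters * pointsPerMM)
}
```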

To indicate the end of a 3D gesture input, or deselection of an object, the application can be programmed to detect any number of 3D gestures, such as a user spreading their fingers or quickly moving them away from the screen. An object can be reselected by touching and holding on the screen for a certain amount of time.
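Both release conditions reduce to rates of change, so the check itself is nearly trivial. A hypothetical sketch, with both thresholds assumed:

```swift
// End a 3D gesture when the fingers spread apart quickly or retreat
// from the screen quickly. Both limits are invented for illustration.
func gestureEnded(spreadVelocity: Double,   // mm/s change in inter-finger distance
                  heightVelocity: Double,   // mm/s change in hover height
                  spreadLimit: Double = 50,
                  retreatLimit: Double = 200) -> Bool {
    return spreadVelocity > spreadLimit || heightVelocity > retreatLimit
}
```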
Adding to the 3D interface, users can augment the "pull" mechanism with other motions to create various shapes. For example, instead of the triangular prism described above, a pyramid can be generated from a 2D triangle by touching the screen with three fingers and pulling away from the surface while pinching. Alternatively, a user can use a pull-and-spread action to create a frustum, the portion of a cone or pyramid that remains after its top is cut off. A sketch of how these variants might be told apart follows below.
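Notably, the three pull variants differ only in how the finger spread changes during the lift, so recognition can reduce to classifying one signed quantity. A minimal Swift sketch of that idea, with an assumed tolerance:

```swift
enum ExtrudedShape { case prism, pyramid, frustum }

// Classify a completed pull by how much the fingers spread (positive)
// or pinched (negative) while lifting. The tolerance is an assumption.
func classifyPull(spreadChange: Double, tolerance: Double = 5) -> ExtrudedShape {
    if spreadChange < -tolerance { return .pyramid }  // pinched together while lifting
    if spreadChange > tolerance { return .frustum }   // spread apart while lifting
    return .prism                                     // spread roughly constant
}
```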

Further, modifications can be performed with "pinch and pull" or "pinch and push" gestures, which extrude or push in a portion of a 3D object, respectively. In another embodiment, objects can be rotated around the z-axis with a twisting motion.
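The twist rotation at least has a natural closed form: the rotation applied to the object is simply the change in angle of the segment joining the two fingers. A small sketch, with a made-up Finger type:

```swift
import Foundation  // for atan2

struct Finger { let x: Double; let y: Double }

// Rotation about the z-axis equals the change in angle of the line
// between the two fingers from the start to the end of the twist.
func zRotation(start: (Finger, Finger), end: (Finger, Finger)) -> Double {
    let before = atan2(start.1.y - start.0.y, start.1.x - start.0.x)
    let after = atan2(end.1.y - end.0.y, end.1.x - end.0.x)
    return after - before  // radians; apply to the selected object's transform
}
```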
One interesting application described in the document is dubbed "sculpting" mode. This embodiment treats the 3D object as if it were made of clay, or some other easily malleable material. Depending on finger movement, a user can make indentations, stretch, or squeeze the shape so that one area is made smaller and another larger. A "pinch-twist-and-pull" gesture can also be used to break off a piece of the clay rendering.
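Real sculpting would operate on a full mesh, but the core move, displacing vertices near the fingertip with falloff by distance, fits in a few lines. This is a toy indentation with invented radius and depth values, not the patent's method:

```swift
struct Vertex { var x, y, z: Double }

// Push vertices within `radius` of the fingertip inward along -z, with
// linear falloff so the nearest vertices move the most.
func dent(mesh: inout [Vertex], tipX: Double, tipY: Double, tipZ: Double,
          radius: Double = 20, depth: Double = 5) {
    for i in mesh.indices {
        let dx = mesh[i].x - tipX, dy = mesh[i].y - tipY, dz = mesh[i].z - tipZ
        let dist = (dx * dx + dy * dy + dz * dz).squareRoot()
        guard dist < radius else { continue }
        mesh[i].z -= depth * (1 - dist / radius)
    }
}
```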

In yet another use scenario, textures, colors and other surface attributes can be selected and tuned by hovering over the screen's surface. These gestures are made largely perpendicular to the device's display, with the selected attribute's value corresponding to the finger's height above the screen. For example, an object can be made brighter when a finger hovering over a "Brightness" UI region is moved farther away from the display. Sliders, buttons and other assets may act as alternative controls.
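That hover-to-attribute mapping is essentially a clamped linear normalization of finger height. A minimal sketch, assuming a 5-50 mm working range (the patent gives no numbers):

```swift
// Map finger height above a "Brightness" region to a 0...1 value,
// where 0 is dimmest (near the screen) and 1 is brightest (far away).
func brightness(hoverHeight: Double, minH: Double = 5, maxH: Double = 50) -> Double {
    let clamped = min(max(hoverHeight, minH), maxH)
    return (clamped - minH) / (maxH - minH)
}
```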

Finally, the patent mentions the use of 3D or stereoscopic glasses to make the experience even more engaging. Also noted are further UI implementations, such as iconography, graphical assets and general usability considerations.
Technology similar to that described in Apple's patent is just now making its way to the mass market with the Leap Motion Controller. Because Leap is a computer peripheral, its use cases are somewhat limited; for now, the system relies largely on third-party software makers to create interesting apps for its motion-based control scheme.

Illustration of dice creation using 3D manipulation patent.
If deployed at the system level of iOS, Apple's tech could turn an iPad into a hybrid touchscreen/motion-controlled device, offering a more direct control experience than Leap can. In some ways, a possible iPad implementation is to Leap what the touchscreen is to the mouse.
Apple's 3D object creation and manipulation patent was first filed in 2012 and credits Nicholas V. King and Todd Benjamin as its inventors.

Comments
I dunno... seems awkward. Talking about CAD software, how accurate could something like this be? If you have to go back and edit the extrusion depth anyway, what have you really gained?
Phenomenal stuff from Apple.
I become really emotional when I see the amount of effort this company puts in for helping consumers and developers alike.
With the recession never seeming to end and so many sharp and ambitious people losing jobs, the iOS and OS X ecosystem has given a lifeline to many!
It is almost like an Xbox Kinect combined with a touch screen. Very innovative from Apple.
Don't let the patent's use cases deceive you.
Those are just examples for the patent; how this functionality actually makes it into different parts of iOS is what's really going to be interesting.
Quote:
Originally Posted by Bhaskar Bhat
Phenomenal stuff from Apple.
I become really emotional when I see the amount of effort this company puts in for helping consumers and developers alike.
With the recession never seeming to end and so many sharp and ambitious people losing jobs, the iOS and OS X ecosystem has given a lifeline to many!
+1
Then again, right this moment, Google and Samsung copy machines in the form of humans are busy watching sci-fi movies from the 1920s to prove that this was prior art and they have the right to copy it.
He went on to complain about how IBM, Hewlett-Packard, Xerox, Google, Microsoft and others all outspend Apple on basic research.
When I look at the research that Apple has done and is doing, I realize that this company is spending its money and time extraordinarily wisely. It does not have to do everything that everybody else is doing, nor do it in the ways they do, in order to succeed.
There have been three or four patents recently focused on pico projectors and glass. I have been wondering why such research on glass and projectors was being done. I have also been thinking there is more to this than how I was visualizing the technologies being used.
I read about this patent on Patently Apple last year and again in the past few days while trying to answer my questions.
Seeing this published patent answers my questions: All of the technologies are being researched and developed together!
I see this patent and think of the holograms in the movies Iron Man and Iron Man 2 that helped Tony Stark manipulate 3-D images with his bare hands using gestures. This goes well beyond Minority Report, where gloves had to be worn for gestures to work.
I see how far this technology can go when remembering how Tony Stark designed the glove for his armored suit by sticking his hand in the image and then raising his hand and turning it to see how it would fit.
In the unforgettable words of Phil Schiller: Can't innovate, my ass!!!
Go, Apple! Go, go, go!!!!!!
http://www.google.com/patents/WO2009124181A3?cl=en
http://www.google.com/patents/WO2010144050A1?cl=en
http://www.google.com/patents/US20130167092
in China:
http://www.google.com/patents/CN101866243A
and including some older patent applications from Apple too:
http://www.cnn.com/2011/10/28/tech/innovation/apple-patent-3d-gestures-ipad-wired
Apple's latest patent application for gestures is yet another unique embodiment, and iOS may be the ideal space for it. Those Iron Man scenes may not be all that far off.
EDIT: Here's one of the older 3D gesture patents I noticed. Mitsubishi, 17 years ago!
http://www.google.com/patents/US6002808?dq=3D+gesture+control+patent
If Samsung can implement the technology in a different manner than Apple, then there should be no problems. At the same time, DO NOT DO IT BECAUSE APPLE IS DOING IT!
While reading over some of Apple's patents related to pico projectors, I came across a reference to Mitsubishi and had a "Hmmm..." moment.
Of course I cannot easily find the reference, but will post it when I do.
Imagine if I could use a plucking action right over the iOS keyboard to lift the keyboard off the screen and onto a surface in the form of a laser projection.
Or, even better, if I could pluck a photo, mail or sound track from my iPhone/iPad and move it onto my iWatch.
Or, even cooler, pluck a pie chart from a real physical book around you and drop it onto a Pages doc open on the iPad. And if a 3D holographic projection of the object being dragged into the iPad could be displayed, it would give a more realistic sense of blending the real world with the digital world.
But the best idea of them all would be if the proximity sensor field that spans a few inches over a screen could be enlarged to a sphere a few meters around the device, allowing the phone to track your body movements and then do something fancy.
FEH!
Stylus and touch screen all the way, and 3D gestures for spinning 3D objects around in educational and scientific applications only.
Homeowners could generate 3-D models of their houses based on aerial photographs, and add the 3-D models to a map application.
By providing a convenient and intuitive way to generate and modify 3-D objects based on 2-D objects and photographs, a 3-D model of a community or an entire city could be generated through the cooperation of its residents, each individual using the device to modify computer-generated 3-D models of the buildings or landscapes they are familiar with.
When I thought about how Apple could implement this capability, I remembered that Wi-Fi SLAM technology maps indoor buildings using ambient Wi-Fi signals. Something similar could be done to map communities.
If Tim Cook's Apple could introduce this technology in the 2013-2014 period of product releases he hinted were coming, it would change the face of Apple for the next ten years, which would be a little after the time his current ten-year contract at Apple expires.
Talk about exciting!
This is NOT a new invention. Apple engineers saw the movie "Minority Report" and copied everything from there.
Next time you're using a PC of any kind, including Macs, pay attention to that keyboard -> mouse -> keyboard movement and see how often you do it and how long it takes. The amount of time wasted can be absolutely enormous. Yet we do it because "well, there's no other way." I think there might be a better way, somewhere in the distant future.
Quote:
Originally Posted by kharvel
This is NOT a new invention. Apple engineers saw the movie "Minority Report" and copied everything from there.
Come off it. Why do people think they can say crap like this?
Quote:
Originally Posted by Johnny Mozzarella
I could see this being great for iOS in the car. Rather than having to hit a very precise button, which requires taking your eyes off the road, you could do a simple gesture to switch views or change tracks.
For years people have been using gestures while driving.
Quote:
Originally Posted by kharvel
This is NOT a new invention. Apple engineers saw the movie "Minority Report" and copied everything from there.
Hahaha, it's funny but true.
The interesting thing is that the Samsung Galaxy S4 already has some basic gesture controls, one finger to preview and hand waves to change pages (they don't work very well), but could this bite Apple?