Thursday, March 21, 2013

MakerBot & Leap - from floating 3D hologram to printed-out Iron Man suit

When the MakerBot becomes commonplace in the future, I may, literally, be able to email you a chair.


So what's a MakerBot? It's essentially a 3D printer: you give it a 3D model, and it separates the model into layers and prints them out for you in plastic. You can print model prototypes for engineering projects, or 3D models of characters you've made in Maya. This is great for rapid prototyping. You can print almost anything, but printed objects still have to obey gravity; a design with unsupported overhangs won't hold up, and you'll know right away.
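Under the hood, that layering step ("slicing") is just intersecting the model's triangles with a stack of horizontal planes. Here's a minimal Python sketch of the idea; the mesh format and function names are my own, not MakerBot's actual slicer:

```python
def slice_triangle(tri, z):
    """Intersect one triangle with the horizontal plane at height z.

    tri is three (x, y, z) vertices; returns the 2D cut as a pair of
    (x, y) points, or None if the plane misses the triangle.
    """
    points = []
    for i in range(3):
        (x1, y1, z1), (x2, y2, z2) = tri[i], tri[(i + 1) % 3]
        if (z1 - z) * (z2 - z) < 0:        # this edge crosses the plane
            t = (z - z1) / (z2 - z1)       # interpolation factor along edge
            points.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return tuple(points) if len(points) == 2 else None


def slice_mesh(triangles, layer_height):
    """Return {layer_z: [cut segments]} for every layer the mesh spans."""
    zs = [v[2] for tri in triangles for v in tri]
    layers = {}
    z = min(zs) + layer_height / 2          # cut mid-layer, avoiding vertices
    while z < max(zs):
        segs = [s for s in (slice_triangle(t, z) for t in triangles) if s]
        layers[round(z, 6)] = segs
        z += layer_height
    return layers
```

Each layer's segments then get joined into closed outlines and turned into the toolpaths the printer actually follows; real slicers also add infill and support material, which this sketch ignores.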

Three-dimensional modelling software like AutoCAD and Maya exemplifies direct manipulation in its design by allowing the user to deform and change a virtual, continuous representation of the model. Yet even with those direct-manipulation heuristics, something like Maya can still be very unintuitive to use. Wouldn't it be more incredible if you could move virtual vertices in reality? Something similar to what is seen in the Iron Man movie, where Tony Stark pushes and pulls portions of a virtual representation of his Iron Man suit.


(Noessel, 2013)
The nice thing about sci-fi is that things can move from science fiction to science fact, and the technology has already arrived. With the release of the Leap finger-tracking device, using hand gestures to control the pointer on a computer (or even multiple pointers) is so intuitive we're probably all going to throw away our mice in the near future. I would very much like more than one mouse pointer to help me move multiple points in Maya, speeding up the modelling process. Not to mention the device uses infrared LED technology (Baldwin, 2012), which has been around for decades.
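One practical detail any tracker-as-pointer scheme needs: raw finger positions jitter, so they're usually smoothed before driving a cursor. A sketch using an exponential moving average (the tracker hookup is imagined; only the smoothing is shown):

```python
class SmoothedPointer:
    """Exponentially smooths noisy (x, y) samples from a finger tracker."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha   # 0..1; lower = smoother cursor but more lag
        self.pos = None

    def update(self, raw):
        """Feed one raw (x, y) sample; return the smoothed position."""
        if self.pos is None:
            self.pos = raw   # first sample: nothing to blend with yet
        else:
            a = self.alpha
            self.pos = (a * raw[0] + (1 - a) * self.pos[0],
                        a * raw[1] + (1 - a) * self.pos[1])
        return self.pos
```

With two tracked fingers you'd simply run two of these, one per pointer, which is exactly the multi-pointer Maya workflow imagined above.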

Of course, not every action in a 3D modelling suite can be performed in reality. Take undo: when users make a mistake while modelling, or when someone gets in their way, it's easier to give them an undo button to press (the same sensation as destroying a sand castle, but with reversibility!). How would a user gesture that they wish to undo an action? No natural gesture really exists for that intention; in real life it mostly takes the form of someone verbally saying they want to undo. Voice recognition is always a possibility, but that's getting out of scope here.
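However the user ends up triggering it, undo itself is straightforward to support in software: record each edit as a reversible command on a stack. A minimal Python sketch (the model and command names here are illustrative, not any real Maya or Leap API):

```python
class MoveVertex:
    """One reversible edit: move a vertex of the model to a new position."""

    def __init__(self, model, index, new_pos):
        self.model, self.index, self.new_pos = model, index, new_pos
        self.old_pos = model[index]   # remember state needed to reverse

    def do(self):
        self.model[self.index] = self.new_pos

    def undo(self):
        self.model[self.index] = self.old_pos


class Editor:
    """Applies commands and reverses them in last-in-first-out order."""

    def __init__(self):
        self.history = []

    def apply(self, command):
        command.do()
        self.history.append(command)

    def undo(self):
        if self.history:              # empty history: nothing to reverse
            self.history.pop().undo()
```

Whatever input fires it, a dedicated gesture, a voice command, or a plain keyboard shortcut, the handler just calls `editor.undo()`, so the interaction question and the software question are nicely separate.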

The Leap Motion does have some flaws, though, such as the fact that you need to hold your hands over the sensor (Baldwin, 2012). This can be very tiring over long periods of time, and a system that merges the Kinect and the Leap would be easier to use: the whole body could be detected, with the fingers tracked precisely, while the user keeps their hands in sight of the sensor but in a comfortable position.

In the future, everyone could become a designer, engineer, or sculptor with these technologies. People can create without restraint, and prototyping can be quick and extremely cheap if you never print anything out (pay your electricity bill and that's it). And if anyone ever wants to make their idea real, they simply push a button and out pops their 3D model in reality.

Baldwin, R. (2012, May 05). Why the Leap is the best gesture-control system we've ever tested. Wired. Retrieved from http://www.wired.com/gadgetlab/2012/05/why-the-leap-is-the-best-gesture-control-system-weve-ever-tested/

Noessel, C. (2013, March 01). What sci-fi tells interaction designers about gestural interfaces. Smashing Magazine. Retrieved from http://uxdesign.smashingmagazine.com/2013/03/01/sci-fi-interaction-designers-gestural-interfaces/

Pettis, B. (Producer). (2012). The MakerBot Replicator 2 - announcement [Web]. Retrieved from http://www.youtube.com/watch?v=3o6pcbhylmQ
