Thursday, March 21, 2013

MakerBot & Leap - from floating 3D hologram to printed-out Iron Man suit

When the MakerBot becomes commonplace in the future, I may be able to, quite literally, email you a chair.


So what's a MakerBot? It's essentially a 3D printer: you give it a 3D model, and it slices the model into layers and prints it out for you in plastic. You can print things like prototypes for engineering projects, or 3D models of characters you've made in Maya, which makes it great for rapid prototyping. In theory you can print anything, but in practice the object still has to obey the law of gravity: not everything that comes off the print bed will hold up under its own weight, and you'll know right away.
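To give a feel for that layering step, here is a minimal sketch of slicing in Python. It assumes the model is handed over as a bare list of triangles, and it skips everything a real slicer (MakerBot's software included) has to worry about, like shells, infill, and supports:

    # A toy illustration of the layering step, assuming the model arrives as a
    # plain list of triangles (each a triple of (x, y, z) vertices) sitting at
    # z >= 0. Real slicers also generate shells, infill, and support material.
    def slice_mesh(triangles, layer_height=0.2):
        z_top = max(v[2] for tri in triangles for v in tri)
        layers = []
        z = 0.0
        while z <= z_top:
            segments = []
            for tri in triangles:
                crossings = []
                # Check each edge of the triangle against the cutting plane.
                for a, b in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
                    if (a[2] - z) * (b[2] - z) < 0:  # edge straddles the plane
                        t = (z - a[2]) / (b[2] - a[2])
                        crossings.append((a[0] + t * (b[0] - a[0]),
                                          a[1] + t * (b[1] - a[1])))
                if len(crossings) == 2:
                    segments.append(crossings)  # one contour segment in this layer
            layers.append(segments)
            z += layer_height
        return layers

Each layer's segments trace the outline the print head would follow before moving up by one layer height.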

Three-dimensional modelling software like AutoCAD and Maya exemplifies direct manipulation by letting the user deform and change a virtual, continuous representation of the model. Yet even with direct-manipulation heuristics baked in, something like Maya can still be remarkably unintuitive to use. Wouldn't it be more incredible if you could move virtual vertices in reality? Something similar to what is seen in the Iron Man movies, where Tony Stark pushes and pulls portions of a virtual representation of his Iron Man suit.


(Noessel, 2013)
The nice thing about sci-fi is that things can move from science fiction to science fact, and the technology has already arrived. With the release of the Leap finger-tracking device, using hand gestures to control the pointer on a computer (or even multiple pointers) is intuitive enough that we may all be throwing away our mice in the near future. I would very much like more than one mouse pointer to help me move multiple points in Maya, speeding up the modelling process. Not to mention the device uses infrared LED technology (Baldwin, 2012), which has been around for decades.
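To make the multi-pointer idea concrete, here is a hypothetical sketch. The function read_fingertips() is a stand-in for whatever the Leap SDK actually exposes, and the sensor range is an assumed figure, not a published spec:

    # A hypothetical sketch of the multi-pointer idea. read_fingertips() is a
    # stand-in for whatever the Leap SDK actually exposes; it is assumed to
    # return a list of (x, y) fingertip positions in millimetres.
    SENSOR_RANGE_MM = (-120.0, 120.0)  # assumed tracking range of the sensor
    SCREEN_SIZE = (1920, 1080)

    def to_screen(x_mm, y_mm):
        lo, hi = SENSOR_RANGE_MM
        sx = (x_mm - lo) / (hi - lo) * SCREEN_SIZE[0]
        sy = (1.0 - (y_mm - lo) / (hi - lo)) * SCREEN_SIZE[1]  # sensor y points up
        return int(sx), int(sy)

    def update_pointers(read_fingertips):
        # One pointer per tracked fingertip: the "multiple mice" that would
        # let you drag several Maya vertices at once.
        return [to_screen(x, y) for (x, y) in read_fingertips()]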

Of course, not every action in a 3D modelling suite can be performed in reality. For example, it is easier to give the user an undo button to press when they make a mistake while modelling, or when someone gets in their way (the same sensation as destroying a sand castle, but with reversibility!). How would a user gesture that they wish to undo an action? No such gesture really exists; the intention mostly comes in the form of someone verbally saying they want to undo. Voice recognition is always a possibility, but that's getting out of scope here.
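Whatever the input modality ends up being, the machinery behind undo is the same: every edit is recorded together with its inverse. A minimal sketch, not tied to any particular application (move_vertex below is a made-up helper):

    # Every edit is recorded with its inverse, so any input modality
    # (button, gesture, or voice) just has to trigger undo().
    class UndoStack:
        def __init__(self):
            self._inverses = []

        def do(self, action, inverse):
            action()
            self._inverses.append(inverse)

        def undo(self):
            if self._inverses:
                self._inverses.pop()()  # run the most recent inverse

    # e.g. moving a vertex records how to move it back:
    # stack.do(lambda: move_vertex(v, dx), lambda: move_vertex(v, -dx))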

There are some flaws in the Leap Motion, though, such as the fact that you need to hold your hands over the sensor (Baldwin, 2012). This gets very tiring over long periods of time. A system that merges the Kinect and the Leap would be easier to use: the Kinect could detect the whole body while the Leap precisely tracks the fingers, letting the user keep their hands in sight of the sensor but in a comfortable position.

In the future, everyone could become a designer, engineer, or sculptor with these technologies. They could create without restraint, and prototyping could be quick and extremely cheap as long as nothing is printed (pay your electricity bill and that's it). And if anyone ever wants to make their idea real, they simply push a button and out pops their 3D model in reality.

Baldwin, R. (2012, May 05). Why the Leap is the best gesture-control system we've ever tested. Wired. Retrieved from http://www.wired.com/gadgetlab/2012/05/why-the-leap-is-the-best-gesture-control-system-weve-ever-tested/

Noessel, C. (2013, March 01). What sci-fi tells interaction designers about gestural interfaces. Smashing Magazine. Retrieved from http://uxdesign.smashingmagazine.com/2013/03/01/sci-fi-interaction-designers-gestural-interfaces/

Pettis, B. (Producer). (2012). The MakerBot Replicator 2 - announcement [Web]. Retrieved from http://www.youtube.com/watch?v=3o6pcbhylmQ

Thursday, March 7, 2013

Augmented Reality

According to the Merriam-Webster online dictionary, the word augment means to make greater, stronger, or more intense, and the word reality just means something real. Something that is a more intense version of reality makes the phrase augmented reality sound almost like an oxymoron: how could something be even greater and more intense than real life? Maybe the phrase should have been coined assisted reality, or digital overlay software, or something else, since much of AR is seen as a way to assist humans. Maybe in the future AR will simply be reality, and then we can talk about the intensity.

So what is augmented reality? It's easier to start with what it isn't: something like the Matrix, where reality is completely replaced by a digital world (Di-di-di Digimon).

Here's one possible example in the distant future:

http://youtu.be/i93_rRdnYvA?t=18m30s (please click the link, since the whole episode is 23 minutes long; if it doesn't jump straight to the 18:30 mark, please navigate there)

The episode shows a city with no substance other than a rock base; the rest of the city is entirely digital, rendered to its inhabitants through their terminals. Some of the inhabitants were never physically in the city at all and exist only by proxy, because they cannot move from life support. I will say that the show, Fractale, has some bias against AR: it stars a main character who likes to live in the good old ways, in a real brick house, and hoards real objects. (Spoilers: they also sort of destroy the AR system, leaving humanity a few years to relearn all its skills and return to reality.) This is a very extreme example of augmented reality, where the entire world is covered by a digital overlay, but it gives you an idea of what AR is about and what it could become.

Augmented reality consists of using a computer to take in input from reality, process it, enhance it, and then output the enhanced version to the user. Enhancing is the most ambiguous part: most AR is seen as enhancing the image (sight is our dominant sense, after all), but it can also enhance sound, and even inputs we cannot sense at all; there are digital sensors for radioactive substances, ultraviolet light, and heat. In a sense, night-vision goggles have been a form of AR since their creation, though they completely blanket your vision with the enhanced view, which is not so intuitive for an untrained person to use.
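That input, process, enhance, output loop can be sketched in a few lines. Here is a minimal version using OpenCV, where the "enhancement" is just a placeholder text overlay:

    # The input -> process -> enhance -> output loop in its smallest form,
    # using OpenCV. The "enhancement" here is a placeholder text overlay;
    # a real AR system would substitute recognition results, sensor data, etc.
    import cv2

    cap = cv2.VideoCapture(0)  # default webcam as the "reality" input
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Enhance: draw information on top of the camera image.
        cv2.putText(frame, "battery: 87%", (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
        cv2.imshow("augmented reality", frame)  # output the enhanced view
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()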

Nowadays AR is mainly used to supplement reality rather than replace it. Google Glass is what comes to mind when talking about augmented reality: it overlays extra information to supplement real-world experiences, such as information about the buildings you see in front of you. The good thing is that it is context sensitive, which makes it very powerful for education and maybe even advertising. It is also designed for a normal user, trying to be as unobtrusive and immersive as possible: it pops up only when you need it and disappears when you don't. UI designers on the Google Glass project could learn a few things from the UI designers of immersive games.

Augmented reality in gaming makes me think of the 3DS with its AR cards, where each card activates a mini-game the user can play. Rather than supplementing the user with information, the AR here is simply a game overlaid in the foreground, with reality acting as a background. The interaction with reality is a bit lacklustre in AR games at the moment, since you need specific cards to represent digital entities in the real world. The cards can be placed anywhere in reality, but the user is always constrained to playing through them. The next step would probably be to scan real-life objects and make games out of those, allowing all sorts of things to be augmented and turning anything into a game.

Merriam-Webster. (2013). Augment - definition and more from the free Merriam-Webster dictionary. Retrieved from http://www.merriam-webster.com/dictionary/augment

Merriam-Webster. (2013). Reality - definition and more from the free Merriam-Webster dictionary. Retrieved from http://www.merriam-webster.com/dictionary/reality

Thursday, February 7, 2013

Human Augmentation

There was a dispute over whether the "Blade Runner" (Oscar Pistorius) should be allowed to compete in the 2012 Summer Olympics, even though he had qualified for the able-bodied competitions. The concern was that his prosthetic legs might give him an advantage over the other athletes. Maybe those prosthetic legs put more spring in his sprint? Does that somehow make for an unfair advantage? (Though some athletes have longer legs than others...)

This brings us to the idea of human augmentation, so please turn your attention to this video from Sarif Industries, world leader in the field.


Buy a prosthetic arm... become the next football star! That is, unless they ban you from entering the league. But human augmentation is something we've already been exposed to; take, for example, glasses used to enhance vision. Glasses, unlike the prosthetic limbs promised by Sarif, correct something that is seen as broken, while Sarif's augmentations replace your limbs with something "better". We humans always want to better ourselves; one of our prime goals is to eliminate aging, disease, and the other biological factors that limit our abilities ([H]+). One way to do this is to replace our biological functions with machine functions; the other is to bio-engineer DNA.

One thing that was talked about in class was human memory, something that is extremely faulty. If human memory were a computer, each episodic memory would be stored like a lossy, super-compressed video file: every time you open the file, it gets more corrupted, and soon your memory of an event is completely wrong. Yet remembering an event always feels correct, until you look at a photo of the actual event and notice the differences between your memory and the recording.

This could be remedied by human augmentation with... cyber-brains (GITS reference)! Rather than storing memories, you store actual files, which would not only eliminate the fuzziness but also speed up access time. No more "hmming" and "hawing" as you try to retrieve that specific memory; access the file instantly! No more forgetting things, not when everything is saved onto a 500-terabyte hard drive connected to your head! Of course, before we can do this, we'll have to decipher what every portion of the brain is used for and what each pattern of brain activity means. Not to mention that having a computer attached to your brain, or replacing your brain, poses serious questions about the continuity of your consciousness. It may be a death and rebirth every time someone gets a cyber-brain installed.
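The lossy-video-file analogy can actually be demonstrated on a real file: re-encode a JPEG over and over and generation loss slowly corrupts the "memory". A small sketch with the Pillow library (the filenames are placeholders):

    # Making the analogy literal: each re-encode is one "remembering".
    # Alternating the quality forces re-quantization so the damage accumulates.
    from PIL import Image

    img = Image.open("photo.jpg")
    for recall in range(50):
        img.save("memory.jpg", quality=60 if recall % 2 else 80)
        img = Image.open("memory.jpg")
    img.save("after_50_recalls.jpg")  # visibly degraded next to the original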

Similarly, other human senses could be enhanced with technology, but this all rides on our ability to understand the human brain: we cannot interface with sensory replacements if they do not send the proper signals to the brain. So augmentation of the sensory organs is still a long way off.

There is also a funny thing about the human eye: have you ever wondered why humans have a blind spot? It is hereditary, something we inherited from our distant ancestors, and it does not actually need to exist for us to have correct sight, even though our brains have long since adapted to it. The giant squid eye has no blind spot: rather than having the light-sensitive cells point backwards (as in the human eye), its light-sensitive cells point forward and the optic nerves connect from the back, eliminating the blind spot entirely (PBS).

PBS (2012). Giant squid [Television series episode]. In Inside Nature's Giants. Public Broadcasting Service. Retrieved from http://video.pbs.org/video/2247683791

Thursday, January 31, 2013

Biometric Controllers

Let's talk about Dr. Lennart Nacke's favourite thing, as stated by him in class: biometric controllers. A biometric controller reads impulses sent by your brain, or more familiar signals such as your heart rate, your facial expressions, and your eye or head movements. These controllers allow more intuitive inputs to be sent into the game, including inputs you were not even aware of, and that can change gameplay in ways never imagined before.

A horror game that measures your heartbeat would know what scares you the most, when to give you a break, and when to throw in the monsters. Facial-expression tracking could appear in MMORPGs, making games like Second Life even more realistic, as players communicate with more than just text. You may actually have to fake a laugh in the real world when you type "lol" at a stale joke. Dating deception on all levels of reality! Eye tracking and head tracking could make first-person shooters much easier to control: there would be no need for that awkward second analog stick to turn the in-game camera, just turn your head or focus your eyes on the thing you want to see. As Gabe Newell put it, biometric inputs allow more bandwidth between the player and the game, since these things come naturally and do not interrupt play (Sottek & Warren, 2013).

I was actually allowed to try NeuroSky's MindBand (pictured below) during the summer, when the development team for AntiMatter was integrating it with their game. The readings were a bit hard to interpret: they connected the "concentration" reading to the maximum number of bullets the player could shoot per second. I'm not sure what constitutes concentration, but thinking about nothing seemed to ramp it all the way up to 100 for me.
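Here is a guess at how a mapping like AntiMatter's might look. The read_attention() call below is a stand-in for the real SDK call, assumed to return a 0-100 value, and the smoothing is there because the raw readings jump around:

    # A guess at how a concentration-to-fire-rate mapping might look; not
    # AntiMatter's actual code. Raw reading assumed to be 0-100.
    class FireRate:
        def __init__(self, max_rate=10.0, alpha=0.2):
            self.max_rate = max_rate  # bullets per second at full concentration
            self.alpha = alpha        # smoothing factor for the noisy signal
            self.smoothed = 0.0

        def update(self, raw_attention):
            self.smoothed = (self.alpha * raw_attention
                             + (1 - self.alpha) * self.smoothed)
            return self.max_rate * self.smoothed / 100.0

    # per frame: rate = fire_rate.update(read_attention())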

Picture of the MindBand (NeuroSky, 2011)


Judging from my experience with the MindBand, biometric controllers still seem a long way off from reliably giving good readings. However, VentureBeat believes 2013 will be the year these technologies coalesce and build the foundation of what they call "NeuroGaming" (Lynch, 2013). The success of these technologies will be up to game designers, who will be the ones tweaking the parameters to make the technology actually usable. I, for one, am probably most excited about head tracking, an input that probably can't go wrong.

Lynch, Z. (2013, January 17). Let the neurogames begin. Retrieved from http://venturebeat.com/2013/01/17/let-the-neurogames-begin/

NeuroSky. (2011). NeuroSky MindBand Europe. Retrieved from http://www.home-of-attention.com/en/shop/1/flypagetpl/shopproduct_details/4/itemid-12

Sottek, T. C., & Warren, T. (2013, January 8). Exclusive interview: Valve's Gabe Newell on Steam Box, biometrics, and the future of gaming. Retrieved from http://www.theverge.com/2013/1/8/3852144/gabe-newell-interview-steam-box-future-of-gaming

Wednesday, January 23, 2013

Technological Singularity

Please turn your attention to this sci-fi short movie called R'ha.


The premise is one we've seen many times before in movies like Terminator, The Matrix, and TRON: Legacy: a sentient AI decides to take over and/or destroy all of humanity, with varying degrees of cruelty and success. So what makes the short film above different? It was made entirely by a single student, Kaleb Lechowski, 22, in 6-8 months (sources vary). Another thing that stands out is that the protagonist is an alien, which raises an idea: is it possible that advanced alien races have already destroyed themselves? Will our reliance on technology bring about the same fate? But that's getting off-topic; we're here to talk about Human-Computer Interaction (HCI).

In the short, the alien is able to communicate with the robot through speech, so the artificial intelligence (AI) has speech recognition. This may simply be because it would have been less dramatic, and extremely awkward, if the captive had to type answers on a keyboard while the computer prompted with text on a screen (though The Matrix pulls this off with great dramatic effect near the beginning of the movie). Regardless, speech recognition allows for speedy communication between alien and machine. The AI also shows a deep understanding of the alien's thought process, which in HCI is actually very desirable. Wouldn't it be nice for a computer to continuously learn from your mistakes, adapt to the common ones, quickly correct them for you, and thereby speed up productivity? Of course, in the short film the AI uses this advantage to trick the protagonist and lead the machines straight to his race's refuge point. (Though I have to wonder: with such advanced technology, wouldn't the AI have access to mind reading, making this whole charade kind of pointless?)

Let's now move to the virtual world of Oz within the movie Summer Wars, in a much more familiar solar system.



Oz is accessible from multiple platforms, from computers to cellphones to a Nintendo DS clone, making the virtual world extremely portable; the controls must be translated to these very different interfaces. The controls matter, since there is also a gaming community within Oz. It may be impossible to translate all the in-game controls, however, and many hardcore gamers in Oz seem restricted to a keyboard and mouse (I assume so, since the movie's only gamer uses a keyboard throughout).

The rendering must be done in the cloud, because smartphones and other portable systems probably lack the processing power to render millions of avatars. (Spoiler: that becomes necessary during the movie, when 20 million avatars gather in one place for a certain awesome reason.) This means the player's view into the world of Oz is rendered on the server and streamed to them. (Or it may be that in the future all cellphones have an nVidia GeForce GTX+ 9999.) The world of Oz also has instant language translation, letting people from all over the world access the community. Streaming graphics and instant translation are extremely accommodating features that let anyone access Oz on any platform, regardless of hardware specs or language barriers. This universal usability may be one of the reasons Oz has so many users across the planet (or it's just the setting for the movie).

The trailer above is misleading: Summer Wars also features a program created by humans that accidentally ends up hellbent on destroying them. The program is a genetic algorithm that continuously learns from its surroundings by playing games; it probably uses a form of reinforcement learning, but now we're getting off-topic again and talking about AI. Then again, I believe AI is a very important part of HCI, and it will only grow more important as the technology improves. Not only does AI allow easier, faster, and more efficient ways of interfacing with humans, it has the potential to build upon itself. Take voice recognition: it lets speech be recognized, so a set of instructions can be given to a program using speech alone. This is a fast and intuitive way for humans to interact with computers, because it is so similar to interacting with other humans. AI learning also allows instructions to be implied: the computer does things you never directly asked for but that it has learned are regular occurrences. For example, suppose you command your computer to make you some coffee. After doing this for a week at around the same time, say 6 PM, the computer infers that coffee is probably wanted around 6 PM every day and makes it anyway, even though you did not ask. "Why, thank you," you'd probably say to your computer as it kindly produces a mug of coffee when you arrive home. As technology improves, human-to-computer interaction becomes human-to-robot interaction and then human-to-pseudo-human interaction.
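A toy version of that inference could be as simple as counting timestamped commands in a log and acting once a pattern repeats often enough. All the names below are invented for illustration:

    # Learn habitual command times from a log and act unprompted.
    from collections import Counter

    def habitual_commands(log, min_occurrences=5):
        """log: list of (hour, command) pairs, e.g. (18, "make coffee")."""
        counts = Counter(log)
        return {pair for pair, n in counts.items() if n >= min_occurrences}

    log = [(18, "make coffee")] * 7 + [(8, "read news")] * 2
    current_hour = 18
    for hour, command in habitual_commands(log):
        if hour == current_hour:
            print("doing it anyway:", command)  # "Why, thank you."

A real assistant would weigh recency and context too, but the core idea is just this: repetition becomes a standing instruction.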

As we continuously improve AI, we may reach the point where AI can itself propagate better AI; this is the technological singularity. It may be the end of mankind, or the beginning of an infinite expansion of knowledge and understanding: computers that understand humans even better than humans understand themselves, something that can answer all of our questions. That would be an infinitely more awesome way for humans to interact with computers.

Saturday, February 18, 2012

Games teach you Life!




This game is about living at the poverty line in North America, and it's pretty depressing. The whole point is to make it through the month without starving, losing all your money, or getting sick. The game throws a lot of random events at you, such as a pet showing up at your downtown apartment, your kid needing something, or being fired from your job, which happens a lot. Actually, the game gives you an event every single day, which I think is a bit unrealistic, though the events themselves are quite believable (just condensed into one month). You also play a single parent, and you can't count on child support from the other parent, so I'll assume they are dead, or that the USA has different laws concerning child custody. When child support does come in monthly, it doesn't really help within the span of the month the game covers.

Now that I'm done ranting: my experiences with the game. The game is about meaningful choices, but the meaning comes from your morals and the difference between what you can do, what you want to do, and what you cannot do. As a single parent, your child will want things from you, though not as regularly as a spoiled child would; actually, your child is really well behaved from what I've seen (blame my 7-year-old brother... but he's a good kid). I felt that I had to at least try to give my child some happiness in the game, so in the first few play-throughs I'd buy them the present and the field trip, give them the extra $3 for lunch, and whatnot. In those first few play-throughs, though, I was unable to pay for their club activities, which made me kind of sad. Also, I never bought them that $5 ice cream when the ice cream truck came around; it felt a bit indulgent. On my fifth play-through I got really excited that my child might be gifted, and even more excited that I could pay for the materials to continue their education. I hope my child gets a future brighter than mine... err, not that I have a real child at the moment. Maybe in the future.

On my first two play-throughs I was fired from both of the jobs I had, a week into the job. I found this really detrimental, but somehow I got through the month both times. In all subsequent play-throughs I never got fired, either because I knew what gets you fired or because the event never came up.

The three "life lines" on the side of the game took me three play-throughs to even notice. The use of space in the game is a bit sparse, but I assume that's because my monitor's resolution is much bigger than the game's default.

A really nice game that opens up this perspective to the player.


You really only get one chance. Seriously. Unless you get a new computer.



Every day that I could, I checked whether the player could jump off the top of the building. None of the days allowed it.

For my one play-through, since the game only allows a single play-through (unless you wipe your cookies and browser data or something), I went to work every day except the day you witness your co-worker jump off the building. I ended up with the ending where, on the last day, the main character creates the cure and saves himself, though I'm not sure if he saves his daughter (her eyes are closed when they are in the park).

The game felt very unusual on the first day: it announces that the world will end in six days, yet when you go outside to read the newspaper, it says you have discovered the cure for cancer. All in all, the wish to repeat this one chance is probably what the game wanted the player to feel, and I sure feel it.




The other games were either too easy or simply nice, but didn't evoke as much emotion. American Dream was really easy, and definitely not a good representation of the stock market, but that's not the point anyway; it just made money and partying seem too easy. Maybe it is? As I Lay Dying had a weird feel to it, since the girlfriend was not exactly in shock that her boyfriend had just killed himself, albeit accidentally, and the whole game revolving around transporting his corpse didn't really stick with me. Prior I did not understand; I played it twice, got two different endings, and still didn't fully understand, though seeing both endings helped. The End of Us felt really nice, but more like an experience than a game, since there was no goal. Flight I only skimmed, since I had seen a friend play it before; it is simply a throw-and-upgrade game. Distance I didn't get into much, but it seems to be on a similar strand as The End of Us.

Thursday, February 9, 2012

Create Your Own Magic Card

Here are the three Magic: The Gathering cards I created:

Wisp Faerie is a very basic creature. I wanted it to be extremely weak, since it comes and goes like a wisp from a candle wick. Since we can't have any effect creatures, I just made it a 1-White-mana creature, giving it a cost of 3 in total, and I put 1 more point into toughness than power since it's a wisp, a fluff of air.



The Guardian of Embersky is inspired by the Studio Ghibli film Tales from Earthsea, which is based on the series of books by Ursula K. Le Guin. Since it's a dragon, I assumed it should have equal power and toughness (dragons in Pokemon do this with their stats), so I made it a 6/6. As a Red creature it has a total benefit of 13, so 1 Red mana and 9 Colourless mana balance the cost of the card (1 base + 2 Red + 9 Colourless + 1 bonus for having a mana cost over 5).


The third card, Earthborn Priestess, is a dryad geared towards defence, but since extra effects were not taken into account in the costing, I couldn't make it a defender card and give it a power of 0. Since it is a Green creature with a mana cost over 5, it has a cost of 9, so I made the power 1 and the toughness 8 to balance the cost.
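For anyone checking the arithmetic, here is the costing heuristic as I'm applying it, sketched in Python. This is just my reading of the class formula, not the official one, and the Priestess's breakdown of 1 Green plus 5 Colourless is inferred from her stated cost of 9:

    # My reading of the class costing formula: 1 base point, 2 per coloured
    # mana, 1 per colourless, plus a 1-point surcharge when the total mana
    # cost exceeds 5, all balanced against power + toughness.
    def card_cost(coloured, colourless):
        cost = 1 + 2 * coloured + colourless
        if coloured + colourless > 5:
            cost += 1
        return cost

    assert card_cost(coloured=1, colourless=0) == 3   # Wisp Faerie (1/2)
    assert card_cost(coloured=1, colourless=9) == 13  # Guardian of Embersky (6/6)
    assert card_cost(coloured=1, colourless=5) == 9   # Earthborn Priestess (1/8)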


That is all.