Capstone Week 2: Ideas & Research
For context, see Week 1: What is Capstone?
Ideas
I focused on four main ideas this week. We pitched our ideas to the class and got feedback from our peers.
- Interactive Projection Experience: This was my original idea, and I was unsure it would be possible because one of our criteria for picking a topic is that it has to be doable in our time span without requiring us to learn more than is realistic. It would be a projection that users can stand in front of and that responds to their movement. I haven't decided on the subject matter yet, but it would be an interesting experience and could correspond to some sort of exhibit in a museum, aquarium, or science center setting.
- Plant Care App: Possibly focused specifically on succulents? Users would log what types of plants they have, and each plant would have a status. Users can record when they last watered it, how much sun it's getting, and what condition it is in each day, and the app tells them what to do to maintain each plant: for example, a reminder that it needs water today, or a warning that it is getting too much sun. Each plant logged would also have its own little profile with basic care instructions to start from. Overall, it helps users manage their plants.
- Educational Kids App: This would be an app for kids, probably on the iPad, where they can learn about different animals in different habitats. They could interact with different sections and pull up mini-games that help them learn about each animal in a fun and entertaining way.
- Food Decision App: You and whoever you're with connect, and the app shows you food options nearby. Each person marks what they would be open to by selecting yes, no, or maybe. If you both/all choose yes on something, it stops and tells you to go there.
Research
I ended up choosing my first idea after some research. This is the idea I considered over the summer and was very excited about; however, as the decision process came closer I had concerns about whether it was doable or overly ambitious. Through my research I learned that the interaction would likely be doable because interactive projections can be created with an Xbox Kinect (see image above) as the sensor. The programs I found for this are SimpleKinect and Processing (a rough sketch of how the two might connect follows the feature list below). Luckily, I happen to have a Kinect already, which lets me skip the most complex part of building the experience and focus more on coding and designing the actual interaction. Here are some examples of what this could become: an aquarium, a photo gallery, or a timeline wall.
SimpleKinect Features
- Auto-calibration.
- Specify OSC output IP and Port in real time.
- Send CoM (Center of Mass) coordinate of all users inside the space, regardless of skeleton calibration.
- Send skeleton data (single user), on a joint-by-joint basis, as specified by the user.
- Manually switch between users for skeleton tracking.
- Individually select between three joint modes (world, screen, and body) for sending data.
- Individually determine the OSC output url for any joint.
- Save/load application settings.
- Send distances between joints (sent in millimeters). [default is on]
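Since SimpleKinect hands its tracking data to other programs over OSC, the Processing side would start as a listener for those messages. Here's a rough sketch of that idea using the oscP5 library, just printing whatever arrives so I can see the address patterns and values SimpleKinect actually sends. The port number is a placeholder and has to match whatever output IP/port I set in SimpleKinect.

```
import oscP5.*;
import netP5.*;

OscP5 oscP5;

void setup() {
  size(400, 400);
  // Listen for incoming OSC messages.
  // 12345 is a placeholder; it must match the output port chosen in SimpleKinect.
  oscP5 = new OscP5(this, 12345);
}

void draw() {
  background(0);
}

// oscP5 calls this for every OSC message that arrives, so printing here
// shows exactly what is being sent (joint coordinates, center of mass,
// joint distances, etc.) before I try to drive any visuals with it.
void oscEvent(OscMessage msg) {
  println(msg.addrPattern() + "  " + msg.typetag());
}
```

If that much works, the next step would be pulling specific joint values out of the messages and using them to move things in the projected scene.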
Experimentation
I also got time to experiment with a Leap Motion sensor, which functions similarly to the idea I have for the projection. It allows users to interact with the computer using just their hand(s) in the air rather than a mouse and keyboard. The default game for it is a 3D space where the user places blocks onto robot bodies to make little robots at a disco. I would likely apply this in a 2D scenario since I only know the very basics of 3D modeling and animation.
Through my Leap Motion research I learned you can use it to control your desktop in general, not just the specific apps made for it. I wanted to make it work with a kiosk project I had already created, to see if I could get interactions like that to function in the future. To do this I needed another program, and GameWave turned out to be the best option of the ones I tried. I originally struggled because GameWave was processing my movements but not tracking my hand and moving the mouse accordingly. Eventually I found a list of apps it is most compatible with and got it working with Minecraft as a tester. For GameWave hand interactions you have to map which motions trigger which actions. Once I learned the basics by applying it to Minecraft, I was able to use it with the drag-and-drop functions from a previous project. This makes me feel confident I could design something else to function in a similar way that could be used with this type of technology.
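Since GameWave ultimately just moves the regular mouse pointer and clicks for you, anything written against normal mouse events should, in theory, already be hand-controllable. This is roughly the kind of drag-and-drop behavior I was testing, rebuilt as a minimal Processing sketch; the square and its numbers are made up purely for illustration.

```
// A draggable square that follows the pointer while it is "held".
// GameWave maps hand motions to mouse movement and clicks, so a plain
// mouse-driven sketch like this is what ends up being hand-controlled.

float boxX = 100, boxY = 100;   // current position of the draggable square
float boxSize = 80;
boolean dragging = false;
float offsetX, offsetY;         // where inside the square it was grabbed

void setup() {
  size(600, 400);
}

void draw() {
  background(30);
  fill(dragging ? color(120, 200, 255) : color(200));
  rect(boxX, boxY, boxSize, boxSize);
}

void mousePressed() {
  // Start dragging only if the "click" (e.g. a closed hand) lands on the square.
  if (mouseX > boxX && mouseX < boxX + boxSize &&
      mouseY > boxY && mouseY < boxY + boxSize) {
    dragging = true;
    offsetX = mouseX - boxX;
    offsetY = mouseY - boxY;
  }
}

void mouseDragged() {
  if (dragging) {
    boxX = mouseX - offsetX;
    boxY = mouseY - offsetY;
  }
}

void mouseReleased() {
  dragging = false;
}
```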
How Does it Work?
Through my research I found that this all works through an infrared camera (as seen in the Kinect or Leap Motion sensor), which uses depth mapping to determine where a hand or body is. Special software can then map a skeleton onto the hand or body and track its movement. For the hand, the software treats the center of the palm as the mouse and registers a closed hand as a click by sensing that the "mouse" is now covered. This whole process can be seen in the motion visualizer on the screen in the image above. I'm not sure exactly how it functions for a whole body yet, but I did learn that the screen or surface may be divided into sections, so the camera registers when a section has content (i.e. a person) in it and then reacts according to the programmed motion. The basic version of this I found had a skull that watched and followed a person as he walked into different zones of the projection. You can see this example here.
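To make the zone idea more concrete, here is a small Processing sketch of how that kind of reaction logic might look. mouseX is only a stand-in for the tracked position coming from the camera (for example, the center-of-mass coordinate SimpleKinect can send), and the zone count and colors are placeholders.

```
// Divide the display into vertical zones and react to whichever zone
// the tracked position currently falls in.
int zones = 4;

void setup() {
  size(800, 400);
}

void draw() {
  background(0);
  float zoneWidth = width / (float) zones;

  // mouseX stands in for the sensor's x coordinate of the person or hand.
  int activeZone = constrain(int(mouseX / zoneWidth), 0, zones - 1);

  for (int i = 0; i < zones; i++) {
    // Light up the zone that currently has "content" in it; keep the rest dim.
    fill(i == activeZone ? color(255, 200, 0) : color(40));
    rect(i * zoneWidth, 0, zoneWidth, height);
  }
}
```

Swapping mouseX for a real tracked coordinate is the part that still depends on getting the Kinect data into Processing.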
In the Kinect specifically, there is an infrared camera and an RGB camera. In older versions these were two separate cameras, but version two, which is what I have, produces a registered image, meaning the infrared and RGB images are already aligned and don't have to be adjusted to line up later. This version also measures depth through what is called time of flight: it sends out infrared light and times how long it takes to come back. That time measurement is converted to a distance, which is how it knows how far away objects are. It's similar to echolocation, but with light instead of sound.
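The math behind time of flight is simple enough to show directly: the one-way distance is the speed of light times the round-trip time, divided by two. Here's a quick Processing-style calculation with a made-up round-trip time, just to see the scale of the numbers involved.

```
// Time of flight: the sensor times the infrared light's round trip
// and halves it to get the one-way distance to the object.
float SPEED_OF_LIGHT = 299792458.0;   // meters per second
float roundTrip = 6.67e-9;            // example round-trip time in seconds (made up)

float distance = SPEED_OF_LIGHT * roundTrip / 2.0;
println(distance + " m");             // roughly 1.0, i.e. the object is about a meter away
```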
Moving Forward
Through the research and experimentation process I learned what I can and cannot do and how infrared camera interactions function. For the next few weeks my main goal is to get the Kinect functioning as a sensor connected to my laptop and then work out the projection. I wanted to have the SimpleKinect software running over Labor Day weekend, but I had to order the Kinect-to-PC adapter online because no stores had one in stock. This delay is a minor setback, so I hope to learn SimpleKinect next weekend instead. I would also like to determine the exact subject that will be projected, but I don't think I can pin down specific interactions until I learn what I can do.
I'm eager to continue and excited about what this project has the potential to become. The more I research, the better I feel about my ability to achieve this goal. However, I am also a little nervous about making the experience actually work once I figure out the Kinect sensor setup.