Sunday, October 6, 2013
Update: Bluetooth-Controlled Arduino Car
After receiving my Bluetooth module a few weeks ago, I've been working on getting it set up with my Arduino board, and I finally have a cool video to share with all of you. The video shows a simple plastic platform with two DC motors that are controlled by an Adafruit motor shield on my Arduino board. The motor shield also has two ports open to receive and transmit the Bluetooth signals, respectively. The Arduino board is programmed to perform a set of basic actions depending on which command, '1' through '5', I send it. My next project is to write a Glass app that takes in the motion of the wearer's head and translates it into movement commands for the car. Hope you enjoy the video.
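To make that next step a little more concrete, here's a rough sketch of how the Glass app might turn accelerometer readings into the five command characters the car already understands. Everything here is an assumption for illustration: the class names (HeadTiltCommander, CommandSink), the axis orientation, the tilt threshold, and which character means forward, back, left, right, or stop are all placeholders, not the final design.

```java
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;

// Sketch only: map head tilt, read from the accelerometer, onto the
// single-character commands '1'-'5' that the Arduino sketch expects.
public class HeadTiltCommander implements SensorEventListener {

    public interface CommandSink {
        void send(char command); // e.g. write the byte out over Bluetooth
    }

    private static final float TILT_THRESHOLD = 2.5f; // m/s^2, to be tuned by experiment
    private final CommandSink sink;

    public HeadTiltCommander(CommandSink sink) {
        this.sink = sink;
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() != Sensor.TYPE_ACCELEROMETER) return;
        // Axis mapping on Glass is an assumption here; swap or negate as needed.
        float pitch = event.values[1]; // nod forward/back
        float roll = event.values[0];  // tilt left/right

        // A real app would debounce and only send a command when it changes.
        if (pitch > TILT_THRESHOLD)       sink.send('1'); // forward
        else if (pitch < -TILT_THRESHOLD) sink.send('2'); // backward
        else if (roll > TILT_THRESHOLD)   sink.send('3'); // left
        else if (roll < -TILT_THRESHOLD)  sink.send('4'); // right
        else                              sink.send('5'); // stop
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}
```

Hooking it up would just be the usual SensorManager.registerListener call in the activity, with the sink writing each character to the Bluetooth link.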
Monday, August 12, 2013
Update: Working with Glass
Sorry it's been so long since I updated this blog. I finally got my hands on Google Glass just over two weeks ago, and in that time things have moved a lot more quickly now that I can actually test out some code. Google is working on a Glass Development Kit, but in the meantime has encouraged developers who cannot rely on the Mirror API to use the Android Development Kit to write their programs. Applications written with the Mirror API currently cannot access the device hardware, such as the Bluetooth receiver, and they rely on hitting a server for each action, which would be dangerous for wheelchair control. The following images are all screenshots from my Google Glass using Det Ansinn's launcher.
The first program I wrote for Glass was a basic application to test moving between different actions in the Glass UI. I'm starting to play around with voice recognition and the accelerometer as two separate hands-free input mechanisms to make up for the lack of an eye-facing camera. The place that would hold that camera, according to Google's patent (US 8,235,529), instead has a light sensor, but Catwig's teardown makes it seem as though it could be replaced by a small camera in a future release. For the time being, I'll abstract out the input so that users can pick whichever mechanism works best for them, and plug in eye tracking as another option when it becomes available.
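Here's roughly what that abstraction looks like in my head: every input mechanism implements the same tiny interface, and the rest of the app only ever sees command characters. The names below (HandsFreeInput and friends) are placeholders I made up for this sketch, not a final API.

```java
// Placeholder sketch of the input abstraction: voice, the accelerometer, and
// (eventually) eye tracking all produce the same stream of commands, so the
// user can pick whichever mechanism works best for them.
public interface HandsFreeInput {

    interface Listener {
        void onCommand(char command); // '1' through '5', same protocol as the car
    }

    void start(Listener listener); // begin delivering commands
    void stop();                   // release the microphone or sensors
}
```

An AccelerometerInput, VoiceInput, or future EyeTrackingInput would each implement this, and the driving code never needs to know which one is active.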
The other program I wrote was a driver for listening to the Bluetooth receiver for input and sending out basic commands over Bluetooth. I installed this same program on both an old Android phone and Glass and was able to get them to speak with each other. The next step will be to get the small Arduino car I set up to receive the Bluetooth signals and move its motors based on the received messages. This will take a bit of time given the added complexity of getting Bluetooth set up on Arduino, but it'll be a ton of fun, and it's a major step towards getting this project up and running. Once I can translate messages sent from Glass into motor movements, the next steps are refining the voice and accelerometer input mechanisms and coming up with a polished UI experience for the user.
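For anyone curious what that driver boils down to, the connect-and-send side is roughly the snippet below. It assumes a standard serial-port-profile (SPP) Bluetooth module on the other end and a device address supplied elsewhere; the listening thread and error handling are stripped out, and CarLink is just a name I picked for the sketch.

```java
import android.bluetooth.BluetoothAdapter;
import android.bluetooth.BluetoothDevice;
import android.bluetooth.BluetoothSocket;
import java.io.IOException;
import java.io.OutputStream;
import java.util.UUID;

// Minimal sketch of the Bluetooth driver: open an RFCOMM socket to the car's
// serial-port-profile module and push single-character commands at it.
public class CarLink {
    // Well-known UUID for the Serial Port Profile.
    private static final UUID SPP_UUID =
            UUID.fromString("00001101-0000-1000-8000-00805F9B34FB");

    private BluetoothSocket socket;
    private OutputStream out;

    public void connect(String deviceAddress) throws IOException {
        BluetoothAdapter adapter = BluetoothAdapter.getDefaultAdapter();
        BluetoothDevice device = adapter.getRemoteDevice(deviceAddress);
        socket = device.createRfcommSocketToServiceRecord(SPP_UUID);
        socket.connect();       // blocks, so call this off the UI thread
        out = socket.getOutputStream();
    }

    public void sendCommand(char command) throws IOException {
        out.write(command);     // e.g. '1' through '5'
        out.flush();
    }

    public void close() throws IOException {
        socket.close();
    }
}
```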
I'll post soon about the progress I'm making towards getting the car moving around.
Wednesday, May 29, 2013
May 29th Update
I was hoping to have something more definitive to post this week about getting Glass, but it seems as though I'll have to wait until next week. The good news is that last Wednesday Google announced that it has begun to send invitations to the contest winners. They've told us that we will all get our invitation within two weeks from that point, and I've seen many of my fellow Explorers post about getting their invitation already. I'm looking forward to getting mine.
In the interim, I've also begun to write some eye-tracking code using the OpenCV libraries. What I've seen so far seems to indicate that this iteration of Glass does not have eye tracking, so I'll need to modify my pair to perform the necessary functions. I'm still confident that Google will add this functionality soon, given the patent they were granted and the potential it has to dramatically increase the usability of their hardware. In the meantime, however, I'll get something together that works well enough to give me a viable testing environment for my application to drive the wheelchair. This will be done by putting an off-the-shelf camera into the neck of Glass and writing my own eye-tracking library with OpenCV.
The OpenCV library is very powerful and has a great community. Getting working eye-tracking code for a simple camera has proven really straightforward so far. I hope to have some demos to post soon.
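As a taste of what "straightforward" means here, the core of a detection loop is only a handful of OpenCV calls. This is a desktop sketch using the stock Haar cascade that ships with OpenCV, not the final Glass-mounted version; I'm assuming the OpenCV 3.x+ Java bindings, and the cascade path, camera index, and class name are whatever your setup needs. Real eye tracking adds pupil localization and smoothing on top of plain detection.

```java
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.CascadeClassifier;
import org.opencv.videoio.VideoCapture;

// Bare-bones eye detection with OpenCV's bundled Haar cascade.
public class EyeDetectDemo {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        CascadeClassifier eyes = new CascadeClassifier("haarcascade_eye.xml");
        VideoCapture camera = new VideoCapture(0); // off-the-shelf USB camera

        Mat frame = new Mat();
        Mat gray = new Mat();
        while (camera.read(frame)) {
            Imgproc.cvtColor(frame, gray, Imgproc.COLOR_BGR2GRAY);
            MatOfRect detections = new MatOfRect();
            eyes.detectMultiScale(gray, detections);
            for (Rect eye : detections.toArray()) {
                System.out.println("Eye at " + eye.x + "," + eye.y
                        + " size " + eye.width + "x" + eye.height);
            }
        }
        camera.release();
    }
}
```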
Sunday, May 19, 2013
Getting Started
First and foremost, thank you to my contributors. Thanks to a successful IndieGoGo campaign, I'm fully prepared to get started on this project. So many people have shown such enthusiasm for this idea that I'm really excited to get it into the hands of those who need it. A huge source of that enthusiasm has come from Life Labs at United Cerebral Palsy. They've been extremely supportive and added legitimacy to my campaign by throwing their support behind it. They've also agreed to help advise me along the way, and we're currently working together to get set up with an affiliate who may be able to give me access to a wheelchair for prototyping my designs. I'm very grateful for their support in this endeavor.
I'm setting up this blog as a means of communicating with those interested in the project as time goes on. It will serve as a centralized location for anyone to get updates on the progress of my work, as well as to read about the challenges and successes along the way. Perhaps most importantly, this blog will help me set goals and stay focused on this project. It can be difficult to stay true to a side project while also balancing work and life, so forcing myself to make weekly posts here will help keep me on track.
Google still has not set a release date for the Explorer edition of Glass for the contest winners, but I'm eagerly awaiting their announcement. In the meantime, I've been able to work off of what the earlier wave of Explorers have posted and with the code Google has made available. My first few posts will just be about working on code in anticipation of what we're getting, but hopefully soon I'll be able to work with Glass itself. I look forward to hearing people's comments and suggestions along the way, and can't wait to start sharing my progress.
Tuesday, April 9, 2013
Repost: UCP Life Labs Throws Support Behind Google Glass Eye-Control App for Wheelchairs
Originally Posted at Life Labs on April 9, 2013
When I heard about the Google Glass Explorer contest, I thought about project ideas that could help people by using the unique features of this new augmented-reality technology. I remembered a project that some fellow students did during a robotics class that I took in graduate school. It used eye-tracking technology to remotely control the motors on a vehicle. After confirming that Google planned to embed eye-tracking technology in their new product, I realized this idea could work for applications such as wheelchairs.
My plan is to provide feedback about the wearer’s surroundings, including obstacles and suggested paths, and enable him or her to control the wheelchair with eye movements. The original student project used patterns of a user’s eyes being opened or closed to change between types of motion. For my project, I want to use subtle yet deliberate movements of the eye to let the user interact seamlessly with the surrounding environment. I think this technology could be life-changing for persons with disabilities. I hope that being able to work on this project with the support of the Google Glass Explorer program will help make it a reality.
I received my Bachelor of Science in Mechanical Engineering from Worcester Polytechnic Institute and then focused heavily on Robotics while receiving my Master of Science in Mechanical Engineering from Tufts University. When I began to pursue a career as a Software Engineer at Wayfair, I still maintained a strong passion for Robotics on the side. I’m extremely excited about the benefits wearable computing will have in health care and can’t wait to start working on a project that will give people access to mobility they may not have had before.
After I wrote up my idea and posted it on Google Plus with the #ifihadglass hashtag, some fellow Wayfairians tweeted about it. Walter Frick of BostInno saw a tweet, did an interview with me, and then wrote an article about it. The story was then picked up by Popular Science and aggregator sites such as HackerNews. Now that the project has won Google’s Glass Explorers contest, I have started an IndieGoGo campaign to raise enough funds to get started on a prototype right away. If you would like to help bring this project into reality, please visit the IndieGoGo campaign and consider making a donation, however small.
Wednesday, March 27, 2013
Repost: Proposal to Control Wheelchairs with Google Glass
Originally Posted at Wayfair Engineering on March 27, 2013
When our company’s co-founder encouraged all of our Engineering department to participate in the Google Glass Explorer contest, I thought about project ideas that could help people by using the unique features of this new augmented-reality technology. I remembered a project that some fellow students did during a robotics class that I took in graduate school. It used eye-tracking technology to remotely control the motors on a vehicle. After confirming that Google planned to embed eye-tracking technology in their new product, I realized this idea could work for applications such as wheelchairs.
My plan is to provide feedback about the wearer’s surroundings, including obstacles and suggested paths, and enable him or her to control the wheelchair with eye movements. The original student project used patterns of a user’s eyes being opened or closed to change between types of motion. For my project, I want to use subtle yet deliberate movements of the eye to let the user interact seamlessly with the surrounding environment. I think this technology could be life-changing for persons with disabilities. I hope that being able to work on this project with the support of the Google Glass Explorer program will help make it a reality.
I wrote up my idea, posted it on Google Plus with the #ifihadglass hashtag, and some fellow Wayfairians tweeted about it. Walter Frick of BostInno saw a tweet, did an interview with me, and then wrote an article about it. You can read the full story at these links:
BostInno: http://bit.ly/X7nD6b
Popular Science write-up: http://bit.ly/16unjVS