
Last night, I received an interesting email from Ajay Kohli. I met Ajay at the CodeChix Glass workshop back in February. He’s a med student working at Kaiser, as well as a Glass Explorer. When I met him, he was very excited about the possibilities for using Glass in hospitals to assist physicians with all sorts of things, in particular during surgery. Anyway, his email mentioned that he was working with a cochlear surgeon, and they were discussing a rather interesting application for Google Glass. Apparently, bone conduction is a viable way to enable some people with hearing loss to hear, and there is currently a class of hearing aids that uses it. What’s interesting is that Google Glass also uses bone conduction for its built-in speaker. Ajay was wondering if I knew of any Glassware or source code that could enable Glass to be used as a hearing aid.

I looked into it a bit, and initially found an old project that didn’t work. I figured that it probably wouldn’t be much effort to write something to do this, and I was not wrong. Here’s the entire code (sans resources) for my hearing aid Glassware:
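At its core, it’s just a mic-to-speaker passthrough over the bone-conduction speaker. A minimal sketch of that idea (illustrative, not the verbatim Glassware source, which is linked below) using Android’s AudioRecord and AudioTrack, assuming a 16 kHz mono PCM stream and the RECORD_AUDIO permission:

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioRecord;
import android.media.AudioTrack;
import android.media.MediaRecorder;

/** Reads from the microphone and immediately plays back through the speaker. */
public class AudioLoopback implements Runnable {

    private static final int SAMPLE_RATE = 16000;

    private volatile boolean mRunning = true;

    @Override
    public void run() {
        int bufferSize = AudioRecord.getMinBufferSize(SAMPLE_RATE,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
                SAMPLE_RATE, AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT, bufferSize);
        AudioTrack player = new AudioTrack(AudioManager.STREAM_MUSIC,
                SAMPLE_RATE, AudioFormat.CHANNEL_OUT_MONO,
                AudioFormat.ENCODING_PCM_16BIT, bufferSize, AudioTrack.MODE_STREAM);

        short[] buffer = new short[bufferSize / 2];
        recorder.startRecording();
        player.play();
        while (mRunning) {
            // Pull whatever audio is available from the mic and push it straight
            // back out through the bone-conduction transducer.
            int read = recorder.read(buffer, 0, buffer.length);
            if (read > 0) {
                player.write(buffer, 0, read);
            }
        }
        recorder.stop();
        recorder.release();
        player.stop();
        player.release();
    }

    public void stop() {
        mRunning = false;
    }
}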

You can check out the full project, and download the apk to try it out for yourself on GitHub.

Update

David Callaway had his father, who is hard of hearing and uses hearing aids, test this out. According to him, it actually works!


I recently ran a workshop at Andreessen Horowitz, which was an introduction to Google Glass. It was a lot of fun, so I figured that I would create a screencast out of the content. This is the second of two posts on the subject; this one discusses the Mirror API Java Quick Start. Part 1 may be found here.

Related Material:


I recently ran a workshop at Andreessen Horowitz, which was an introduction to Google Glass. It was a lot of fun, so I figured that I would create a screencast out of the content. This is the first of two posts on the subject; this one discusses the Mirror API at a high level. Part 2 may be found here.

Related Material:

Next, take a look at part 2, which walks through the Java Quick Start for the Mirror API.


… or any other smart wearable?

There have been several recent posts on how Google Glass is a failure, written by so-called tech experts. Here’s a quick list; I’m sure you can find more:

On top of that, one third of early wearable adopters have already stopped wearing them. What’s going on here? I thought wearables were going to be the next big thing.

They are, but we’re not there yet.

Right now, we are at the beginning. It doesn’t surprise me at all that wearables are not proving useful enough to use on a daily basis; the truth is, they’re not useful enough yet.

What’s more, the article discussing the decline in use was looking at activity-tracking devices. It doesn’t surprise me that people might lose interest in activity tracking after doing it for a few months and not seeing much improvement; I don’t log into my Basis dashboard very often these days, though I do wear the device daily. Regardless, I think it is more useful to focus on smart wearables here, and specifically on Glass.

Let’s have a brief history lesson.

Consider the case of smartphones, when they were popularized several years ago. When Android launched, standalone GPS units were relatively expensive, and the iPhone didn’t have a decent navigation solution. Android came onto the scene and provided an incredibly useful app, Maps, which included real-time navigation. (Actually, I don’t remember exactly whether Maps provided this out of the gate, but I do remember buying a $30 standalone GPS app with downloadable cached maps for my G1.) Both platforms opened the door to over-the-top messaging and calling products that have driven prices for those services down across the board, and they opened up new markets for casual gaming, which barely existed before. They let you read the news on the train without needing to buy a newspaper. There are countless other things that smartphones do these days that people really wouldn’t want to live without.

That said, it took time for people to realize these use cases and to begin to rely on their phones more and more. The performance out of the gate with the G1, and I assume the original iPhone, left a lot to be desired. The hardware limitations made it difficult to imagine a world where more people come online for the first time on mobile phones than on computers. I had an idea that they could be that useful, but until recently, I hadn’t seen a compelling enough combination of hardware and software to get me to reach for my phone over my laptop (or even my tablet). Glass (and other smart wearable platforms) will have to get over the same hump.


It’s true that Glass doesn’t have enough killer apps yet (arguably, it doesn’t have any). There are a handful of cool apps for Glass: LynxFit, WinkFeed, Shard, my app Ceramic Notifier, and of course the camera, Google+, Hangouts, Google Now, and Gmail. For Glass to be a good value proposition for consumers, there need to be at least a few things that you can do with Glass that you simply could not do without it.

It will probably be at least a couple of years before we’ve had enough time to build out the ecosystems for these platforms, and to even think about what these killer apps might be. The truth is that these new wearable platforms are very different from smartphones. They may run similar software stacks, but people interact with them completely differently. What I’m trying to get out of the Explorer program is to learn what these behaviors are going to look like. I want to understand this new paradigm from the perspective of the user, along with all the frustrations.

Frustrations with Glass are not failures, they are opportunities.

Why buy Google Glass, or other smart wearables right now?

My answer: if you’re a developer looking for a billion-dollar opportunity, this is the space to be in. Otherwise, wait a little bit. Smart wearables will still be there in six months or a year, and they’ll be better.


NPR ran a story on All Things Considered about distracted driving and the legal challenges facing Google Glass.

The piece highlights some of the legal issues, with states proposing legislation that would classify wearing Glass on the road as distracted driving. Unfortunately, the politicians do not seem to understand Glass, and the Explorer they interviewed was allowing himself to be distracted by Glass while driving, reading off Field Trip information and claiming that it was OK because the screen was transparent.

My personal view is that it is a lot easier to avoid being distracted by Glass than by your phone while driving. Glass allows you to quickly dismiss unnecessary information, whereas the phone requires more interaction to get to and dismiss the same information. From there, it depends on what sort of information you’re getting. A lot of the time, with an email or text message, Glass can read the content aloud to you, so that you don’t need to look at the screen. Responding also does not require you to be looking at the screen.

The device can be either much less distracting than a phone, or just as distracting, depending on how you use it. I think that the Glass team needs to tackle this problem head on, and make sure that Explorers have no illusions about what constitutes a distraction. They also need to do a better job of communicating all of this to the public. Finally, they should add APIs for developers to determine whether navigation mode is enabled. From there, developers might choose not to surface information, or to delay information based on ETA. E.g., never show me Field Trip cards or news while driving.


These devices look interesting, and I’m very excited about smart wearables shipping with built-in heart rate monitors. However, it looks like Samsung has totally failed on the UX front. These things look incredibly cumbersome to interact with.

Specifically, in the clip of setting the wallpaper, when the wallpaper is selected, why do they return the user to the wallpaper selection screen instead of going straight back to the home screen?

How many new gestures are we going to need to learn to change the wallpaper?

Viewing a text message requires the user to open up the notification, instead of delivering the content in the notification view and presenting actions there. This means that you need two taps just to start recording a response.

The S-Voice demo shows about the most verbose voice interaction I’ve seen. The amount of back-and-forth required to send a text message is obnoxious: you have to issue a ‘send’ command to actually send out the message. And why are they making status messages look like text messages from the person you’re trying to communicate with?

The Fit’s screen faces the wrong direction. Good luck reading it: you’d need to hold your arm straight out to avoid seeing the thing at a funny angle.

I’m certainly excited about the future of wearables, and I think Samsung has some good ideas here, but the execution seems sub-par. 


I just published my first package on npm! It’s a helper for S3 that I wrote. It does three things: it lists your buckets, gets a URL pair with a key, and deletes media upon request.

The URL pair is probably the most important, because it allows you to have clients that put things on S3 without those clients having any credentials. They can simply make a request to your server for a URL pair, then use the signed PUT URL to upload the object to your bucket; the pair also includes a public GET URL, so that anyone can go get it out.

var s3Util = require('s3-utils');
var s3 = new s3Util('your_bucket');
var urlPair = s3.generateUrlPair(success);
/**
    urlPair: {
        s3_key: "key",
        s3_put_url: "some_long_private_url",
        s3_get_url: "some_shorter_public_url"
    }
*/

Deleting media from your S3 bucket:

s3.deleteMedia(key, success);

Or just list your buckets:

s3.listBuckets();

I had previously written about using the AWS SDK for node.js here. It includes some information about making sure that you have the correct permissions set up on your S3 bucket, as well as how to PUT a file on S3 using the signed URL.


This post is intended to serve as a guide to building a GDK app, and I will be using it as I build one live on stage. The code that will be discussed here is mostly UI, building the Activity and Fragments, as opposed to dealing with lower level stuff, or actually building GIFs. If you’re interested, the code for building GIFs is included in the library project that I’ll be using throughout, ReusableAndroidUtils. The finished, complete, working code for this project can be found here.

Video

Project setup (GDK, library projects)

Create a new project.

Select Gradle: Android Module, then give your project a name.

Create a package, then set the API levels to 15, and the ‘compile with’ section to GDK, API 15.

Let’s do a Blank Activity.

If Gradle is yelling at you about local.properties, create a local.properties file with the following:

sdk.dir=/Applications/IntelliJ IDEA 13.app/sdk/ 

Or whatever location your Android SDK is installed at; then refresh Gradle.

You might need to re-select the GDK if your project is broken and not compiling: highlight the module and hit ‘F4’ in IntelliJ/Android Studio.

Now the project should be building.

Hello world & run

Let’s start out by just running ‘hello world’. I’m adding this as a module, instead of creating a new project. As such, the manifest is not set up correctly, and needs to be modified:

Make sure it runs.

Fullscreen & run

Update the manifest to change the app’s theme to be NoActionBar.Fullscreen:

Make sure it runs.

Voice launch & demo

Turn the activity into an immersion:

I found this great blog post on adding voice actions. Add the voice_launch.xml:

And the string to strings.xml:

Run, and test it by saying ‘OK Glass, launch the app’.

Image capture & demo

Refactor structure to make the fragment its own file, ImageGrabFrag. Now, we are going to add some code to deal with the camera. The Android documentation on the Camera APIs is quite useful. Here are a couple of StackOverflow discussions that might be helpful for common Camera problems.

Next, we need to add a SurfaceView to the main fragment.

Now, we need to add a dependency on my library project, ReusableAndroidUtils, which contains a lot of helper code. To do this, clone the repo and import it as a module into your project. After you’ve imported it, update your build.gradle file to reflect the addition:

Next, I’m going to pull in a bunch of code for ImageGrabFrag. Here’s what we have at this stage for that file:
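In rough outline (a trimmed-down sketch, not the full file from the repo; the layout and id names are placeholders), it’s a Fragment that owns the camera and waits for the SurfaceHolder before touching it:

import java.io.IOException;

import android.app.Fragment;
import android.hardware.Camera;
import android.os.Bundle;
import android.util.Log;
import android.view.LayoutInflater;
import android.view.SurfaceHolder;
import android.view.SurfaceView;
import android.view.View;
import android.view.ViewGroup;

public class ImageGrabFrag extends Fragment implements SurfaceHolder.Callback {

    private static final String TAG = "ImageGrabFrag";

    private Camera mCamera;

    @Override
    public View onCreateView(LayoutInflater inflater, ViewGroup container,
                             Bundle savedInstanceState) {
        // R.layout.frag_image_grab and R.id.preview are assumed names for the
        // layout containing the SurfaceView we just added.
        View root = inflater.inflate(R.layout.frag_image_grab, container, false);
        SurfaceView preview = (SurfaceView) root.findViewById(R.id.preview);
        // The camera can't be attached until the surface exists, so wait for the callback.
        preview.getHolder().addCallback(this);
        return root;
    }

    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        initCamera(holder);
    }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
    }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
        if (mCamera != null) {
            mCamera.stopPreview();
            mCamera.release();
            mCamera = null;
        }
    }

    private void initCamera(SurfaceHolder holder) {
        try {
            mCamera = Camera.open();
            mCamera.setPreviewDisplay(holder);
            mCamera.startPreview();
        } catch (IOException e) {
            Log.e(TAG, "Failed to start camera preview", e);
        }
    }
}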

Finally, let’s update the manifest with the required permissions.

Copy that in, make sure it all compiles, and try running. At this point, you should get a garbled preview that looks all funky:

What’s happening here? Well, Glass requires some special parameters to initialize the camera properly. Take a look at the updated initCamera method:
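Roughly, and continuing the sketch above, the fix is to pin the preview FPS range and use a preview size Glass actually supports (the values here are the commonly cited ones for Glass XE, not copied from the repo):

private void initCamera(SurfaceHolder holder) {
    try {
        mCamera = Camera.open();

        Camera.Parameters params = mCamera.getParameters();
        // Glass's camera stack needs an explicit, fixed preview FPS range and a
        // supported preview size; without these you get the garbled preview above.
        params.setPreviewFpsRange(30000, 30000); // 30 fps, expressed as fps * 1000
        params.setPreviewSize(640, 360);
        mCamera.setParameters(params);

        mCamera.setPreviewDisplay(holder);
        mCamera.startPreview();

        // Kick off a capture as soon as the preview is up (the callback is covered below).
        mCamera.takePicture(null, null, mPictureCallback);
    } catch (IOException e) {
        Log.e(TAG, "Failed to start camera preview", e);
    }
}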

OK, almost there. Now, we just need to capture an image. Take note of the takePicture() call added at the end of initCamera(): when we initialize the camera now, we’re going to try taking a picture immediately. And now we run into an odd issue.

Try launching the app from the IDE, then try launching it with a voice command. When you launch from your IDE or the command line, it should work; if you launch with a voice command, it will crash! There are a couple of things going on here. We are initializing the camera after the SurfaceHolder has been created, which is good, because if the surface weren’t ready, the camera would fail to come up. However, when we launch with a voice command, the microphone is locked by the system, which is still listening to our voice command. The camera needs the mic to be unlocked, because we might be trying to record video. Thus, we get a crash.

The error, specifically, is the following:

Camera: Unknown message type 8192

There are several discussions about this issue floating around:

The thing to do is to delay initializing the camera a bit:
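For example, by posting the initialization onto a Handler with a short delay (this is a sketch inside the fragment; the 500 ms figure is an arbitrary “long enough” choice):

@Override
public void surfaceCreated(final SurfaceHolder holder) {
    // When launched by voice, the system still holds the microphone for a moment,
    // and the camera can't open while the mic is locked. Waiting a bit avoids the crash.
    new Handler().postDelayed(new Runnable() {
        @Override
        public void run() {
            initCamera(holder);
        }
    }, 500);
}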

Try running again, and it should work this time.

Multi-Image capture

This part’s simple. We just modify our image captured callback, and add a counter.

We’re also going to add a static list of files to the main activity, to keep track of them. We’ll need that later.
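Sketched out with placeholder names (NUM_PICTURES, saveToFile, and MainActivity.sFiles are mine, not necessarily what’s in the repo):

// In the Activity (placeholder name MainActivity), a static list to hold the captured frames:
public static final List<File> sFiles = new ArrayList<File>();

// In ImageGrabFrag, the capture callback now counts frames and re-arms itself:
private static final int NUM_PICTURES = 10; // hypothetical number of frames per gif
private int mPicturesTaken = 0;

private final Camera.PictureCallback mPictureCallback = new Camera.PictureCallback() {
    @Override
    public void onPictureTaken(byte[] data, Camera camera) {
        // saveToFile is a stand-in for whatever helper writes the JPEG bytes to disk.
        File file = saveToFile(data);
        MainActivity.sFiles.add(file);
        mPicturesTaken++;

        if (mPicturesTaken < NUM_PICTURES) {
            // takePicture() stops the preview, so restart it before the next capture.
            camera.startPreview();
            camera.takePicture(null, null, this);
        }
    }
};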

Combine to gif

Now we need another fragment. We’re going to do a really simple fragment transaction, and we’ll add an interface to help keep things clean. Here’s the interface:
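Something along these lines (the name is a placeholder; the repo may call it something else):

/**
 * Lets the capture fragment tell the Activity that all the frames are captured,
 * so it can swap in the gif-building fragment.
 */
public interface PicturesTakenListener {
    void onPicturesTaken();
}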

We need a reference to it in the ImageGrabFrag:

Then we need to change what happens when we finish capturing our last photo:

Here’s the final version of the ImageGrabFrag, before we leave it:

Alright, time to add a new fragment. This one needs a layout, so here’s that:

And here’s the fragment that uses that layout:

While there is a fair bit of code here, what’s going on is pretty simple. We kick off an AsyncTask that iterates through the files, builds Bitmaps, and adds those Bitmaps to an ArrayList. When that’s done, we can move on. Here’s a link to a StackOverflow discussion that explains how to build gifs.
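The core of that task, sketched with placeholder names (BuildGifTask and MainActivity.sFiles are mine, and buildGif stands in for the gif-encoding helper from ReusableAndroidUtils):

private class BuildGifTask extends AsyncTask<Void, Integer, File> {

    // Frames decoded so far; also used by onProgressUpdate for the preview flip (below).
    private final ArrayList<Bitmap> mBitmaps = new ArrayList<Bitmap>();

    @Override
    protected File doInBackground(Void... params) {
        int done = 0;
        for (File file : MainActivity.sFiles) {
            // Decode each captured JPEG back into memory.
            mBitmaps.add(BitmapFactory.decodeFile(file.getAbsolutePath()));
            publishProgress(++done, MainActivity.sFiles.size());
        }
        // Hand the frames to the gif encoder from the library project.
        return buildGif(mBitmaps);
    }

    @Override
    protected void onPostExecute(File gif) {
        // Hand the finished gif back: insert the timeline card / show the viewer (below).
    }
}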

For any of this to work, we need to actually do the fragment transactions. So, here is the updated Activity code:
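The interesting part is just a stock fragment swap; sketched here with placeholder names (R.id.container and BuildGifFrag), implementing the listener from earlier:

// In the Activity:
@Override
public void onPicturesTaken() {
    // Swap the capture fragment out for the gif-building fragment.
    getFragmentManager().beginTransaction()
            .replace(R.id.container, new BuildGifFrag())
            .commit();
}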

Now we can run it, and it will do stuff, but on Glass it’s not going to be a very good experience. For one thing, as the user, you won’t have any idea what’s going on while the gif is being built. Let’s do two things about that. First, we should keep the screen on. Update the Activity's onCreate method:
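That’s a single window flag (the layout name here is a placeholder):

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    // Keep Glass's display from dimming or sleeping while the gif is being built.
    getWindow().addFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON);
    setContentView(R.layout.activity_main);
}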

Let’s also add a bit of eye candy to the fragment. I want to flip through the images as they’re being worked on. I also want a ProgressBar to be updated until the builder is finished.
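One way to wire that up (a sketch, assuming the fragment’s layout exposes an ImageView and a ProgressBar) is to drive both from the task’s onProgressUpdate:

// Inside BuildGifTask, where mBitmaps is the task's list of decoded frames and
// mImageView / mProgressBar are views from the fragment's layout:
@Override
protected void onProgressUpdate(Integer... values) {
    int done = values[0];
    int total = values[1];
    // Flip to the most recently decoded frame and tick the progress bar forward.
    mImageView.setImageBitmap(mBitmaps.get(done - 1));
    mProgressBar.setMax(total);
    mProgressBar.setProgress(done);
}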

Run it again, that’s a little better, right?

Static Card

The first thing I’d like to do is insert a Card into the timeline with the image. Now, this works, but it’s not great. StaticCards are very simple right now; they don’t offer much functionality, as the API is still being fleshed out. Hopefully it will be updated soon. For now, though, let’s do what we can:
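With the Sneak Peek era GDK this was written against, that goes through TimelineManager. A period-specific sketch, run from the Activity (this API has since changed, and gifFile is a placeholder for the builder’s output):

TimelineManager timelineManager = TimelineManager.from(this);
Card card = new Card(this);
card.setText("GIF captured");
card.addImage(Uri.fromFile(gifFile));
timelineManager.insert(card);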

As a note, if you have not already, make sure that Gradle is set to build with the GDK:

Viewer

The StaticCard is OK, but not great as a viewer; we made a gif, not a single image. Let’s build a viewer. First, we’ll add our last fragment transaction method to the Activity.

The viewer itself is very simple. It’s just a WebView that is included in the library project. We don’t even need a layout for it!
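The idea boils down to handing the gif’s path to a WebView, which will actually animate it (a plain ImageView won’t). A stand-alone sketch, not the library’s exact fragment:

import android.app.Fragment;
import android.os.Bundle;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.webkit.WebView;

public class GifViewerFrag extends Fragment {

    public static final String ARG_GIF_PATH = "gif_path"; // placeholder argument key

    @Override
    public View onCreateView(LayoutInflater inflater, ViewGroup container,
                             Bundle savedInstanceState) {
        // Point the WebView at the gif on disk; it handles the animation for us.
        WebView webView = new WebView(getActivity());
        webView.loadUrl("file://" + getArguments().getString(ARG_GIF_PATH));
        return webView;
    }
}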

There’s a bit of an issue with not being able to see the whole thing. I didn’t dig into figuring out how to fix that issue. There are other ways of getting gifs to display, but I couldn’t get them to work in a reasonable amount of time.

Getting the gif off Glass

This section almost warrants its own discussion, but basically the issue is this: without writing a bunch of code and uploading the gif to your own server, there isn’t really a nice, official way of sharing the gif. Ideally, the StaticCard that we created would support MenuItems, which would allow us to share out to other Glassware with a Share menu.

Luckily, I found this great blog post that provides a nice workaround. It describes how to share to Google Drive with only a few lines of code. Here’s the code:
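In spirit, it hands the gif off to an app that can upload it. A stand-in sketch using a standard share intent (this is not necessarily the linked post’s verbatim approach, and gifFile is again a placeholder):

// Generic hand-off of the finished gif to another installed app (e.g. Drive) for upload.
Intent shareIntent = new Intent(Intent.ACTION_SEND);
shareIntent.setType("image/gif");
shareIntent.putExtra(Intent.EXTRA_STREAM, Uri.fromFile(gifFile));
startActivity(shareIntent);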

It’s not the sort of thing that I’d feel comfortable releasing, however, it is enough to finish up here.

Wrap-up

GlassGifCamera code on GitHub.


The following are videos from the Hacking Glass 101 meetup, held at Hacker Dojo on 2/18/2014.

My talk on the Mirror API, and on using Mirror from Android and node.js:

Here are some extra resources from my talk.

Demo of Play Chord GDK Glassware, by Tejas Lagvankar:

Lawrence had some technical difficulties with his GDK presentation, but here’s what he was able to cover:

Dave Martinez wrapped everything up for us:

Big thanks to Sergey Povzner for filming and uploading for us!


I recently started playing with Crashlytics for an app that I’m working on. I needed better crash reporting than what Google Play was giving me. I had used HockeyApp for work, and I really liked that service; my initial thought was to go with HA, but as I started looking around, I noticed that Crashlytics offers a free enterprise-level service. There’s no downside to trying it!

I gave it a shot. They do a nice job with IntelliJ and Gradle integration for their Android SDK, so setting up my project was quite easy. I tested it out, and again, it was very simple and worked well. The reporting that I got back was quite thorough, more than anything else I’m aware of gives you. It reports not just the stack trace, but the state of the system at the time of your crash. If you’ve ever run into an Android bug, this sort of thing can really come in handy.

But then I ran into an issue. Something was acting funny, so I pinged Crashlytics support. I was pretty sure that it was an Android bug, but I hadn’t had time to really nail down what the exact problem was. After a short back and forth, I let them know that I’d try to dig in a little more when I had time, but that I was busy and it might not be until the next week. The following day, I received a long, detailed response that included Android source code, explaining exactly the condition that I was seeing. I was floored. They had two engineers working on this, figuring out exactly what the problem was and what to do about it. I don’t think I could imagine a better customer service experience!

As a note, I have no affiliation with Crashlytics outside of trying out their product for a few days. Their CS rep did not ask me to write this; I was simply so impressed that I wanted other people to know about it.