Ok Glass, Treat an incident (2/2)

Following our previous article, we are still working on MBAAS and GPS guidance. In this article, we focus on retrieving the information saved in Parse, then talk about geo-point search.

This time, we will talk about treating an incident that has already been reported. First, the user needs to choose among the declared incidents, and then get to its location. To do that, we retrieve from Parse the ten closest incidents still in progress. Once they are retrieved, the user can choose the one he wants to treat. He is then directed to a map displaying both his current location and the incident location. From this view he can also access the incident description and a related picture. He can then go on to a simple guidance screen. At this point, the user only sees the distance as the crow flies and the general direction of the incident. He can also access a map showing his current location and the incident location; however, the map does not refresh as the user moves.

This time, the speech recognition understood what I said on the first try. And again, don't look for a car: there isn't one.

Retrieve the closest incidents with Parse

Incident_retrieved

We begin our flow by searching for the ten closest incidents that are still in progress. Each of them is displayed in its own card as follows: first its id, then its type, and finally its creation date. The cards are bundled in a ScrollView, as for the type selection in the report application. We also retrieve the first report and the incident picture; since we cannot display them right away, we pass them on to the next screen.

public void onLocationChanged(Location location) {
	// Called when a new location is found by the network location provider.
	mUserLatitude = location.getLatitude();
	mUserLongitude = location.getLongitude();

	mLocation = new ParseGeoPoint(mUserLatitude, mUserLongitude);

	ParseQuery<ParseObject> query = ParseQuery
			.getQuery(Poc_TreatAnIncident_Constants.INCIDENT_LOCATION);

	query.whereNear("location", mLocation);
	query.whereEqualTo(Poc_TreatAnIncident_Constants.INCIDENT_TREATED,
					false);
	query.setLimit(10);

	query.findInBackground(new FindCallback<ParseObject>() {

		@Override
		public void done(List<ParseObject> objects, ParseException e) {
			if (e == null) {
				Card card;
				mCards = new ArrayList<Card>();
				String _footer = "/" + objects.size();
				ParseObject po;

				for (int i = 0; i < objects.size(); i++) {
					card = new Card(IncidentSearchActivity.this);
					po = objects.get(i);
					mClosestIncidents.add(po);

					card.setText(dispalyIncidentInfo(po));
					card.setFootnote((i + 1) + _footer);
					mCards.add(card);
				}

				mCardScrollView = new CardScrollView(
						IncidentSearchActivity.this);
				IncidentTypeCardScrollAdapter adapter = new IncidentTypeCardScrollAdapter();
				mCardScrollView.setAdapter(adapter);
				mCardScrollView.setOnItemClickListener(mClickAdapter);
				mCardScrollView.activate();
				setContentView(mCardScrollView);
				mLocationManager.removeUpdates(mLocationListener);
			} else {
				Toast.makeText(IncidentSearchActivity.this,
						"Failed to retrieve incidents", Toast.LENGTH_SHORT)
						.show();
			}
		}
	});
}

Here, we query for the nearest incidents with whereNear(), which sorts the results by distance from the user's location, restrict the query to the ten closest with setLimit(10), and run it asynchronously with findInBackground().

Additional information: location, picture and description

incident_map

In this activity the user can see where the incident is located; he can consult its description by swiping backward, and the incident picture by swiping forward while the map is displayed. As in our previous application, the map comes from Google Static Maps. We didn't use a card here because a card doesn't seem able to display a standalone image other than as a background: a card can only hold text, a background image, or a mosaic of pictures on its left side.
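To give an idea of how this three-view screen can be wired, here is a minimal sketch using the GDK GestureDetector; the field names (mMapView, mDescriptionView, mPictureView), the show() helper and the exact gesture-to-view mapping are assumptions, not the application's actual code.

// Sketch: switch between map, description and picture on swipes.
// Remember to forward events to the detector from onGenericMotionEvent(),
// as shown later in this article.
private View mMapView, mDescriptionView, mPictureView; // hypothetical fields

private final GestureDetector.BaseListener mViewSwitcher = new GestureDetector.BaseListener() {
	@Override
	public boolean onGesture(Gesture gesture) {
		if (gesture == Gesture.SWIPE_LEFT) {
			// Swipe backward: show the incident description.
			show(mDescriptionView);
			return true;
		} else if (gesture == Gesture.SWIPE_RIGHT) {
			// Swipe forward: show the incident picture.
			show(mPictureView);
			return true;
		}
		return false;
	}
};

// Make one view visible and hide the two others.
private void show(View toShow) {
	for (View v : new View[] { mMapView, mDescriptionView, mPictureView }) {
		v.setVisibility(v == toShow ? View.VISIBLE : View.GONE);
	}
}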

Direction with Glass

To guide the user to the incident scene, we chose to give only a general direction and the distance as the crow flies. To provide this guidance, a mark moves on screen depending on the orientation of the Glass camera relative to the incident.

direction front
When the user is heading in the right direction, the mark moves from one side of the screen to the other according to the incident direction.
direction left
When the incident is not in front of the user but on his left.
direction right
When the incident is not in front of the user but on his right.

Caution: the camera sits on the movable arm used to tune the angle of the prism, which may introduce a small offset from the real angle.

With only this indicator, it would be pretty hard for the user to find his way, so we also give him the possibility to check his position. By swiping backward, the user can access a map displaying both his current location and the incident location. As with all our maps, this one is static and is not refreshed as the user moves; it only refreshes when the user checks his position again.

<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:id="@+id/orient"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context="com.example.orientingtest.DirectionActivity$PlaceholderFragment" >    
    <ImageView
        android:id="@+id/incident_on_left"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_alignParentLeft="true"
        android:layout_centerVertical="true"
        android:src="@drawable/mleft"
        android:visibility="gone" />

    <ImageView
        android:id="@+id/incident_on_right"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_alignParentRight="true"
        android:layout_centerVertical="true"
        android:src="@drawable/mright"
        android:visibility="gone" />

    <ImageView
        android:id="@+id/incident_in_front"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_alignParentLeft="true"
        android:layout_centerHorizontal="false"
        android:layout_centerVertical="true"
        android:src="@drawable/mfront" />
    
    <ImageView android:id="@+id/map_current_position"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:visibility="gone" />

    <TextView
        android:id="@+id/information"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_alignParentTop="true"
        android:layout_centerHorizontal="true"
        android:layout_marginTop="@dimen/card_margin"
        android:ellipsize="end"
        android:singleLine="true"
        android:text="@string/swipe_left_for_info"
        android:textAppearance="?android:attr/textAppearanceSmall" />
    
    <TextView
        android:id="@+id/distance"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_alignParentBottom="true"
        android:layout_centerHorizontal="true"
        android:layout_marginBottom="@dimen/card_margin"
        android:ellipsize="end"
        android:singleLine="true"
        android:text="@string/wait_result"
        android:textAppearance="?android:attr/textAppearanceSmall" />
</RelativeLayout>

At initialization, the default orientation is north; you need to move for the real orientation to be picked up, and it can take a moment before it refreshes properly.
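For reference, here is a minimal sketch of how the heading fed to directionTo() (shown below) could be obtained from the rotation vector sensor; smoothing, magnetic declination correction and the axis remapping Glass may require (see the GDK Compass sample) are left out, and the listener registration is only indicated in comments.

// Sketch: derive a heading (degrees from north) from the rotation vector
// sensor and forward it to directionTo(). Filtering is omitted.
private final SensorEventListener mHeadingListener = new SensorEventListener() {
	private final float[] mRotationMatrix = new float[9];
	private final float[] mOrientation = new float[3];

	@Override
	public void onSensorChanged(SensorEvent event) {
		if (event.sensor.getType() == Sensor.TYPE_ROTATION_VECTOR) {
			SensorManager.getRotationMatrixFromVector(mRotationMatrix, event.values);
			// Note: on Glass the coordinate system may need remapping
			// with SensorManager.remapCoordinateSystem(), as in the Compass sample.
			SensorManager.getOrientation(mRotationMatrix, mOrientation);
			double heading = Math.toDegrees(mOrientation[0]); // azimuth in -180..180
			if (heading < 0) {
				heading += 360;
			}
			directionTo(heading);
		}
	}

	@Override
	public void onAccuracyChanged(Sensor sensor, int accuracy) {
		// Nothing to do here.
	}
};

// Registration, typically in onResume():
// SensorManager sm = (SensorManager) getSystemService(Context.SENSOR_SERVICE);
// sm.registerListener(mHeadingListener,
//         sm.getDefaultSensor(Sensor.TYPE_ROTATION_VECTOR),
//         SensorManager.SENSOR_DELAY_UI);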

public void directionTo(double heading) {
	float bearing = MathUtils.getBearing(mUserLatitude, mUserLongitude,
			mIncidentLatitude, mIncidentLongitude);
	float differentialAngle = (float) (bearing - heading);

	if (differentialAngle < 0)
		differentialAngle += 360;

	// The view displays 90 degrees across its width so that one 90 degree head rotation is
	// equal to one full view cycle.
	float pixelsPerDegree = (mTipsContainer.getWidth() - 68) / 90.0f;

	double distancem = 1000 * MathUtils.getDistance(mUserLatitude,
			mUserLongitude, mIncidentLatitude, mIncidentLongitude);

	mTipsView.setText(mDistanceFormat.format(distancem) + " m");

	if ((differentialAngle >= 45 && differentialAngle <= 180)) {
		// Incident is to the user's right.
		mLeftMark.setVisibility(View.GONE);
		mRightMark.setVisibility(View.VISIBLE);
		mMarkFront.setVisibility(View.GONE);
	} else if ((differentialAngle > 180 && differentialAngle <= 315)) {
		// Incident is to the user's left.
		mLeftMark.setVisibility(View.VISIBLE);
		mRightMark.setVisibility(View.GONE);
		mMarkFront.setVisibility(View.GONE);
	} else if ((differentialAngle >= 0 && differentialAngle < 45)
			|| differentialAngle == 360) {
		// Incident is roughly in front, slightly to the right:
		// slide the front mark from the center toward the right edge.
		mLeftMark.setVisibility(View.GONE);
		mRightMark.setVisibility(View.GONE);
		mMarkFront.setVisibility(View.VISIBLE);

		mMarkFront.setLayoutParams(positionL);
		RelativeLayout.LayoutParams pos = positionL;
		int margin = (int) (((mTipsContainer.getWidth() - 68) / 2) + (pixelsPerDegree * differentialAngle));

		pos.setMargins(margin, 0, 0, 0);
		mMarkFront.setLayoutParams(pos);
	} else if (differentialAngle > 315 && differentialAngle < 360) {
		// Incident is roughly in front, slightly to the left:
		// slide the front mark from the left edge toward the center.
		mLeftMark.setVisibility(View.GONE);
		mRightMark.setVisibility(View.GONE);
		mMarkFront.setVisibility(View.VISIBLE);

		mMarkFront.setLayoutParams(positionL);
		RelativeLayout.LayoutParams pos = positionL;
		int margin = (int) ((pixelsPerDegree * (differentialAngle - 315)));

		pos.setMargins(margin, 0, 0, 0);
		mMarkFront.setLayoutParams(pos);
	}
}

Once differentialAngle has been normalized to the 0 to 360 degree range, a value between 0 and 180 degrees means the incident is on the user's right, and a value between 180 and 360 degrees means it is on his left. For example, with a bearing of 90° and a heading of 350°, the raw difference is -260°, which normalizes to 100°: the right mark is shown.

Intent intent = new Intent(Intent.ACTION_VIEW);
intent.setData(Uri.parse("google.navigation:q=48.649469,-2.02579"));
startActivity(intent);

Instead of our rough guidance, it would also have been possible to provide turn-by-turn navigation, but we wanted to test how Glass handles direction and orientation ourselves. The intent above was found by people analyzing the logs of the built-in Glass command “Get directions”, so it might change in future updates, making it possibly unreliable (see How can I launch Directions from GDK).

Once the user is within ten meters of the incident, he can tap to continue with the flow.
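As an illustration, the tap gating could be as simple as the following sketch; the threshold constant, the mArrived flag and the next activity name are all assumptions.

// Sketch: only let a TAP move on once the user is within ~10 meters.
private static final double ARRIVAL_DISTANCE_METERS = 10.0; // hypothetical threshold
private boolean mArrived = false;

// Called from directionTo() with the distance computed there.
private void updateArrivalState(double distanceMeters) {
	mArrived = distanceMeters <= ARRIVAL_DISTANCE_METERS;
}

// In the gesture listener:
// if (gesture == Gesture.TAP && mArrived) {
//     startActivity(new Intent(this, TreatedPictureActivity.class)); // hypothetical next step
//     return true;
// }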

Picture and report after the incident is treated

After the incident is treated, the user must take a picture of the solved incident and issue a final report. The user takes the picture as in the previous applications; he can preview it and retry until he is satisfied. In the same way, we use speech recognition to record the user's final report after the picture has been taken, with the same implementation as in our previous application. The report and the picture are then saved in Parse with the saveInBackground() function.
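A minimal sketch of that final save might look like this; mIncident, the "finalPicture" and "finalReport" field names and the helper itself are assumptions about the schema, only the Parse calls are real API.

// Sketch: mark the incident as treated and attach the final picture and report.
private void saveTreatedIncident(byte[] pictureData, final String reportText) {
	final ParseFile picture = new ParseFile("treated.jpg", pictureData);
	picture.saveInBackground(new SaveCallback() {
		@Override
		public void done(ParseException e) {
			if (e == null) {
				mIncident.put("finalPicture", picture);
				mIncident.put("finalReport", reportText);
				mIncident.put(Poc_TreatAnIncident_Constants.INCIDENT_TREATED, true);
				mIncident.saveInBackground();
			}
		}
	});
}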

Conclusion

With this application we have been able to test the orientation sensor, alongside the location features of Glass. We have also seen that it is really simple to retrieve data from Parse, even with a location constraint: the complexity is hidden behind a friendly function.

This article concludes our work on location and MBAAS. We will continue to work on Glass and we will come back to you when we have more to share.

Ok Glass, report an incident (1/2)

Here is a new article where we continue our trip with Google Glass. If you missed the previous one, it's this way (in French).

This time, we decided to explore the possibility of using Glass with an MBAAS (Mobile Backend As A Service). To complete this next trial of Glass capabilities, we also wanted to give directions with Glass and test its GPS tracking features. We chose Parse as our MBAAS, one of the reasons being how easily geo-points can be saved and then searched in the database.

In this article, we will work on an application that helps the user report an incident around him. When a user encounters an incident, he can launch the application and choose the kind of problem he is facing. In our case, an incident is something like a theft, a deterioration or a defacement. After he has made his choice, he can take a picture of the scene and add a first vocal report. Then the location is stored, and a map shows the incident location so the user can confirm it.

I have a terrible accent and a stuffy nose, and the two mixed together often make it nearly impossible for Glass to understand me. And don't look for the car, it isn't in the picture.

Scrolling through a set of types on Glass

Like all Glassware, our application is launched with a voice command; here the trigger is “Report an incident”. It is also possible to launch the application from the touchpad, as in the previous video.

scrollView-first-encounter

The first step in reporting an incident is to specify its type. We limit the user to a fixed set of types to ease the incident declaration. Given that, speech recognition was not the right tool.

Speech recognition gives the user too much freedom, making it tricky to recognize the incident type we expect. So we decided to use a set of predefined cards grouped in a ScrollView that the user can browse: he navigates through the set by swiping, then taps to select a card.

private List<Card> mCards;
private CardScrollView mCardScrollView;

private AdapterView.OnItemClickListener mClickAdapter = new AdapterView.OnItemClickListener() {
	public void onItemClick(AdapterView<?> parent, View view, int position,
			long id) {
		Intent _toPictureActivity = new Intent(TypeAnIncidentActivity.this,
				PictureMainActivity.class);
		_toPictureActivity.putExtra(
				Poc_DeclareAnIncident_Constants.INCIDENT_TYPE,
				Poc_DeclareAnIncident_Constants.INCIDENT_TYPES
						.get(position));
		startActivity(_toPictureActivity);
	}
};

@Override
protected void onCreate(Bundle savedInstanceState) {
	super.onCreate(savedInstanceState);

	createCards(); //create cards to be displayed 
	mCardScrollView = new CardScrollView(this);
	IncidentTypeCardScrollAdapter adapter = new IncidentTypeCardScrollAdapter();
	mCardScrollView.setAdapter(adapter);
	mCardScrollView.setOnItemClickListener(mClickAdapter);
	mCardScrollView.activate();
	setContentView(mCardScrollView);
}

private void createCards() {
	...
}

private class IncidentTypeCardScrollAdapter extends CardScrollAdapter {

	@Override
	public int getPosition(Object item) {
		return mCards.indexOf(item);
	}

	@Override
	public int getCount() {
		return mCards.size();
	}

	@Override
	public Object getItem(int position) {
		return mCards.get(position);
	}

	@Override
	public int getViewTypeCount() {
		return Card.getViewTypeCount();
	}

	@Override
	public int getItemViewType(int position) {
		return mCards.get(position).getItemViewType();
	}

	@Override
	public View getView(int position, View convertView, ViewGroup parent) {
		return mCards.get(position).getView(convertView, parent);
	}
}

Our ScrollView is based on the example from the Google Glass developer site. We added the click listener so that it meets our needs and fires an intent carrying the selected incident type.
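createCards() is elided above; a minimal version, assuming the incident types are the strings held in Poc_DeclareAnIncident_Constants.INCIDENT_TYPES, could look like this:

// Sketch: build one Card per predefined incident type.
private void createCards() {
	mCards = new ArrayList<Card>();
	List<String> types = Poc_DeclareAnIncident_Constants.INCIDENT_TYPES;
	for (int i = 0; i < types.size(); i++) {
		Card card = new Card(this);
		card.setText(types.get(i));
		card.setFootnote((i + 1) + "/" + types.size());
		mCards.add(card);
	}
}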

Save a picture and a report in Parse

Once the user has selected an incident type, he is prompted to take a picture of the scene to add to the report. For this step we used the same code as in the bus schedule application: the user can preview the picture he is about to take and press the camera button to take it. This time, however, we let the user retake the picture if it doesn't meet his standards. When he is satisfied with the picture, he can add a quick voice description of the incident to the report, using the speech recognition activity embedded in Glass.

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
	if (requestCode == SPEECH_REQUEST && resultCode == RESULT_OK) {
		List<String> results = data
				.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
		String _report = results.get(0);

		mIncidentDescription.put(
				Poc_DeclareAnIncident_Constants.INCIDENT_DESCRIPTION,
				_report);

		voice = true;

		mInstructionView
				.setText(Poc_DeclareAnIncident_Constants.RECORD_REPORT);
	}
	super.onActivityResult(requestCode, resultCode, data);
}

private GestureDetector createGestureDetector(Context context) {
	GestureDetector gestureDetector = new GestureDetector(context);
	//Create a base listener for generic gestures
	gestureDetector.setBaseListener(new GestureDetector.BaseListener() {
		@Override
		public boolean onGesture(Gesture gesture) {
			if (gesture == Gesture.TAP && ending) {
				...

				Intent _speechIntent = new Intent(
						RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
				_speechIntent.putExtra(RecognizerIntent.EXTRA_PROMPT,
						"Please speak your incident report");
				startActivityForResult(_speechIntent, SPEECH_REQUEST);

				releaseCamera();
				...
				return true;
			} else if (gesture == Gesture.SWIPE_LEFT) {
				if (voice) {
					voice = false;

					Intent _speechIntent = new Intent(
							RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
					_speechIntent.putExtra(RecognizerIntent.EXTRA_PROMPT,
							"Please speak your incident report");
					startActivityForResult(_speechIntent, SPEECH_REQUEST);
				}
				return true;
			}
			return false;
		}
	});

	return gestureDetector;
}

/*
 * Send generic motion events to the gesture detector
 */
 @Override
 public boolean onGenericMotionEvent(MotionEvent event) {
    if (mGestureDetector != null) {
        return mGestureDetector.onMotionEvent(event);
    }
    return false;
 }

The embedded speech recognition activity is started with startActivityForResult(). The recognized text is the first element of the string array returned in the result intent. We also add gesture handling so the user can restart the activity and record the report again if needed.

speech_prompt_treat

In addition, this time we need to save the picture and the incident report. This is where Parse comes in handy. Before we continue, let me quickly explain what Parse is; most of you might already know, but it will help explain what we were looking for by using it.

Parse is an MBAAS, a Mobile Backend As A Service, sometimes referred to as Backend as a Service (BaaS). Basically, it belongs to a category of cloud services that make it easier for developers to set up, use and operate a cloud backend for their mobile, tablet and web apps.

With an MBAAS, you don't need to set up physical infrastructure for your servers, nor the server-side software that runs on them. It also provides data management tools, so you do not need to build a database yourself. All of these tools are meant to ease the developer's work, and SDKs are provided to integrate them easily. Parse also has built-in support for geo-points, which helps developers handle location data. This will prove useful later on, and I will come back to this feature.
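For completeness, wiring an Android application to Parse boils down to one call at startup; the class name and key placeholders below are ours:

// Sketch: initialize the Parse SDK once, at application startup.
public class IncidentApplication extends Application {
	@Override
	public void onCreate() {
		super.onCreate();
		// Application ID and client key come from the Parse dashboard.
		Parse.initialize(this, "YOUR_APPLICATION_ID", "YOUR_CLIENT_KEY");
	}
}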

/**
 * Callback function when the picture is saved in parse
 */
private SaveCallback mPictureSaved = new SaveCallback() {
	@Override
	public void done(ParseException e) {
		if (e == null) {
			mIncidentDescription.put(
					Poc_DeclareAnIncident_Constants.INCIDENT_PICTURE,
					mIncidentPicture);

			mIncidentDescription.saveInBackground(mIncidentSaved);
			mProgress.setMessage("Saving incident...");
		} else {
			Toast.makeText(PictureMainActivity.this,
					"Failed to save incident picture", Toast.LENGTH_SHORT)
					.show();
		}
	}
};

/**
 * Callback function when the incident is saved in parse
 */
private SaveCallback mIncidentSaved = new SaveCallback() {

	@Override
	public void done(ParseException e) {
		if (e == null) {
			String incidentId = mIncidentDescription.getObjectId();

			Intent locationIntent = new Intent(
					PictureMainActivity.this,
					com.glass.poc.poc_declareanincident.location_feed.IncidentLocationActivity.class);

			locationIntent
					.putExtra(Poc_DeclareAnIncident_Constants.INCIDENT_ID,
							incidentId);
			startActivity(locationIntent);
			mProgress.dismiss();
		} else {
			Toast.makeText(PictureMainActivity.this,
					"Failed to save incident report", Toast.LENGTH_SHORT)
					.show();
		}
	}
};

@Override
public boolean onGesture(Gesture gesture) {
	if (gesture == Gesture.TAP && ending) {
		if (ending && !voice) {
			...
		}

		if (ending && voice) {
			...
			mIncidentPicture.saveInBackground(mPictureSaved);
		}
		return true;
	} else if (gesture == Gesture.SWIPE_LEFT) {
		...
		return true;
	}
	return false;
}

Here we asynchronously save the picture, as a ParseFile, and the rest of the incident report with Parse's saveInBackground() function (see here for more information).
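The mIncidentPicture used above is simply a ParseFile built from the JPEG bytes returned by the camera callback; as a sketch (the file name is arbitrary):

// Sketch: wrap the JPEG bytes from onPictureTaken() in a ParseFile,
// then save it; mPictureSaved (shown above) chains the incident save.
mIncidentPicture = new ParseFile("incident.jpg", data);
mIncidentPicture.saveInBackground(mPictureSaved);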

Once we have retrieved the picture and the vocal report, we save them along with other information into Parse. Then we can proceed to adding the incident location to that information.

Location on Glass

Before adding the incident location, we need to retrieve it. Glass can find its own location through Wi-Fi or GPS (Location on Glass developer guide). Glass actually uses the same mechanism as Android and relies on its API to receive location updates (Location update on Glass). Hence we have to create a Criteria object, a LocationManager and a LocationListener, as in any Android application.

private Criteria mLocationCriteria;
private LocationManager mLocationManager;

private LocationListener mLocationListener = new LocationListener() {
	public void onLocationChanged(Location location) {

		// Called when a new location is found by the network location provider.
		double mIncidentLatitude = location.getLatitude();
		double mIncidentLongitude = location.getLongitude();

		mLocation = new ParseGeoPoint(mIncidentLatitude, mIncidentLongitude);

		ParseQuery<ParseObject> query = ParseQuery
				.getQuery(Poc_DeclareAnIncident_Constants.INCIDENT_LOCATION);

		query.getInBackground(mIncidentId, new GetCallback<ParseObject>() {
			public void done(ParseObject object, ParseException e) {
				if (e == null) {
					object.put(
						Poc_DeclareAnIncident_Constants.INCIDENT_Loc,
						mLocation);
					object.saveInBackground(new SaveCallback() {

						@Override
						public void done(ParseException e) {
							if (e == null) {
								Toast.makeText(
									IncidentLocationActivity.this,
									"Incident location saved Tap to dismiss",
									Toast.LENGTH_SHORT).show();

								end = true;
							} else {
								Toast.makeText(
									IncidentLocationActivity.this,
									"Failed to save incident location",
									Toast.LENGTH_SHORT).show();
								finish();
							}
						}
					});
				} else {
					Toast.makeText(
						IncidentLocationActivity.this,
						"Failed to retrieve incident",
						Toast.LENGTH_SHORT).show();
				}
			}
		});

		mMapView = new ImageView(IncidentLocationActivity.this);
		setContentView(mMapView);
		loadMap(mIncidentLatitude, mIncidentLongitude, 17);
		mLocationManager.removeUpdates(mLocationListener);
	}

	public void onStatusChanged(String provider, int status, Bundle extras) {
	}

	public void onProviderEnabled(String provider) {
	}

	public void onProviderDisabled(String provider) {
	}
};

When we get the location update, we retrieve the corresponding incident from Parse and add the user's current location as the incident location.

@Override
public void onCreate(Bundle savedInstanceState) {
	super.onCreate(savedInstanceState);
	setContentView(R.layout.location_immersion);

	mLocationCriteria = new Criteria();
	mLocationCriteria.setAccuracy(Criteria.ACCURACY_FINE);

	mLocationManager = (LocationManager) this
			.getSystemService(Context.LOCATION_SERVICE);

	...

	// Queue the location research runnable
	mHandler.post(mUserLocationRunnable);
}

private class UserLocationRunnable implements Runnable {

	private boolean mIsStopped = false;

	public void run() {
		if (!isStopped()) {
			String provider = mLocationManager.getBestProvider(
					mLocationCriteria, true);

			boolean isEnabled = mLocationManager
					.isProviderEnabled(provider);

			if (isEnabled) {
				// Register the listener with the Location Manager to receive
				// location updates from the best enabled provider found above.
				mLocationManager.requestLocationUpdates(provider, 10000, 0,
						mLocationListener);
			} else {
				String s = "No provider enable.";

				txtV.setText(s);
			}

			setStop(true);
		}
	}

	public boolean isStopped() {
		return mIsStopped;
	}

	public void setStop(boolean isStopped) {
		this.mIsStopped = isStopped;
	}
}

Do not forget to add the appropriate permission in the manifest.
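For the location updates that usually means ACCESS_FINE_LOCATION; the INTERNET permission is also needed for the Parse and static map calls if it is not already declared:

<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
<uses-permission android:name="android.permission.INTERNET" />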

Now we have our incident location. To confirm that it is valid, we show the user a map with the location displayed. Here we had to use a static map, because we couldn't add Play Services, and Google Maps in particular, to a Google Glass project (Map image for Google Glass). The map loading is done in the location listener, just after the location is saved in Parse.

/**
 * Template for a static map
 */
private static final String STATIC_MAP_URL_TEMPLATE = "https://maps.googleapis.com/maps/api/staticmap"
		+ "?center=%.5f,%.5f"
		+ "&zoom=%d"
		+ "&sensor=true"
		+ "&size=640x360"
		+ "&markers=color:red%%7C%.5f,%.5f"
		+ "&scale=1"
		+ "&style=element:geometry%%7Cinvert_lightness:true"
		+ "&style=feature:landscape.natural.terrain%%7Celement:geometry%%7Cvisibility:on"
		+ "&style=feature:landscape%%7Celement:geometry.fill%%7Ccolor:0x303030"
		+ "&style=feature:poi%%7Celement:geometry.fill%%7Ccolor:0x404040"
		+ "&style=feature:poi.park%%7Celement:geometry.fill%%7Ccolor:0x0a330a"
		+ "&style=feature:water%%7Celement:geometry%%7Ccolor:0x00003a"
		+ "&style=feature:transit%%7Celement:geometry%%7Cvisibility:on%%7Ccolor:0x101010"
		+ "&style=feature:road%%7Celement:geometry.stroke%%7Cvisibility:on"
		+ "&style=feature:road.local%%7Celement:geometry.fill%%7Ccolor:0x606060"
		+ "&style=feature:road.arterial%%7Celement:geometry.fill%%7Ccolor:0x888888";

/** Formats a Google static maps URL for the specified location and zoom level. */
private static String makeStaticMapsUrl(double latitude, double longitude,
		int zoom, double markLat, double markLong) {
	return String.format(STATIC_MAP_URL_TEMPLATE, latitude, longitude,
			zoom, markLat, markLong);
}

private ImageView mMapView;

Here we added a red mark at the incident location.

/** Load the map asynchronously and populate the ImageView when it's loaded. */
private void loadMap(double latitude, double longitude, int zoom) {
	double tlat = latitude;
	double tlong = longitude;
	String url = makeStaticMapsUrl(latitude, longitude, zoom, tlat, tlong);
	new AsyncTask<String, Void, Bitmap>() {
		@Override
		protected Bitmap doInBackground(String... urls) {
			try {
				HttpResponse response = new DefaultHttpClient()
						.execute(new HttpGet(urls[0]));
				InputStream is = response.getEntity().getContent();
				return BitmapFactory.decodeStream(is);
			} catch (Exception e) {
				Log.e(TAG, "Failed to load image", e);
				return null;
			}
		}

		@Override
		protected void onPostExecute(Bitmap bitmap) {
			if (bitmap != null) {
				mMapView.setImageBitmap(bitmap);
			}
		}
	}.execute(url);
}

location_first_encounter

Conclusion

Through this reporting application we have been able to put Glass's location features to the test, alongside the use of an MBAAS. We also got to use scroll views and Glass cards; oddly, this seems to be the only way to display scrollable content, as ListViews have been broken for a while (issue 484).

Next time, we will talk about treating the incidents saved with this reporting application.

Ok Glass, when is the next Bus ?

iD.apps has been studying development for Google Glass for some time now.

There is no need to introduce Google Glass (if there is, have a look here); development on Glass, on the other hand, is a bit less well known. So we offer you a concrete example of Glass development. To understand all the technical points, some knowledge of Android development is recommended.

As a reminder, from a technical point of view, Google Glass is an Android smartphone embedded in a prism, equipped with the following main sensors:

  • A camera;
  • A microphone;
  • An accelerometer/gyroscope (for head movement);
  • A light sensor;
  • A touch surface: the right temple of the Glass.

Glass connects to the internet either through your smartphone's connection over Bluetooth, or directly over Wi-Fi.

This article describes the implementation of an application using speech recognition, the camera and the internet connection. It retrieves the next bus times for a stop in Rennes. For this, we rely on the open data feed that provides real-time waiting times at the stops. Finally, to identify the stops, we rely on the QR codes displayed on them.

Concretely, when the application is launched the user is taken to a view displaying what the camera sees, so that he can aim at the QR code to scan. On this view, the user scans the QR code by taking a picture of it with a tap. Once the QR code has been scanned and detected, the user is taken to a LiveCard added to the Glass timeline, on which he sees the next bus times.

To develop this application, we rely on the Google Glass developer site (Glass development overview) and on the Android reference documentation.

Before anything else

Google Glass development is done in an Android environment. You therefore need an SDK updated to Android 4.4.2 (API 19) as well as the Glass Development Kit Preview (see Quick Start). When creating a Glass project, set both the target SDK version and the minimum SDK version to API 19, the only version supporting the GDK Preview. Then set the compilation target to the GDK Preview. It is also recommended to remove any theme declaration.
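Concretely, that translates to a uses-sdk block along these lines in the manifest, with the project's compilation target pointed at the GDK Preview rather than plain API 19:

<uses-sdk
    android:minSdkVersion="19"
    android:targetSdkVersion="19" />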

Launching an application on Google Glass

Every Google Glass application starts with a voice command. Creating one also gives access to the application through the main Glass menu, reachable with the touchpad. The usable voice commands are limited to the official ones (VoiceTriggers). If none of them matches what you need, it is possible to have a new one registered, but that takes time and it must comply with Google's checklist (VoiceTrigger checklist). It is therefore advisable to start the process very early if you want your Glassware to be available on the MyGlass store. However, it is also possible to allow your own commands in the manifest for development purposes; this is what we currently use.

To add a custom command, first add a string resource defining the name of the voice command.

<?xml version="1.0" encoding="utf-8"?>
<resources>
    <string name="glass_voice_trigger">When is the next bus ?</string>
</resources>

Then, create an XML resource in res/xml/<ma_commande_vocale>.xml, in which you add a trigger element with a keyword attribute referencing the string resource defined earlier.

<?xml version="1.0" encoding="utf-8"?>
<trigger keyword="@string/glass_voice_trigger"/>

If the voice command is an already registered one, matching a VoiceTriggers value, the attribute is no longer keyword but command (for more information, see Starting Glassware). Then, in the manifest, register the intent associated with the voice command and allow the command.

<activity | service ...>
    <intent-filter>
        <action android:name=
                "com.google.android.glass.action.VOICE_TRIGGER" />
    </intent-filter>
    <meta-data android:name="com.google.android.glass.VoiceTrigger"
        android:resource="@xml/ma_commande_vocale" />
</activity | service>

<uses-permission
    android:name="com.google.android.glass.permission.DEVELOPMENT" />

When is the next bus ? icon

Once the application is installed on the Glass, you should see a new voice command and a new application in the application menu.

Scanning a QR code with Google Glass

Now that we can launch our application, we need to be able to scan the QR code displayed on the bus stop.

Taking a picture with Glass can be done in two ways: either with the native Google Glass activity, which does not provide a preview, or with the Android Camera API, which works the same way as a camera application on an Android smartphone.

Here, a preview seemed necessary to us: it had to let the user aim at the QR code with the built-in camera. To do so, we create a SurfaceHolder to display this preview.

private final SurfaceHolder.Callback mSurfaceHolderCallback =
              new SurfaceHolder.Callback() {

	@Override
	public void surfaceCreated(SurfaceHolder holder) {
		try {
			mCamera.setPreviewDisplay(holder);
			mCamera.startPreview();
		} catch (IOException e) {
			e.printStackTrace();
		}
	}

	@Override
	public void surfaceDestroyed(SurfaceHolder holder) {
		// Nothing to do here.
	}

	@Override
	public void surfaceChanged(SurfaceHolder holder, int format, int width,
			int height) {
		// Nothing to do here.
	}
};

Caution: when using the Glass camera intensively, make sure to release it after taking a picture, otherwise you may freeze the Glass.

private void releaseCamera() {
	if (mCamera != null) {
		if (mSurfaceHolderCallback != null) {
			mPreview.getHolder().removeCallback(mSurfaceHolderCallback);
			mCamera.release();
		}
	}
}

Then, if you want to use the camera again, you have to completely reinitialize it.

Here, we need to zoom in so that a QR code can be scanned without the user having to stick his head right in front of it.

private void initCamera() {
	mCamera = getCameraInstance();
	mPreview.getHolder().addCallback(mSurfaceHolderCallback);
	Camera.Parameters parameters = mCamera.getParameters();
	int maxZoom = parameters.getMaxZoom();
	if (parameters.isZoomSupported()) {
		if (mCamera.getParameters().getZoom() >= 0
				&& mCamera.getParameters().getZoom() < maxZoom) {
			int i = maxZoom / 2;
			parameters.setZoom(i);
		} else {
			// zoom parameter is incorrect
		}
	}
	mCamera.setParameters(parameters);
}

So, after initializing the preview and acquiring the camera, we can call takePicture and define the callbacks needed for the actual capture. We only used the JPEG callback, in which we display the captured picture before scanning it for the QR code. Finally, we start the LiveCard that displays the next bus times.

private final PictureCallback mJPEGCallback = new PictureCallback() {

	@Override
	public void onPictureTaken(byte[] data, Camera camera) {

		mPreview.setVisibility(View.GONE);
		BitmapFactory.Options options = new BitmapFactory.Options();
		options.inSampleSize = 5;

		mPicture = BitmapFactory.decodeByteArray(data, 0, data.length,
				options);
		mImageTaken.setImageBitmap(mPicture);
		mImageTaken.setVisibility(View.VISIBLE);

		String result = scanQRCode(data);

		if (!TextUtils.isEmpty(result)) {
			Toast.makeText(ScanQrCodePictureActivity.this, result,
					Toast.LENGTH_SHORT).show();
			Intent startPublishIntent = new Intent(
					ScanQrCodePictureActivity.this,
					PublishScheduleIntoLiveCardService.class);
			startPublishIntent.putExtra(Poc_Horaires_Constants.URL, result);
			startService(startPublishIntent);
		} else {
			Toast.makeText(ScanQrCodePictureActivity.this,
					getString(R.string.detection_failed),
					Toast.LENGTH_SHORT).show();
		}
		releaseCamera();
		finish();
	}
};
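The capture itself is triggered with the standard Camera API; only the JPEG callback above is passed, the shutter and raw callbacks being unnecessary here (a minimal sketch):

// Sketch: trigger the capture; only the JPEG callback defined above is used.
private void takePicture() {
	mCamera.takePicture(null, null, mJPEGCallback);
}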

A scan with the ZBar scanner can only be done on an image in Y800 format.

private String scanQRCode(byte[] data) {
	Bitmap imageRes = BitmapFactory.decodeByteArray(data, 0, data.length);
	String symbol = null;
	int width = imageRes.getWidth();
	int height = imageRes.getHeight();
	int[] pixels = new int[width * height];

	imageRes.getPixels(pixels, 0, width, 0, 0, width, height);
	Image qrcode = new Image(width, height, "RGB4");
	qrcode.setData(pixels);

	int result = mScanner.scanImage(qrcode.convert("Y800"));

	if (result != 0) {
		SymbolSet syms = mScanner.getResults();
		for (Symbol sym : syms) {
			symbol = sym.getData();
		}
	}
	return symbol;
}

Bus schedules in a LiveCard

The result of the QR code scan is a URL from which we extract the bus stop number. This number is then used in the call to the Keolis API. In our application, the extraction, the API call and the display of the result are done in a service that updates a LiveCard through a RemoteViews.
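extractBusStop() is not shown; here is a minimal sketch, assuming the stop number travels as a query parameter of the scanned URL (the real QR codes may encode it differently):

// Sketch: pull the stop number out of the scanned URL.
// Assumes a URL of the form http://.../?stop=1234; adapt to the real QR content.
private String extractBusStop(String url) {
	Uri uri = Uri.parse(url);
	String stop = uri.getQueryParameter("stop");
	return stop != null ? stop : uri.getLastPathSegment();
}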

A LiveCard is a Google Glass card showing information that is relevant right now. LiveCards sit in the timeline to the left of the Glass home screen and are updated regularly for as long as they remain relevant.

private LiveCard mLiveCard;
private RemoteViews mLiveCardView;
...
public int onStartCommand(Intent intent, int flags, int startId) {
	if (mLiveCard == null) {
		
		URL = intent.getStringExtra(Poc_Horaires_Constants.URL);
		busStop = extractBusStop(URL);

		mLiveCard = new LiveCard(this, LIVE_CARD_TAG);
		mLiveCardView = new RemoteViews(getPackageName(),
				R.layout.service_live_card);
		...
		mLiveCard.publish(PublishMode.REVEAL);

		// Queue the update text runnable
        mHandler.post(mUpdateLiveCardRunnable);
	}
	return START_STICKY;
}

Our LiveCard is updated by a thread that calls the Keolis API through Retrofit every minute. Retrofit turns a Java interface into an object that can call a REST API.

public interface KeolisApiCalls {
	@GET("/?cmd=getbusnextdepartures")
	void getHoraires(@Query("version") String version,
			@Query("key") String key, @Query("param[mode]") String mode,
			@Query("param[stop][]") String stop, Callback<OpenData> cb);
}

This is the interface used to call the Keolis API in our application.

private RestAdapter mKeolisRestAdapter;
private KeolisApiCalls mKeolisApiCalls;
...
public void run() {
	if (!isStopped()) {

		mKeolisRestAdapter = new RestAdapter.Builder()
				.setEndpoint(KeolisApiCalls.ENDPOINT)
				.setConverter(new SimpleXMLConverter()).build();
		mKeolisApiCalls = mKeolisRestAdapter
				.create(KeolisApiCalls.class);

		Callback<OpenData> cb = new Callback<OpenData>() {
			@Override
				public void success(OpenData retour, Response qResponse) {
				String scheduleToDisplay = "";

				if (retour.getAnswer().getStatus().getCode() != 0) {
					scheduleToDisplay = ""
							+ retour.getAnswer().getStatus().getCode()
							+ " : "
							+ retour.getAnswer().getStatus()
									.getMessage();

					updateScheduleError(scheduleToDisplay);
				} else {
					updateSchedule(FormatBusSchedule
							.listSchedules(retour));
				}
			}
			...
		};

		mKeolisApiCalls.getHoraires(KeolisApiCalls.VERSION,
				KeolisApiCalls.API_KEY, Poc_Horaires_Constants.STOP,
				busStop, cb);

		// Queue another schedule update in 60 seconds.
		mHandler.postDelayed(mUpdateLiveCardRunnable, DELAY_MILLIS);
	}
}

In the LiveCard update thread, we initialize a RestAdapter object implementing the Retrofit interface we defined. This object is used to call the Keolis API. The web service call takes a callback in which we get the result of our request, which we can then display in our LiveCard.

And there you go: our user knows the next bus times at his stop.

Conclusion

Through this application, we have put together a good example of Google Glass development. We got familiar with the camera, LiveCards, Retrofit and ZBar. As you can see, developing for Glass is very similar to developing for Android, at least as far as the code is concerned; usage is another story, which we will explore in future applications 😉