Ok Glass, report an incident (1/2)

Here is a new article in which we continue our journey with Google Glass. If you missed the previous one, it's this way (in French).

This time, we decided to explore the possibility of using Glass with an MBaaS (Mobile Backend As A Service). To complete this next trial of Glass capabilities, we also wanted to be able to give directions with Glass and test its GPS tracking features. We chose Parse as our MBaaS; one of the reasons was how easy it is to save geo-points and later search for them in the database.

In this article, we will work on an application that helps the user report an incident around him. When a user encounters an incident, he can launch the application and choose the kind of problem he is facing. In our case, we consider that an incident is something like a theft, a deterioration or a defacement. Once he has made his choice, he can take a picture of the scene and add a first vocal report. The location is then stored, and a map shows the incident location so the user can confirm it.

I have a terrible accent and a stuffy nose, and the two combined often make it nearly impossible for Glass to understand me. And don't look for the car, it isn't in the picture.

Scrolling through a set of types on Glass

Like all Glassware, our application uses a voice command as a launch trigger. Here the voice trigger is "Report an incident". It is also possible to launch the application with the touchpad, just like in the previous video.

[Image: scrollView-first-encounter]

The first step in reporting an incident is to specify its type. We limit the user to a predefined set of types, which eases the incident declaration. Given that, it appeared that we couldn't use speech recognition.

Speech recognition gives the user too much freedom, making it tricky to recognize the incident type we want. So we decided to use a set of predefined cards grouped in a CardScrollView that the user can browse. The user navigates through the set by swiping to choose a card, then taps to select it.

private List<Card> mCards;
private CardScrollView mCardScrollView;

private AdapterView.OnItemClickListener mClickAdapter = new AdapterView.OnItemClickListener() {
	public void onItemClick(AdapterView<?> parent, View view, int position,
			long id) {
		Intent _toPictureActivity = new Intent(TypeAnIncidentActivity.this,
				PictureMainActivity.class);
		_toPictureActivity.putExtra(
				Poc_DeclareAnIncident_Constants.INCIDENT_TYPE,
				Poc_DeclareAnIncident_Constants.INCIDENT_TYPES
						.get(position));
		startActivity(_toPictureActivity);
	}
};

@Override
protected void onCreate(Bundle savedInstanceState) {
	super.onCreate(savedInstanceState);

	createCards(); //create cards to be displayed 
	mCardScrollView = new CardScrollView(this);
	IncidentTypeCardScrollAdapter adapter = new IncidentTypeCardScrollAdapter();
	mCardScrollView.setAdapter(adapter);
	mCardScrollView.setOnItemClickListener(mClickAdapter);
	mCardScrollView.activate();
	setContentView(mCardScrollView);
}

private void createCards() {
	...
}

private class IncidentTypeCardScrollAdapter extends CardScrollAdapter {

	@Override
	public int getPosition(Object item) {
		return mCards.indexOf(item);
	}

	@Override
	public int getCount() {
		return mCards.size();
	}

	@Override
	public Object getItem(int position) {
		return mCards.get(position);
	}

	@Override
	public int getViewTypeCount() {
		return Card.getViewTypeCount();
	}

	@Override
	public int getItemViewType(int position) {
		return mCards.get(position).getItemViewType();
	}

	@Override
	public View getView(int position, View convertView, ViewGroup parent) {
		return mCards.get(position).getView(convertView, parent);
	}
}

Our CardScrollView is based on the example from the Google Glass developer site. We added the click listener so that it meets our needs and passes an intent with the type of incident.
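For completeness, here is a minimal sketch of what createCards() could look like; it is not the exact code we used. It assumes INCIDENT_TYPES is a list of labels such as "Theft", "Deterioration" and "Defacement", and the footnote text is purely illustrative.

private void createCards() {
	mCards = new ArrayList<Card>();
	// One card per predefined incident type
	for (String type : Poc_DeclareAnIncident_Constants.INCIDENT_TYPES) {
		Card card = new Card(this);
		card.setText(type);
		card.setFootnote("Tap to report this kind of incident");
		mCards.add(card);
	}
}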

Save a picture and a report in Parse

Once the user has selected an incident type, he is prompted to take a picture of the scene to add to the report. For this step we used the same code as for the bus schedule application: the user can preview the picture he is about to take and use the camera button to take it. This time, however, we also gave the user the possibility to retake the picture if it didn't meet his standards. When he is satisfied with the picture, he can add a quick voice description of the incident to the report. For this we use the speech recognition activity embedded in Glass.

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
	if (requestCode == SPEECH_REQUEST && resultCode == RESULT_OK) {
		List<String> results = data
				.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
		String _report = results.get(0);

		mIncidentDescription.put(
				Poc_DeclareAnIncident_Constants.INCIDENT_DESCRIPTION,
				_report);

		voice = true;

		mInstructionView
				.setText(Poc_DeclareAnIncident_Constants.RECORD_REPORT);
	}
	super.onActivityResult(requestCode, resultCode, data);
}

private GestureDetector createGestureDetector(Context context) {
	GestureDetector gestureDetector = new GestureDetector(context);
	//Create a base listener for generic gestures
	gestureDetector.setBaseListener(new GestureDetector.BaseListener() {
		@Override
		public boolean onGesture(Gesture gesture) {
			if (gesture == Gesture.TAP && ending) {
				...

				Intent _speechIntent = new Intent(
						RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
				_speechIntent.putExtra(RecognizerIntent.EXTRA_PROMPT,
						"Please speak your incident report");
				startActivityForResult(_speechIntent, SPEECH_REQUEST);

				releaseCamera();
				...
				return true;
			} else if (gesture == Gesture.SWIPE_LEFT) {
				if (voice) {
					// Swiping left lets the user discard the current report
					// and record it again
					voice = false;

					Intent _speechIntent = new Intent(
							RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
					_speechIntent.putExtra(RecognizerIntent.EXTRA_PROMPT,
							"Please speak your incident report");
					startActivityForResult(_speechIntent, SPEECH_REQUEST);
				}
				return true;
			}
			return false;
		}
	});

	return gestureDetector;
}

/*
 * Send generic motion events to the gesture detector
 */
 @Override
 public boolean onGenericMotionEvent(MotionEvent event) {
    if (mGestureDetector != null) {
        return mGestureDetector.onMotionEvent(event);
    }
    return false;
 }

The embedded speech recognition activity is started with startActivityForResult. The recognized text is the first element of the array of strings returned in the result intent. Here we also add gesture handling so the user can restart the activity and record the report again if needed.

[Image: speech_prompt_treat]

In addition, this time we need to save the picture and the incident report. This is where Parse comes in handy. Before we continue, let me quickly explain what Parse is. Most of you might already know, but it will help explain what we were looking for when choosing Parse.

Parse is an MBaaS, a Mobile Backend As A Service, sometimes referred to as Backend as a Service (BaaS). Basically, it is a cloud computing category comprising companies that make it easier for developers to set up, use and operate a cloud backend for their mobile, tablet and web apps.

With an MBaaS, you need to set up neither a physical infrastructure for your servers nor the software stack that runs on them. It also provides data management tools, so you do not need to build a database. All of these tools are meant to ease the work of developers, and SDKs are available to integrate them easily. Parse also has built-in support for geolocation points, which will prove useful later on; I will come back to this feature.
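To give an idea of the integration effort, here is roughly what the Parse setup looks like in an Android (and therefore Glass) project: a one-time initialization, typically done in an Application subclass. The class name and the two keys below are placeholders.

public class IncidentReportApplication extends Application {

	@Override
	public void onCreate() {
		super.onCreate();
		// Register the app with the Parse backend; every ParseObject saved
		// afterwards ends up in this application's cloud database.
		Parse.initialize(this, "YOUR_APPLICATION_ID", "YOUR_CLIENT_KEY");
	}
}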

/**
 * Callback function when the picture is saved in parse
 */
private SaveCallback mPictureSaved = new SaveCallback() {
	@Override
	public void done(ParseException e) {
		if (e == null) {
			mIncidentDescription.put(
					Poc_DeclareAnIncident_Constants.INCIDENT_PICTURE,
					mIncidentPicture);

			mIncidentDescription.saveInBackground(mIncidentSaved);
			mProgress.setMessage("Saving incident...");
		} else {
			Toast.makeText(PictureMainActivity.this,
					"Failed to save incident picture", Toast.LENGTH_SHORT)
					.show();
		}
	}
};

/**
 * Callback function when the incident is saved in parse
 */
private SaveCallback mIncidentSaved = new SaveCallback() {

	@Override
	public void done(ParseException e) {
		if (e == null) {
			String incidentId = mIncidentDescription.getObjectId();

			Intent locationIntent = new Intent(
					PictureMainActivity.this,
					com.glass.poc.poc_declareanincident.location_feed.IncidentLocationActivity.class);

			locationIntent
					.putExtra(Poc_DeclareAnIncident_Constants.INCIDENT_ID,
							incidentId);
			startActivity(locationIntent);
			mProgress.dismiss();
		} else {
			Toast.makeText(PictureMainActivity.this,
					"Failed to save incident report", Toast.LENGTH_SHORT)
					.show();
		}
	}
};

@Override
public boolean onGesture(Gesture gesture) {
	if (gesture == Gesture.TAP && ending) {
		if (ending && !voice) {
			...
		}

		if (ending && voice) {
			...
			mIncidentPicture.saveInBackground(mPictureSaved);
		}
		return true;
	} else if (gesture == Gesture.SWIPE_LEFT) {
		...
		return true;
	}
	return false;
}

Here we save the picture asynchronously as a ParseFile, and then the rest of the incident report, using Parse's saveInBackground method (Here for more information).

Once we have the picture and the vocal report, we save them along with other information into Parse. We can then proceed to add the incident location to this information.
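For reference, here is a sketch of the step that happens just before the callbacks above are triggered: wrapping the JPEG data returned by the camera into a ParseFile. The method name, the jpegData parameter and the file name are illustrative.

/** Sketch: turn the JPEG data returned by the camera into a ParseFile. */
private void saveIncidentPicture(byte[] jpegData) {
	// The name is only used by Parse to label the stored file
	mIncidentPicture = new ParseFile("incident.jpg", jpegData);

	// Upload the file first; mPictureSaved then attaches it to the incident
	// ParseObject and saves the rest of the report (see callbacks above)
	mIncidentPicture.saveInBackground(mPictureSaved);
}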

Location on Glass

Before adding the incident location, we need to retrieve it. Glass can determine its own location from Wi-Fi or GPS information (Location on Glass developer guide). Glass actually uses the same mechanism as Android and relies on the Android API to receive location updates (Location update on Glass). Hence we have to create a Criteria, a LocationManager and a LocationListener, as in any Android application.

private Criteria mLocationCriteria;
private LocationManager mLocationManager;

private LocationListener mLocationListener = new LocationListener() {
	public void onLocationChanged(Location location) {

		// Called when a new location is found by the network location provider.
		double mIncidentLatitude = location.getLatitude();
		double mIncidentLongitude = location.getLongitude();

		mLocation = new ParseGeoPoint(mIncidentLatitude, mIncidentLongitude);

		ParseQuery<ParseObject> query = ParseQuery
				.getQuery(Poc_DeclareAnIncident_Constants.INCIDENT_LOCATION);

		query.getInBackground(mIncidentId, new GetCallback<ParseObject>() {
			public void done(ParseObject object, ParseException e) {
				if (e == null) {
					object.put(
						Poc_DeclareAnIncident_Constants.INCIDENT_Loc,
						mLocation);
					object.saveInBackground(new SaveCallback() {

						@Override
						public void done(ParseException e) {
							if (e == null) {
								Toast.makeText(
									IncidentLocationActivity.this,
									"Incident location saved Tap to dismiss",
									Toast.LENGTH_SHORT).show();

								end = true;
							} else {
								Toast.makeText(
									IncidentLocationActivity.this,
									"Failed to save incident location",
									Toast.LENGTH_SHORT).show();
								finish();
							}
						}
					});
				} else {
					Toast.makeText(
						IncidentLocationActivity.this,
						"Failed to retrieve incident",
						Toast.LENGTH_SHORT).show();
				}
			}
		});

		// Display a static map centered on the incident location so the
		// user can confirm it (see loadMap below)
		mMapView = new ImageView(IncidentLocationActivity.this);
		setContentView(mMapView);
		loadMap(mIncidentLatitude, mIncidentLongitude, 17);

		// One fix is enough, stop listening for further updates
		mLocationManager.removeUpdates(mLocationListener);
	}

	public void onStatusChanged(String provider, int status, Bundle extras) {
	}

	public void onProviderEnabled(String provider) {
	}

	public void onProviderDisabled(String provider) {
	}
};

When we get the location update, we retrieve the corresponding incident from Parse and add the user's current location as the incident location.
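Storing the location as a ParseGeoPoint is also what makes the geo-point queries mentioned in the introduction possible: once incidents carry a geo-point, they can be retrieved by proximity. Here is a minimal sketch reusing the constants above; the limit of ten results is arbitrary and this query is not part of the application yet.

// Find the ten incidents closest to the user's current position
ParseQuery<ParseObject> nearbyQuery = ParseQuery
		.getQuery(Poc_DeclareAnIncident_Constants.INCIDENT_LOCATION);
nearbyQuery.whereNear(Poc_DeclareAnIncident_Constants.INCIDENT_Loc, mLocation);
nearbyQuery.setLimit(10);
nearbyQuery.findInBackground(new FindCallback<ParseObject>() {
	public void done(List<ParseObject> incidents, ParseException e) {
		if (e == null) {
			// incidents are ordered from nearest to farthest
		}
	}
});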

@Override
public void onCreate(Bundle savedInstanceState) {
	super.onCreate(savedInstanceState);
	setContentView(R.layout.location_immersion);

	mLocationCriteria = new Criteria();
	mLocationCriteria.setAccuracy(Criteria.ACCURACY_FINE);

	mLocationManager = (LocationManager) this
			.getSystemService(Context.LOCATION_SERVICE);

	...

	// Queue the location research runnable
	mHandler.post(mUserLocationRunnable);
}

private class UserLocationRunnable implements Runnable {

	private boolean mIsStopped = false;

	public void run() {
		if (!isStopped()) {
			String provider = mLocationManager.getBestProvider(
					mLocationCriteria, true);

			boolean isEnabled = mLocationManager
					.isProviderEnabled(provider);

			if (isEnabled) {
				// Register the listener with the Location Manager to receive
				// location updates from the best enabled provider
				mLocationManager.requestLocationUpdates(provider, 10000, 0,
						mLocationListener);
			} else {
				String s = "No provider enabled.";

				txtV.setText(s);
			}

			setStop(true);
		}
	}

	public boolean isStopped() {
		return mIsStopped;
	}

	public void setStop(boolean isStopped) {
		this.mIsStopped = isStopped;
	}
}

Do not forget to add the appropriate location permission (ACCESS_FINE_LOCATION) to the manifest.

Now we have our incident location. To confirm that it is valid, we show the user a map with the location displayed. Here we had to use a static map, because we couldn't add Play Services, and Google Maps in particular, to a Google Glass project (Map image for Google Glass). The map loading is done in the location listener, just after the location is saved in Parse.

/**
 * Template for a static map
 */
private static final String STATIC_MAP_URL_TEMPLATE = "https://maps.googleapis.com/maps/api/staticmap"
		+ "?center=%.5f,%.5f"
		+ "&zoom=%d"
		+ "&sensor=true"
		+ "&size=640x360"
		+ "&markers=color:red%%7C%.5f,%.5f"
		+ "&scale=1"
		+ "&style=element:geometry%%7Cinvert_lightness:true"
		+ "&style=feature:landscape.natural.terrain%%7Celement:geometry%%7Cvisibility:on"
		+ "&style=feature:landscape%%7Celement:geometry.fill%%7Ccolor:0x303030"
		+ "&style=feature:poi%%7Celement:geometry.fill%%7Ccolor:0x404040"
		+ "&style=feature:poi.park%%7Celement:geometry.fill%%7Ccolor:0x0a330a"
		+ "&style=feature:water%%7Celement:geometry%%7Ccolor:0x00003a"
		+ "&style=feature:transit%%7Celement:geometry%%7Cvisibility:on%%7Ccolor:0x101010"
		+ "&style=feature:road%%7Celement:geometry.stroke%%7Cvisibility:on"
		+ "&style=feature:road.local%%7Celement:geometry.fill%%7Ccolor:0x606060"
		+ "&style=feature:road.arterial%%7Celement:geometry.fill%%7Ccolor:0x888888";

/** Formats a Google static maps URL for the specified location and zoom level. */
private static String makeStaticMapsUrl(double latitude, double longitude,
		int zoom, double markLat, double markLong) {
	return String.format(STATIC_MAP_URL_TEMPLATE, latitude, longitude,
			zoom, markLat, markLong);
}

private ImageView mMapView;

Here we also add a red marker at the incident location.

/** Load the map asynchronously and populate the ImageView when it's loaded. */
private void loadMap(double latitude, double longitude, int zoom) {
	double tlat = latitude;
	double tlong = longitude;
	String url = makeStaticMapsUrl(latitude, longitude, zoom, tlat, tlong);
	new AsyncTask<String, Void, Bitmap>() {
		@Override
		protected Bitmap doInBackground(String... urls) {
			try {
				HttpResponse response = new DefaultHttpClient()
						.execute(new HttpGet(urls[0]));
				InputStream is = response.getEntity().getContent();
				return BitmapFactory.decodeStream(is);
			} catch (Exception e) {
				Log.e(TAG, "Failed to load image", e);
				return null;
			}
		}

		@Override
		protected void onPostExecute(Bitmap bitmap) {
			if (bitmap != null) {
				mMapView.setImageBitmap(bitmap);
			}
		}
	}.execute(url);
}

[Image: location_first_encounter]

Conclusion

Through this reporting application, we have been able to put Glass's location features to the test, along with the use of an MBaaS. We also got to use CardScrollViews and Glass cards. Oddly, this seems to be the only way to display scrollable content, since the use of ListViews has been broken for a while (issue 484).

Next time, we will talk about resolving the incidents saved with this reporting application.

MBaaS – Watch the video of one of our Experts' talk at the PAUG

An overview of MBaaS

Nowadays, almost all mobile applications rely on a back office. However, implementing a back office, administering the associated machines, handling scalability and coping with today's time to market are all problems that small and medium players have to solve. A new type of service called MBaaS is emerging and handles a good part of these issues at lower cost. Google has already offered its Mobile Backend to developers, and other players have been showing up for a while now.

This presentation therefore covers the following points:
• An overview of MBaaS
• Features offered
• Implementation examples

You can download the demo Android application here.

You can find the slides of the presentation here:

iD.apps at Google I/O

This year, Google I/O, the conference for developers using Google technologies, will take place on June 26. iD.apps has strong expertise in Android, so it was only natural that we wanted to take part. Since demand to attend this event is extremely high, Google set up a lottery. Fortunately, we got lucky and Jérémy from iD.apps will have the chance to go. Let's see what he expects from it.

Jérémy Samzun, Lead Android Developer

Could you quickly introduce yourself and tell us about your role at iD.apps?

JS: My name is Jérémy Samzun, and I am an Android Expert at iD.apps. I am passionate about new technologies and, inevitably, about everything more or less related to Google. I am of course interested in mobility, whether from Google, Apple or Microsoft. I have been developing on Android for five years now, which gives me some perspective on the upcoming technical challenges tied to the announcements made at Google I/O, and therefore on our clients' future needs.

What does Google I/O mean to you?

JS: Google I/O is the Google/Android/Chrome conference of the year. As a developer, it is the event not to be missed, whether from my couch for the keynote or at the office for the sessions. During this conference, we get a preview of the topics we will be working on over the next two years or more (for instance, when Google announced fragments, it changed the way we have developed ever since).

What are the most anticipated or hoped-for announcements at Google I/O on the consumer side?

JS: Like WWDC for iOS, Google I/O is made for developers. The announcements are mostly about software. We are still expecting announcements about a possible Nexus 8 and an Android Wear watch in particular. They also give a glimpse of the upcoming evolutions tied to Android 4.5 or 5.0.

The most interesting announcements for consumers are the arrival of Android Wear, the evolution of Nest (home automation) and the next evolutions of Android. These are rumors, of course! But since Android Wear was presented on the Android Developers Blog and sessions have been set up specifically for it, it is very likely to be presented.

Regarding the evolution of Nest, a session is planned on this topic, but we do not know whether these evolutions will be major or not.

Regarding the evolution of Android, rumors suggest that it would be called Android 5.0 Lollipop, that there would be a design overhaul to stick as closely as possible to "flat design", along with the integration of Google Babel (a unified chat service) and a possible merge between Chrome OS and Android.

There is more than Android at Google I/O, so we can also hope for announcements about Chrome, Dart, the end of the Nexus devices or even robots.

As an Android Expert, what are you most looking forward to?

JS: As an Android developer, I am eagerly awaiting Android Wear. For me, it will be the major announcement of Google I/O. Indeed, it would position Android as a leader in this field by innovating on a subject that consumers are really waiting for.

Android novelties will also have their place, even though I doubt the innovations on that front will be huge. I hope I am wrong about that.

Thank you for sharing, Jérémy!