UPDATE: This project has been moved to https://github.com/googlesamples/mlkit/tree/master/android/material-showcase as part of ML Kit's new standalone SDK.
This app demonstrates how to build an end-to-end user experience with Google ML Kit APIs, following the new Material for ML design guidelines.
The goal is to make it as easy as possible to integrate ML Kit into your app, with an experience that has been user-tested for the specific use cases covered:
- Visual search using the Object Detection & Tracking API - a complete workflow from object detection to product search, in both live camera and static image modes
- Barcode detection using the Barcode API in the live camera
This sample is no longer actively maintained and is left here for reference only.
To run the app:
- Clone this repo locally: `git clone https://github.com/firebase/mlkit-material-android`
- Create a Firebase project in the Firebase console, if you don't already have one
- Add a new Android app to your Firebase project with the package name `com.google.firebase.ml.md`
- Download the config file (`google-services.json`) from the newly added app and move it into the module folder (i.e. `app/`); the sketch after these steps shows how the build consumes it
- Build and run it on an Android device
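The build step depends on the Google Services Gradle plugin picking up `google-services.json`. The sample already has this wired up; as a rough sketch of what that wiring looks like (the plugin version below is illustrative, not the exact one pinned in this repo):

```groovy
// Project-level build.gradle: make the Google Services plugin available.
buildscript {
    repositories {
        google()
    }
    dependencies {
        // Version is illustrative; use the one this repo pins.
        classpath 'com.google.gms:google-services:4.3.3'
    }
}

// Module-level app/build.gradle: apply the plugin so that google-services.json
// (placed in app/) is turned into Firebase client configuration at build time.
apply plugin: 'com.android.application'
apply plugin: 'com.google.gms.google-services'
```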
This app supports two usage scenarios: Live Camera and Static Image.
The Live Camera scenario uses the camera preview as input and contains two workflows: object detection & visual search, and barcode detection. There is also a Settings page that lets you configure several options:
- Camera
  - Specify the preview size of the rear camera manually (by default, a size is chosen appropriately based on the screen size)
- Object detection
  - Whether or not to enable multiple-object detection and coarse classification (see the sketch after this list)
- Product search
  - Whether or not to enable auto search: if enabled, a search request fires automatically once an object is detected and confirmed; otherwise, a search button appears so you can trigger the search manually
  - How long an auto-detected object must stay in focus before it is regarded as user-confirmed
- Barcode detection
  - Barcode aiming frame size
  - Barcode size check: prompts "Move closer" if the detected barcode is too small
  - Delay loading result: simulates the case where a detected barcode requires further processing before the result can be displayed
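The object detection and barcode toggles above map directly onto detector options in the ML Kit for Firebase (`firebase-ml-vision`) API that this 2019-era sample builds on. A minimal sketch, assuming the standard builder flags; the helper function names are illustrative, not names from this repo:

```kotlin
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.barcode.FirebaseVisionBarcode
import com.google.firebase.ml.vision.barcode.FirebaseVisionBarcodeDetectorOptions
import com.google.firebase.ml.vision.objects.FirebaseVisionObjectDetectorOptions

// Object detector for the live camera: STREAM_MODE trades accuracy for low
// latency and assigns tracking IDs, so an object can be followed across
// frames long enough to be treated as user-confirmed.
fun buildObjectDetector(multipleObjects: Boolean, classification: Boolean) =
    FirebaseVision.getInstance().getOnDeviceObjectDetector(
        FirebaseVisionObjectDetectorOptions.Builder()
            .setDetectorMode(FirebaseVisionObjectDetectorOptions.STREAM_MODE)
            .apply {
                if (multipleObjects) enableMultipleObjects()
                if (classification) enableClassification()
            }
            .build()
    )

// Barcode detector; restricting formats is optional but speeds up detection.
fun buildBarcodeDetector() =
    FirebaseVision.getInstance().getVisionBarcodeDetector(
        FirebaseVisionBarcodeDetectorOptions.Builder()
            .setBarcodeFormats(FirebaseVisionBarcode.FORMAT_ALL_FORMATS)
            .build()
    )
```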
The Static Image scenario prompts you to select an image from the image picker, detects objects in the picked image, and then performs visual search on them. Well-designed UI components (overlay dots, a card carousel, etc.) indicate the detected objects and search results.
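The static-image path uses the same detector in single-image mode. A minimal sketch, assuming a `Bitmap` obtained from the image picker (the function name is illustrative):

```kotlin
import android.graphics.Bitmap
import android.util.Log
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage
import com.google.firebase.ml.vision.objects.FirebaseVisionObjectDetectorOptions

// Detect objects in a picked image; each result carries a bounding box,
// which is the kind of information the overlay dots are placed from.
fun detectObjects(bitmap: Bitmap) {
    val detector = FirebaseVision.getInstance().getOnDeviceObjectDetector(
        FirebaseVisionObjectDetectorOptions.Builder()
            .setDetectorMode(FirebaseVisionObjectDetectorOptions.SINGLE_IMAGE_MODE)
            .enableMultipleObjects()
            .build()
    )
    detector.processImage(FirebaseVisionImage.fromBitmap(bitmap))
        .addOnSuccessListener { objects ->
            // objects: List<FirebaseVisionObject>; boundingBox is an android.graphics.Rect.
            objects.forEach { Log.d("StaticImage", "Found object at ${it.boundingBox}") }
        }
        .addOnFailureListener { e -> Log.e("StaticImage", "Detection failed", e) }
}
```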
Note that the visual search functionality here is mocked, since no real search backend has been set up for this repository, but it should be easy to hook it up to your own search service (e.g. Product Search) by replacing only the SearchEngine class implementation.
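For illustration, that seam might look something like the following; this is a hypothetical shape, not the actual `SearchEngine` signature in this repo:

```kotlin
import com.google.firebase.ml.vision.objects.FirebaseVisionObject

// Hypothetical search-result type, standing in for whatever your backend returns.
data class Product(val title: String, val imageUrl: String)

// Hypothetical seam: the point is that swapping this one class is enough to
// hook in a real backend (e.g. a Product Search service).
interface SearchEngine {
    // Called with a confirmed detected object; invokes the listener with ranked products.
    fun search(detectedObject: FirebaseVisionObject, listener: (List<Product>) -> Unit)
}

// A stub implementation in the spirit of the sample's mocked search results.
class MockSearchEngine : SearchEngine {
    override fun search(
        detectedObject: FirebaseVisionObject,
        listener: (List<Product>) -> Unit
    ) {
        listener(listOf(Product("Demo product", "https://example.com/demo.png")))
    }
}
```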
© Google, 2019. Licensed under the Apache 2.0 license.