A Flutter plugin to use the Google ML Kit for Firebase API.
For Flutter plugins for other Firebase products, see FlutterFire.md.
Note: This plugin is still under development, and some APIs might not be available yet. Feedback and Pull Requests are most welcome!
To use this plugin, add `firebase_ml_vision` as a dependency in your pubspec.yaml file. You must also configure Firebase for each platform project: Android and iOS (see the example folder or https://codelabs.developers.google.com/codelabs/flutter-firebase/#4 for step-by-step details).
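For example, the dependency entry in pubspec.yaml might look like the sketch below; the version number is a placeholder, so check pub.dev for the latest release:

```yaml
dependencies:
  firebase_ml_vision: ^0.1.0  # placeholder version; use the latest from pub.dev
```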
Optional but recommended: If you use the on-device API, configure your app to automatically download the ML model to the device after your app is installed from the Play Store. To do so, add the following declaration to your app's AndroidManifest.xml file:
```xml
<application ...>
  ...
  <meta-data
      android:name="com.google.firebase.ml.vision.DEPENDENCIES"
      android:value="ocr" />
  <!-- To use multiple models: android:value="ocr,model2,model3" -->
</application>
```
To use the on-device text recognition model, run the text detector as described below:
- Create a `FirebaseVisionImage` object from your image. To create a `FirebaseVisionImage` from an image `File` object:

  ```dart
  final File imageFile = getImageFile();
  final FirebaseVisionImage visionImage = FirebaseVisionImage.fromFile(imageFile);
  ```
- Get an instance of `TextDetector` and pass `visionImage` to `detectInImage()`:

  ```dart
  final TextDetector detector = FirebaseVision.instance.getTextDetector();
  final List<TextBlock> blocks = await detector.detectInImage(visionImage);
  detector.close();
  ```
- Extract text and text locations from the blocks of recognized text (a complete sketch combining these steps appears after this list):

  ```dart
  for (TextBlock block in blocks) {
    final Rectangle<int> boundingBox = block.boundingBox;
    final List<Point<int>> cornerPoints = block.cornerPoints;
    final String text = block.text;

    for (TextLine line in block.lines) {
      // Each block contains one or more lines of text...
      for (TextElement element in line.elements) {
        // ...and each line contains one or more elements (roughly, words).
      }
    }
  }
  ```
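Putting the three steps together, here is a minimal end-to-end sketch. It uses only the calls shown above; the helper function name `recognizeText` and the use of `print` are illustrative:

```dart
import 'dart:io';

import 'package:firebase_ml_vision/firebase_ml_vision.dart';

// Illustrative helper: run on-device text recognition on an image
// file and print the text of each recognized block.
Future<void> recognizeText(File imageFile) async {
  // Wrap the image file for the ML Kit detector.
  final FirebaseVisionImage visionImage =
      FirebaseVisionImage.fromFile(imageFile);

  // Run the on-device text detector on the image.
  final TextDetector detector = FirebaseVision.instance.getTextDetector();
  final List<TextBlock> blocks = await detector.detectInImage(visionImage);

  // Print each block's text, then release the detector's resources.
  for (TextBlock block in blocks) {
    print(block.text);
  }
  detector.close();
}
```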
See the `example` directory for a complete sample app using Google ML Kit for Firebase.