This repository contains demos of Apple's Core ML and Vision frameworks, both introduced in iOS 11.
The project contains the following demos:
- Sentiment analysis of an input text using Core ML.
- Face detection using the Vision framework with camera data.
- Real-time object classification using Core ML + Vision.
*Screenshots: Menu, Sentiment Analysis, Face Detection, Object Recognition*
The project is mainly composed of three view controllers, each containing the logic for one demo:

- `SentimentAnalysisViewController`: sentiment analysis demo.
- `FaceDetectionViewController`: face detection demo.
- `ClassificationViewController`: object classification demo.
The demos that stream data from the camera use the `CameraCapture` utility class, created to extract images from the camera. It is built on top of AVFoundation and offers two main features:

- It provides the current frame to a delegate object as a `CVImageBuffer`, which can be used directly as input to the Vision pipeline.
- It provides a `CALayer` that the view controller can use to efficiently display the current frame.
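
For illustration, here is a minimal sketch of what such a capture utility might look like. This is not the project's actual implementation; the delegate protocol name `CameraCaptureDelegate` and the setup details are assumptions:

```swift
import AVFoundation
import QuartzCore

// Hypothetical delegate protocol: receives each camera frame as a CVImageBuffer.
protocol CameraCaptureDelegate: AnyObject {
    func cameraCapture(_ capture: CameraCapture, didCaptureFrame buffer: CVImageBuffer)
}

final class CameraCapture: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    weak var delegate: CameraCaptureDelegate?

    private let session = AVCaptureSession()
    private let output = AVCaptureVideoDataOutput()
    private let queue = DispatchQueue(label: "camera-capture-queue")

    // Layer the view controller can add to its hierarchy to display the camera feed.
    private(set) lazy var previewLayer: CALayer = AVCaptureVideoPreviewLayer(session: session)

    func start() throws {
        guard let device = AVCaptureDevice.default(for: .video) else { return }
        session.addInput(try AVCaptureDeviceInput(device: device))
        output.setSampleBufferDelegate(self, queue: queue)
        session.addOutput(output)
        session.startRunning()
    }

    // AVCaptureVideoDataOutputSampleBufferDelegate: forward each frame's pixel buffer.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let buffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        delegate?.cameraCapture(self, didCaptureFrame: buffer)
    }
}
```

A delegate (for example the face detection view controller) can then hand each buffer to Vision, e.g. by creating a `VNImageRequestHandler(cvPixelBuffer:options:)` and performing a `VNDetectFaceRectanglesRequest` on it.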
Finally, the `Main.storyboard` file contains the views of all the view controllers mentioned above, plus a simple introduction menu view.
The ML models used in this project are not included in the repository. The sentiment analysis and object classification demos use the following models:

- Sentiment analysis: sentiment polarity LinearSVC made by Vadym Markov.
- Object classification: SqueezeNet (paper).
Note that you can easily replace the Core ML model in the object classification demo. If the model takes a single image as input, it is just a matter of replacing the only occurrence of the `SqueezeNet` identifier in the project with the name of the new model (e.g. `Inceptionv3`), as sketched below.
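
As a rough illustration (a sketch, not the project's exact code; the function and variable names here are hypothetical), the classification path presumably wraps the generated model class in a `VNCoreMLModel`, which is why the model name appears in only one place:

```swift
import CoreML
import Vision

// Classify a single camera frame; `pixelBuffer` would come from CameraCapture.
// Swapping the model only requires changing `SqueezeNet()` to e.g. `Inceptionv3()`,
// provided the new model also takes a single image as input.
func classify(pixelBuffer: CVPixelBuffer) throws {
    // Wrap the class generated by Xcode from the .mlmodel file in a Vision model.
    let visionModel = try VNCoreMLModel(for: SqueezeNet().model)

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let best = results.first else { return }
        print("\(best.identifier): \(best.confidence)")
    }

    // Run the request on the frame; Vision handles resizing the image for the model.
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try handler.perform([request])
}
```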
First, download the Core ML models described above. Then open the project in Xcode 9 or later, and you're all set!