📸 Share your images to Social Media from your DSLR camera without cables and in less than 1 minute. Project built in LauzHack 2019.
The inspiration came from a real need: we wanted a quick way to share photos and videos from our DSLR while shooting at events we organize. We realized there is a huge bottleneck between the moment a photo is taken and the moment it lands on any social media.
Given a camera (we decided to play with a Yi Action Camera, from Xiaomi), the user opens the web application and selects a time frame. The application imports all the media from that time frame and displays the photos taken during it.
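Importing media by time frame can be sketched with the standard library alone. This is an illustrative sketch, not the project's code; `media_in_range` is a hypothetical helper that filters files by modification time:

```python
from datetime import datetime
from pathlib import Path

def media_in_range(folder, start, end, exts=(".jpg", ".jpeg", ".mp4")):
    """Return media files whose modification time falls inside [start, end]."""
    selected = []
    for path in Path(folder).iterdir():
        if path.suffix.lower() not in exts:
            continue  # skip non-media files
        taken = datetime.fromtimestamp(path.stat().st_mtime)
        if start <= taken <= end:
            selected.append(path)
    return sorted(selected)
```

A real camera would expose capture timestamps rather than filesystem mtimes, but the filtering logic is the same.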
To speed up the selection process, the application groups the photos by similarity using machine learning. The user picks a cluster, and the application selects the best-quality image from it.
And then, in less than one minute, you have the photo ready to be shared!
The frontend and backend are separate components that communicate through API requests and are deployed with Docker Compose.
The frontend is an Express (Node.js) application written in JavaScript; Bootstrap, jQuery, and EJS are among the main libraries used to build it.
On the other side, the backend is implemented in Python 3.7. The API that lets the two components communicate is built with Flask and OpenAPI (wired together with the Connexion library) and integrated into the Docker Compose setup. The API is served with uWSGI and Nginx on a small DigitalOcean droplet.
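With Connexion, routing is typically driven by an OpenAPI document whose `operationId` points at a Python handler. A hypothetical fragment for an endpoint like the one described above might look as follows (the paths, parameters, and module names are assumptions, not the project's actual spec):

```yaml
# openapi.yaml -- illustrative fragment only
openapi: "3.0.0"
info:
  title: EasyCam backend API
  version: "1.0"
paths:
  /photos:
    get:
      operationId: api.photos.list_photos
      parameters:
        - name: start
          in: query
          schema: {type: string, format: date-time}
        - name: end
          in: query
          schema: {type: string, format: date-time}
      responses:
        "200":
          description: Photos taken inside the requested time frame
```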
For the clustering, we used Keras with a ResNet50 model pre-trained on ImageNet to extract features from the camera images. With those feature arrays, we used nmslib to build a 2048-dimensional space for computing distances between images, and on top of that space we implemented an algorithm to create the clusters.
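The actual pipeline works on 2048-dimensional ResNet50 features indexed with nmslib; as a dependency-free illustration, the clustering step on top of a distance space can be sketched like this (the greedy strategy, the Euclidean metric, and the function names are assumptions, not the project's exact algorithm):

```python
import math

def euclidean(a, b):
    """Plain Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cluster_by_similarity(features, distance, threshold):
    """Greedy clustering sketch: each image joins the first cluster whose
    representative (its first member) is within `threshold`, otherwise it
    starts a new cluster. Returns lists of image indices."""
    clusters = []
    for idx, feat in enumerate(features):
        for cluster in clusters:
            rep = features[cluster[0]]
            if distance(feat, rep) <= threshold:
                cluster.append(idx)
                break
        else:
            clusters.append([idx])
    return clusters
```

For example, four feature vectors forming two tight pairs would come back as two clusters of two indices each.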
Regarding the selection process, we used OpenCV to compute a blurriness index and determine which image in a cluster has the best quality.
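A common OpenCV blurriness measure is the variance of the Laplacian: sharp images have strong edges, so their Laplacian varies more. Here is a hedged pure-Python version of that idea for grayscale images given as 2D lists; `blur_index` and `sharpest` are hypothetical names, not the project's API:

```python
def blur_index(gray):
    """Variance of the 4-neighbour Laplacian; higher means sharper."""
    h, w = len(gray), len(gray[0])
    values = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (gray[y - 1][x] + gray[y + 1][x]
                   + gray[y][x - 1] + gray[y][x + 1]
                   - 4 * gray[y][x])
            values.append(lap)
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

def sharpest(images):
    """Pick the image with the highest blur index from a cluster."""
    return max(images, key=blur_index)
```

A flat image scores zero, while any image with edges scores higher, so `sharpest` favours the least blurry member of a cluster.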
And last but not least, for the camera connection we used an open-source Node.js project called yi-action-camera, to which we added some new features to meet the goals of the project.
The biggest challenge came at midnight (with 12 hours left), when we realized that React was not compatible with the technology required by the yi-action-camera library. We had to scramble to implement an alternative, and we ended the hackathon on zero hours of sleep.
Also, none of us had experience with Keras, so we had to spend time researching and understanding this machine learning framework.
And finally, our experience with Express was limited, so we had to refresh our JavaScript knowledge of the framework.
Given that our team had only two members, we are really proud of the result: a working prototype of our idea!
We learned how rewarding it can be to solve a problem with your own hands in just a weekend, a problem we first ran into a month earlier, while our university's hackathon (HackUPC) was happening.
Our first goal was to connect to a Canon camera (each team member owns one). However, we realized that there is no SDK, API, or library compatible with our cameras yet. It would be really cool, in the near future, to add support for our own cameras and use them at the next event.
This is how our project looks with the frontend and backend unified via docker-compose.

To run the whole stack as Docker containers, execute the following from the root directory:

    docker-compose up -d --build
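A minimal `docker-compose.yml` for a two-service setup like this one might look as follows; the service names, build paths, and ports are assumptions for illustration, not the project's actual file:

```yaml
# docker-compose.yml -- hypothetical sketch
version: "3"
services:
  frontend:
    build: ./frontend    # Express/EJS app
    ports:
      - "3000:3000"
    depends_on:
      - backend
  backend:
    build: ./backend     # Flask/Connexion API behind uWSGI
    ports:
      - "5000:5000"
```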
MIT © EasyCam