Following on from last week’s post on how to improve your iOS app’s performance, Christian is back to explain how he built the featured Image Classifier demo app using Apple’s Core ML.
Create ML, which was introduced during WWDC 2018, has simplified the ways in which machine learning models are created.
All you need is a dataset, a few lines of code and an Xcode playground to run them in. As Create ML isn’t currently supported in iOS playgrounds, I selected a blank template under macOS.
Let’s start by looking at what we need to run our playground…
Code: https://gist.github.com/WildStudio/bf3dc2c2504ba812d2994e41818f30c2.js
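If you can’t open the gist, the whole thing boils down to something like this (a minimal sketch using the CreateMLUI framework that ships with Xcode 10; the gist above has my exact version):

```swift
// macOS playground (Xcode 10): build an image classifier with a drag-and-drop UI.
import CreateMLUI

// Presents the training interface in the playground’s live view.
let builder = MLImageClassifierBuilder()
builder.showInLiveView()
```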
That’s it! Amazing, isn’t it?!
Now I just need some data in order to train my model to do what I want it to do.
In the case of my demo app, I decided to classify images by the room that they depicted. This could easily be a product on its own, used to ensure accurate categorisation in different room types on a listing app, for instance.
The batch of data I used included bathrooms, kitchens, bedrooms and living rooms. In order to train my model, I used around 25 images for each category. The more data you use to train your model, the more accurate it’ll be!
It’s also important to set aside around 20% of your images for testing your model. I split the training and testing images into separate folders, one subfolder per category, to prevent them from getting mixed up.
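Create ML reads the class labels from the folder names, so the dataset ends up organised along these lines (the folder names here are just illustrative):

```
Training Data/
├── Bathroom/
├── Bedroom/
├── Kitchen/
└── Living Room/
Testing Data/
├── Bathroom/
├── Bedroom/
├── Kitchen/
└── Living Room/
```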
Once I had completed the aforementioned steps, I enabled the playground’s live view by opening the Assistant Editor in Xcode.
Once enabled, I dragged and dropped the image data set which then prompted the model to start its training:
Upon completion, I was able to view the results (as shown below).
Image Classifier breaks the results down into three figures: Training, Validation and Evaluation. Training is the percentage of training images the model learned to classify correctly. Validation is the accuracy on a small set of images Create ML automatically holds back during training to check how well the model generalises…told you it wasn’t rocket science!
Once I’d dragged and dropped the testing data, I received the results of the Evaluation:
My model was then ready for use, which meant I could save it (by clicking on the arrow next to the Image Classifier title).
Mission complete! I had created a Room Classifier model!
I then imported it into my app to see how it ran. All I needed in order to do this was to drag it into my project.
Once I’d imported my shiny model into my app, I was able to load it and use it to perform predictions.
I just needed some code to build a classification request for an image using the Vision framework, and then to perform that request with my ML classifier:
Code: https://gist.github.com/WildStudio/5fb982f12f187f57cfbf88700f119825.js
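Broadly speaking, the code in the gist does something along these lines; treat this as a sketch rather than the exact implementation, with RoomClassifier as my assumed name for the auto-generated model class and RoomClassificationService as a hypothetical wrapper:

```swift
import UIKit
import CoreML
import Vision

final class RoomClassificationService {

    /// Called on the main queue with the best label, or nil if classification failed.
    var onResult: ((String?) -> Void)?

    // Build a Vision request around the auto-generated Core ML model class.
    // "RoomClassifier" is an assumed name; it matches whatever the .mlmodel file is called.
    private lazy var classificationRequest: VNCoreMLRequest = {
        do {
            let model = try VNCoreMLModel(for: RoomClassifier().model)
            let request = VNCoreMLRequest(model: model) { [weak self] request, error in
                self?.handleResults(for: request, error: error)
            }
            // Let Vision crop and scale the image to the size the model expects.
            request.imageCropAndScaleOption = .centerCrop
            return request
        } catch {
            fatalError("Failed to load the Core ML model: \(error)")
        }
    }()

    func classify(_ image: UIImage) {
        guard let ciImage = CIImage(image: image) else {
            onResult?(nil)
            return
        }
        // Keep the Vision work off the main queue.
        DispatchQueue.global(qos: .userInitiated).async {
            let handler = VNImageRequestHandler(ciImage: ciImage)
            do {
                try handler.perform([self.classificationRequest])
            } catch {
                DispatchQueue.main.async { self.onResult?(nil) }
            }
        }
    }

    private func handleResults(for request: VNRequest, error: Error?) {
        // A classifier model returns its results as VNClassificationObservation values.
        let best = (request.results as? [VNClassificationObservation])?.first
        DispatchQueue.main.async {
            self.onResult?(best?.identifier)
        }
    }
}
```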
Note: The auto-generated class for model loading and prediction has the same name as my ML model.
Once the classification was finished, Vision called the request’s completion handler, passing in the results if no errors were encountered. For a classifier model, the results come back as an array of VNClassificationObservation values, with the most confident match first.
I used a label in the app to display the image classification to the user, as shown in the code below:
Code: https://gist.github.com/WildStudio/77c6ccee4c8d0b30b7e8d81b686274c4.js
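Roughly, the view-controller side looks like the snippet below; it reuses the hypothetical RoomClassificationService from the sketch above, and the outlet name is a placeholder:

```swift
import UIKit

final class ClassifierViewController: UIViewController {

    @IBOutlet private weak var classificationLabel: UILabel!

    private let classifier = RoomClassificationService()

    override func viewDidLoad() {
        super.viewDidLoad()
        // Show the predicted room type (or a fallback message) as soon as it arrives.
        classifier.onResult = { [weak self] label in
            self?.classificationLabel.text = label ?? "Unable to classify image."
        }
    }

    func classify(image: UIImage) {
        classificationLabel.text = "Classifying…"
        classifier.classify(image)
    }
}
```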
Aaaaand, that was it!
It’s now easier than ever to build classification apps thanks to Core ML.
With just a few lines of code and an Xcode playground, I was able to train my very own machine learning model to classify images with ease.
Why not give it a go yourself?
(Please note: to run the aforementioned code you will need at least Xcode 10, iOS 12 and macOS Mojave.)
---
What are your thoughts on Core ML?
If you’re looking for a new mobile tech blog to add to your reading list, you’ve found us! We share weekly posts about app development, from how to create easy animations for your Android app to how to handle deep links using Firebase.
Follow us on Twitter, Medium and LinkedIn to be notified of our future posts!
Introducing Create ML (WWDC 2018): https://developer.apple.com/videos/play/wwdc2018/703/
Apple machine learning resources: https://developer.apple.com/machine-learning/
(Hero image credit: Apple)