Build an Object Detection App Using Amazon Rekognition, Amplify, and SwiftUI

Jan 4, 2021


Photo by Krissana Porto on Unsplash.

Why Use Amazon Rekognition?

There are many ways of integrating machine learning into an iOS app. Apple even offers Core ML so that we can train our own models and run predictions on-device. However, instead of worrying about training and maintaining our own models, we can use easily accessible cloud services to get the predictions we need. Amazon Rekognition is one of those cloud services, making it easy to identify objects, text, and people, among other possibilities.

If you are interested in quickly shipping an app that uses these kinds of predictions, this tool is for you.

Configuring Your Project

If you have never used the AWS Amplify CLI before, you can follow the Getting Started guide. If you don’t have an AWS account, you need to sign up for one, then install and configure the CLI.

After configuring the CLI, let’s create a new project in Xcode. Choose “Single View App,” select Swift as the language, and select SwiftUI as the user interface. You can leave the Life Cycle as “SwiftUI App.” Use whatever product name and organization name you want.

Creating our project

Setting Up Amplify

Assuming you have already configured the AWS CLI and Amplify on your computer, run the following command in your project’s folder:

$ amplify init

This will create a new Amplify project on AWS. Here, you can enter a name for the project and environment, choose your default editor, choose iOS as the type of app, and pick the AWS profile you want to use.

After you see the message “Amplify setup completed successfully,” you can run the following command to add the predictions plugin:

$ amplify add predictions
  • When prompted, choose the “Identify” category, as we are going to work with labeling images.
  • You need to add auth to the project, so choose “Yes” when prompted. If you aren’t already using auth, choose email as the way users sign in, and then choose “Identify Labels.”
  • You can accept the default resource name and configuration, and finally choose “Auth and Guest users” to allow unauthenticated users to use the predictions plugin. In this article, we allow guest users so we can focus on the prediction part rather than on auth.
Adding the Amplify predictions plugin

After setting up the predictions plugin, you can run the following command to push the changes to the cloud:

$ amplify push

Setting Up Pods

Once all the changes are pushed to the cloud, we need to add CocoaPods to our project:

$ pod init

Now add the following pods to the Podfile:

Podfile for our project
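The pods from the original embed aren’t reproduced above. A minimal sketch, assuming the Amplify iOS v1 CocoaPods setup and a target named after our example project, would be:

# Podfile (sketch): the target name assumes the example project name
platform :ios, '14.0'

target 'SwiftUIAmplifyRekognitionTutorial' do
  use_frameworks!

  # Amplify core plus the Auth and Predictions plugins
  pod 'Amplify'
  pod 'AmplifyPlugins/AWSCognitoAuthPlugin'
  pod 'AmplifyPlugins/AWSPredictionsPlugin'
end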

To install the pods, we need to run:

$ pod install

Now that we are finished with the configuration, we can open Xcode.

Check AWS Configuration Files in Xcode

If you are running the latest Amplify CLI on your terminal, you should see the folder in the image below when opening Xcode:

Required files

If you don’t see this folder, you need to drag these files into Xcode: awsconfiguration.json and amplifyconfiguration.json.

Since we might need to update the Amplify configuration later, uncheck “Copy items if needed” when dragging the files into the project so that Xcode keeps only a reference to the original files.

Initializing Amplify

To use Amplify, we need to initialize it in our App Lifecycle file. In the example project, since I named the project “SwiftUIAmplifyRekognitionTutorial”, the file is named SwiftUIAmplifyRekognitionTutorialApp.swift. You should change the file contents to look like this:

SwiftUI app Lifecycle file
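The embedded gist isn’t reproduced above. A minimal sketch of that file, consistent with the description below and the Amplify iOS v1 API, looks like this:

import SwiftUI
import UIKit
import Amplify
import AmplifyPlugins

@main
struct SwiftUIAmplifyRekognitionTutorialApp: App {
    // Bridge to a UIKit app delegate so we can run code in didFinishLaunchingWithOptions
    @UIApplicationDelegateAdaptor(AppDelegate.self) var appDelegate

    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}

class AppDelegate: NSObject, UIApplicationDelegate {
    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]? = nil) -> Bool {
        initializeAWSAmplify()
        return true
    }

    func initializeAWSAmplify() {
        do {
            // The Auth plugin is required for Amplify to initialize, even for guest access
            try Amplify.add(plugin: AWSCognitoAuthPlugin())
            try Amplify.add(plugin: AWSPredictionsPlugin())
            try Amplify.configure()
            print("Amplify initialized")
        } catch {
            print("Failed to initialize Amplify: \(error)")
        }
    }
}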

Because this tutorial uses the new SwiftUI lifecycle, we need UIApplicationDelegateAdaptor in order to initialize Amplify in didFinishLaunchingWithOptions. If you chose the traditional AppDelegate lifecycle instead, you can edit your AppDelegate.swift, add the initializeAWSAmplify method, and call it from didFinishLaunchingWithOptions.

Note that we added the Auth plugin in initializeAWSAmplify. Even though we will let users run predictions as guests, this plugin is still required for Amplify to initialize successfully.

If you see the message “Amplify initialized” on your Xcode console, everything is working as expected.

Creating an ImagePicker in SwiftUI

Now that our project is fully configured, we can start by creating an ImagePicker that will allow us to use the camera (if using a physical device) or choose images from the library.

Create a new file in your project and name it ImagePicker.swift. Then add the following code into it:

Image Picker UIViewControllerRepresentable
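The gist isn’t reproduced above. A sketch of a standard UIImagePickerController wrapper (the binding name and the sourceType default are assumptions) looks like this:

import SwiftUI
import UIKit

struct ImagePicker: UIViewControllerRepresentable {
    @Binding var image: UIImage?
    @Environment(\.presentationMode) var presentationMode

    // Use .camera on a physical device, .photoLibrary on the simulator
    var sourceType: UIImagePickerController.SourceType = .photoLibrary

    func makeUIViewController(context: Context) -> UIImagePickerController {
        let picker = UIImagePickerController()
        picker.sourceType = sourceType
        picker.delegate = context.coordinator
        return picker
    }

    func updateUIViewController(_ uiViewController: UIImagePickerController, context: Context) {}

    func makeCoordinator() -> Coordinator {
        Coordinator(self)
    }

    // Forwards the picked image back to SwiftUI and dismisses the picker
    class Coordinator: NSObject, UIImagePickerControllerDelegate, UINavigationControllerDelegate {
        let parent: ImagePicker

        init(_ parent: ImagePicker) {
            self.parent = parent
        }

        func imagePickerController(_ picker: UIImagePickerController,
                                   didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
            if let uiImage = info[.originalImage] as? UIImage {
                parent.image = uiImage
            }
            parent.presentationMode.wrappedValue.dismiss()
        }
    }
}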

Using UIViewControllerRepresentable, we will be able to call the ImagePicker from our SwiftUI views.

Setting Permissions on Info.plist

Before we can get the images on our device, we need to ask the user for permissions to access the camera and library. To do so, open your Info.plist and add the key NSPhotoLibraryUsageDescription with the description “Access to the library is needed to get a picture for labeling” (or something else that suits you) and the key NSCameraUsageDescription with the description “The camera is needed to get a picture for labeling.”
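If you prefer editing Info.plist as source code, the two entries look like this:

<key>NSPhotoLibraryUsageDescription</key>
<string>Access to the library is needed to get a picture for labeling</string>
<key>NSCameraUsageDescription</key>
<string>The camera is needed to get a picture for labeling</string>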

Your Info.plist file should look like this:

Modified Info.plist

Labeling Images on ContentView

Now we can finally start choosing images and then getting the labels using Amplify. To do so, we can create a very basic UI in ContentView:

View for getting an image and labels using Amplify
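The view’s gist isn’t reproduced above. A sketch matching the description below, using the Amplify v1 Predictions API (the state names, the temporary-file step, and the 0.5 compression factor are illustrative assumptions), might look like this:

import SwiftUI
import UIKit
import Amplify

struct ContentView: View {
    @State private var showingImagePicker = false
    @State private var chosenImage: UIImage?
    @State private var labels: [String] = []

    var body: some View {
        VStack {
            Text("Get a picture for labeling")

            Button("Choose a picture") {
                showingImagePicker = true
            }

            if let image = chosenImage {
                Image(uiImage: image)
                    .resizable()
                    .scaledToFit()
            }

            // Labels returned by the Amplify predictions plugin
            List(labels, id: \.self) { label in
                Text(label)
            }
        }
        .sheet(isPresented: $showingImagePicker, onDismiss: loadImage) {
            ImagePicker(image: $chosenImage)
        }
    }

    // Called when the ImagePicker sheet is dismissed
    func loadImage() {
        guard let image = chosenImage,
              // Compress the image for a faster upload
              let data = image.jpegData(compressionQuality: 0.5) else { return }

        // Amplify's identify API takes a file URL, so write the data to a temporary file
        let url = FileManager.default.temporaryDirectory.appendingPathComponent("image.jpg")
        do {
            try data.write(to: url)
            detectLabels(url)
        } catch {
            print("Failed to write image: \(error)")
        }
    }

    func detectLabels(_ image: URL) {
        Amplify.Predictions.identify(type: .detectLabels(.labels), image: image) { event in
            switch event {
            case .success(let result):
                guard let labelsResult = result as? IdentifyLabelsResult else { return }
                DispatchQueue.main.async {
                    labels = labelsResult.labels.map { label in
                        let confidence = String(format: "%.2f", label.metadata?.confidence ?? 0)
                        return "\(label.name): \(confidence)%"
                    }
                }
            case .failure(let error):
                print("Labeling failed: \(error)")
            }
        }
    }
}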

As you can see in the code, we created a simple VStack containing a text and a button that shows the ImagePicker we created before, letting us take a photo with the camera (if using a physical device) or choose a picture from the library (if using the simulator). In the same VStack, we also added the optional chosenImage (shown after our selection) and a List (which will show the labels we get from the Amplify predictions plugin). Your app should look like this:

Simple UI for getting an image

When the ImagePicker is dismissed, we call the loadImage() method, which checks whether we selected an image and then compresses it for a faster upload. We then get a URL for the image and pass it to the detectLabels() method, which displays the predicted labels when the call to AWS returns. As you can see in the image below, when choosing a picture of a hot dog, we do get the correct label with a confidence of 99.89%:

Chosen image and labels we get using Amplify

Conclusion

If you want to take a look at the whole project, please check out the GitHub repo.

Thanks for reading!
