Swift UI camera app without using UIView or UI*

Ash Gaikwad

In this article, I'm writing down my experience and the code that worked to get an app running in Swift UI that uses the user's camera and shows a live feed on the screen. The app works on both macOS (tested on Apple Silicon) and iOS.

To start, create a new multi-platform project in Xcode. I used Xcode version 14.2 (14C18) on M1.

Setup - Permissions

As we need to use a camera, we should enable it in Xcode through the xcodeproj file. Click on this file in Xcode and it will open a view to see and edit the project's settings. This file is named after your project, such as myapp.xcodeproj. I named my project expiry, so my file is expiry.xcodeproj.

Image showing xcodeproj file in Xcode left sidebar

Click on Signing & Capabilities and enable the checkbox for the camera.

Image showing UI of Signing & Capabilities

Then click on Info and add a new entry. This is the same as editing the Info.plist file; we are just doing it through the UI. To add a new key, hover over any of the existing entries. A + button will appear; click it to add a new entry below. In the key column, type Privacy - Camera Usage Description. As soon as you start typing, the UI will show a dropdown of available keys.

Once the key is added, click on the value column and type $(PRODUCT_NAME) camera use. You can skip the Type column as it is auto-set to String.
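If you prefer editing Info.plist as source instead of through the UI, the equivalent entry looks like this (the UI label Privacy - Camera Usage Description maps to the raw key NSCameraUsageDescription; the value string is your choice):

```xml
<key>NSCameraUsageDescription</key>
<string>$(PRODUCT_NAME) camera use</string>
```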

Image showing UI of Info

App Structure

As this is a simple camera app, we will have only Views, a ViewModel, and Managers. The final app, on macOS, looks like the screenshot below:

Image showing screenshot of the app on macOS

App Structure - Views

We will have the following Views in the app:

  1. ContentView - the main view of the app. Xcode auto-creates this file as ContentView.swift; we will edit it shortly
  2. FrameView - shows the camera output. We will create a new file for this
  3. ErrorView - shows up when there is a camera-related error to display to the user. We will create a new file for this too
  4. ControlView - holds the button that takes the photo. We will create a new file for this as well

App Structure - ViewModels

To handle the business logic of views, we will have separate classes. For this app, we only need one view model:

  1. ContentViewModel - takes care of the main video and error-state flow logic. We will create a new file in a newly created folder called ViewModel

App Structure - Managers

To handle the complex logic of devices, streams, image and file IO, etc., we will have Manager classes. These will live in a folder called Camera, as all of these classes relate to the camera and its output management.

  1. CameraManager - this manages device, config, session, queue (for video output), permission and error states
  2. FrameManager - this will initiate the camera manager and read the camera output
  3. PhotoManager - this will capture a photo from the video stream and store on the device

Let's Swift

Code - CameraManager

Create a folder in your app called Camera. Inside the folder, create a new Swift file CameraManager.swift. This file will hold a class that conforms to the ObservableObject protocol.

Replace Xcode auto-generated contents of this file with the following code:

import AVFoundation

class CameraManager: ObservableObject {

    /** enums to represent the CameraManager statuses */
    enum Status {
        case unconfigured
        case configured
        case unauthorized
        case failed
    }

    /** enums to represent errors related to using, accessing, IO etc. of the camera device */
    enum CameraError: Error {
        case cameraUnavailable
        case cannotAddInput
        case cannotAddOutput
        case deniedAuthorization
        case restrictedAuthorization
        case unknownAuthorization
        case thrownError(message: Error)
    }

    /** ``error`` stores the current error related to camera */
    @Published var error: CameraError?

    /** ``session`` stores camera capture session */
    let session = AVCaptureSession()

    /** ``shared`` a single shared instance of CameraManager.
     All other code in the app must use this single instance */
    static let shared = CameraManager()

    private let sessionQueue = DispatchQueue(label: "yourdomain.expiry.SessionQ")

    private let videoOutput = AVCaptureVideoDataOutput()

    private var status = Status.unconfigured

    /** ``set(_:queue:)`` configures `delegate` and `queue`
     this should be configured before using the camera output */
    func set(
      _ delegate: AVCaptureVideoDataOutputSampleBufferDelegate,
      queue: DispatchQueue
    ) {
      sessionQueue.async {
        self.videoOutput.setSampleBufferDelegate(delegate, queue: queue)
      }
    }

    private func setError(_ error: CameraError?) {
        DispatchQueue.main.async {
            self.error = error
        }
    }

    private init() {
        configure()
    }

    private func configure() {
        checkPermissions()
        sessionQueue.async {
          self.configureCaptureSession()
          self.session.startRunning()
        }
    }

    private func configureCaptureSession() {
        guard status == .unconfigured else {
            return
        }
        session.beginConfiguration()
        defer {
            session.commitConfiguration()
        }
        let device = AVCaptureDevice.default(
            .builtInWideAngleCamera,
            for: .video,
            position: .back)
        guard let camera = device else {
            setError( .cameraUnavailable)
            status = .failed
            return
        }
        do {
            let cameraInput = try AVCaptureDeviceInput(device: camera)

            if session.canAddInput(cameraInput) {
                session.addInput(cameraInput)
            } else {
                setError( .cannotAddInput)
                status = .failed
                return
            }
        } catch {
            debugPrint(error)
            setError(.thrownError(message: error))
            status = .failed
            return
        }

        if session.canAddOutput(videoOutput) {
            session.addOutput(videoOutput)

            videoOutput.videoSettings =
            [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]

            let videoConnection = videoOutput.connection(with: .video)
            videoConnection?.videoOrientation = .portrait
        } else {
            setError( .cannotAddOutput)
            status = .failed
            return
        }
        status = .configured
    }

    private func checkPermissions() {
        switch AVCaptureDevice.authorizationStatus(for: .video) {
        case .notDetermined:
            sessionQueue.suspend()
            AVCaptureDevice.requestAccess(for: .video) { authorized in
                if !authorized {
                    self.status = .unauthorized
                    self.setError(.deniedAuthorization)
                }
                self.sessionQueue.resume()
            }
        case .restricted:
            status = .unauthorized
            setError(.restrictedAuthorization)
        case .denied:
            status = .unauthorized
            setError(.deniedAuthorization)
        case .authorized:
            /** .authorized means we already have camera access; nothing to do */
            break
        @unknown default:
            status = .unauthorized
            setError(.unknownAuthorization)
        }
    }
}

The public properties and methods are documented in the code above. There are two major private functions that need a bit of explanation -- checkPermissions and configureCaptureSession. Let's talk about both in detail.

func checkPermissions - file ./Camera/CameraManager.swift

This function checks whether our app has permission to access the camera. If permission has not been given yet, the app asks the user for it. If the user grants permission, we are all set 🎉; if the user denies it, we set appropriate .error and .status values.

There are total 4 cases we need to handle (5 including unknown):

  • .notDetermined - as the access is not determined yet, this case will go ahead and ask the user to grant us permission to use the camera
  • .restricted - access is restricted (for example, by parental controls) and the user cannot grant it. We treat it the same as .denied
  • .denied - user denied the access
  • @unknown default - welp, anything unknown for us will be treated the same as .denied
  • .authorized - Ah! the sweet sweet access granted status that our app needs in order to function

func configureCaptureSession - file ./Camera/CameraManager.swift

In order to use the output from the camera, we must configure the session and set the input & output of the camera video feed. Output is sent to self.videoOutput, which forwards frames to the delegate configured via the public set method.

session.beginConfiguration()
defer {
    session.commitConfiguration()
}

These two lines begin the session configuration and, via defer, commit it at the end of the configureCaptureSession method.

Then we ask for the default built-in wide-angle video capture device. On success, we assign the device to camera.

Next, we try to create an AVCaptureDeviceInput from the camera and add it to our session. The last block in the method adds the video output to the session and configures it. This completes the session configuration; as we reach the end of the method, the deferred call to session.commitConfiguration() executes.

Code - FrameManager

Inside the Camera folder, create a new Swift file FrameManager.swift. This file will hold a class derived from NSObject that conforms to ObservableObject, and that conforms to AVCaptureVideoDataOutputSampleBufferDelegate by implementing the captureOutput method in an extension.

Replace Xcode auto-generated contents of this file with the following code:

import AVFoundation

class FrameManager: NSObject, ObservableObject {

    /** ``shared`` a single instance of FrameManager.
    All other code in the app must use this single instance */
    static let shared = FrameManager()

    /** ``current`` stores the current pixel data from camera */
    @Published var current: CVPixelBuffer?

    /** ``videoOutputQueue`` a queue to receive camera video output */
    let videoOutputQueue = DispatchQueue(
        label: "yourdomain.expiry.VideoOutputQ",
        qos: .userInitiated,
        attributes: [],
        autoreleaseFrequency: .workItem)

    private override init() {
        super.init()
        CameraManager.shared.set(self, queue: videoOutputQueue)
    }
}

extension FrameManager: AVCaptureVideoDataOutputSampleBufferDelegate {
    /** ``captureOutput(_:didOutput:from:)`` sets the buffer data to ``current`` */
    func captureOutput(
        _ output: AVCaptureOutput,
        didOutput sampleBuffer: CMSampleBuffer,
        from connection: AVCaptureConnection
    ) {
        if let buffer = sampleBuffer.imageBuffer {
            DispatchQueue.main.async {
                self.current = buffer
            }
        }
    }
}

In the above code, public properties and methods have comments stating their purpose. In the captureOutput method, we update the value of self.current on the main thread.

Code - ContentViewModel

Now it's time to create our View Model. Make a new folder ViewModel in the project's root. Inside this folder, create a new Swift file ContentViewModel.swift and replace its Xcode auto-generated content with the following code:

import CoreImage

class ContentViewModel: ObservableObject {

    @Published var frame: CGImage?
    @Published var error: Error?

    private let cameraManager = CameraManager.shared

    private let frameManager = FrameManager.shared

    init() {
        setupSubscriptions()
    }

    func setupSubscriptions() {
        cameraManager.$error
          .receive(on: RunLoop.main)
          .map { $0 } // widens CameraError? to Error?
          .assign(to: &$error)

        frameManager.$current
            .receive(on: RunLoop.main)
            .compactMap { buffer in
                guard let buffer = buffer else { return nil }
                let inputImage = CIImage(cvPixelBuffer: buffer)
                let context = CIContext(options: nil)
                return context.createCGImage(inputImage, from: inputImage.extent)
            }
            .assign(to: &$frame)
    }
}

Here we set up two published variables: frame, which holds the latest camera frame as a CGImage, and error, which holds info about camera-related errors. These variables receive their live values from FrameManager and CameraManager respectively, as set up in setupSubscriptions. Inside the .compactMap block of frameManager.$current, we convert the CVPixelBuffer to a CGImage and assign it to frame. In the future, if you want to use CIFilter, this block is the place to add those.
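As an illustration of where a filter would go, here is a sketch of a sepia-tone effect slotted into that .compactMap block (CISepiaTone is a built-in Core Image filter; the 0.8 intensity is an arbitrary choice, and this snippet is not part of the app as written):

```swift
import CoreImage

// Inside .compactMap, after creating inputImage and context:
// apply a sepia filter before rendering the CGImage
let sepia = CIFilter(name: "CISepiaTone")!
sepia.setValue(inputImage, forKey: kCIInputImageKey)
sepia.setValue(0.8, forKey: kCIInputIntensityKey)
let output = sepia.outputImage ?? inputImage
return context.createCGImage(output, from: output.extent)
```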

Code - FrameView, ControlView and ErrorView

Let's create three new files in the project's root location. These will be Swift UI View files. After creating the files, replace their Xcode auto-generated contents with the ones given below:

File: FrameView.swift

import SwiftUI

struct FrameView: View {
    var image: CGImage?
    private let label = Text("Camera feed")

    var body: some View {
        if let image = image {
          GeometryReader { geometry in
            Image(image, scale: 1.0, orientation: .up, label: label)
              .resizable()
              .scaledToFill()
              .frame(
                width: geometry.size.width,
                height: geometry.size.height,
                alignment: .center)
              .clipped()
          }
        } else {
          Color.black
        }
    }
}

struct FrameView_Previews: PreviewProvider {
    static var previews: some View {
        FrameView()
    }
}

File: ErrorView.swift

import SwiftUI

struct ErrorView: View {
    var error: Error?

    var body: some View {
        self.error.map { ErrorMessage(String(describing: $0)) }
    }
}

func ErrorMessage(_ text: String) -> some View {
    return VStack {
        VStack {
            Text("Error Occurred").font(.title).padding(.bottom, 5)
            Text(text)
        }.foregroundColor(.white).padding(10).background(Color.red)
        Spacer()
    }
}

struct ErrorView_Previews: PreviewProvider {
    static var previews: some View {
        ErrorView(error: CameraManager.CameraError.cameraUnavailable as Error)
    }
}

Image showing how error view looks on iOS

File: ControlView.swift

import SwiftUI

struct ControlView: View {
    var body: some View {
        VStack{
            Spacer()
            HStack {
                Button {
                    PhotoManager.take()
                } label: {
                    Image(systemName: "camera.fill")
                }.font(.largeTitle)
                    .buttonStyle(.borderless)
                    .controlSize(.large)
                    .tint(.accentColor)
                    .padding(10)
            }
        }
    }
}

struct ControlView_Previews: PreviewProvider {
    static var previews: some View {
        ControlView()
    }
}

Please note that we are yet to create PhotoManager.
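The article doesn't spell out the ContentView wiring, so here is a minimal sketch of how the three views could be composed with ContentViewModel (assuming the view model is held as a @StateObject; layering and safe-area handling are my choices):

```swift
import SwiftUI

struct ContentView: View {
    @StateObject private var model = ContentViewModel()

    var body: some View {
        ZStack {
            // live camera feed fills the background
            FrameView(image: model.frame)
                .edgesIgnoringSafeArea(.all)
            // error banner overlays the top of the screen
            ErrorView(error: model.error)
            // capture button sits at the bottom
            ControlView()
        }
    }
}
```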

At this point, you can comment out the PhotoManager.take() call and run the app to see how it looks. It should ask for camera access on launch and, once access is granted, show you a live camera feed.

Image showing screenshot of the app on macOS

Now let's complete the app by giving functionality to the camera button. With the click of a button, we want it to store what we see on the screen as a photo. This will be done through PhotoManager. Create a new file PhotoManager.swift inside the Camera folder. Replace its Xcode auto-generated contents with the following code:

import CoreImage
import Combine
import UniformTypeIdentifiers

class PhotoManager {
    private static var cancellable: AnyCancellable?
    static func take() {
        debugPrint("Clicked PhotoManager.take()")
        cancellable = FrameManager.shared.$current.first().sink { buffer in
            guard let buffer = buffer else {
                debugPrint("[W] PhotoManager.take: buffer returned nil")
                return
            }

            let inputImage = CIImage(cvPixelBuffer: buffer)
            let context = CIContext(options: nil)
            guard let cgImage = context.createCGImage(inputImage, from: inputImage.extent) else {
                debugPrint("[W] PhotoManager.take: CGImage is nil")
                return
            }
            self.save(image: cgImage, filename: "my-image-test.png")
        }

    }

    static func save(image: CGImage, filename: String) {
        let cfdata: CFMutableData = CFDataCreateMutable(nil, 0)
        if let destination = CGImageDestinationCreateWithData(cfdata, UTType.png.identifier as CFString, 1, nil) {
            CGImageDestinationAddImage(destination, image, nil)
            if CGImageDestinationFinalize(destination) {
                debugPrint("[I] PhotoManager.save: finalized image data")
                do {
                    try (cfdata as Data).write(to: self.asURL(filename)!)
                    debugPrint("[I] PhotoManager.save: Saved image")
                } catch {
                    debugPrint("[E] PhotoManager.save: Failed to save image \(error)")
                }
            }
        }
        debugPrint("[I] PhotoManager.save: func completed")
    }

    static func asURL(_ filename: String) -> URL? {
        guard let documentsDirectory = FileManager.default.urls(
            for: .documentDirectory,
            in: .userDomainMask).first else {
            return nil
        }

        let url = documentsDirectory.appendingPathComponent(filename)
        debugPrint(".asURL:", url)
        return url
    }
}

If you commented out PhotoManager.take() in ControlView in the previous steps, you can now uncomment it and run the app again. This time, clicking the camera icon button creates a PNG image file on your device. The log output shows the full path of the image. For me, it was file:///Users/ash/Library/Containers/yourdomain.expiry/Data/Documents/my-image-test.png.
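Note that the code above always writes to the same filename, so each capture overwrites the previous one. One easy improvement (a sketch; the helper name and timestamp format are my choices) is to generate a unique name per capture:

```swift
import Foundation

// Builds a unique, sortable filename like "capture-2023-01-31T10-42-07.png"
func uniquePhotoFilename() -> String {
    let formatter = DateFormatter()
    // ":" is replaced with "-" so the name is safe on all file systems
    formatter.dateFormat = "yyyy-MM-dd'T'HH-mm-ss"
    return "capture-\(formatter.string(from: Date())).png"
}

// Usage inside PhotoManager.take():
// self.save(image: cgImage, filename: uniquePhotoFilename())
```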

This was a sample app that I completed as a learning exercise for a bigger app that I'm working on.

The code you see in this article has plenty of room for improvement. As you use it to create your own app, you will get some refactoring and enhancement ideas of your own. In the comments, I added links to some of the resources that I found useful.

💬 Feel free to comment your thoughts. Happy coding!
