Swift | A Virtual Try-On | Hair Color

The Blue Prototype
May 8, 2022
Photo by Valerie Elash on Unsplash

Virtual try-on technology lets customers see how a product looks on them. It allows them to virtually “try on” a product before purchasing it. Customers can now virtually try on products such as clothes, cosmetics, jewellery, glasses, etc.

Currently, many e-commerce brands use virtual try-on technology for marketing. Virtual try-on is a part of Augmented Reality (AR), and AR is not limited to shopping: it is also used in education, healthcare, retail, entertainment and more.

On the development side, there are many libraries that provide virtual try-on functionality, but most of them are paid. Apple is also working on new frameworks every year and provides plenty of Swift APIs for working with augmented reality.

Get Started

In this article, I’ll discuss how to apply any color to human hair using Apple’s Swift frameworks. I’ll walk through a sample project in which a user takes a selfie and applies any colour to the hair. Let’s break this article into two steps: the first step is to capture a selfie, and the second step is to apply a hair color to the captured selfie.

Step 1: Camera

Let’s start by taking a selfie. I am assuming you already know how to take a photo with the AVFoundation framework, so I won’t go into depth here and will only touch the key points needed to capture the required photo. Let’s see the code:

guard let videoDevice = AVCaptureDevice.default(.builtInTrueDepthCamera, for: .video, position: .front) else { return }
guard let videoDeviceInput = try? AVCaptureDeviceInput(device: videoDevice) else { return }
guard self.captureSession.canAddInput(videoDeviceInput) else { return }
self.captureSession.addInput(videoDeviceInput)

In the above lines, I first check for the TrueDepth camera; it is the main requirement. If the TrueDepth camera is available, I add it as the input device using the front camera.

As per Apple, builtInTrueDepthCamera is a combination of cameras and other sensors that creates a capture device capable of photo, video, and depth capture.

guard self.captureSession.canAddOutput(photoOutput) else { return }
self.captureSession.addOutput(photoOutput)

Now, the photo output has been added to the session.

self.photoOutput.enabledSemanticSegmentationMatteTypes = photoOutput.availableSemanticSegmentationMatteTypes
self.photoOutput.isDepthDataDeliveryEnabled = true
self.photoOutput.isPortraitEffectsMatteDeliveryEnabled = true
self.photoOutput.isHighResolutionCaptureEnabled = true

The above four lines are the core configuration applied to the output. Let’s discuss a few points in depth before moving on to the explanation of these lines.

AVSemanticSegmentationMatte wraps a matting image for a particular semantic segmentation. The matting image stores its pixel data as CVPixelBuffer objects in kCVPixelFormatType_OneComponent8 format. The image file contains the semantic segmentation matte as an auxiliary image.

matteType is the semantic segmentation matte image type.
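As a quick sanity check (not part of the capture flow itself), you can print which matte types the configured output can actually deliver on the current device:

print(photoOutput.availableSemanticSegmentationMatteTypes)
// e.g. hair, skin and teeth on devices and iOS versions that support them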

Now let’s get back to those four lines:

  • First, I enable all semantic segmentation matte types that may be captured and delivered along with the photo.
  • Second, I configure the capture pipeline for depth data delivery.
  • Third, I tell the capture output to generate a portrait effects matte.
  • Fourth, I enable high-resolution capture.

Now, let’s add some AVCapturePhotoSettings, such as the codec type, flash, preview photo format, depth data, portrait effects matte and semantic segmentation matte types.

if photoOutput.availablePhotoCodecTypes.contains(.hevc) {
    self.photoSettings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.hevc])
}
if videoDeviceInput.device.isFlashAvailable {
    self.photoSettings.flashMode = .auto
}
self.photoSettings.isHighResolutionPhotoEnabled = true
if let previewPhotoPixelFormatType = self.photoSettings.availablePreviewPhotoPixelFormatTypes.first {
    self.photoSettings.previewPhotoFormat = [kCVPixelBufferPixelFormatTypeKey as String: previewPhotoPixelFormatType]
}
self.photoSettings.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliveryEnabled
self.photoSettings.isPortraitEffectsMatteDeliveryEnabled = photoOutput.isPortraitEffectsMatteDeliveryEnabled
if self.photoSettings.isDepthDataDeliveryEnabled {
    if !photoOutput.availableSemanticSegmentationMatteTypes.isEmpty {
        self.photoSettings.enabledSemanticSegmentationMatteTypes = photoOutput.availableSemanticSegmentationMatteTypes
    }
}

After this, add an AVCaptureVideoPreviewLayer to the view and start running the session; a minimal sketch of that setup follows. Once you capture a photo, you will get the AVCapturePhoto in the delegate function of AVCapturePhotoCaptureDelegate, shown after the sketch.
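This sketch assumes a previewView in your view controller (the property name is my own, not from the sample project):

let previewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession)
previewLayer.videoGravity = .resizeAspectFill
previewLayer.frame = previewView.bounds
previewView.layer.addSublayer(previewLayer)

// startRunning() blocks until the session is running, so call it off the main thread.
DispatchQueue.global(qos: .userInitiated).async {
    self.captureSession.startRunning()
}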

func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {}
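For completeness, here is a minimal sketch of triggering the capture and holding on to the result for Step 2. The didTapCapture action and the capturedPhoto property are my own assumptions, not names from the sample project.

// AVCapturePhotoSettings must not be reused across captures, so make a copy each time.
@IBAction func didTapCapture(_ sender: UIButton) {
    let settings = AVCapturePhotoSettings(from: self.photoSettings)
    self.photoOutput.capturePhoto(with: settings, delegate: self)
}

// In the delegate callback, keep the photo around for Step 2.
func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
    guard error == nil else { return }
    self.capturedPhoto = photo // assumed stored property: var capturedPhoto: AVCapturePhoto?
}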

With this, the first step is complete.

Step 2: Hair Color

Now you have an AVCapturePhoto. To apply any filter to this captured photo, you first have to extract the preview pixel buffer image and the semantic segmentation matte image for the hair type. Then you can apply CIFilters with the help of these images.

Let’s get the semantic segmentation matte image for the hair type, with the correct orientation, from the captured AVCapturePhoto. Below is the code for this; it returns a CIImage object.

extension AVCapturePhoto {
    func hairSemanticSegmentationMatteImage() -> CIImage? {
        if var matte = self.semanticSegmentationMatte(for: .hair) {
            if let orientation = self.metadata[String(kCGImagePropertyOrientation)] as? UInt32,
               let exifOrientation = CGImagePropertyOrientation(rawValue: orientation) {
                matte = matte.applyingExifOrientation(exifOrientation)
            }
            return CIImage(cvPixelBuffer: matte.mattingImage)
        }
        return nil
    }
}

Now, let’s get the preview pixel buffer image from the captured photo. Here is the code:

extension AVCapturePhoto {
    func getPreviewPixelBufferImage() -> CIImage? {
        if let pixelBuffer = self.previewPixelBuffer {
            if let orientation = self.metadata[String(kCGImagePropertyOrientation)] as? UInt32,
               let exifOrientation = CGImagePropertyOrientation(rawValue: orientation) {
                return CIImage(cvPixelBuffer: pixelBuffer).oriented(forExifOrientation: Int32(exifOrientation.rawValue))
            } else {
                return CIImage(cvPixelBuffer: pixelBuffer)
            }
        }
        return nil
    }
}

Both functions return an object of type CIImage.

Now let’s apply the color to the hair with the help of the CIColorMatrix and CIBlendWithMask filters. Here is the process:

  1. Get the preview pixel buffer image from the captured AVCapturePhoto.
  2. Get the semantic segmentation matte image for the hair type from the captured AVCapturePhoto.
  3. Get the RGB components of the color you want to apply to the hair.
  4. Apply the CIColorMatrix filter to the preview pixel buffer image with the RGB values of that color.
  5. Apply the CIBlendWithMask filter to the CIColorMatrix output image, using the semantic segmentation matte image as the mask. The resulting output image will have coloured hair (see the sketch after this list).
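To make the process concrete, here is a minimal sketch of that filter chain. It assumes the two extension functions above; the applyHairColor(_:on:) helper is my own hypothetical name, and the matte rescaling is an assumption about how the two images line up, so treat this as a sketch rather than the exact code from the sample project.

import UIKit
import CoreImage
import AVFoundation

func applyHairColor(_ hairColor: UIColor, on photo: AVCapturePhoto) -> CIImage? {
    guard let baseImage = photo.getPreviewPixelBufferImage(),
          let hairMatte = photo.hairSemanticSegmentationMatteImage() else { return nil }

    // Step 3: extract the RGB components of the chosen colour.
    var red: CGFloat = 0, green: CGFloat = 0, blue: CGFloat = 0, alpha: CGFloat = 0
    hairColor.getRed(&red, green: &green, blue: &blue, alpha: &alpha)

    // Step 4: tint the whole image by scaling its RGB channels towards the chosen colour.
    let tinted = baseImage.applyingFilter("CIColorMatrix", parameters: [
        "inputRVector": CIVector(x: red, y: 0, z: 0, w: 0),
        "inputGVector": CIVector(x: 0, y: green, z: 0, w: 0),
        "inputBVector": CIVector(x: 0, y: 0, z: blue, w: 0),
        "inputAVector": CIVector(x: 0, y: 0, z: 0, w: 1)
    ])

    // The matte is usually smaller than the preview image, so scale it to match.
    let scaleX = baseImage.extent.width / hairMatte.extent.width
    let scaleY = baseImage.extent.height / hairMatte.extent.height
    let mask = hairMatte.transformed(by: CGAffineTransform(scaleX: scaleX, y: scaleY))

    // Step 5: blend the tinted image over the original, limited to the hair region by the matte.
    return tinted.applyingFilter("CIBlendWithMask", parameters: [
        kCIInputBackgroundImageKey: baseImage,
        kCIInputMaskImageKey: mask
    ])
}

// Example usage with one of the colours tried below (capturedPhoto is the photo kept in Step 1):
// let redHair = applyHairColor(.red, on: capturedPhoto)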

I’m assuming you know how to use CIFilter. If not, no problem; at the end of this article, I’ll share the link to the full source code of this assignment.

So we have completed step 2 as well. After completing it, the resulting image will look like this. Notice the hair of this boy.

In this picture, he is my kid 😊. He was playing around, so I used his face for this assignment 😂.

I tried applying three colours: red, green and black. You can see that the hair is coloured. In the image below, the left one is the original image and the right one is the hair-segmented matte image.

Summary

Apple has made this quite simple and free of cost; using a third-party library would be much more expensive. I’m sure that at a coming WWDC, Apple will achieve the same thing with streaming video, and then it’ll be a huge change.

Here is the link to the full source code: https://github.com/sanjeevworkstation/SGHairColor

Thank You. Happy coding! 👍
