
A Comprehensive Guide to Using the IPAdapter FaceID in ComfyUI for Stable Diffusion

There’s a lot of excitement in the world of Stable Diffusion and generative image AI with the new year. A new tool called IPAdapter FaceID has been released, and it’s creating quite a buzz in the ComfyUI community.

IPAdapter FaceID Model Update With ComfyUI

IPAdapter FaceID is a recently released model that conditions image generation on a face identity embedding extracted from a reference photo. If you visit the ComfyUI_IPAdapter_plus GitHub page, you’ll find important updates regarding this tool: on December 28th the FaceID Plus models were released, followed by the FaceID Plus v2 models on December 30th. There were some errors before these updates, but they were quickly addressed with new custom nodes that load FaceID Plus v2.

Article for the Video Tutorial

The FaceID v2 models are now stable and work with existing workflows. They have been tested in animation, video, and image generation workflows. It’s worth noting that the FaceID model files themselves are not hosted in the GitHub repository. Instead, if you scroll down the README, you’ll find a table listing the various FaceID models: FaceID, FaceID LoRA, FaceID Plus, and the new v2 model and LoRA.

The FaceID LoRA, like any other LoRA, belongs in the loras subfolder of your models folder. The IPAdapter FaceID models themselves go in your usual ipadapter models folder.

To proceed with the workflow, you’ll also need the CLIP vision image encoders. ViT-H and ViT-bigG are used by the normal IPAdapter models, while the ViT-H encoder is specifically required for the new FaceID Plus v2 models. The encoders live in the clip_vision subfolder of your models folder; make sure to download ViT-H before using it.
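As a quick sanity check, the expected layout can be verified with a few lines of Python. This is a minimal sketch: the subfolder names are the standard ComfyUI layout, but the filenames below are placeholders for illustration — substitute the exact names of the files you downloaded.

```python
from pathlib import Path

# Standard ComfyUI model subfolders for the FaceID v2 setup.
# Filenames are placeholders — use the names of your downloaded files.
EXPECTED = {
    "models/loras": "ip-adapter-faceid-plusv2_sd15_lora.safetensors",
    "models/ipadapter": "ip-adapter-faceid-plusv2_sd15.bin",
    "models/clip_vision": "clip-vit-h-image-encoder.safetensors",
}

def missing_files(comfyui_root: str) -> list[tuple[str, str]]:
    """Return (subfolder, filename) pairs that are not in place yet."""
    root = Path(comfyui_root)
    return [(sub, name) for sub, name in EXPECTED.items()
            if not (root / sub / name).is_file()]

if __name__ == "__main__":
    for sub, name in missing_files("."):
        print(f"missing: {sub}/{name}")
```

If the script prints nothing, every file is where the loaders expect it.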

Returning to the workflow, you’ll notice the CLIP vision loaders where the new image encoder options appear. Since this particular group uses the plain FaceID model (FaceID SD 1.5), you won’t need the additional image encoders there.

The workflow presents two pipelines. The first is for the plain FaceID IPAdapter, while the yellow groups are the FaceID Plus v2 IPAdapter. Both pipelines require the corresponding LoRA to be loaded alongside the IPAdapter FaceID model, as mentioned earlier. Additionally, for the FaceID Plus v2 models you need to select the ViT-H image encoder in the CLIP vision loader.

Before passing the image into the IPAdapter, it’s necessary to prepare the image for InsightFace. This step improves detection reliability and checks that the image is compatible with the models. For the plain FaceID IPAdapter you can reuse the image prepared for CLIP vision; for FaceID Plus v2, however, you’ll need a separately prepared image.

To test both pipelines, make sure to choose the correct corresponding LoRA and CLIP vision options for each IPAdapter FaceID model. The purple Load Image node supplies the reference face, and you can adjust the weight and noise settings to control how strongly it influences the generated image.
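Conceptually, the weight acts as a scale on the face-image contribution: in the decoupled cross-attention scheme from the IP-Adapter paper, the image branch is multiplied by the weight and added to the text branch. The toy sketch below illustrates only that blending idea — real attention operates on large tensors, not short lists.

```python
def blend(text_attn, image_attn, weight):
    """Toy illustration of IP-Adapter's decoupled cross-attention:
    the image-conditioned branch is scaled by `weight` and added to
    the text-conditioned branch (heavily simplified)."""
    return [t + weight * i for t, i in zip(text_attn, image_attn)]

# weight = 0.0 ignores the reference face entirely; higher values
# push the generation toward the reference identity.
```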

Conclusion About IPAdapter Latest Update

It’s important to note that IPAdapter FaceID is different from the ReActor face swap. IPAdapter uses the source image as a reference and lets you control how much it influences the new image. The main goal of IPAdapter is to create images influenced by the source face, rather than an exact face swap.

To run FaceID, you need to install InsightFace into your ComfyUI’s Python environment. You can follow a step-by-step guide on Reddit to install InsightFace and make it compatible with ComfyUI. The process involves updating folders, copying .whl files, and running .bat files that download the InsightFace third-party libraries into your python_embeded folder.
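What those .bat files boil down to is running pip with ComfyUI’s own interpreter rather than your system Python. A minimal sketch of that idea — the python_embeded path shown in the docstring is the layout used by portable Windows builds (adjust for your install), and onnxruntime is assumed here as the runtime InsightFace needs:

```python
import subprocess
import sys

def install_insightface(python_exe: str = sys.executable,
                        dry_run: bool = False):
    """Install InsightFace (plus onnxruntime) into the given interpreter.

    For a portable ComfyUI build on Windows, pass the embedded
    interpreter, e.g.
    r"ComfyUI_windows_portable\python_embeded\python.exe",
    so the packages land next to ComfyUI rather than in your
    system site-packages.
    """
    cmd = [python_exe, "-m", "pip", "install", "insightface", "onnxruntime"]
    if dry_run:
        return cmd  # show the command without executing it
    subprocess.run(cmd, check=True)
    return cmd
```

Calling it with `dry_run=True` just returns the pip command so you can inspect it before running anything.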

That’s it! This is how you run FaceID and FaceID Plus v2 in IPAdapter. Upcoming videos will test and compare results from FaceID and FaceID Plus.

Resources

Workflow In This Tutorial : https://www.patreon.com/posts/95651113

ComfyUI_IPAdapter_plus : https://github.com/cubiq/ComfyUI_IPAdapter_plus