3D Art vs. AI Art 2: The Rules Have Already Changed!

A month ago I reported that AI art might be a fatal blow to 3D art as a hobby. Since then, thanks to a new AI tool called ControlNet, the divide between the 3D and AI art hobbies does not look so insurmountable. In fact, they may enhance each other in big ways!

This tool solves many problems, the biggest being getting AI-generated art to “pose” for a picture. Until now, that took a lot of prompt manipulation and seed guessing. Now you can start with something as simple as a sketch and turn it into photorealistic images (or some other art style).

This has at least two major benefits for 3D artists. First, now that controlling the poses of AI-generated figures is possible, you need a good way to feed it poses to use. It turns out the best way by far is to use popular 3D posing tools like DAZ Studio or Poser to build the exact poses you want your characters to strike.

Second, it allows you to use AI programs like Stable Diffusion with the ControlNet extension and its plug-ins (I find the HED plug-in to be the best for this) to post-process your 3D images and give them the same visual quality as pure AI images.

How it works

ControlNet creates “drawings” from the 3D images and uses those drawings to guide the render. That is why the AI version looks so close to the 3D version. Without the ControlNet pre-render, it draws based only on the prompt and the input image, with random differences.
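To make that first step concrete, here is a toy stand-in: the real HED plug-in is a neural edge detector, but a simple gradient filter conveys the same idea of extracting a line “drawing” from a render, which ControlNet then uses to guide the diffusion. This is an illustrative sketch, not HED itself.

```python
import numpy as np

def edge_map(gray: np.ndarray) -> np.ndarray:
    """Toy edge 'drawing' from a grayscale render via finite differences.
    (Stand-in for HED; ControlNet conditions the new render on a map like this.)"""
    g = gray.astype(float)
    gx = np.zeros_like(g)
    gy = np.zeros_like(g)
    gx[:, 1:] = np.diff(g, axis=1)   # horizontal intensity changes
    gy[1:, :] = np.diff(g, axis=0)   # vertical intensity changes
    return np.hypot(gx, gy)          # gradient magnitude = "line strength"

# A flat image yields no lines; a half-dark image yields one vertical line.
flat = np.zeros((4, 4))
split = np.zeros((4, 4))
split[:, 2:] = 255.0
print(edge_map(flat).max())   # 0.0
print(edge_map(split).max())  # 255.0
```

The bright band in `edge_map(split)` marks the boundary between the two regions, which is exactly the kind of contour a ControlNet “drawing” preserves from the 3D original.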

To be specific, what I am doing is feeding the same picture of Ariane drinking soda in the diner into both img2img and ControlNet, and I am using the prompt “dateariane portrait in a diner with a drink,” which Stable Diffusion uses as the target to render toward.
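The same dual-input workflow can be sketched with the Hugging Face `diffusers` library. This is a hedged sketch under assumptions, not the exact tooling from the post (which uses the Stable Diffusion web UI's ControlNet extension): the checkpoints `lllyasviel/sd-controlnet-hed`, `lllyasviel/Annotators`, and `runwayml/stable-diffusion-v1-5` are common public models I'm assuming here, and the image path is a placeholder for your own 3D render.

```python
def make_prompt(character_tag: str) -> str:
    """Build a prompt like the post's: character tag plus scene description."""
    return f"{character_tag} portrait in a diner with a drink"

def render(prompt: str, render_path: str, weight: float = 1.0, seed: int = 42):
    """Feed the same 3D render into both img2img and ControlNet (HED).

    Imports live inside the function so the sketch is readable without the
    multi-gigabyte models installed.
    """
    import torch
    from controlnet_aux import HEDdetector
    from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline
    from diffusers.utils import load_image

    source = load_image(render_path)                   # the 3D render
    hed = HEDdetector.from_pretrained("lllyasviel/Annotators")
    control = hed(source)                              # ControlNet's "drawing"

    controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-hed")
    pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", controlnet=controlnet
    )
    generator = torch.Generator().manual_seed(seed)    # pin the random seed
    return pipe(
        prompt,
        image=source,                                  # img2img input
        control_image=control,                         # ControlNet input
        controlnet_conditioning_scale=weight,          # the "weight" slider
        generator=generator,
    ).images[0]
```

With those assumptions, something like `render(make_prompt("dateariane"), "my_render.png")` would reproduce the experiment described above.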

You can control how strictly it adheres to the sketch and input image. The stricter the setting, the closer it sticks to the original image, but there are good reasons to be less strict.

On the left I turned the ControlNet weight down to 0, which is almost like not using it at all. The drink is wrong, the diner looks nothing like the Drive-N-Dine, and you get the classic funky malformed hands common to Stable Diffusion images.

On the right, I set the ControlNet weight to 0.5, an even split between ControlNet and no ControlNet. This is especially useful if you just want to copy a pose and move it to a completely different figure.

Here’s an example of me doing just that. I’m taking the pose of Ariane in the diner in ControlNet and applying it to Bonnie at the strip club in img2img, at a weight of 0.5. I also had to change the prompt to “portrait in a diner with a drink blonde woman with small pink top”; otherwise, it makes Bonnie look like Ariane.
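Under the hood, the weight is roughly a multiplier on how strongly ControlNet's guidance is mixed into each denoising step. The numeric sketch below is a simplification under that assumption (it mirrors, in spirit, how the conditioning scale multiplies ControlNet's residuals before they are added to the diffusion model's own features; it is not the actual implementation):

```python
import numpy as np

def blend(unet_features: np.ndarray, control_residual: np.ndarray,
          weight: float) -> np.ndarray:
    """weight 0.0 -> pure img2img; 1.0 -> full ControlNet guidance."""
    return unet_features + weight * control_residual

features = np.array([1.0, 1.0, 1.0])   # stand-in for the model's own features
residual = np.array([0.5, -0.5, 2.0])  # stand-in for ControlNet's guidance

half = blend(features, residual, 0.5)
print(blend(features, residual, 0.0))  # weight 0: features unchanged, as if no ControlNet
print(half)                            # weight 0.5: the 50/50 split from the example
```

At weight 0 the guidance vanishes entirely, which matches the left-hand image above; at 0.5 only half of the guidance comes through, which is why the pose survives but the figure can change.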

Besides control of poses, ControlNet offers a lot of control over lighting and color that is not available without it. Especially important is control over the number of fingers your figures have.

This ultimately opens up a lot of artistic possibilities for 3D artists who know both 3D and AI. It means we 3D artists are relevant again. But…

Still not perfect, though

As I pointed out in my last essay on this topic, there are still issues with using AI images for storytelling. Here I repeat a similar experiment to last time, doing multiple renders using different random seed numbers.

While the ControlNet images look a lot closer to the 3D images, they still often differ in lighting, color, wall textures, background pictures, hair, and faces. No one has demonstrated a good way to animate with this, despite posing being one of the main requirements for animation.

It is also still a problem if you have more than one character in an image: their faces will merge unless you set up separate renders for each character in the scene and combine them with Photoshop or inpainting techniques. That takes a lot of work on top of the 3D work.

The technology is changing radically. It is likely these issues will get fixed in the future, along with the questionable legality of all of it.

Fun examples from other genres

I saw this picture as an example (credits and “how to” here). Personally, I love the “Lo-Fi girl” cartoon art, and while the realistic version looks impressive, I am reminded of the “live action” Disney movies, all of which pale next to their traditionally drawn originals.

But I am also reminded of my first rule of what makes a great song: if a song can be performed in a completely different style or genre and still be enjoyable, then it can be classified as a “great song.” If a remixed image (going from drawing to photo or the other way around) can also be great, that just proves how great the original is.

At the same time, turning a weirdly disproportionate doll into a realistic photo of a weirdly disproportionate woman just proves how unrealistic the doll is (Source).

Meanwhile I am going to have a little fun with these tools.

2 comments

  • Yeah! I am so glad coz Victoria Justice got Ariane role! She totally nailed it!
    : )

    • I use a group of actresses, alongside weighting in favor of the 3D model. Yes, Victoria Justice is one, and in some of the images she is recognizable. I don’t use her exclusively for several reasons, the biggest being that Victoria has been acting since she was 14, and I don’t want Ariane to look like a teenager.
