Lately I’ve been using AI to generate character portraits, since I’m terrible at art. This came in really handy for PaizoCon, since everything was on VTTs. Here are a couple that turned out well enough to keep. They’re definitely not perfect, but you can upload them back into Nightcafe later and have the AI keep working on them.
Hokgnath Bladegnawer (Battle Oracle/Mindsmith)
Matu’uk Bladegnawer (Giant Barbarian)
Jerhyn Skorn (Throwing Magus/Rogue)
I’ve found that making a character in HeroForge and then plugging that into Dream.ai is the best way to do this. The HeroForge mini pose will help give the AI a good framework to build around, and you can tweak it with keywords in your prompt. This is how I made character art for my last few oneshot characters. :)
Thanks for that, I’m going to try that process out. One of the big issues I’ve had with Nightcafe is that it tends to generate character portraits with missing limbs, or it interprets descriptions of weapons (though not armor) pretty badly, so hopefully using the HeroForge image as a base will help with that.
No problem! Here’s an example of what I made with this technique:
And yeah, the other day I was working on character art with a friend for our next PF2e game and he used “tower shield” as an input. We got Big Ben in the background, and a small shield. 💀 This stuff requires a lot of finessing to get usable output.
My wife has absolutely zero interest in TTRPGs and Pathfinder, but she does like AI and gets a huge kick out of it whenever I share a prompt that ended up with an amusing output.
I installed Stable Diffusion some time back and have since been using it to make a lot of character art as well.
This was to be my gunslinger character in a game that was just cancelled:
This was to be my fighter character in another game (Blood Lords) from the same group, which also just got cancelled (our GM is taking time away from TTRPGs):
And my druid from a game that is just hitting its fourth session:
Installing Stable Diffusion locally gives you the ability to do larger work and to refine the results a lot more. That said, I work these back and forth between Stable Diffusion and Gimp (an open-source equivalent to Photoshop).
Here’s the gunslinger, with some more refining I did recently:
I’m definitely going to have to check out Stable Diffusion; the stuff you have coming out of it looks great. I’m terrible at GIMP and not a visual artist at all, which is why I’m checking out AI to generate my stuff. How much post-processing work did you have to do in GIMP to get this quality?
I actually use Gimp not for post-processing but to make guides.
I can make something very ugly in Gimp, then run it through Stable Diffusion a dozen times and it will clean up.
That final image you see there started as this.
I used a tool in Stable Diffusion called “inpaint” to delete the second figure, which gave me a very rough-looking background:
I took that into Gimp and used the smudge and clone tools to make that blur look more like sky. Then I ran it through Stable Diffusion’s “img2img” tool, which generates a new image based on an existing image plus your prompts.
I had to take this back and forth between Stable Diffusion and Gimp a few times to add in stars and fix the sky there. (I had several spots where I’d just clone something into place, then have Stable Diffusion smooth it in.)
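If you’d rather script that clone-to-a-spot step instead of doing it by hand in Gimp, it can be approximated in a few lines of Python. This is just my own sketch of the idea using Pillow (an assumption, not part of the actual workflow above); the smoothing still happens in Stable Diffusion’s img2img afterwards:

```python
from PIL import Image

def clone_patch(img, src_box, dest_xy):
    """Rough stand-in for Gimp's clone tool: copy a rectangular patch
    (src_box = (left, top, right, bottom)) from one part of the image
    and paste it at dest_xy. The seams will look hard and fake, which
    is fine -- img2img is what blends them in later."""
    patch = img.crop(src_box)
    out = img.copy()          # leave the working image untouched
    out.paste(patch, dest_xy)
    return out
```

The output is deliberately crude; any image library with crop/paste would do the same job, since Stable Diffusion handles the actual blending.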
Finally I had one image with a great sky on the right, but the figure on the left was all wrong.
So I took the original “2 women” pic and my good sky image into Gimp, and used a layer mask at the halfway point to put the sky of my good image into the pic of the woman on the left.
Except they didn’t match exactly in color and position: if you look at this one, just right of the moon and angled down toward her hand, the sky changes. That’s where my layer mask blended them. It’s almost perfect, but at this point I was just having fun and wanted a piece I could keep as wallpaper on my desktop. Once I could see where it was blending, I couldn’t stop seeing it.
So I ran this image through Stable Diffusion’s “img2img” again a couple of times, and it made the blend look natural.
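That halfway layer-mask trick can also be reproduced outside Gimp. Here’s a small Pillow sketch of my own (again an assumption, just approximating the step described above), with a soft left-to-right ramp standing in for the layer mask:

```python
from PIL import Image

def blend_halfway(left_img, right_img, feather=40):
    """Composite right_img over left_img using a gradient mask centered
    at the horizontal midpoint -- a rough analogue of a Gimp layer mask.
    feather controls how wide the soft transition zone is, in pixels."""
    w, h = left_img.size
    ramp = Image.new("L", (w, 1), 0)
    lo = w // 2 - feather
    for x in range(w):
        if x <= lo:
            v = 0                                  # fully left_img
        elif x >= w // 2 + feather:
            v = 255                                # fully right_img
        else:
            v = int(255 * (x - lo) / (2 * feather))  # smooth ramp
        ramp.putpixel((x, 0), v)
    mask = ramp.resize((w, h))  # stretch the 1-pixel ramp to full height
    return Image.composite(right_img, left_img, mask)
```

The seam this leaves is exactly the kind of artifact a couple of img2img passes will hide, which matches the back-and-forth described above.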
This character was meant to be the cousin of my witch PC; I was going to replace the witch with a gunslinger. The game just got cancelled yesterday. But the funny thing is that the woman on the right remains in my folder of work images and could be used for the witch if I round out her ears. The gunslinger was a half-elf; the witch was human.
Sadly, with that game cancelled, both go into my pool of future characters now.
Anyway… the whole process I went through is why I get annoyed when people say AI art is all stolen and not original.
Just like photography, you may start with something that came from an existing source, but it is perfectly possible to make it your own original piece of art. And just like photography, a lot of people will think you just clicked a button and got a result and don’t deserve any credit…
The only difference is that photography’s been around for a while now, so most people have gotten past the claim that it’s just a snapshot of something that already existed. AI art will eventually get the respect it deserves, once more people are doing original things with it. There are already people who go way beyond me, to levels that would rival hand-drawn works of art in their human craftsmanship.
Here’s the guide that got me started on Stable Diffusion:
https://www.youtube.com/watch?v=DHaL56P6f5M “Stable diffusion tutorial. ULTIMATE guide” from Sebastian Kamph (that link didn’t give a preview, so just search for the title on YouTube to be sure you get a safe link).
The first 30 seconds of the video sound like one of those weird text-to-speech readers, but then the guy comes in and talks step by step through installing and using the tool. You’ve got to install things like git and Python, and he goes through all of that in detailed steps as well. Honestly, it was also one of the best “how to install Python” guides… ;)
Reddit’s down right now, but there were some great guides there on writing prompts, customizing models, training the tools, and more.
Any advice for prompts or settings? I got Stable Diffusion installed, but I’ve been having issues getting good images out of it.