This post is for those who want to know how I'm getting my results with AI renders. Potent as it is, AI can be very fickle about creating the sort of images you want unless you know exactly how to manipulate it. While you can randomly generate a good picture with Stable Diffusion without any image reference, that's extremely unreliable if you want something specific, so I've been working out a workflow for getting desired outputs using a combination of NovelAI and Stable Diffusion.

I'm starting off with a very early version of my Quicky Sanders character from many years ago, which I recently found again. It's very obviously an inexperienced and crude drawing, from before my development in digital art. Popping this straight into SD as an image reference is tough: the AI often doesn't recognise the features that make this a human in a mudpit, partly because of how badly it's drawn, but also because of the style. I'm trying to create a photorealistic output, and a flat 2D drawing doesn't transition well into photorealism.