- Apr 15, 2022
Decided I should post the guide I made for another thread here. The end goal will be videos of our chosen subject, built from our locally generated images, along with some other tools I recommend. This is meant for noobs and is not a very technical guide.
- IMPORTANT: If you already use ComfyUI or other programs, part 3 of this guide and the dzine guide linked to in part 3 are great adjuncts to your repertoire of tools!
A Guide to Stable Diffusion Local Image Generation with ForgeUI PART 1
Preface
- The end goal of this guide is to get a consistent character using images of our choosing. It will be multiple parts.
- This part is to get you set up with local image generation, the consistent face/body will come later.
- You have a choice with local image generation: ForgeUI or ComfyUI. ForgeUI is simple, runs on older hardware, and is much quicker to get into than ComfyUI. If you're interested in ComfyUI then I can't help you. If you go around searching for UIs, do not use A1111's UI; it's outdated and ForgeUI is built upon it.
- This is a dummy's guide. I will assume you have 0 knowledge pertaining to all this. I also won't be going into too much technical detail and won't cover every single thing it can do.
Min Specs: If you have a laptop or PC that does not have a graphics card, forget it. Ideally you should have:
- An 8GB VRAM NVIDIA card with CUDA, so basically anything modern from the past few years. You can run it on a 4GB VRAM card, but the program will start digging into your CPU and you'll see slowdowns in both your PC and your generations; it's still completely viable.
- An SSD to install it on. Externals work just fine. Have at least 100GB free; more room lets you install more models.
- Any good modern CPU
- 16GB of RAM
Installation
- You first need a version of ForgeUI to install. I recommend the one below if you're not literate in coding languages like Python; you won't have to mess around with cmd or the other technical stuff as much. If you choose to install another version, look at its documentation for installation and use ChatGPT if you get stuck. I can't help you with installing other versions, but they all work relatively the same once installed. This particular version is no longer being updated, so if that bothers you, search for a version that is still maintained.
- Head on over to https://github.com/lllyasviel/stable-diffusion-webui-forge
What to look for

- Download the recommended one, and when you unzip it in your desired install location you'll see a folder with files.
- Run update, then run. If nothing launches or you get an error after using run, launch run again and it should open the webui in your browser of choice.
Getting Situated
- When it opens up you'll be greeted by this. You'll also have a cmd window running, don't close it. What you'll see:

- Calm down and we'll go through everything you need to know, just the basics.
- At the top left, click xl. Let's go over why real quick and install a model before we move further.
Base Model
- Stable Diffusion has multiple models to use, let's talk about the big ones.
- SD1.5: The OG. Much older, but it has lots of models. Pros: Use models based on this if your PC is shit or you just want to generate stuff extremely fast. Cons: Based on 512x512; it can do larger images, but go too large and you get degradation without proper upscaling. Poor prompt adherence.
- SDXL: Souped up SD1.5. Pros: Good realism, 1024x1024 based and can do smaller or larger. Cons: Slow if your PC is shit.
- Pony: Anime heavy versions of SD1.5 or SDXL
- Flux: The newest model, renowned for realism and details. Pros: Unmatched detail, hailed as the best. Cons: Slow and resource heavy. Has a tendency to produce plastic-looking skin.
- We will be using SDXL because it's the best of both worlds: it's wedged in between 1.5 and Flux in terms of speed and detail.
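If you want the trade-offs above as a rule of thumb in code form, here's an illustrative sketch. The VRAM thresholds are my rough reading of this section, not official requirements:

```python
def pick_base_model(vram_gb: int) -> str:
    """Rough rule of thumb for choosing a base model by VRAM.

    Thresholds are illustrative, drawn from this guide's advice,
    not official requirements.
    """
    if vram_gb < 4:
        return "SD1.5"          # weak PC: stick with the fast, light OG
    if vram_gb < 8:
        return "SD1.5 or SDXL"  # SDXL runs, but leans on your CPU
    if vram_gb < 16:
        return "SDXL"           # the sweet spot this guide targets
    return "SDXL or Flux"       # Flux is slow and heavy but most detailed
```

With an 8GB card from the min specs, this lands you on SDXL, which is exactly what we use below.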
Finding Models
- Go to Civitai, make an account, and make sure you allow NSFW models in your profile options.
- We will be using https://civitai.com/models/133005/juggernaut-xl
- It is based on SDXL
- Download it and drop it in (YourDriveName):\webui_forge_cu121_torch231\webui_forge_cu121_torch231\webui\models\Stable-diffusion
- Your path may be different, but the last 4 folders will be the same.
- Feel free to try any other models, I have tons myself and they all excel at different aspects. Those explicit AI porn videos I made in earlier posts were fed images made by pornmaster. If you want sexual acts use a sex based model.
- Do download pornmaster though, as we will be using it in part 3 later.
Your First Generation
- Go to settings and reload UI, or quit out of the browser and cmd then hit run again to get the model to show up. It will be under checkpoints.
What it should look like:

- Your GPU weight should only be lowered if you get an error in your cmd saying you're low on space.
- Prompt: What you want to see
- Negative: What you don't want to see
- DPM++ 2M Karras is pretty standard and works well, but you can play around with them all.
- 30 steps will get us a good image. Some models require more or less.
- Width and Height are in pixels. If you plan on feeding an image directly to a video generator, make sure it lines up with a commonly used ratio, like 3:4 or 9:16.
- Batch count vs. batch size: Size is how many images are generated at once; count is how many batches run in a row. Set size to 1 if your PC sucks. So batch count 4 with size 1 generates 4 images in a row; count 4 with size 4 gets you 4 sets of 4 images (16 total).
- CFG: How closely the AI follows your prompt. Lower = more creativity. 5-7 is generally recommended for most models. I run 7 most of the time. Play around with it.
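Since SDXL works best around 1024x1024's pixel count, a quick way to pick a width and height for a given aspect ratio is to hold the total near one megapixel and round to a multiple of 64. This is a hypothetical helper; the multiple-of-64 rounding is a common convention, not a hard requirement:

```python
import math

def sdxl_resolution(ratio_w: int, ratio_h: int,
                    target_pixels: int = 1024 * 1024,
                    multiple: int = 64) -> tuple[int, int]:
    """Pick a width/height near SDXL's ~1 megapixel sweet spot that
    matches the requested aspect ratio, rounded to a multiple of 64."""
    scale = math.sqrt(target_pixels / (ratio_w * ratio_h))
    width = round(ratio_w * scale / multiple) * multiple
    height = round(ratio_h * scale / multiple) * multiple
    return width, height

print(sdxl_resolution(3, 4))   # (896, 1152) -- a common SDXL portrait size
print(sdxl_resolution(9, 16))  # (768, 1344) -- close to a phone-screen ratio
```

Plug whichever pair it gives you into the Width and Height boxes.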
- Test this out
Prompt: A Korean woman, wearing a black crop sweater, high heels, knee high stockings, black thong underwear, on an empty street sidewalk at night
Negative: BBW, fat woman, worst quality, low quality, lowres, illustration, 3d, 2d, cartoons, anime, painting, CGI, 3D render, bad anatomy, sketch, photoshop, airbrushed skin, overexposed, watermark, text, logo, label, blurry, depth of field, bokeh, blurry foreground, ugly nails, ugly fingers, fused fingers, missing finger, extra fingers, old, big breasts, bad teeth, shiny eyes, bad eyes
- I got this:

- You didn't get the same image, because with the seed set to -1 it's random every time. Infinite possibilities.
- Click the folder icon below the image to access the image folder where they all save.

- PROTIP: Watch the cmd window to see behind the scenes as it generates. Be on the lookout for errors as well.
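For the curious: the same settings can also be driven programmatically. ForgeUI inherits A1111's HTTP API when launched with the --api flag; the sketch below uses the A1111 field names and endpoint, which may differ on your particular build, so treat it as a hypothetical example rather than a guaranteed recipe:

```python
# The generation settings from this section, expressed as a txt2img
# API payload (A1111-style field names; your build may differ).
payload = {
    "prompt": "A Korean woman, wearing a black crop sweater, ...",
    "negative_prompt": "BBW, fat woman, worst quality, ...",
    "sampler_name": "DPM++ 2M Karras",
    "steps": 30,
    "cfg_scale": 7,
    "width": 896,
    "height": 1152,
    "batch_size": 1,  # images per batch; keep at 1 on weak PCs
    "n_iter": 4,      # batch count: 4 batches of 1 = 4 images total
    "seed": -1,       # -1 = random seed every run
}

# To actually generate (untested sketch -- requires launching with --api):
# import json, urllib.request
# req = urllib.request.Request(
#     "http://127.0.0.1:7860/sdapi/v1/txt2img",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# result = json.load(urllib.request.urlopen(req))  # "images" holds base64 PNGs
```

You don't need any of this to follow the guide; the webui does it all for you.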
Other Tabs
- img2img: For editing already existing images. You can edit the entire image at once, or just one part of it using inpainting. This tutorial would be insanely long if I went into all of this; check out YouTube.
- PNG Info: Drop a created image in here to see all its metadata. Useful if you want to replicate the prompt and negative. Reusing a seed is also powerful for seeing how changes in steps or other options affect an image. Copy-paste, or hit a button to send it to your tab of choice.
- Settings: Self explanatory
- Extensions: Self explanatory, though some might be broken if the GitHub page is down. Installing a broken extension can brick your ForgeUI and force a reinstall.
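The metadata PNG Info shows follows a simple layout: the prompt, an optional "Negative prompt:" line, then one comma-separated settings line. If you ever want to slice that text up in code, a hypothetical stdlib-only parser might look like this (it assumes that A1111/Forge-style layout):

```python
def parse_parameters(text: str) -> dict:
    """Split an A1111/Forge-style 'parameters' block into prompt,
    negative prompt, and a settings dict. Assumes the usual layout:
    prompt line(s), optional 'Negative prompt:' line, settings line."""
    lines = text.strip().split("\n")
    settings = {}
    for part in lines[-1].split(", "):  # last line: "Steps: 30, ..."
        if ": " in part:
            key, value = part.split(": ", 1)
            settings[key] = value
    prompt_lines, negative = [], ""
    for line in lines[:-1]:
        if line.startswith("Negative prompt: "):
            negative = line[len("Negative prompt: "):]
        else:
            prompt_lines.append(line)
    return {"prompt": "\n".join(prompt_lines),
            "negative": negative,
            "settings": settings}
```

Handy if you ever want to batch-catalog your own generations, but the PNG Info tab's send-to buttons cover the common case.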
In Closing
- In part 2 I'll go over installing and using the ADetailer extension for better faces, as well as creating what is called a LoRA to get a consistent face and body for a character. Part 3 will dig into video generation.
- Just this guide alone will allow you to go nuts with generating images however.