Text generation seems to leave a little to be desired. Five separate generations and “Hello World” always came out as “Hello Word.”
Don’t forget to
git pull
in your A1111 directory before loading the model!

Seems to be the upscaler. Using an older one looks great.
Anyone have any luck getting the model to work? Running it with an anime model and the outputs are either blurry or latent deep-fried.

Prompt: cute anime girl crying, face
Negative prompt: watermark, text
Steps: 20 | Seed: 2436787329 | Sampler: DPM++ 2S a Karras | CFG scale: 6 | Size: 512x512 | Parser: Full parser | Model: test15 | Model hash: d01b8e6877 | Refiner: Refiners_sd_xl_refiner_1.0 | VAE: Anything-V3.0.vae | Latent sampler: Euler a | Image CFG scale: 6 | Denoising strength: 0.3 | Refiner start: 0.8 | Secondary steps: 20 | Version: 18279db | Pipeline: Original | Operations: “hires | txt2img” | Hires upscale: 2 | Hires steps: 20 | Hires upscaler: Latent

Still kinda new to this, do we download the whole huggingface directory? Just the safetensors? Do we need the refiner too? I guess, what are the steps for “optimal installation”?
So from what I understand, all you need is the base for A1111. If you’re using comfyUI, might want to download the refiner too.
Just learned that you can generate an image with txt2img using the base model, then send it to img2img and run the refiner model at around 0.25 to 0.33 denoising strength.
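For anyone wondering why such a low strength: in img2img the denoising strength sets what fraction of the sampling schedule actually runs, so a 0.25–0.33 refiner pass only re-noises and re-denoises the tail end of the schedule. A rough sketch of that relationship (the helper name here is my own, not part of any webui API):

```python
def effective_refiner_steps(steps: int, denoising_strength: float) -> int:
    """Approximate how many sampling steps an img2img pass actually runs.

    img2img re-noises the input image to `denoising_strength` of the
    full schedule, then denoises only that tail portion of the steps.
    """
    if not 0.0 <= denoising_strength <= 1.0:
        raise ValueError("denoising_strength must be in [0, 1]")
    return max(1, round(steps * denoising_strength))

# e.g. 20 steps at 0.3 strength runs roughly 6 actual refiner steps,
# which is why the pass is fast and only touches fine detail
print(effective_refiner_steps(20, 0.3))
```

So at the 0.25–0.33 range suggested above, a 20-step refiner pass only does about 5–7 real steps, enough to clean up detail without repainting the composition.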
Okay that makes sense, and that’s just the safetensors file right? I don’t need the whole repo?
Right, you don’t need any of the other files.
What are you using? With A1111 I had the same problem, had to add the --no-half-vae flag to the command-line args in the batch file.
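For reference, on a default Windows A1111 install that flag goes on the COMMANDLINE_ARGS line in webui-user.bat (a config fragment; your existing line may already carry other flags, in which case just append it):

```bat
set COMMANDLINE_ARGS=--no-half-vae
```

On Linux/macOS the equivalent line lives in webui-user.sh as `export COMMANDLINE_ARGS="--no-half-vae"`.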