ComfyUI upscaling — tips and workflows from the community (Reddit roundup)

It upscales the second image up to 4096x4096 (4xUltraSharp) by default for simplicity, but that can be changed to whatever you like.

Does anyone have any suggestions? Would it be better to do an iterative upscale, and how about my choice of upscale model? I have almost 20 different upscale models, and I really have no idea which might be best. Still working on the whole thing, but I got the idea down.

Latent upscale it, or use a model upscale, then VAE encode it again and run it through the second sampler.

And at the end of it, I have a latent upscale step that I can't for the life of me figure out. The final steps are as follows: apply the inpaint mask, run it through a KSampler, take the latent output and send it to a latent upscaler (doing a 1.5x upscale), then on to a KSampler running 20-30 steps at 0.5 denoise. The aspect ratio of 16:9 is the same in the empty latent and anywhere else that image sizes are used. I upscaled it to a resolution of 10240x6144 px for us to examine the results.

Depending on the noise and strength, it ends up treating each tile as an individual image, so instead of one girl in an image you get ten tiny girls stitched into one giant upscaled image. It uses ControlNet tile with Ultimate SD Upscale. Hope someone can advise.

Switch the toggle to upscale, make sure to enter the right CFG, make sure randomize is off, and press queue.

The reason I haven't raised issues on any of the repos is that I am not sure where the problem actually exists: ComfyUI, Ultimate Upscale, or some other custom node entirely. I haven't been able to replicate this in Comfy.

You end up with images anyway after KSampling, so you can use those upscale nodes.

ssitu/ComfyUI_UltimateSDUpscale: ComfyUI nodes for the Ultimate Stable Diffusion Upscale script by Coyote-A.

This will allow detail to be built in during the upscale. If you use Iterative Upscale, it might be better to approach it by adding noise, using techniques like noise injection or an unsampler hook. I solved that by using only 1 step and adding multiple Iterative Upscale nodes.

You just have to use the "upscale by" node with the bicubic method and a fractional value (0.5 if you want to divide by 2) after upscaling by a model.

The workflow is kept very simple for this test: Load Image, Upscale, Save Image. No attempts to fix JPG artifacts, etc.

This means that your prompt (a.k.a. positive image conditioning) is no longer a simple text description of what should be contained in the total area of the image; it is now a specific description that applies in the area defined by the coordinates starting from x:0px y:320px to x:768px y:… Thanks.

Jan 8, 2024 · Learn how to upscale images using ComfyUI and the 4x-UltraSharp model for crystal-clear enhancements. A step-by-step guide to mastering image quality.

Two options here: you either upscale in pixel space first and then do a low-denoise second pass, or you upscale in latent space and do a high-denoise second pass. For example, if you start with a 512x512 empty latent image, then apply a 4x model and "upscale by" 0.5, you get a 1024x1024 final image (512 * 4 * 0.5 = 1024).
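To make that arithmetic concrete, here is a minimal sketch in plain Python (not a ComfyUI API) of how the final resolution falls out of a model upscale followed by an "upscale by" rescale:

```python
# Minimal sketch of the resolution math above: an upscale model multiplies
# pixel dimensions by a fixed factor, and a following "upscale by" node
# rescales that result by a fractional value.
def final_size(width: int, height: int, model_factor: float, rescale_by: float) -> tuple[int, int]:
    """Output resolution after a model upscale plus a rescale."""
    return (round(width * model_factor * rescale_by),
            round(height * model_factor * rescale_by))

# 512x512 latent -> 4x model -> "upscale by" 0.5 = 1024x1024 (512*4*0.5)
print(final_size(512, 512, model_factor=4.0, rescale_by=0.5))  # (1024, 1024)
```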
In A1111, I employed a resolution of 1280x1920 (with HiRes fix), generating 10-20 images per prompt. Subsequently, I'd cherry-pick the best one and employ Ultimate SD Upscale for a 2x upscale. Now, transitioning to Comfy, my workflow continues at the 1280x1920 resolution.

I only have 4GB VRAM, so I haven't gotten SUPIR working on my local system.

- latent upscale looks much more detailed, but gets rid of the detail of the original image.
- image upscale is less detailed, but more faithful to the image you upscale.

After 2 days of testing, I found Ultimate SD Upscale to be detrimental here. With it, I either can't get rid of visible seams, or the image is too constrained by the low denoise and so lacks detail. Instead, I use Tiled KSampler with 0.5 noise.

Tried it; it is pretty low quality, and you cannot really diverge from CFG 1 (so, no negative prompt), otherwise the picture gets baked instantly. You also cannot go higher than 512 up to 768 resolution (which is quite a bit lower than 1024 + upscale), and when you ask for slightly less rough output (4 steps) as in the paper's comparison, it gets slower.

This is done after the refined image is upscaled and encoded into a latent.

Image generated with my new, hopefully upcoming "Instantly Transfer Face By Using IP-Adapter-FaceID" full tutorial & GUI for Windows, RunPod & Kaggle, and web app.

For some context, I am trying to upscale images of an anime village, something like Ghibli style.

The issue is likely caused by a quirk in the way MultiAreaConditioning works: its sizes are defined in pixels. If this can be solved, I think it would help lots of other people who might be running into this issue without knowing it.

Jan 13, 2024 · So I was looking through the ComfyUI nodes today and noticed that there is a new one, called SD_4XUpscale_Conditioning, which adds support for x4-upscaler-ema.safetensors (SD 4X Upscale Model).

Usually I use two of my workflows: "latent upscale" and then denoising 0.5, or "upscaling with model" and then denoising 0.2 and resampling faces at 0.5. One does an image upscale and the other a latent upscale. However, I switched to the Ultimate SD Upscale custom node.

I recently started tinkering with Ultimate SD Upscaler as well as other upscale workflows in ComfyUI. Here is a workflow that I use currently with Ultimate SD Upscale. Thanks!

Hi, does anyone know if there's an Upscale Model Blend Node, like with A1111? Being able to get a mix of models in A1111 is great, where two models…

Latent upscale is different from pixel upscale.

The upscale quality is mediocre, to say the least.

Upscale x1.5 ~ x2 - no need for a model, it can be a cheap latent upscale. Sample again, denoise=0.5; you don't need that many steps. From there you can use a 4x upscale model and run the sample again at low denoise if you want higher resolution, e.g. 0.6 denoise and either CNet strength 0.5 (euler, sgm_uniform) or CNet strength 0.9, end_percent 0.9 (euler).

Maybe it doesn't seem intuitive, but it's better to go with a 4x upscaler for a 2x upscale and an 8x upscaler for a 4x upscale. Edit: also, I wouldn't recommend doing a 4x upscale using a 4x upscaler (such as 4x Siax).
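That "4x upscaler for a 2x upscale" tip boils down to running the model at its native factor and then downscaling the output. A hedged sketch of the idea, where `run_4x_model` is a placeholder for whatever actually executes the model (not a real ComfyUI call):

```python
# Sketch of using a 4x ESRGAN-style model for a 2x result, as suggested
# above: run the model at its native 4x factor, then bicubic-downscale.
# `run_4x_model` is a stand-in callable, not part of any real API.
from PIL import Image

def upscale_2x_with_4x_model(img: Image.Image, run_4x_model) -> Image.Image:
    upscaled = run_4x_model(img)                    # native 4x output
    target = (img.width * 2, img.height * 2)        # we only want 2x
    return upscaled.resize(target, Image.BICUBIC)   # 4x -> 2x downscale
```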
The standard ESRGAN 4x is a good jack of all trades that doesn't come with a crazy performance cost, and if you're low on VRAM I would expect you're using some sort of tiled upscale solution like Ultimate SD Upscale, yeah?

I'm new to ComfyUI, and I'm aware that people create amazing stuff with just prompts and detailers. And here's my first question: is one better than the other as far as final upscaled image quality?

I tried all the possible upscalers in ComfyUI (LDSR, Latent Upscale, several models such as NMKD, the Ultimate SD Upscale node, "hires fix" (yuck!), the Iterative Latent Upscale via pixel space node (mouthful)), and even bought a license from Topaz to compare the results with FastStone (which is great, btw, for this type of work).

I was just using Sytan's workflow with a few changes to some of the settings, and I replaced the last part of his workflow with a 2-step upscale using the refiner model via Ultimate SD Upscale, like you mentioned.

Really chaotic images, or images that actually benefit from added details from the prompt, can look exceptionally good at ~8.

I want to upscale my image with a model and then select the final size of it. There's "latent upscale by", but I don't want to upscale the latent image. You can also run a regular AI upscale and then a downscale (4x * 0.5) with an ESRGAN model.

They also want the details on how and why to do something, besides just a guide to load this JSON and use it. Many people also have a hard time learning from written documents and need visual learning.

The upscale not being latent creating minor distortion effects and/or artifacts makes so much sense! And latent upscaling takes longer for sure; no wonder my workflow was so fast.

And when purely upscaling, the best upscaler is called LDSR.

As my test bed, I'll be downloading the thumbnail from, say, my Facebook profile picture, which is fairly small (206x206), and then upscaling it in Photopea to 512x512 just to give me a base image that matches the 1.5 models (seems pointless to go larger).

It depends on how large the face in your original composition is. If it's a close-up, then fix the face first. If it's a distant face, then you probably don't have enough pixel area to do the fix justice. Upscale and then fix will work better here. But I probably wouldn't upscale by 4x at all if fidelity is important.

This is just a simple node build off what's given and some of the newer nodes that have come out; it's nothing spectacular, but it gives good consistent results. These comparisons are done using ComfyUI with default node settings and fixed seeds.

Sure, it comes up with new details, which is fine, even beneficial for the 2nd pass in a t2i process, since the miniature 1st pass often has some issues due to imperfections. But it's weird.

Custom nodes are Impact Pack for wildcards, rgthree because it's the shit, and Ultimate SD Upscale.

This is the image I created using ComfyUI, utilizing DreamShaperXL 1.0 Alpha + SD XL Refiner 1.0.

That's because of the model upscale.

That's because latent upscale turns the base image into noise (blur). It's why you need at least 0.5 noise. Try immediately VAEDecode after latent upscale to see what I mean.

So I made an upscale test workflow that uses the exact same latent input and destination size.

I then use a tiled ControlNet and Ultimate Upscale to upscale by 3-4x, resulting in up to 6Kx6K images that are quite crisp.

Even with ControlNets, if you simply upscale and then de-noise latents, you'll get weird artifacts, like the face in the bottom right instead of a teddy bear. Adding in an Iterative Mixing KSampler from the early work on DemoFusion produces far more spatially consistent results, as shown in the second image.

I have a custom image resizer that ensures the input image matches the output dimensions.
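For that "custom image resizer", something along these lines would do it. This is a guess at the behavior (aspect-preserving resize plus center crop), not the commenter's actual node:

```python
# Hypothetical resizer in the spirit of the comment above: force an input
# image to exactly match the output dimensions without distorting it, by
# scaling to cover the target and center-cropping the overflow.
from PIL import Image

def resize_to_match(img: Image.Image, out_w: int, out_h: int) -> Image.Image:
    scale = max(out_w / img.width, out_h / img.height)   # cover, don't fit
    resized = img.resize((round(img.width * scale), round(img.height * scale)),
                         Image.LANCZOS)
    left = (resized.width - out_w) // 2
    top = (resized.height - out_h) // 2
    return resized.crop((left, top, left + out_w, top + out_h))
```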
It works more like DLSS, tile by tile, and faster than the iterative one.

After 6 days of hard work (2 days building, 1 day testing, 2 days recording, 1 day editing, and very little sleep), I finally managed to upload this! Full tutorial in the YouTube description (it's entirely free, of course); the video goes into 1 hour of detailed instructions on how to build it yourself, because I prefer for someone to learn how to fish than to give them a fish 😂

That's exactly what I ended up planning. I'm a newbie to ComfyUI, so I set up Searge's workflow, then copied the official ComfyUI i2v workflow into it, and I pass into the node whatever image I like. The final node is where ComfyUI takes those images and turns them into a video.

Aug 5, 2024 · Flux has been out for under a week and we're already seeing some great innovation in the open-source community. I was working on exploring and putting together my guide on running Flux on RunPod ($0.34 per hour) and discovered this workflow by @plasm0 that runs locally and supports upscaling as well.

The only way I can think of is to just use Upscale Image Model (4xUltraSharp), get my image to 4096, and then downscale with nearest-exact back to 1500.

Hey folks, lately I have been getting into the whole ComfyUI thing and trying different things out. Now I have made a workflow that has an upscaler in it, and it works fine; the only thing is that it upscales everything, and that is not worth the wait with most outputs. So my question is: is there a way to upscale an already-existing image in Comfy, or do I need to do that in A1111?

Grab the image from your file folder and drag it onto the ComfyUI window. It will replicate the image's workflow and seed.

I wanted to know what difference they make, and they do! Credit to Sytan's SDXL workflow, which I reverse-engineered, mostly because I'm new to ComfyUI and wanted to figure it all out.

Thanks for all your comments. Latent quality is better, but the final image deviates significantly from the initial generation.

Here are details on the workflow I created: this is an img2img method where I use the BLIP Model Loader from WAS to set the positive caption.

I did once get some noise I didn't like, but I rebooted and all was good on the second try.

Also, both have a denoise value that drastically changes the result. Look at this workflow:

"Latent upscale" is an operation in latent space, and I don't know any way to use the model mentioned above in latent space.

Where a 2x upscale at 30 steps took me ~2 minutes, a 4x upscale took 15, and this is with tiling, so my VRAM usage was moderate in all cases. Ugh.

PS: If someone has access to Magnific AI, can you please upscale and post the result for 256x384 (5 JPG quality) and 256x384 (0 JPG quality)?

u/wolowhatever - we set 5 as the default, but it really depends on the image and image style tbh; I tend to find that most images work well around a Freedom of 3.

I generate an image that I like, then mute the first KSampler, unmute the Ultimate SD Upscaler, and upscale from that. Both of these are of similar speed.

I like how IPAdapter with masking allows me to not have to write detailed prompts, and yet it still maintains the fidelity of the subject and background, or any other masked elements for that matter.

Also, Ultimate SD Upscale is a node too; if you don't have enough VRAM, it tiles the image so that you don't run out of memory. The downside is that it takes a very long time.
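That tiling trick is why tiled upscalers fit in low VRAM: only one tile needs to be processed at a time. A rough illustration of the principle follows; real nodes overlap tiles and blend the borders (which this toy version skips, hence the seam complaints elsewhere in the thread), and `upscale_tile` is a stand-in callable:

```python
# Toy illustration of tiled upscaling: process one tile at a time and
# stitch the results, so peak memory tracks the tile size rather than
# the full image size. Real implementations overlap and blend tiles.
from PIL import Image

def upscale_tiled(img: Image.Image, upscale_tile, tile: int = 512, factor: int = 4) -> Image.Image:
    out = Image.new("RGB", (img.width * factor, img.height * factor))
    for y in range(0, img.height, tile):
        for x in range(0, img.width, tile):
            patch = img.crop((x, y, min(x + tile, img.width), min(y + tile, img.height)))
            out.paste(upscale_tile(patch), (x * factor, y * factor))  # one tile in memory at a time
    return out
```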
There are also "face detailer" workflows for faces specifically. I too use SUPIR, but just to sharpen my images on the first pass. Also with good results. Hello, I did some testing of KSampler schedulers used during an upscale pass in ComfyUI. On my 12GB 3060, A1111 can't generate a single SDXL 1024x1024 image without using RAM for VRAM at some point near the end of generation, even with --medvram set. 5=1024). Please share your tips, tricks, and… Welcome to the unofficial ComfyUI subreddit. Adding in Iterative Mixing KSampler from the early work on DemoFusion produces far more spatially consistent results as shown in the second image. There is a face detailer node. (206x206) when I'm then upscaling in photopea to 512x512 just to give me a base image that matches the 1. I had the same problem and those steps tanks performances as well. Upscale to 2x and 4x in multi-steps, both with and without sampler (all images are saved) Multiple LORAs can be added and easily turned on/off (currently configured for up to three LORAs, but it can easily add more) Details and bad-hands LORAs loaded I use it with dreamshaperXL mostly and works like a charm. The final node is where comfyui take those images and turn it into a video. If you want more details latent upscale is better, and of course noise injection will let more details in (you need noises in order to diffuse into details). 2 This is a community to share and discuss 3D photogrammetry modeling. It's high quality, and easy to control the amount of detail added, using control scale and restore cfg, but it slows down at higher scales faster than ultimate SD upscale does. 43 votes, 16 comments. It's why you need at least 0. so my question is, is there a way to upscale a already existing image in comfy or do i need to do that in a1111? I try to use comfyUI to upscale (use SDXL 1. Please share your tips, tricks, and workflows for using this… second pic. 5 if you want to divide by 2) after upscaling by a model. 9, end_percent 0. Like many XL users out there, I’m also new to ComfyUI and very much just a beginner in this regard. szegdwr lvhy eshviu ybrbqk pfndnw efvdl nndjj kfvtfg gcxody tzeqr
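Several comments in this thread advocate upscaling in multiple smaller passes (roughly 1.5x-2x each, with a low-denoise re-sample between passes) rather than one big jump. A purely illustrative helper that plans the intermediate sizes for such a multi-step upscale:

```python
# Plan intermediate sizes for a multi-pass upscale: step up by ~1.5x per
# pass (each pass followed by a low-denoise re-sample) until the target
# resolution is reached.
def plan_passes(start: int, target: int, per_pass: float = 1.5) -> list[int]:
    sizes = [start]
    while sizes[-1] < target:
        sizes.append(min(round(sizes[-1] * per_pass), target))
    return sizes

print(plan_passes(512, 2048))  # [512, 768, 1152, 1728, 2048]
```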