Drawing masks in ComfyUI: a Reddit digest

Drawing masks in ComfyUI comes up constantly; the snippets below are a digest of Reddit questions and answers on the topic.

And I never know what ControlNet model to use.

In fact, there's a lot of inpainting stuff you can do with ComfyUI that you can't do with Automatic1111.

I suppose that does work for quick and dirty masks.

This will take our sketch image and crop it down to just the drawing in the first box.

Thanks. Everyone always asks about inpainting at full resolution; ComfyUI by default inpaints at the same resolution as the base image, since it does full-frame generation using masks.

It doesn't replace the image (although that might seem to be what it's doing visually); it's saving a separate channel with that mask, so you get two outputs (image and mask) from that one node.

In addition to whole-image inpainting and mask-only inpainting, I also have workflows that…

I was wondering if there is any way to create masks in depth in ComfyUI.

If you do a search for detailer, you will find both SEGS detailer and mask detailer.

As I can't draw the second mask on the result of the first character image (the goal is to do it in one workflow), I draw it on the original picture and send this mask only into the new VAE Encode (for Inpainting). If something is off, I can redraw the masks as needed, one by one or only one.

Outline Mask: unfortunately, it doesn't work well, because apparently you can't just inpaint a mask; by default, you also end up painting the area around it, so the subject still loses detail. IPAdapter: if you have to regenerate the subject or the background from scratch, it invariably loses too much likeness. Still experimenting with it, though.

Does anyone else notice that you cannot mask the very bottom of the image with the right-click masking option? And I'm not talking about the mouse not being able to 'mask' it there. Even if you set the size of the masking circle to max and go over it close enough so that it appears to be fully masked, if you actually save it to the node and…

You can paint all the way down or the sides.

One thing about human faces is that they are all unique.

…86s/it on a 4070 with the 25 frame model, 2.75s/it with the 14 frame model.

Uh, your seed is set to random on the first sampler.

You can choose your preferred drawing software, like Procreate on an iPad, and then import the doodled image into ComfyUI. Try drawing them over a black background, though, not a white background.

You can do it with Masquerade nodes.

"SEGS" is the format that Impact Pack uses to bundle masks with additional information.

But one thing I've noticed is that the image outside of the mask isn't identical to the input. Does anyone know why? I would have guessed that only the area inside of the mask would be modified. It depends how you made the mask in the first place.

InvokeAI has a super comfortable and easy-to-use regional prompter that's based on simply drawing; I was wondering if there's such a thing in ComfyUI, even as an external node?

suuuuup :D So, with Set Latent Noise Mask, it is trying to turn that blue/white sky into a spaceship, and this may not be enough for it; a higher denoise value is more likely to work in this instance. Also, if you want to creatively inpaint, inpainting models are not as good, as they want to use what exists to make an image more than a normal model does.

I have this working; however, to mask the upper layers after the initial sampling, I VAE-decode them and use rembg, then convert that to a latent mask. It's not that slow, but I was wondering if there was a more direct latent-with-'fog'-background -> latent mask node somewhere. Edit: and rembg fails on closed shapes, so it's not ideal.
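For anyone who wants the gist of that rembg-to-latent-mask step outside the node graph, here is a minimal plain-Python sketch (not the actual node code; the file name is a placeholder, and it assumes rembg, Pillow, and numpy are installed):

    # Sketch of "rembg -> mask -> latent mask" from the comment above.
    import numpy as np
    from PIL import Image
    from rembg import remove

    layer = Image.open("layer.png").convert("RGBA")
    cutout = remove(layer)                    # rembg returns the subject with an alpha channel

    alpha = np.array(cutout)[..., 3]          # alpha channel, 0..255
    mask = (alpha > 127).astype(np.float32)   # hard threshold; rembg edges can be soft

    # SD latents are 1/8 of the pixel resolution, so shrink the mask to match
    # before using it as a latent noise mask.
    h, w = mask.shape
    small = Image.fromarray((mask * 255).astype(np.uint8)).resize((w // 8, h // 8))
    latent_mask = np.array(small).astype(np.float32) / 255.0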
Regional prompting makes that rather simple, all in one image, with multiple hand-drawn masks all in app (my most complicated involved 8 hand-drawn masks); sure, I can paint a mask with an outside app, but why would I bother when it's built into Automatic1111?

You can see how easily and effectively the size/placement of the subject can be controlled simply by drawing a new mask. I make them 512x512, but the size isn't important.

I need to combine 4-5 masks into 1 big mask for inpainting.

Import the image at the Load Image node.

To blend the image and scroll naturally, I created a Border Mask on top: Mask.png. Finally, the story text image output from module 9 was pasted on the right side of the image.

Release: AP Workflow 7.0 for ComfyUI - now with support for Stable Diffusion Video, a better Upscaler, a new Caption Generator, a new Inpainter (with inpainting/outpainting masks), a new Watermarker, and support for Kohya Deep Shrink, Self-Attention, StyleAligned, Perp-Neg, and IPAdapter attention masking.

Txt-to-img, img-to-img, inpainting, outpainting, image upscale, latent upscale, multiple characters at once, LoRAs, ControlNet, IP-Adapter, but also video generation, pixelization, 360 image generation, and even live painting! It includes literally everything possible with AI image generation.

I don't know if there is a node for it (yet?) in ComfyUI, but I imagine that under the hood it would take each colored region and make a mask of each color, then use attention coupling on each mask with the associated regional prompt.

Any way to paint a mask inside Comfy, or is there no choice but to use an external image editor? It's not released yet, but I just finished 80% of the features.

This workflow, combined with Photoshop, is very useful for:
- Drawing specific details (tattoos, a special haircut, clothes patterns, …)
Load the upscaled image into the workflow, use ComfyShop to draw a mask, and inpaint.

What is the rationale behind the drawing of the mask? I don't want to break my drawing/painting workflow by editing csv files and calculating rectangle areas.

For these workflows we use mostly DreamShaper Inpainting. A somewhat decent inpainting workflow in ComfyUI can be a pain in the ass to make. It needs a better quick start to get people rolling. The first issue is the biggest for me, though.

Turns out drawing "^"-shaped masks seems to work a bit better than rectangles (especially for smaller masks), because it implies the leg positioning.

I think the latter, combined with Area Composition and ControlNet, will do what you want.

Learn how to master inpainting on large images using ComfyUI and Stable Diffusion. Imagine you have a 1000px image with a circular mask that's about 300px.

Feed this over to a "Bounded Image Crop with Mask" node, using our sketch image as the source with zero padding.

It animates 16 frames and uses the looping context options to make a video that loops. <edit 2> Actually, now I understand what it's doing.

Use a "Mask from Color" node and set it to your first frame color. In this example, it will be 255 0 0 (see the second sketch below).

[Load image] -> [resize to match image being generated] -> [image-to-mask] -> [gaussian blur mask] to soften edges. Then use [invert mask] to make a mask that is the exact opposite, and [solid mask] to make a pure white mask.
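That bracketed chain translates almost one-to-one into plain Python, with PIL standing in for the individual nodes (a rough sketch; the file name and target size are placeholders):

    from PIL import Image, ImageFilter, ImageOps

    mask = Image.open("mask.png").convert("L")
    mask = mask.resize((1024, 1024))                      # match the image being generated

    soft_mask = mask.filter(ImageFilter.GaussianBlur(8))  # soften the mask edges
    inverted = ImageOps.invert(soft_mask)                 # the exact opposite mask
    solid_white = Image.new("L", mask.size, 255)          # a pure white mask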
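And the "Mask from Color" tip above, sketched the same way: select every pixel close to the frame color, 255 0 0 in the example (the tolerance parameter is my assumption, there to absorb anti-aliased edges):

    import numpy as np
    from PIL import Image

    frame = np.array(Image.open("first_frame.png").convert("RGB")).astype(np.int16)

    def mask_from_color(rgb, color, tolerance=10):
        # binary mask of pixels within `tolerance` of `color` on every channel
        return (np.abs(rgb - np.array(color)).max(axis=-1) <= tolerance).astype(np.float32)

    red_mask = mask_from_color(frame, (255, 0, 0))  # the first frame color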
I am working on a piece which requires me to have a mask which reveals a texture. At least that's what I think. Here I add one of my PNGs so you can see the whole workflow. Here I come up against two problems:

Look into Area Composition (comes with ComfyUI by default), GLIGEN (an alternative area composition), and IPAdapter (a custom node on GitHub, available for manual or ComfyUI Manager installation).

If you're using the built-in mask editor, just use a small brush and put dots outside the area you already masked. They don't have to literally be single pixels, just small.

Yeah, there are tools that do this; I can't check them right now, but I can later if you remind me.

This workflow generates an image with SD1.5, then uses Grounding Dino to mask portions of the image to animate with AnimateLCM. Seems very hit and miss; most of what I'm getting looks like 2D camera pans.

Thanks.

One mask after the other.

There are many detailer nodes, not just FaceDetailer. You can also select non-face bbox models, and FaceDetailer will detail hands, etc. Mask detailer allows you to simply draw where you want it to apply the detailing.

Would you please show how I can do this?

So from what I can tell, ComfyUI seems to be vastly more powerful than even Draw Things (which has a lot of configuration settings). Is this more or less accurate? While obviously it seems like ComfyUI has a big learning curve, my goal is to actually make pretty decent stuff, so if I have to put the time investment into Comfy, that's fine by me.

ComfyUI is not supposed to reproduce A1111 behaviour.

Reproducing the behavior of the most popular SD implementation (and then surpassing it) would be a very compelling goal, I would think. I found the documentation for ComfyUI to be quite poor when I was learning it.

I want to create a mask which follows the contours of the subject (a lady in my case).

The method is very simple; you still need to use the ControlNet model, but now you will import your hand-drawn draft.

I kinda fake it by loading any image, then drawing a mask on it, then converting the mask to an image and sending that image to ControlNet.

I learned about MeshGraphormer from this YouTube video by Scott Detweiler, but felt like simple inpainting does not do the trick for me, especially with SDXL. But basically, if you are doing manual inpainting, make sure that the sampler producing your inpainting image is set to fixed; that way it does inpainting on the same image you use for masking. Wanted to share my approach to generate multiple hand-fix options and then choose the best.

I'm not sure exactly what it stores, but I always draw a mask, send it to MaskToSEGS, where I can set the crop factor to determine the region used for context, then on to SEGS Detailer.

Step One: Image Loading and Mask Drawing. Step Two: Building the ComfyUI Partial Redrawing Workflow. For the specific workflow, please download the workflow file attached to this article and run it.

I use the "Load Image" node and "Open in MaskEditor" to draw my masks. Is there a "drawing" node for ComfyUI that would be a bit more user-friendly? Like the ability to zoom in on parts you are drawing on, colors, etc. The mask editor sucks.

A way to draw inside ComfyUI? Are there any nodes for sketching/drawing directly in ComfyUI? Of course you can always take things into an external program like Photoshop, but I want to try drawing simple shapes for ControlNet, or painting simple edits, before putting things into inpaint.

Basically, though, you'd be using a mask: you'd right-click on the load images and draw the mask, and then there is a node to snip it and stitch it back in … pretty sure the node was something like "stitch". A transparent PNG in the original size with only the newly inpainted part will be generated.

Release: AP Workflow 8.0 for ComfyUI - now with a next-gen upscaler (competitive against Magnific AI and Topaz Gigapixel!) and higher-quality mask inpainting with the Fooocus inpaint model.

The flow is in shambles right now, so I'll just share this screengrab.

In ComfyUI, the easiest way to apply a mask for inpainting is:
- use the "Load Checkpoint" node to load a model
- use the "Load Image" node to load a source image to modify
- use the "Load Image (as Mask)" node to load the grayscale mask image, specifying "channel" as "red"

So far (Bitwise mask + mask) takes only 2 masks, and I use auto-detect, so the mask count can run from 5 to 10 masks.
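Conceptually, that bitwise mask-plus-mask combine is just an elementwise union, so merging the 4-5 masks mentioned earlier takes a few lines of numpy (a sketch with placeholder file names, not the Impact Pack implementation):

    import numpy as np
    from PIL import Image

    paths = ["mask_1.png", "mask_2.png", "mask_3.png", "mask_4.png", "mask_5.png"]
    masks = [np.array(Image.open(p).convert("L"), dtype=np.float32) / 255.0 for p in paths]

    combined = masks[0]
    for m in masks[1:]:
        combined = np.maximum(combined, m)  # union of the masked areas

    Image.fromarray((combined * 255).astype(np.uint8)).save("combined_mask.png")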
How can I draw regional prompting like InvokeAI's regional prompting (control layers), which allows drawing the regions rather than typing in numbers? Title says it all.

The workflow that was replaced: when Canvas_Tab came out, it was awesome. But when the Krita plugin happened, I switched to that. The Krita plugin is great, but the nodal-soup part isn't there, so I can't change some things.

Combine both methods: gen, draw, gen, draw, gen! Always check the inputs, disable the KSamplers you don't intend to use, and make sure to have the same resolution in Photoshop as in ComfyUI.

💡 Tip: Most of the image nodes integrate a mask editor.

comfyui_facetools: these custom nodes provide rotation-aware face extraction, paste-back, and various face-related masking options. This node pack was created as a dependency-free library before the ComfyUI Manager made installing dependencies easy for end-users. Unless you specifically need a library without dependencies, I recommend using Impact Pack instead; it's a more feature-rich and well-maintained alternative for dealing…

I believe it does mostly the same things as OP's node. Overall, I've had great success using this node to do a simple inpainting workflow.

For example, the Adetailer extension automatically detects faces, masks them, creates new faces, and scales them to fit the masks.

An alternative is Impact Pack's detailer node, which can do upscaled inpainting to give you more resolution, but this can easily end up giving you more detail than the rest of… As for the rest, if memory serves, the mask segm custom node has a couple of extra install steps which are easy to follow, and if you load the workflow and see redded-out nodes, just go to the ComfyUI node manager in the side float menu, click install missing nodes, then reset and you should be good to go.

Hello, ComfyUI community! I'm seeking advice on improving the background-removal process in images. Currently, there are many extensions (custom nodes) available for background removal in ComfyUI, such as Easy-use, mixlab, WAS-node-suite, Inspyrenet-Rembg, and others. In fact, from inpainting to face replacement, the usage of masks is prevalent in SD. Yet, there is no mask node as a common denominator node.

What else is out there for drawing/painting a latent to be fed into ComfyUI other than the Photoshop one(s)?

Create a black and white image that will be the mask. And you can't use soft brushes. (And if you wanted 4 masks in one image, draw over a transparent background in a .png file, and then R, G, B, and alpha can all mask different areas. That way, if you take just the red channel from the mask, it'll give you just the red man, and not the background.)
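The four-masks-in-one-PNG trick from that parenthetical is easy to demonstrate in plain Python (assuming each region really was painted in a single pure channel color over transparency; the file name is a placeholder):

    from PIL import Image

    layered = Image.open("four_masks.png").convert("RGBA")
    r, g, b, a = layered.split()    # each channel becomes its own grayscale mask

    # e.g. the "red man": keep only strongly red pixels as a hard mask
    red_mask = r.point(lambda v: 255 if v > 127 else 0)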
This comprehensive tutorial covers 10 vital steps, including cropping, mask detection, sampler erasure, mask fine-tuning, and streamlined inpainting, for incredible results.

For some reason this isn't possible. TLDR, workflow: link.

So, has someone…

I think it's hard to tell what you think is wrong.

It took me hours to get one I'm more or less happy with, where I feather the mask (feather nodes usually don't work how I want them to, so I use mask2image, blur the image, then image2mask), use 'only masked area' where it also applies to the ControlNet (applying it to the ControlNet was probably the worst part), and…

TLDR: THE LAB EVOLVED is an intuitive, ALL-IN-ONE workflow. After completing all the integrations, I output via AnythingAnywhere.

Below is the effect image generated by the AI after I imported a simple bedroom line drawing:

Right-click on any image and select Open in Mask Editor. Use the mask tool to draw on specific areas, then use it as input to subsequent nodes for redrawing.

My mask images.

Inpaint is pretty buggy when drawing masks in A1111.

If you spend more than a few days in ComfyUI, you will recognize that there is nothing here that cannot be done with the already-available nodes.

I want to be able to use Canny and Ultimate SD Upscale while inpainting, AND I want to be able to increase batch size.

Alternatively, you can create an alpha mask in any photo-editing software. Layer copy & paste this PNG on top of the original in your go-to image editing software.
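That final layer-and-paste step can also be scripted instead of done by hand in an editor; a small sketch, assuming the inpainted part was saved as a transparent PNG at the original size:

    from PIL import Image

    original = Image.open("original.png").convert("RGBA")
    patch = Image.open("inpainted_part.png").convert("RGBA")  # transparent except the new part

    original.alpha_composite(patch)   # layer the patch using its own alpha
    original.convert("RGB").save("final.png")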