ComfyUI: loading workflows from images (examples collected from Reddit)


Welcome to the unofficial ComfyUI subreddit. Please share your tips, tricks, and workflows for using this software to create your AI art. Please keep posted images SFW. A lot of people are just discovering this technology and want to show off what they created, so be nice; belittling their efforts will get you banned.

The title says it all: after launching a few batches of low-res images, I'd like to upscale all the good results. Until now I was launching a pipeline on each image one by one, but is it possible to have an automatic iterative task to do this? I would give the input directory and the pipeline would run by itself on each image.

Hey all, I'm attempting to replicate my workflow from 1111 and SD1.5 by using XL in Comfy. Thanks a lot for sharing the workflow. I found one that doesn't use SDXL but can't find any others; I'm struggling to find a workflow that allows image/img input into ComfyUI that uses SDXL.

With, for instance, a graph like this one you can tell it to: load this model, put these bits of text into the CLIP encoder, make an empty latent image, use the loaded model with the embedded text and the noisy latent to sample the image, and save the resulting image. The graph that contains all of this information is referred to as a workflow in Comfy.

This repo contains examples of what is achievable with ComfyUI; see the full list on github.com. All the images in this repo contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create the image. You can find the workflow here and the full image with metadata here. You can also easily upload & share your own ComfyUI workflows, so that others can build on top of them! :) Why I built this: I just started learning ComfyUI, and I really like how it saves the workflow info within each image it generates. You can load these images in ComfyUI to get the full workflow.

Load Image List From Dir (Inspire): this is the node you are looking for. Ensure that you use this node and not Load Image Batch From Dir; that node will try to send all the images in at once, usually leading to 'out of memory' issues. Then restart ComfyUI.

I am trying to understand how it works and created an animation morphing between 2 image inputs. Now the problem I am facing is that it starts already morphed between the 2, I guess because it happens so quickly.

This is my workflow (downloaded from github and modified): my workflow; there are others, but this one is mine :p. Here I modified it from the official ComfyUI site, just a simple effort to make it fit perfectly on a 16:9 monitor.

Mute the two Save Image nodes in Group E and click Queue Prompt to generate a batch of 4 image previews in Group B. Use the Latent Selector node in Group B to input a choice of images to upscale: enter 1, 2, 3, and/or 4, separated by commas. Note the Image Selector node in Group D. Un-mute either one or both of the Save Image nodes in Group E. Really happy with how this is working. If you need help, drop me a DM and I can customise the workflow for you.

I recently started to learn ComfyUI and found this workflow from Olivio, and I'm looking for something that does a similar thing, but that can instead start with an SD or real image as an input. Is there something like this for ComfyUI, including SDXL?

Aug 7, 2023 · You can't just grab random images and get workflows: ComfyUI does not 'guess' how an image got created. Download this workflow picture. All of my images that I've generated with this workflow have this mistake now; I can confirm that the other fields are correctly pasted in when I drag-and-drop (or load) the image into ComfyUI.

The image blank can be used to copy (clipspace) to both the load image nodes; from there you just paint your masks, set your prompts (only the base negative prompt is used in this flow) and go.
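As an aside, the text-to-image graph described above (checkpoint, two CLIP text encodes, empty latent, sampler, save) maps directly onto the JSON that ComfyUI's API consumes. Here is a minimal sketch written as a Python dict; the node ids, checkpoint filename, prompts, and sizes are placeholders, while the class and input names follow the stock ComfyUI nodes:

```python
# A text-to-image workflow in ComfyUI's API format. Each key is a node id;
# a value like ["4", 1] is a link meaning "output slot 1 of node 4".
workflow = {
    "4": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},  # placeholder
    "6": {"class_type": "CLIPTextEncode",                # positive prompt
          "inputs": {"text": "a photo of a ferret", "clip": ["4", 1]}},
    "7": {"class_type": "CLIPTextEncode",                # negative prompt
          "inputs": {"text": "blurry, low quality", "clip": ["4", 1]}},
    "5": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "3": {"class_type": "KSampler",
          "inputs": {"seed": 42, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 1.0, "model": ["4", 0],
                     "positive": ["6", 0], "negative": ["7", 0],
                     "latent_image": ["5", 0]}},
    "8": {"class_type": "VAEDecode",
          "inputs": {"samples": ["3", 0], "vae": ["4", 2]}},
    "9": {"class_type": "SaveImage",
          "inputs": {"filename_prefix": "ComfyUI", "images": ["8", 0]}},
}
```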
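And on the point about the workflow info being saved within each image: ComfyUI writes the graph into the PNG's text chunks, so you can pull it back out without opening the UI at all. A small sketch with Pillow; the "workflow" and "prompt" key names are what current ComfyUI builds embed (treat them as assumptions on older builds), and the filename is just an example:

```python
import json
from PIL import Image  # pip install Pillow

def extract_workflow(png_path: str) -> dict:
    """Return the workflow graph that ComfyUI embedded in a generated PNG."""
    info = Image.open(png_path).info
    # ComfyUI saves two text chunks: "workflow" (the editable graph) and
    # "prompt" (the executed node inputs). Assumed key names.
    raw = info.get("workflow")
    if raw is None:
        raise ValueError(f"{png_path} has no embedded workflow")
    return json.loads(raw)

wf = extract_workflow("ComfyUI_00001_.png")  # example filename
print(len(wf.get("nodes", [])), "nodes in the graph")
```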
For reference, the keyboard shortcuts are:

Ctrl + S: Save workflow
Ctrl + O: Load workflow
Ctrl + A: Select all nodes
Alt + C: Collapse/uncollapse selected nodes
Ctrl + M: Mute/unmute selected nodes
Ctrl + B: Bypass selected nodes (acts like the node was removed from the graph and the wires reconnected through)
Delete/Backspace: Delete selected nodes
Ctrl + Backspace: Delete the current graph

This is a super cool ComfyUI workflow that lets us brush over PARTS of an image, click generate, and out pops an mp4 with the brushed-over parts animated! This is handy for a bunch of stuff like marketing flyers, because it can animate parts of an image while leaving other areas, like text, untouched.

In 1111, using image to image, you can batch load all frames of a video, batch load controlnet images, or even masks, and as long as they share the same name as the main video frames they will be associated with the image when batch processing.

The Load Image node can be used to load an image. Images can be uploaded by starting the file dialog or by dropping an image onto the node. Once the image has been uploaded, it can be selected inside the node.

Hello fellow ComfyUI users, this is my workflow for testing different methods to improve image resolution. Along with the normal image preview, the other methods are: Latent Upscaled 2x, and Hires fix 2x (two-pass image). It is a simple way to compare these methods; it is a bit messy, as I have no artistic cell in my body. Surprisingly, I got the most realistic images of all so far. Still though, it's awesome! I couldn't get the GPEN-BFR-512.pth model to work though (but it was going fast), so I switched it to GFPGAN1.4 and it worked fine.

AP Workflow 9.0 for ComfyUI. I originally wanted to release 9.0 with support for the new Stable Diffusion 3, but it was way too optimistic. While waiting for it, as always, the amount of new features and changes snowballed to the point that I must release it as is.

Workflow image with generated image: the workflow in the example is passed to the script as an inline string, but it's better (and more flexible) to have your Python script load it from a file instead. Note that API workflows are not the same format as an image workflow: you create the workflow in ComfyUI and use the "Save (API Format)" button that appears under the regular Save button once dev mode options are enabled in the settings.

WAS suite has some workflow stuff in its github links somewhere as well, or you can find workflows through searching reddit; the ComfyUI manual needs updating imo. The best workflow examples are through the github examples pages.

Oct 25, 2023 · I tried loading a workflow I made earlier today with the new update pulled, and it generated an image of a dog when the prompt indicated ferret, the same image of a dog it had generated before, and it did this as I decoded/encoded with a different model, generating one with a "ferret" with the third model. And it's hard to find other people asking this question on here.
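For the "load it from a file" point, and for the earlier question about running the pipeline automatically over a whole input directory, something like this sketch works against ComfyUI's /prompt endpoint. The node id of the LoadImage node is an assumption (look it up in your own exported API-format JSON), and the images are assumed to already sit in ComfyUI's input folder:

```python
import json
import urllib.request
from pathlib import Path

COMFY_URL = "http://127.0.0.1:8188"   # default local ComfyUI address
LOAD_IMAGE_NODE_ID = "10"             # assumption: id of the LoadImage node in your API JSON

def queue_prompt(workflow: dict) -> None:
    """POST one job to ComfyUI's /prompt endpoint."""
    data = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(f"{COMFY_URL}/prompt", data=data,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req).read()

# Load the workflow from a file instead of an inline string.
workflow = json.loads(Path("my_workflow_api.json").read_text())

# Queue the same pipeline once per image in a directory. Assumes the
# files are already in ComfyUI's input folder on the server.
for image_path in sorted(Path("input").glob("*.png")):
    workflow[LOAD_IMAGE_NODE_ID]["inputs"]["image"] = image_path.name
    queue_prompt(workflow)
```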
Pro-tip: insert a WD-14 or a BLIP interrogation node after it to automate the prompting for each image. With controlnet I can input an image and begin working on it.

Jun 4, 2024 · I'd like to send an image into a workflow running on a server. In order to avoid sending it via ssh and then loading it with the LoadImage node, I'd like to somehow send it using the API and the connection I'm establishing between my computer and the server.

TLDR of video: first part he uses RevAnimated to generate an anime picture with Rev's styling, then it passes this image/prompt/etc to a second sampler, but…

It would be great if there was a simple, tidy UI workflow for ComfyUI for SDXL.

Initial Input block - will load images in two ways: 1) direct load from HDD, 2) load from a folder (picks the next image when one is generated). Prediffusion - this creates a very basic image from a simple prompt and sends it as a source.

If you have the SDXL 0.9 leaked repo, you can read the README.md file yourself and see that the refiner is in fact intended as img2img, basically as you see being done in the ComfyUI example workflow someone posted. It's just not intended as an upscale from the resolution used in the base model stage.

You should now be able to load the workflow, which is here. You know the WebUI can load generation data from a JPEG file, which means the JPEG contains the parameters used to generate the image. Basically, if you have a really good photo but no longer have the workflow used to create it, you can just load the image and it'll load the workflow. Download & drop any image from the website into ComfyUI, and ComfyUI will load that image's entire workflow. Just drag-and-drop images/config to the ComfyUI web interface to get this 16:9 SDXL workflow. Details on how to use the workflow are in the workflow link.

The controlnet for depth would help if you want the body to be posed the same as the reference image. But you don't really need it: just connecting the reference image to both the InstantID image and kps inputs would suffice, since the kps keeps the face's orientation.

These are examples demonstrating how to do img2img. Img2Img works by loading an image, like this example image, converting it to latent space with the VAE, and then sampling on it with a denoise lower than 1.0. Yes, of course you can do this with an existing image: you can modify the first part of the workflow, which generates an image, remove it, and replace it with a Load Image node.

Just base sampler and upscaler; no LoRAs, no fancy detailing (apart from face detailing).

Drop it on your ComfyUI (alternatively, load this workflow JSON file). Load the two openpose pictures in the corresponding image loaders, load a face picture in the IPAdapter image loader, check the checkpoint and VAE loaders, and use the "Common positive prompt" node to add a prompt prefix to all the tiles. Enjoy!

Does anyone know how to fix this? Thanks a lot in advance! EDIT1: Just tested with another workflow (https://github-production-user-asset-6210df.s3…).

Wish moving the masked image to composite over the other image was easier, or had a live preview, instead of queueing it for generation, cancelling, moving it a bit more, etc.
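On the question above about sending an image to a remote ComfyUI server over the API: the server exposes the same upload endpoint the web UI uses, so ssh isn't needed. A sketch using the requests library; the /upload/image route and the multipart field name "image" are what recent ComfyUI builds accept, but treat them as assumptions if your build is old:

```python
import requests  # pip install requests

COMFY_URL = "http://127.0.0.1:8188"  # swap in your server's address

def upload_image(path: str) -> dict:
    """Push a local image into the server's input folder over HTTP."""
    with open(path, "rb") as f:
        resp = requests.post(f"{COMFY_URL}/upload/image", files={"image": f})
    resp.raise_for_status()
    return resp.json()  # e.g. {"name": "photo.png", "subfolder": "", "type": "input"}

info = upload_image("photo.png")  # example local file
# Plug the stored name into your workflow's LoadImage node, then queue it:
# workflow[LOAD_IMAGE_NODE_ID]["inputs"]["image"] = info["name"]
print(info["name"])
```

The returned name can then go straight into the "image" input of a LoadImage node before you POST the workflow to /prompt, as in the earlier sketch.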
This workflow allows you to load images of an AI Avatar's face, shirt, pants, and shoes, plus a pose, and generates a fashion image based on your prompt. It chains together multiple IPAdapters, which allows you to change one piece of the AI Avatar's clothing individually. I put the workflow to the test by creating people with hands etc., and it got very good results.

If I add another Load LoRA node between the refiner model and the KSampler for the refiner model, I get exactly the same results, so my guess is that I'm doing something wrong, or that the refiner model does not work with (or is not affected by) LoRAs.