Stephan Tual - Former CCO Ethereum


How to get perfect upscales with SUPIR. I tried answering all the questions you sent me in DMs :)

SUPIR workflow: https://flowt.ai/community/supir-v2-plugandplay-edition-n5acf-v

▬ TIMESTAMPS ▬▬▬▬▬▬▬▬▬▬▬▬
00:00 Intro
01:16 SUPIR upscales complete guide
11:04 How to restore compressed images
15:38 Memetastic 1500% upscale
16:44 Upscaling AI generated images
19:28 Restoring old, damaged photographs
22:04 Adding details to an image like Magnific

▬ SOCIALS/CONTACT/HIRE ▬▬▬▬▬▬▬▬▬▬▬▬
Discord: https://discord.gg/aJ32TNMnSM
All socials: https://linktr.ee/stephantual
Hire Actual Aliens: https://www.ursium.ai/

▬ LINKS REFERENCED ▬▬▬▬▬▬▬▬▬▬▬▬
S_noise and DPMPP_ETA: https://sandner.art/sigma-and-eta-noise-image-variations-in-latent-space-and-fast-detail-effects/

Came for a config file, stayed for the hundreds of cute animals.

Workflow & Models for download (or Plug&Play cloud) at:
https://flowt.ai/community/sdxs-0.09-second-per-image-bjabe-v

▬ TIMESTAMPS ▬▬▬▬▬▬▬▬▬▬▬▬
00:00 Introduction to the Kittens
01:09 Autoqueuing Bears
02:31 Running comfyUI in verbose mode
03:00 Editing the YAML
04:09 Running the clean instance
04:45 Re-testing - 100 bears in 12 seconds!
05:05 Absolute madness begins
06:25 Before and after comparison

▬ SOCIALS/CONTACT/HIRE ▬▬▬▬▬▬▬▬▬▬▬▬
Discord: https://discord.gg/aJ32TNMnSM
All socials: https://linktr.ee/stephantual
Hire Actual Aliens: https://www.ursium.ai/

▬ LINKS REFERENCED ▬▬▬▬▬▬▬▬▬▬▬▬
https://github.com/IDKiro/SDXS

Restore old images, discover new details, or just upscale to infinity.
Now available in a Plug&Play format on flowt.ai (thanks guys!)

Use now or download for free:
https://flowt.ai/community/supir-v2-plugandplay-edition-n5acf-v
If you want to try the big red car image, I placed it at:
https://mega.nz/file/pshSBBqa#yfvLqzOA-9aWjbEmo1zh9OShqIou6VjipCCQ01fqoCg

Oh and it works with SDXL Lightning and accepts LORAs.

▬ SOCIALS/CONTACT/HIRE ▬▬▬▬▬▬▬▬▬▬▬▬
Discord: https://discord.gg/aJ32TNMnSM
All socials: https://linktr.ee/stephantual
Hire Actual Aliens: https://www.ursium.ai/

▬ LINKS REFERENCED ▬▬▬▬▬▬▬▬▬▬▬▬
https://flowt.ai/
Kudos to @rainbolttwo for the video concept ;)

The root of all evil is the worship of money.
To the worried community: It's going to be fine.

▬ TIMESTAMPS ▬▬▬▬▬▬▬▬▬▬▬▬
00:00 Intro
00:26 Early shenanigans
01:03 Alleged embezzlement
01:45 The mass brain exodus
02:03 Money wasn't a thang for Emad
02:30 The point of no return
03:13 Selling Holodecks to the gullible
04:38 Reinventing the wheel for profit
05:02 Under New Management
05:49 Emad reaches Nirvana

▬ SOCIALS/CONTACT/HIRE ▬▬▬▬▬▬▬▬▬▬▬▬
Discord: https://discord.gg/aJ32TNMnSM
All socials: https://linktr.ee/stephantual
Hire Actual Aliens: https://www.ursium.ai/

▬ LINKS REFERENCED ▬▬▬▬▬▬▬▬▬▬▬▬
https://www.forbes.com/sites/iainmartin/2024/03/20/key-stable-diffusion-researchers-leave-stability-ai-as-company-flounders/?sh=7e425aac2ed6
https://www.forbes.com/sites/kenrickcai/2023/06/04/stable-diffusion-emad-mostaque-stability-ai-exaggeration/
https://decrypt.co/206307/former-vp-leaves-stability-ai-over-copyright-concerns

How to use SDXL Lightning with SUPIR: comparisons of various upscaling techniques, VRAM management considerations, how to preview its tiling, and even how to fix error messages you might encounter. Full workflow linked below.

▬ TIMESTAMPS ▬▬▬▬▬▬▬▬▬▬▬▬
00:00 Intro and results
00:51 Building the workflow
07:45 Going deeper - adding the model upscale
09:05 VRAM management (fp16 vs fp32)
11:34 Using SDXL Lightning
14:43 Creative outputs and restoration
15:35 Common Errors
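The fp16 vs fp32 VRAM trade-off mentioned in the timestamps above comes down to bytes per element. A minimal NumPy sketch of the arithmetic (the tensor shape is illustrative, not SUPIR's actual internals):

```python
import numpy as np

# An illustrative latent-sized tensor: batch 1, 4 channels, 128x128.
latent_fp32 = np.zeros((1, 4, 128, 128), dtype=np.float32)
latent_fp16 = latent_fp32.astype(np.float16)

# fp16 stores 2 bytes per element vs 4 for fp32, halving memory use
# (at the cost of precision and numeric range).
print(latent_fp32.nbytes)  # 262144
print(latent_fp16.nbytes)  # 131072
```

The same halving applies to model weights, which is why switching the SUPIR loader to fp16 roughly cuts its VRAM footprint in two.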

▬ SOCIALS/CONTACT/HIRE ▬▬▬▬▬▬▬▬▬▬▬▬
Discord: https://discord.gg/aJ32TNMnSM
All socials: https://linktr.ee/stephantual
Hire Actual Aliens: https://www.ursium.ai/

▬ LINKS REFERENCED ▬▬▬▬▬▬▬▬▬▬▬▬
Full ComfyUI workflow: https://flowt.ai/community/supir-v2-nodes-base-workflow-r_knk
Kijai's wrapper v2: https://github.com/kijai/ComfyUI-SUPIR/tree/main
SUPIR models: https://huggingface.co/camenduru/SUPIR/tree/main
TCD LORA: https://huggingface.co/h1t/TCD-SDXL-LoRA/tree/main

Let's explore how to make some creative photorealistic AI videos (humans included) using AnimateDiff LCM and SD15 in conjunction with the brand-new ModelScope nodes. Also includes an update on the video generator I'm building (SVD+AD+ModelScope) AND an old-video restoration tool!

▬ TIMESTAMPS ▬▬▬▬▬▬▬▬▬▬▬▬
00:00 Introduction (straight from the workflow output)
00:30 New Modelscope nodes with SD1.5 input!
01:57 Installation and download (nodes and models)
03:40 Important LORA information (see links!)
04:00 Let's get noodling! (detailed T2V tutorial)
06:55 Video comparison of samplers and schedulers
07:20 Important prompt information specific to ModelScope
09:30 First outputs and best practices on generations
10:25 What's different in this edition?
11:02 TAS vs TCS
11:27 Adding AD LCM to Modelscope+SD15
13:57 AnimateDiff details: gen2 nodes and multival/dinkinit advice
16:10 LCM kSampler settings
17:10 First results of SD15+modelscope
17:55 Application to an anime ADLCM workflow with sampler cycling
19:30 Patriotic break
19:48 Workflow breakdown and using SUPIR in place of V2V
22:06 All possible stage2 up/downscales compared
23:00 Workflows aren't apps (seriously)
23:45 How to use the full workflow (hands-on)
26:03 Final Touches and best practices
26:56 SUPIR results and things to look out for
27:19 SDXL Lightning upscale results with CNs
29:52 Results of the SDXL lightning upscaler
30:31 Restoring a 20-year-old video!
32:01 Results with side by side comparison
32:52 Outro with more results on AI-generated videos

▬ LINKS REFERENCED ▬▬▬▬▬▬▬▬▬▬▬▬
Workflow: https://comfyworkflows.com/workflows/c3a9a2c3-555d-4c1f-8f6a-f3aea312571c
Modelscope nodes: https://github.com/ExponentialML/ComfyUI_ModelScopeT2V
The LORA for modelscope, pre-prepped: https://mega.nz/file/JswlUAiT#qICwDLxbUSxO8OqpAew4VrPUhCNACwqzI4LSqzrgme4
Animateddiff evolved context window docs: https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved

▬ SOCIALS/CONTACT/HIRE ▬▬▬▬▬▬▬▬▬▬▬▬
Discord: https://disc..

I've spent way too much time generating thousands of videos and I'm not SORA (get it?). Haha, I need to sleep. https://discord.gg/aJ32TNMnSM

▬ TIMESTAMPS ▬▬▬▬▬▬▬▬▬▬▬▬
00:00 Introduction
00:30 Overview of Zeroscope/Modelscope
01:32 Required downloads and installs
02:13 Let's get noodling! (basic)
07:16 Let's add the second model + a bus
09:03 Keeping parameters in line (optional but useful)
10:22 Adding the upscale before the 2nd stage
11:48 VERY important information about the 2nd upscale (SUPIR Test)
13:16 Exploring the full workflow together
20:00 Super cool tips for super cool people

▬ EVERYTHING IN THE VIDEO ▬▬▬▬▬▬▬▬▬▬▬▬
The whole enchilada: https://comfyworkflows.com/workflows/fc39dabc-9727-4067-bac7-dcd8ef02ac13
Model: https://huggingface.co/cerspense/zeroscope_v2_XL
Features: https://huggingface.co/docs/diffusers/main/en/api/pipelines/text_to_video
Controlnet for Zeroscope: https://huggingface.co/crishhh/animatediff_controlnet/blob/main/controlnet_checkpoint.ckpt

▬ COOL LINKS ▬▬▬▬▬▬▬▬▬▬▬▬
Discord: https://tinyurl.com/URSIUM
Socials: https://linktr.ee/stephantual
Hire Actual Aliens: https://www.ursium.ai/

Discover this month Top 10 Innovations in Stable Diffusion!
To get updates in real time or share your work, join the discord tinyurl.com/URSIUM

00:00 - Quick intro
00:17 - Stable Cascade - is it the be-all end-all? (no)
02:03 - Comfy Lightning support and models
02:59 - Layer Diffusion & Layout Learning
04:02 - Stable Diffusion 3 - and some tea :)
05:21 - Zeroscope - the nightmare fuel engine!
06:12 - Yolo world has been implemented in ComfyUI
07:21 - FLATTEN all the things!
07:33 - Comfy Launcher - fully automagic ComfyUI (venv)
08:06 - OneDiff - accelerate your 4090 by 300% (tested in another video!)
09:17 - Proteus 0.4 releases with a style update
10:01 - Latte: Latent Diffusion Transformer for Video Generation
10:49 - Conclusion!
--
Discord: https://tinyurl.com/URSIUM
Socials: https://linktr.ee/stephantual
Hire Actual Aliens: https://www.ursium.ai/

-------------------------------------------------
*** All Links in this video ***
-------------------------------------------------
Stable cascade: https://stability.ai/news/introducing-stable-cascade
Würstchen: https://openreview.net/forum?id=gU58d5QeGv
SC workflow: https://flowt.ai/community/stable-cascade-basic-workflow-pjk1r
SC stage B replacement: https://www.reddit.com/r/StableDiffusion/comments/1ay1x2q/cascade_at_home_replacing_stage_b_with_sdxl_and/

ComfyUI: https://github.com/comfyanonymous/ComfyUI

Juggernaut Lightning: https://civitai.com/models/133005/juggernaut-xl
DreamShaper Lightning: https://civitai.com/models/112902?modelVersionId=354657
HelloWorld XL Lightning: https://civitai.com/models/43977?modelVersionId=355837

Layer Diffusion WP: https://arxiv.org/abs/2402.17113
Layerdiffusion Forge: https://github.com/layerdiffusion/sd-forge-layerdiffusion
Layer Diffusion nodes for comfy: https://github.com/layerdiffusion/sd-forge-layerdiffusion
Layout Learning: https://dave.ml/layoutlearning/

Stable Diffusion 3 : https://stability.ai/news/stable-diffusion-3

ZeroScope (models): https://huggingface.co/cers..

We install and build a workflow for SUPIR, the HOT new Stable Diffusion super-res upscaler that destroys every other upscaler (again). Or does it?

DO NOT BUY WORKFLOWS: it's a grift that hurts creators like myself (and your wallet)

Discord and help: tinyurl.com/URSIUM
Workflow: https://comfyworkflows.com/workflows/b703fa8b-5fe0-4678-8692-021766a891c4
Kijai's repo: https://github.com/kijai/ComfyUI-SUPIR/tree/main
Installing xformers: pip install -U xformers --no-dependencies (within your python env, evidently)
Downloading the supir models: https://huggingface.co/camenduru/SUPIR/tree/main

00:00 - Cool intro
00:22 - Cloning SUPIR wrapper nodes
00:47 - Installing xFormers
02:03 - Downloading SUPIR models
02:47 - Building the workflow
03:11 - Adding the Moondream LLaVA model
04:01 - Going through SUPIR settings and recommendations
06:42 - Finalizing the workflow
07:11 - First results
08:10 - Improving the workflow and lowering VRAM requirements
09:23 - Testing all the things
09:50 - Improving results
10:23 - More results
10:53 - Comments vs Magnific
11:30 - Conclusion

--
Discord: https://tinyurl.com/URSIUM
Socials: https://linktr.ee/stephantual
Hire Actual Aliens: https://www.ursium.ai/

Pass it photographs to recover detail, or video game characters to bring them to life. Designed for you to learn how to use comfy effectively and make use of essential nodes so that you can debug it on your own.

Workflow/models/etc - (free, evidently) at:
-- https://comfyworkflows.com/workflows/8e351973-ffc4-4d1b-bc09-ee38ee655804

Discord at:
-- Discord: https://tinyurl.com/URSIUM

Covered in this video:
00:00 - Introduction
01:00 - Studying Krea & Magnific features
04:00 - Getting started and basic workflow
09:00 - Organizing the workflow and extensions
10:03 - First result and observations
13:35 - Adding Moondream for automatic prompting
16:42 - Adding upscalers including CCSR
18:57 - Saving and watermarking for convenience
20:30 - Adding an 'upscale via model' option
22:10 - Cleaning up the workflow
23:40 - Quite a lot of work left!
24:30 - A new approach and organizing a large workflow
25:50 - The model pipeline
27:00 - ClipSkip, Self-attention Guidance (SAG)
28:22 - FreeU_v2, DeepShrink, AutoCFG
30:20 - Workflow structure & ContextBig from RgThree
32:05 - Placing the upscaler first (and last)
32:54 - Tiling system used in Magnific.ai
33:36 - Adding color matching
34:23 - Adding the FaceDetailer
39:05 - Trying a large face and working on a per image basis
40:30 - Adding an IPadapter to transfer style
45:40 - Adding controlnets and aux pre-processor explanation
50:10 - Calculating 'resolution' for the aux preprocessors
52:28 - Tiled diffusion: why is it useful?
54:18 - What could go wrong? Debugging
56:46 - Examples of varied pictures
01:01:00 - Conclusions and SUPIR release

SOCIAL:
-- Discord: https://tinyurl.com/URSIUM
-- Everything Else: https://linktr.ee/stephantual
-- Hire Actual Aliens: https://www.ursium.ai/

Clean up your noodles and create easy to scale up workflows.
Nodes are at https://github.com/rgthree/rgthree-comfy

--
Discord: https://tinyurl.com/URSIUM
Socials: https://linktr.ee/stephantual
Hire Actual Aliens: https://www.ursium.ai/

What's the best upscaler in comfyUI?
Download the workflow, all the files in the video, and the prompts at:
https://comfyworkflows.com/workflows/c907d7ad-5ea4-4b05-8f10-040ab57b4ea4

Questions: https://tinyurl.com/URSIUM

In this video:
00:00 - What did I test?
00:35 - Getting started & workflow introduction
01:20 - Important upscaler basic concepts
02:38 - Hires Fix - Why it doesn't work
03:36 - Iterative Latent Upscale in Pixel Space
04:27 - Ultimate SD upscale - with details and information
05:28 - How to compare images like a pro
06:31 - Model upscale using NMKD superscale SS
06:55 - How to get more pixel-space upscale models
07:27 - Models that work well for this type of human-face image
07:50 - Ultimate SD vs NMKD SS (model upscale) results
09:00 - Opening your horizons to get better source pictures for upscales
09:50 - Introducing LDSR
10:20 - Trying to increase SDXL latent size well beyond theoretical limits
12:10 - LDSR nodes and comparison with CivitAI model, NMKD SS
13:45 - LDSR vs Ultimate upscale and tip on how to fix workflow
14:40 - Updated comparison using FastStone
15:28 - NMKD SS vs LDSR vs Ultimate SD vs Ultimate SD w/ seamfix
16:30 - What to look for when upscaling images
17:18 - Topaz review - is it worth it?
18:20 - Upscaler tier list and comments on Magnific/Krea type tools
18:50 - And onwards - secret! Don't want to spoil it!
22:20 - Oh yeah I also upscaled some cars and crowds
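For the "compare images like a pro" step, a quantitative complement to eyeballing results in FastStone is PSNR. This is a minimal sketch of the metric, assuming same-sized 8-bit arrays (it is not part of the video's workflow, just the standard formula):

```python
import numpy as np

def psnr(a: np.ndarray, b: np.ndarray) -> float:
    """Peak signal-to-noise ratio between two uint8 images (higher = closer)."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * np.log10(255.0 ** 2 / mse)

# Toy example: an "original" vs a slightly noisy version of it.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
noisy = np.clip(img.astype(int) + rng.integers(-5, 6, img.shape), 0, 255).astype(np.uint8)
print(round(psnr(img, noisy), 1))  # a finite dB value; higher means closer
```

PSNR only measures pixel fidelity, so for creative upscalers that invent detail it should be read alongside the visual comparison, not instead of it.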

--
Socials: https://linktr.ee/stephantual
Hire Actual Aliens: https://www.ursium.ai/

Tired of wasting time re-rendering uncached nodes in 'experimental' workflows? Use RGThree to render only one branch at a time, saving yourself a good 10 minutes on that LDSR upscaler :)

Nodes are available at: https://github.com/rgthree/
Full workflow available as part of my free long form tutorials: https://www.youtube.com/@stephantual/playlists

-- COMMUNITY --
Discord: https://tinyurl.com/URSIUM
Socials: https://linktr.ee/stephantual
Hire Actual Aliens: https://www.ursium.ai/

How do AI creators generate AI videos today? It's a numbers game, I'm afraid - not a question of 'prompt'. In this video we explore how VFX artists like @BryanHoward use comfy as part of their workflow, we touch on photorealism with Blender, and why AI INTEGRATION, not REPLACEMENT, is the way forward.

And I criticize SORA a little more, because I can.

The cool anime video at the beginning is there: https://www.youtube.com/@Iskarioto/videos

PS: These videos are not monetized. I have no idea why YouTube shows you ads. I make them for fun and to share the love of #comfyui, #svd and #sdxl, nothing more.

Socials: https://linktr.ee/stephantual
Visual coms studio: https://www.ursium.ai/
--
The Ursium.AI collective is recruiting earthlings who can code for an interstellar journey of innovation. [email protected].

Two-minute video on how to use "Switch Any" from the impact pack (https://github.com/ltdrdata/ComfyUI-Impact-Pack) and Rgthree nodes (https://github.com/rgthree/rgthree-comfy) to create a simple 'compare any images to each other' setup.

These tips are part of a larger workflow which is posted in my "tutorials" playlist.

I'm at https://www.ursium.ai/
Socials: https://linktr.ee/stephantual
--
The Ursium.AI collective is recruiting earthlings who can code for an interstellar journey of innovation. [email protected].

Frustrated by two KSamplers or iterative latent upscalers that keep 'messing' with your image? ME TOO! Hence, LDSR by Flowtyone (https://github.com/flowtyone/ComfyUI-Flowty-LDSR) - it's the best for 'professional' use IMHO.

Workflow has a full tutorial in my "ComfyUI tutorials" playlist so please go check it out.
Workflow itself is at https://comfyworkflows.com/workflows/eba55917-3c47-43ac-90d9-50a6441ef437 if you want to give it a spin.

Socials:
https://linktr.ee/stephantual
https://www.ursium.ai/

--
The Ursium.AI collective is recruiting earthlings who can code for an interstellar journey of innovation. [email protected].

Episode 2 of "What's new in comfyUI"
Subscribe to keep up with the latest ComfyUI trends and innovations!

Talk to real Aliens at http://tinyurl.com/URSIUM

In this episode:
00:00 - Intro
00:27 - The AI Apocalypse is upon us
01:12 - DynamiCrafter (github.com/chaojie/ComfyUI-DynamiCrafter)
01:45 - AnimateDiff major update (again)
02:00 - Preview of week 3 (Stable Cascade AD) - Tx Banodoco!
02:14 - Announcing the Ursium AI Discord (tinyurl.com/URSIUM)
02:58 - DreamShaper XL v2.1 Turbo DPM (https://civitai.com/models/112902/dreamshaper-xl)
03:20 - LEOSAM's HelloWorld XL (5.0 GPT V4) https://civitai.com/models/43977?modelVersionId=338512
03:48 - Copax TimeLessXL (https://civitai.com/models/118111?modelVersionId=344540) (very good for DPM++ 3M SDE / GPU)
04:12 - I take a dig at SORA because I can - I have no master
04:32 - RIP DragNUWA (pulled by CELA https://github.com/ProjectNUWA/DragNUWA) see https://www.microsoft.com/en-us/research/project/dragnuwa/
05:17 - Native Instant ID (github.com/cubiq/ComfyUI_InstantID)
06:18 - LLaVa everything with ComfyVLM (github.com/gokayfem/ComfyUI_VLM_nodes)
07:00 - Train motion LORAs in comfyUI (https://github.com/kijai/ComfyUI-ADMotionDirector)
08:00 - Image and video PostProcessing nodes (github.com/digitaljohn/comfyui-propost)
08:25 - JoviMetrix Nodes - https://github.com/Amorano/Jovimetrix
08:55 - FaceFusion (https://github.com/facefusion/facefusion)
09:36 - Automatic CFG (https://github.com/Extraltodeus/ComfyUI-AutomaticCFG)
10:18 - Outro with links to tutorials and more cool comfyUI stuff!

Links referenced in this episode:
Banodoco server - https://discord.com/invite/aJ32TNMnSM

Socials: https://linktr.ee/stephantual
Hire Us: https://www.ursium.ai/

--
The Ursium.AI collective is recruiting earthlings who can code for an interstellar journey of innovation. [email protected].

Wouldn't it be nice if you could break down your comfyui workflow into two parts, so you can 'pick' the images you prefer for postprocessing with something like https://github.com/chrisgoringe/cg-image-picker?

Well, you can't. Here's why.

Socials: https://linktr.ee/stephantual
Visual coms studio: https://www.ursium.ai/
--
The Ursium.AI collective is recruiting earthlings who can code for an interstellar journey of innovation. [email protected].

I got a new machine, so I'm making this "absolute beginner" video on how to install ComfyUI+Manager+A model as of February 12th, 2024. I hope many of you join us on a path of creativity!

In this video you'll learn how to install comfy properly, and maybe rethink how you installed it. We cover:

00:00 - Cool Intro
00:30 - Portable or github?
01:18 - Pytorch nightlies or not?
02:01 - The 'correct download' (cu121)?
02:30 - Unzipping correctly and pitfalls
04:59 - First run
05:25 - ALIEN INVASION
05:50 - On xFormers and peak performance
06:20 - Command line options and vRam woes
07:10 - Default workflow and multiple windows
08:09 - ComfyUI manager first look
08:40 - Git on windows the easy way
09:20 - Text editors for git (install options)
11:00 - Running git commands
12:00 - What's in 'models' folder
12:30 - Downloading your first models: sd15 and SDXL
13:00 - Musing on civitAI, loras, embeddings and hypernetworks
14:30 - I end up in jail because licenses are serious business
15:30 - Organizing models in folders
16:37 - Running comfyUI for the first time
17:00 - Updating strategies for comfyUI itself
18:35 - Running the manager and installing custom nodes properly
19:30 - Why you should install SOME custom nodes manually
20:30 - How the manager installs a sample custom node (RGTHREE)
21:20 - Running our first sd15 workflow
22:40 - Conclusion

I am not sponsored by anyone except myself - if you are a node developer, check out Ursium.AI, a novel visual communication studio leveraging AI (we're hiring!)

Socials: https://linktr.ee/stephantual

Let's look at how we can use RGthree nodes (https://github.com/rgthree/rgthree-comfy) to create individual group muters/bypassers in 5 minutes or less.

This is part of a larger worflow tutorial you'll be able to find in my 'Tutorials' Playlist.

https://ursium.ai

--
The Ursium.AI collective is recruiting earthlings who can code for an interstellar journey of innovation. [email protected].

I'm so sick and tired of the hype trains. They are as stupid as this thumbnail.

00:00 - It's over guys! What is SORA?
00:49 - Good luck with those text prompts
01:20 - Why they aren't showing you 2 successive gens
02:21 - Control is the most important factor in all movie productions
04:00 - This happened before, you just forgot Dall-E (everyone has)
04:30 - SORA will be neutered on release
05:25 - Must be hard to be an investor in this space
06:00 - Copyrights? Never heard of that! (/s)

Kudos to Pom and the whole Banodoco server!
This video features a soundless sample of https://www.youtube.com/watch?v=7ttG90raCNo (watch it - it's brilliant)

Socials: https://linktr.ee/stephantual
Work: https://www.ursium.ai/

--
The Ursium.AI collective is recruiting earthlings who can code for an interstellar journey of innovation. [email protected].

Ever wanted to watermark your AI photos (might become a requirement soon anyways) - you can do it in one node using ComfyRoll (https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes).

Check out my 'Tutorials' Playlist for the full workflow and explanation of how to use Rgthree to organize your comfyui bus and avoid comfy-carbonara :)
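The video does this with a single ComfyRoll node; outside Comfy, the same idea is just alpha-blending a mark onto pixel data. A NumPy-only sketch (the function name, blend factor, and placement are my own choices, not ComfyRoll's API):

```python
import numpy as np

def stamp_watermark(image: np.ndarray, mark: np.ndarray, alpha: float = 0.4) -> np.ndarray:
    """Alpha-blend `mark` into the bottom-right corner of a uint8 HxWx3 image."""
    out = image.copy()
    h, w = mark.shape[:2]
    region = out[-h:, -w:].astype(np.float64)
    blended = (1 - alpha) * region + alpha * mark.astype(np.float64)
    out[-h:, -w:] = blended.round().astype(np.uint8)
    return out

photo = np.full((256, 256, 3), 200, dtype=np.uint8)  # flat grey stand-in "photo"
mark = np.zeros((32, 96, 3), dtype=np.uint8)         # black bar stand-in "mark"
stamped = stamp_watermark(photo, mark)
print(stamped[-1, -1])  # [120 120 120] - corner darkened, rest untouched
```

In a real pipeline the mark would be a logo or text image, but the blend math is identical.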

Socials: https://linktr.ee/stephantual
Visual coms studio: https://www.ursium.ai/

--
The Ursium.AI collective is recruiting earthlings who can code for an interstellar journey of innovation. [email protected].

What's new in ComfyUI Week February Week 1!
You can contact me on X, Discord and LI if you'd like to show off your work!

*** CONTENTS ***
00:00 Intro - things move fast!
00:33: MotionDirector for AnimateDiff
03:20: Tutorial on Custom nodes by Suzie1 & ComfyRoll
04:46: ControlNet-LLLite & Advanced Apply ControlNet with demo workflow
06:42: Tips on how to best use ComfyUI
08:00: SVD-XT 1.1 img2vid is out! + demo workflow by the community
10:05: AnimateLCM is out - faster video generation!
11:30: Moondream, Parakeet vision and audio models (run locally!)
13:05: Segmind segMoE-SD-4x2-v0 with extra Kyun!
14:27: Conclusion

*** LINKS REFERENCED ***
ComfyUI: https://github.com/comfyanonymous/ComfyUI
MotionDirector: https://github.com/ExponentialML/AnimateDiff-MotionDirector
AnimateDiffEvolved: https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved
Guide to making custom node: https://github.com/Suzie1/ComfyUI_Guide_To_Making_Custom_Nodes/wiki
ComfyRoll Studio: https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes
Controlllite (OG edition): https://huggingface.co/kohya-ss/controlnet-lllite/tree/main
Controlllite (Animu edition): https://huggingface.co/bdsqlsz/qinglong_controlnet-lllite/tree/main
ComfyUI Advanced ControlNet: https://github.com/Kosinkadink/ComfyUI-Advanced-ControlNet
Controlllite Workflow Demo: https://comfyworkflows.com/workflows/72b93581-0e37-4aa9-8a51-b10d16f6ca98
SVD-XT 1.1 - https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt-1-1/tree/main
SVD 1.1 vs 1.0 demo video: https://www.youtube.com/watch?v=7jaYG5MglHg
SVD comparison workflow: https://github.com/purzbeats/svd-comparison-grids/?tab=readme-ov-file
AnimateLCM: https://huggingface.co/wangfuyun/AnimateLCM
AnimateLCM Workflow: https://flowt.ai/community/animatelcm-xk7nx-v
AnimateLCM CivitAI page: https://civitai.com/models/290375/animatelcm-fast-video-generation
Support the creator of AnimateDiff: https://www.patreon.com/Kosinkadink/posts
ComfyUI image 2 prompt for moondream: ht..

"I got bored one day and I put everything on a bagel."

- IPAdapters chained and masked composited
- Animdiff v3 and gen2 nodes
- Face Swap and restoration
- 5 ControlNets that can all be mixed/matched/bypassed
- An upscaler via ESRGAN through pixel space
- Hand MeshGraphormer
- Prompt Travelling
- Interpolation

This video is NOT a tutorial, but instead an explanation as to WHY we're seeing a 'convergence' in methodologies as part of working with comfyUI and AnimateDiff. Even Netflix picked up on the trend, as they are now recruiting VFX people familiar with free and open-source tools such as AnimateDiff/ControlNets/ReActor and of course, IPAdapter.

00:00: Intro and what it contains
01:45: Don't pay for workflows!
03:45: AnimateDiff V3 Gen2 Nodes
06:20: Single frame generation
09:00: Thinking like an animator
12:20: Why I'm not using pipes here and Trung's 0246
15:20: This is not a 'filter'
19:20: Pre-Processor best practices
25:00: Depth map: Midas vs Zoe vs Zoe AD
26:00: Why I don't use a second Ksampler
27:00: Control nets and COCO masking Lineart
31:10: Keeping the background steady with tile controlnet
35:00: Where the Bagel is headed: automated background infill with a vision model (moondream)
38:00: IPAdapters masking with a composite mask
46:20: Segmenting anything to anything
47:00: Chaining IPAdapters
48:22: Face swapping with ReActor and restoration via GFPGAN
49:00: Warpspeed and Hand Meshgraphormer woes
53:00: Prompt Travelling and Parsec
56:00: More tools!
57:00: Upscale via pixel space
59:54: Interpolation and saving
1:00:00: Concluding nodes and Banodoco

WHY do we need background consistency? HOW do we obtain it? This is what I want to explore in this video, alongside concepts such as bypassing the issue of 'squished' CLIP Vision images (which must be square) when dealing with vertical or portrait videos.
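One way around the 'squish' is to letterbox each frame onto a square canvas before it reaches CLIP Vision, instead of resizing non-uniformly. A minimal sketch, assuming uint8 HxWxC frames (the padding colour and centering are my choices, not a specific node's behaviour):

```python
import numpy as np

def pad_to_square(frame: np.ndarray, fill: int = 0) -> np.ndarray:
    """Center a frame on a square canvas so its aspect ratio survives a square resize."""
    h, w = frame.shape[:2]
    side = max(h, w)
    canvas = np.full((side, side, frame.shape[2]), fill, dtype=frame.dtype)
    top = (side - h) // 2
    left = (side - w) // 2
    canvas[top:top + h, left:left + w] = frame
    return canvas

portrait = np.ones((640, 360, 3), dtype=np.uint8) * 255  # a 9:16 portrait frame
square = pad_to_square(portrait)
print(square.shape)  # (640, 640, 3)
```

Downstream square resizing then scales both axes equally, so subjects in vertical video are no longer distorted before encoding.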

I'm releasing this video alongside my tutorial workflow which you can obtain for free (evidently, you should NEVER pay for workflows) on th..

No, I won't upload the workflow, because there are great YouTubers who do amazing tutorials such as:
-


Created 2 years, 5 months ago.

25 videos

Category: News & Politics

Former CCO Ethereum ► Passionate Communicator and recognized Innovator with nearly 30 years of IT expertise ► No longer active on social media ► Blog: http://stephantual.com