Generative AI Masterclass (Early Access)

About Course
This course is in development; enrollment is paused while lessons and content are being added.
Unlock the full creative potential of generative AI in this hands-on masterclass, where you’ll explore an integrated workflow using ChatGPT for ideation, Pinokio for automated app orchestration, and ComfyUI as your visual node-based interface. Learn how to generate stunning images with Stable Diffusion, animate concepts through Text-to-3D tools, and push creative boundaries with fine-tuned LoRA (Low-Rank Adaptation) models—all seamlessly controlled inside a Pinokio-powered environment. Whether you’re building characters, designing cinematic worlds, or training personalized AI styles, this masterclass equips you with the tools and techniques to go from prompt to production with precision and unlimited creativity.
Once you have purchased the masterclass, you will be eligible for all future updates at no additional charge. For the first year there will be quarterly updates, with tentative plans for at least one update per year through 2030 thereafter.
Please Note: Early access to the introductory lessons is tentatively scheduled for May 2025, with the remaining lessons completed through Fall 2025. As lessons are added, the price of the course will gradually increase from $99.99 to $499.99. The curriculum is tentative and subject to change without notice.
What Will You Learn?
- Prompt Engineering with ChatGPT
- Automating Workflows with Pinokio
- Visual Scripting in ComfyUI
- Text to Image, Image to Image
- Inpainting and Outpainting
- Text-to-3D Techniques
- Training Custom LoRA Models
- Asset Optimization for VFX, Games & XR
- Ethical & Legal Considerations in AI Art
Course Content
Course Overview
- About the Generative AI Masterclass
- About the Instructor
Foundations of Generative AI
Understand the history, technology, and terminology behind modern generative AI tools.
- The Origins of AI: Neural Networks, GANs, and Milestones
- The Rise of Transformers: GPT, BERT, and LLMs
- Diffusion Models: Denoising, Latent Space, and Creativity
- Key Terminology: Prompts, CFG, Seed, Latent Space, Samplers
- Industry Use Cases: Games, Film, Design, Marketing
- The Future of AI: Real-Time Gen, Agents, Ethics & Ownership
Prompting & Ideation with ChatGPT
Use ChatGPT as a creative partner to build effective prompts and generate ideas.
- Introduction to ChatGPT
- Overview of ChatGPT Models
- What Makes a Great Prompt?
- Using ChatGPT for Ideation and Scene Building
- Creating a Custom Chatbot
- Prompting for Characters, Worlds, and Narrative Scenes
- Structuring Image Prompts for Style, Mood, and Composition
- Building Prompt Variations for Batch Outputs
- Prompt Drift & Iterative Distortion
- Ethical Prompting and Dataset Awareness
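The "Prompt Variations for Batch Outputs" lesson can be sketched in plain Python: expand a prompt template across interchangeable subject, style, and mood slots to produce a ready-made batch of prompts. The slot values below are illustrative placeholders, not course material.

```python
# Expand a prompt template across slot values to build batch variations.
# Subjects, styles, and moods are hypothetical examples.
subjects = ["a knight errant", "a windmill on a hill"]
styles = ["oil painting", "cinematic concept art"]
moods = ["golden hour", "stormy dusk"]

prompts = [
    f"{subject}, {style}, {mood}"
    for subject in subjects
    for style in styles
    for mood in moods
]

for p in prompts:
    print(p)  # one prompt per batch item
```

Two values per slot already yield eight distinct prompts, which is why templating scales better than hand-writing each variation.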
Getting Started with ComfyUI
Set up your working environment and understand the node-based interface.
- What is Pinokio?
- Installing ComfyUI with Pinokio
- Understanding the Node-Based Graph Interface
- Getting Started with Comfy Copilot
- Downloading Checkpoints, VAEs, and LoRA Models
- Saving, Reloading, and Organizing Workflows
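Under the hood, a ComfyUI workflow is a JSON graph of nodes keyed by ID, and a locally running server accepts it over HTTP. A minimal sketch, assuming ComfyUI's default endpoint (`127.0.0.1:8188`) and a hypothetical checkpoint filename; a complete, runnable graph would also need latent, sampler, and save nodes.

```python
import json
from urllib import request

# Abbreviated ComfyUI API graph: node-id -> {"class_type", "inputs"}.
# ["1", 1] wires this input to output slot 1 of node "1".
graph = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},  # hypothetical file
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "cinematic castle at dusk", "clip": ["1", 1]}},
}

def queue_prompt(graph, host="127.0.0.1:8188"):
    """POST the graph to a locally running ComfyUI server for rendering."""
    data = json.dumps({"prompt": graph}).encode("utf-8")
    req = request.Request(f"http://{host}/prompt", data=data,
                          headers={"Content-Type": "application/json"})
    return request.urlopen(req).read()

# queue_prompt(graph)  # uncomment with ComfyUI running locally
```

Saving a workflow from the ComfyUI interface in API format produces exactly this kind of JSON, which is what makes workflows portable and scriptable.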
Text-to-Image Generation
Generate high-quality images using text prompts and prompt logic.
- Creating Your First Text-to-Image Workflow
- CFG, Steps, Seed, and Samplers Explained
- Prompting for Style, Genre, Format, and Visual Language
- Using LoRA for Specific Characters or Art Styles
- Negative Prompts and Prompt Weighting
- Stable Diffusion vs. Flux
- Generating Stylized Concept Art with Flux
- Flux Consistent Characters
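The role of the seed among these settings can be shown with a toy stand-in: a fixed seed always produces the same starting noise, which is why re-running a workflow with the same seed, CFG, steps, and sampler reproduces the same image. The parameter values here are illustrative, not recommendations.

```python
import random

def initial_latent(seed, size=4):
    """Toy stand-in for the noise latent a sampler starts from:
    the same seed always yields the same starting noise."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(size)]

params = {"cfg": 7.0, "steps": 25, "seed": 42, "sampler": "euler"}  # illustrative

# Same seed -> identical starting point; new seed -> new variation.
assert initial_latent(params["seed"]) == initial_latent(params["seed"])
assert initial_latent(1) != initial_latent(2)
```

This is also why batch workflows often increment the seed per image: each frame gets fresh noise while every other setting stays comparable.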
Image-to-Image & Visual Refinement
Modify, stylize, and evolve existing images using latent transformations.
- Using Image-to-Image for Style or Scene Variation
- Latent Strength, Noise, and Detail Preservation
- Refining Faces, Objects, and Layout
- Using I2I as Part of an Iterative Creative Process
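The latent-strength trade-off can be sketched as a linear blend, a simplification of what samplers actually schedule over steps: denoise 0 keeps the source latent untouched, denoise 1 replaces it entirely with fresh noise, and values in between trade variation against detail preservation.

```python
def i2i_latent(init_latent, noise, denoise):
    """Toy blend between a source latent and fresh noise.
    Higher denoise = more variation, less of the source preserved.
    Real samplers spread this over steps, but the trade-off is the same."""
    return [(1 - denoise) * a + denoise * b
            for a, b in zip(init_latent, noise)]

src = [1.0, 1.0, 1.0]
noise = [0.0, 0.0, 0.0]

print(i2i_latent(src, noise, 0.0))  # source fully preserved
print(i2i_latent(src, noise, 1.0))  # fully re-noised
print(i2i_latent(src, noise, 0.5))  # halfway: the useful middle ground
```

In practice, denoise values around the middle of the range are where image-to-image earns its keep: enough change to restyle, enough preservation to keep the composition.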
Inpainting & Outpainting
Use masks to regenerate or expand parts of an image for creative iteration.
- Introduction to Inpainting with Masks
- Replacing Faces, Objects, or Background Elements
- Outpainting for Canvas Expansion or Scene Worldbuilding
- Best Practices for Seamless Visual Blending
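The blending principle behind seamless inpainting can be reduced to a per-pixel mask composite: mask 1 takes the newly generated pixel, mask 0 keeps the original, and soft fractional edges are what hide the seam. A minimal sketch on a one-row "image":

```python
def composite(original, generated, mask):
    """Per-pixel mask blend: m=1 takes the generated pixel,
    m=0 keeps the original, fractional m feathers the edge."""
    return [m * g + (1 - m) * o
            for o, g, m in zip(original, generated, mask)]

original  = [10, 10, 10, 10]
generated = [90, 90, 90, 90]
mask      = [0.0, 0.5, 1.0, 1.0]  # feathered edge on the left

print(composite(original, generated, mask))  # [10.0, 50.0, 90.0, 90.0]
```

A hard 0/1 mask would produce the visible halo artifacts the lesson on seamless blending warns about; feathering the mask is the standard fix.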
ControlNet for Guided Composition
Guide structure and layout with pose, depth, and edge maps.
- What Is ControlNet and Why Use It?
- Pose Guidance Using OpenPose and Scribble
- Depth, Canny, and Edge Maps for Scene Structure
- Combining Multiple ControlNets in One Graph
- ControlNet + LoRA Workflows for Story-Driven Generation
IP-Adapter for Style Transfer & Image Conditioning
Use image prompts to guide AI generations visually, beyond text prompts.
- Intro to IP-Adapter in ComfyUI
- Using Reference Images for Style or Character Transfer
- Combining IP-Adapter with Text Prompts
- Pose & Face Preservation with Image Prompts
- Use Cases: Consistent Characters Across Scenes
Personalizing Style and Subject with LoRA
Customize your generations with LoRA models to control aesthetic, character identity, or thematic style.
- Understanding LoRA Models and How They Work
- Loading and Merging Multiple LoRA Models
- Adjusting LoRA Strength for Subtle or Bold Effects
- Training Your Own LoRA with KohyaSS
- Training Your Own LoRA with FluxGym
- Refining LoRA Results with XY Plots
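The math behind the LoRA strength slider is compact enough to show directly: a LoRA adjusts a frozen weight matrix W by a low-rank update, W' = W + alpha * (B @ A), and alpha is the strength. Toy-sized matrices below; real layers are thousands of dimensions wide.

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def apply_lora(W, B, A, alpha):
    """LoRA update: W' = W + alpha * (B @ A).
    alpha is the strength slider: 0 disables the adaptation,
    higher values push the learned style or subject harder."""
    delta = matmul(B, A)
    return [[w + alpha * d for w, d in zip(wr, dr)]
            for wr, dr in zip(W, delta)]

W = [[1.0, 0.0], [0.0, 1.0]]  # frozen base weights (2x2)
B = [[1.0], [0.0]]            # 2x1 and 1x2: a rank-1 update
A = [[0.0, 2.0]]

print(apply_lora(W, B, A, 0.0))  # strength 0: base model unchanged
print(apply_lora(W, B, A, 0.5))  # partial strength: scaled update
```

Because only the small factors B and A are trained, LoRA files stay tiny relative to the checkpoint, and several can be merged by summing their scaled updates.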
Upscaling and Image Enhancement
Increase resolution and polish outputs for publishing or printing.
- Latent vs. Pixel-Based Upscaling
- Using ESRGAN, 4x-UltraSharp, and Other Models
- High-Res Fix Workflows and When to Use Them
- Export Settings for Web, Print, or Compositing
Batch Generation and Image Sequences
Automate generation of consistent series or high-volume outputs.
- Batch Rendering with Loops and Queues
- XY Plot: Iterating Style, Prompt, or LoRA
- Maintaining Continuity Across Images
- Use Cases: Comics, Storyboards, Product Design, and Variants
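An XY plot sweeps two parameters against each other so the outputs can be compared side by side in a grid. The sweep itself is just a cross product of parameter values; the CFG and LoRA-strength values below are illustrative.

```python
from itertools import product

# X axis: CFG values; Y axis: LoRA strengths. Illustrative values.
cfg_values = [4.0, 7.0, 10.0]
lora_strengths = [0.4, 0.8]

grid = [
    {"cfg": cfg, "lora_strength": s, "seed": 42}  # fixed seed isolates the two axes
    for s, cfg in product(lora_strengths, cfg_values)
]

for cell in grid:
    print(cell)  # one generation job per grid cell
```

Pinning the seed is the key design choice: with everything else held constant, any difference between grid cells is attributable to the two swept parameters alone.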
Video and Animation Concepts
Use generative images to build frame sequences, looping motion, and early animation tests.
- Generating Frame-by-Frame Sequences
- Consistency Across Frames Using Seeds and IP-Adapter
- Building Looping Motions and Expression Cycles
- Exporting Frames for Video Assembly
- Using Optical Flow for Motion Stability
- IC-Light Workflow for Video Relighting
Video-to-Video Generation
Use generative AI to transform, stylize, or reimagine video clips by processing them frame-by-frame through ComfyUI-compatible workflows.
Text & Image to 3D
Explore how to generate 3D assets from text and images using AI-based pipelines powered by ComfyUI, including Hunyuan3D workflows for multi-view 3D modeling.
- Introduction to Text-to-3D Concepts
- Using ComfyUI for Text-to-3D Generation
- Image-to-3D Using Depth and Geometry Maps
- Multi-Image to 3D with Hunyuan3D Models
- Exporting and Refining AI-Generated 3D Assets
- Rendering 3D Scenes for ComfyUI Reprocessing
- Stylizing and Enhancing 3D Renders in ComfyUI
3D Scanning, Custom LoRAs and Generative Rendering
Learn how to transform real-world 3D scan data and Gaussian splats into photorealistic images using ComfyUI and custom-trained LoRA models.
- 3D Scanning & Generative AI Pipeline
- Creating and Rendering a Gaussian Splat Scene
- Capturing Source Imagery for LoRA Training
- Training a Custom LoRA Model from Source Imagery
- Stylizing Gaussian Splat Frames with ComfyUI
- Enhancing with Optical Flow, ControlNet, Depth, and IP-Adapter
- Applications in VFX, XR, and Digital Twins
Synthetic Data & Computer Vision
Learn how to use generative AI techniques in ComfyUI to create synthetic datasets, augment training data, and simulate vision-based environments for ML applications.
- Introduction to Synthetic Data for Vision Models
- Generating Synthetic Scenes with ComfyUI
- Pose and Depth Generation Using ControlNet
- Annotating Segmentation Masks and Bounding Boxes
- Training Custom LoRA Models for Realistic Scene Generation
- Training CV Models with Real & Synthetic Data
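One reason synthetic data is attractive for vision work is that annotations come essentially for free: the pipeline that places an object also knows its segmentation mask, and a bounding box falls straight out of that mask. A minimal sketch on a tiny binary mask:

```python
def mask_to_bbox(mask):
    """Derive a bounding box (x_min, y_min, x_max, y_max) from a
    binary segmentation mask given as a list of rows. Returns None
    for an empty mask."""
    ys = [y for y, row in enumerate(mask) for v in row if v]
    xs = [x for row in mask for x, v in enumerate(row) if v]
    if not xs:
        return None
    return (min(xs), min(ys), max(xs), max(ys))

mask = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
print(mask_to_bbox(mask))  # (1, 1, 3, 2)
```

Scaled up, the same idea lets a synthetic pipeline emit COCO-style boxes and masks alongside every rendered image, ready for CV training without manual labeling.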
Capstone Project – Project Quixote
Bring a scene from your screenplay to life using generative AI tools. This final project combines storytelling and visual development through ComfyUI and ChatGPT.
- Introduction to Project Quixote
- Creating the Project Quixote Chatbot
- Collaborating with ChatGPT for Script Development
- Creating Concept Art for Key Moments
- Multi-Image to 3D Character Creation
- Adjusting Character Poses & Composition
- Building Storyboards with IP-Adapter, LoRA, and ControlNet
- Generating Short Cinematic Animations or Mood Clips
- Final Breakdown & Presentation