
Research

Gen-AI - Character Animation with Video

Demonstrates the use of ComfyUI and generative AI to create character sheets, convert video into pose estimates, and retarget animations onto 3D characters. The process includes real-time compositing with dynamic animated backgrounds.
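
As a rough sketch of the video-to-pose step, the snippet below extracts per-frame 2D pose estimates from a clip. MediaPipe is an assumed stand-in for the pose estimator; the actual pipeline may use a ComfyUI pose node such as OpenPose or DWPose instead.

# Minimal sketch: per-frame 2D pose estimation from a video clip.
# MediaPipe is an assumption here, not necessarily the estimator used in the demo.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def extract_poses(video_path):
    """Return a list of per-frame landmark lists (x, y, visibility)."""
    poses = []
    cap = cv2.VideoCapture(video_path)
    with mp_pose.Pose(static_image_mode=False) as pose:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if result.pose_landmarks:
                poses.append([(lm.x, lm.y, lm.visibility)
                              for lm in result.pose_landmarks.landmark])
            else:
                poses.append(None)  # no person detected in this frame
    cap.release()
    return poses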

Neural Rendering - Live VFX

Prototype using deepfake face technology to apply virtual prosthetics or facial features in real time, powered by an NVIDIA GeForce GTX 1080 GPU and an AMD Threadripper CPU. It demonstrates the ability to train a face library using DeepFace and apply digital-double faces or virtual prosthetics to live performers or animations.
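
The real-time half of the pipeline reduces to a capture-process-display loop, sketched below. The swap_face function is a hypothetical placeholder for the trained face model; it is not part of DeepFace or any specific library.

# Sketch of the live loop only: webcam in, swapped face out.
# swap_face() is a hypothetical stand-in for the trained deepfake model.
import cv2

def swap_face(frame):
    # Placeholder: run the trained face model / virtual prosthetic here.
    return frame

cap = cv2.VideoCapture(0)          # live camera feed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    composite = swap_face(frame)   # apply digital double or prosthetic
    cv2.imshow("live vfx", composite)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()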

Gen-AI Video - Dynamically Animated Environments

Demonstrating generative AI using Stable Diffusion image generation and Runway to create fully animated environments with dynamic simulations. Water, wind, foliage, and particle effects are seamlessly rendered into the scene, bringing the environment to life.
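
As a hedged illustration of the still-image stage, the sketch below generates a background plate with Stable Diffusion through the diffusers library; the model ID and prompt are illustrative, and the animated pass produced in Runway is not shown.

# Sketch: generate a background plate with Stable Diffusion (diffusers).
# The animated pass (water, wind, foliage, particles) is produced downstream.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

plate = pipe(
    "misty forest clearing with a stream, volumetric light, film still",
    num_inference_steps=30,
).images[0]
plate.save("environment_plate.png")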

Real-time Translation - Prototype

A MetaHuman virtual human demonstrating real-time American Sign Language (ASL) translation. The goal is to improve communication for the deaf community by developing a machine-learning dataset, providing access, fostering inclusivity, and ensuring equitable access to travel services.
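
One plausible way to assemble such a dataset is to reduce labelled sign clips to hand-landmark sequences, as sketched below. MediaPipe Hands, the gloss label, and the JSON layout are all assumptions for illustration, not the project's actual schema.

# Sketch: turn a labelled sign clip into a training sample of hand landmarks.
import json
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

def clip_to_sample(video_path, gloss):
    frames = []
    cap = cv2.VideoCapture(video_path)
    with mp_hands.Hands(static_image_mode=False, max_num_hands=2) as hands:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            res = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if res.multi_hand_landmarks:
                frames.append([[(p.x, p.y, p.z) for p in h.landmark]
                               for h in res.multi_hand_landmarks])
    cap.release()
    return {"gloss": gloss, "frames": frames}

sample = clip_to_sample("asl_hello.mp4", gloss="HELLO")
with open("asl_hello.json", "w") as f:
    json.dump(sample, f)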

AMD Digital Human Testing - Machine Learning Libraries

Faceware processing is used to train a dataset for accurate mouth-shape and movement synthesis. Synchronized motion capture of the face, neck, and body allows the system to generate realistic, natural speech animations with precise lip sync and facial expressions.
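
Faceware's own solve is proprietary, but the downstream lip-sync idea can be sketched as timed phonemes mapped to viseme weight curves, as below; the mapping table and timings are purely illustrative.

# Sketch of the downstream idea only: timed phonemes -> viseme weight keys
# that could drive facial blendshapes. Faceware's own solve is not shown.
PHONEME_TO_VISEME = {"AA": "open", "M": "closed", "F": "lip_bite", "OW": "round"}

def viseme_keys(timed_phonemes):
    """timed_phonemes: list of (start_sec, end_sec, phoneme)."""
    keys = []
    for start, end, ph in timed_phonemes:
        viseme = PHONEME_TO_VISEME.get(ph, "neutral")
        keys.append((start, viseme, 1.0))   # ramp the shape fully on
        keys.append((end, viseme, 0.0))     # and release it
    return keys

print(viseme_keys([(0.00, 0.12, "M"), (0.12, 0.30, "AA"), (0.30, 0.42, "OW")]))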

Virtual Human - MetaHuman Conformed Mesh

A conformed-mesh MetaHuman created by aligning the rig to match the exact physical proportions of the character, ensuring a precise and lifelike digital representation. Photogrammetry-based skin textures and high-fidelity cavity and wrinkle detail maps are not included.

Gen-AI Video - HeyGen - Automated Video Translation

A Gen-AI video platform that enables seamless virtual dubbing. It translates audio into 40+ languages while preserving the original voice, adjusts phonemes and facial expressions to match the new language, and composites the updated video over the original.

Diffusion-based AI - Age Transformation VFX

Uses Stable Diffusion XL (SDXL), a realistic face model, and ComfyUI to transform the age of individuals in video footage. AI-altered faces are seamlessly composited back onto the footage, aligned with motion and lighting for natural, realistic results. However, temporal instability is still evident in this model.
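
A minimal sketch of one frame of the age pass, assuming an SDXL img2img step via the diffusers library; the strength value, prompt, and model ID are illustrative, and the face masking, tracking, and ComfyUI compositing graph are omitted.

# Sketch: one frame of an SDXL img2img age pass (diffusers).
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

frame = Image.open("frame_0001.png").convert("RGB")
aged = pipe(
    prompt="portrait of the same person aged 70, wrinkles, grey hair",
    image=frame,
    strength=0.45,          # low strength helps keep identity and motion alignment
).images[0]
aged.save("frame_0001_aged.png")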

4D Animated Virtual Humans - LOD Prototype

Demonstrating the levels of detail in 4D-captured characters and the difference between captured animations, procedurally generated animations, and reaction-based animations.
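
The LOD side of the prototype comes down to choosing a detail level per character per frame; the sketch below shows the basic distance-based selection idea with illustrative thresholds, ignoring screen-size metrics and hysteresis.

# Sketch: pick a level of detail by camera distance. Thresholds are illustrative.
LOD_THRESHOLDS = [(5.0, 0), (15.0, 1), (40.0, 2)]   # (max distance in metres, LOD index)

def select_lod(distance_m):
    for max_dist, lod in LOD_THRESHOLDS:
        if distance_m <= max_dist:
            return lod
    return 3  # lowest-detail fallback

print([select_lod(d) for d in (2.0, 10.0, 30.0, 100.0)])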

Gen-AI - Equirectangular 360 Video for VR

A demonstration of equirectangular videos created using Stable Diffusion to generate prompt-based images with seamless edge stitching. Gen-AI adds loopable visual effects and motion.
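
A common trick when fixing the horizontal wrap seam of an equirectangular frame is to roll the image by half its width so the seam sits mid-frame, where it can be inspected or inpainted, then roll it back. The sketch below shows only that step; file names are illustrative and the actual stitching happens inside the diffusion workflow.

# Sketch: move an equirectangular frame's wrap seam to the centre of the image.
import numpy as np
from PIL import Image

def roll_seam_to_centre(img):
    a = np.asarray(img)
    return Image.fromarray(np.roll(a, shift=a.shape[1] // 2, axis=1))

frame = Image.open("pano_frame.png")
centred = roll_seam_to_centre(frame)     # seam is now visible mid-frame
centred.save("pano_frame_seam_centred.png")
# ...inpaint or blend across the now-central seam, then roll back the same amount.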

Virtual Human - ASL Control Rig Animation

Demonstrating ASL gestures animated with a control rig in Unreal Engine. Highlights the creation of key poses for a number and letter gesture codex, showing how the system accurately animates ASL signs to facilitate seamless communication in digital environments.
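
The codex idea can be sketched as a lookup from sign names to rig control values, turned into timed key poses; the control names and values below are illustrative, not the actual MetaHuman control-rig channels.

# Sketch of the pose-codex idea: named ASL signs map to rig control values,
# and a sequence of signs becomes a list of timed key poses.
POSE_CODEX = {
    "A": {"thumb_curl": 0.2, "index_curl": 1.0, "middle_curl": 1.0},
    "B": {"thumb_curl": 0.9, "index_curl": 0.0, "middle_curl": 0.0},
    "1": {"thumb_curl": 0.9, "index_curl": 0.0, "middle_curl": 1.0},
}

def build_keyframes(signs, hold=0.5):
    """Return (time_sec, control_values) key poses for a sequence of signs."""
    return [(i * hold, POSE_CODEX[s]) for i, s in enumerate(signs)]

print(build_keyframes(["B", "1", "A"]))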

UE5 Nanite and Path Tracing - Render Tests

Testing Nanite and Path Tracing in Unreal Engine 5.
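
Assuming the editor's Python Editor Script Plugin is enabled, the render-test toggles can be driven through console variables, as sketched below; the CVar values are illustrative test settings rather than final render settings.

# Sketch: toggling Nanite and Path Tracing settings from the UE5 editor's Python console.
import unreal

world = unreal.EditorLevelLibrary.get_editor_world()
for cmd in (
    "r.Nanite 1",                        # enable Nanite rendering
    "r.PathTracing.SamplesPerPixel 64",  # illustrative test sample count
    "r.PathTracing.MaxBounces 8",        # illustrative bounce limit
):
    unreal.SystemLibrary.execute_console_command(world, cmd)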

Modeling, animation, lighting, and camera.

Stephan Kozak | Lakeview Animation, Toronto, Ontario | (416) 300-2103
