Pioneering AI stylization for USC's first virtual production thesis, transforming CG mecha footage into anime aesthetics
As Motion Capture Operator and AI Supervisor for USC School of Cinematic Arts' first-ever virtual production thesis project, I brought cutting-edge AI stylization techniques to an ambitious Evangelion-inspired narrative featuring mecha combat sequences.
Building on my previous successes with AI rendering workflows in projects like Ex Machina AI Rendering and Immersion: Real Meets AI, I adapted my expertise to a new challenge: transforming CG mecha footage into anime aesthetics. While my previous projects had focused on achieving photorealism from CG inputs, this project required the opposite approach—creating stylized anime visuals from realistic 3D renders.
Working with motion capture data from performers in a complex production environment, I created stylization tests that demonstrated the potential of AI to transform standard CG output into compelling anime-style sequences. My work showcased innovations in handling non-human forms, preserving story-critical elements, and achieving stable frame-to-frame consistency, all significant challenges in early 2024, when this technology was still in its infancy.
Transform CG mecha footage from USC's virtual production into convincing anime-style visuals while preserving performance integrity.
January–June 2024, working with USC School of Cinematic Arts' thesis production team.
Motion Capture Operator and AI Supervisor, responsible for mocap testing and creating AI stylization previews.
ComfyUI, Stable Diffusion, After Effects, DaVinci Resolve, Giant and Motive (motion capture software), motion capture hardware.
The project presented significant technical challenges at the intersection of motion capture, virtual production, and AI stylization: non-human mecha forms, story-critical screen content, and frame-to-frame stability.
I addressed each of these challenges through the targeted technical solutions described in the process below.
My approach to this project involved a systematic adaptation of established AI workflows to the unique requirements of anime stylization. The process evolved through careful testing and refinement, building on my previous technical explorations while advancing into new aesthetic territory.
The project began with extensive motion capture sessions at USC's facility. Although my prior experience was with the Giant motion capture system, the production ultimately used Motive to record performances. This phase involved coordinating multiple actors simultaneously to capture both mecha combat sequences and cockpit interactions.
Building on my previous work with AI stylization in projects like "Immersion: Real Meets AI," I adapted my established ComfyUI workflow for anime aesthetics. This required significant adjustment of models, reference images, and processing parameters to transition from my previous focus on photorealism to a distinctly anime style. The workflow maintained my core approach to video processing while implementing specialized techniques for outline emphasis and metal texturization.
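To give a sense of the workflow's shape, here is a minimal sketch of an img2img stylization pass submitted to a locally running ComfyUI server over its HTTP API. The checkpoint name, prompt text, and sampler settings are illustrative placeholders, not the production values, which were tuned shot by shot.

```python
# Minimal sketch: queue a single-frame img2img stylization pass on a local
# ComfyUI server. Checkpoint, prompts, and sampler settings are placeholders.
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188/prompt"  # default local ComfyUI endpoint

workflow = {
    # Load an anime-oriented checkpoint (hypothetical file name).
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "anime_style_checkpoint.safetensors"}},
    # Positive prompt steers toward cel shading and clean lineart.
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1],
                     "text": "anime style, cel shading, clean lineart, mecha"}},
    # Negative prompt suppresses photorealism and human features.
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1],
                     "text": "photorealistic, human face, skin texture"}},
    # Bring in one rendered CG frame and encode it into latent space.
    "4": {"class_type": "LoadImage", "inputs": {"image": "frame_0001.png"}},
    "5": {"class_type": "VAEEncode",
          "inputs": {"pixels": ["4", 0], "vae": ["1", 2]}},
    # Partial denoise keeps the CG structure while restyling the surface.
    "6": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0],
                     "negative": ["3", 0], "latent_image": ["5", 0],
                     "seed": 42, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 0.55}},
    "7": {"class_type": "VAEDecode",
          "inputs": {"samples": ["6", 0], "vae": ["1", 2]}},
    "8": {"class_type": "SaveImage",
          "inputs": {"images": ["7", 0], "filename_prefix": "anime_pass"}},
}

req = urllib.request.Request(COMFY_URL,
                             data=json.dumps({"prompt": workflow}).encode(),
                             headers={"Content-Type": "application/json"})
urllib.request.urlopen(req)
```

Processing a full shot amounts to looping this over an image sequence, which is why frame-to-frame consistency (covered below) becomes the central problem.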
The non-human shapes of the mecha designs presented significant challenges for the AI system, which often attempted to interpret mechanical elements as human features. Through systematic experimentation, I developed a specialized approach that emphasized outlines and metallic textures while suppressing the system's tendency to anthropomorphize mechanical forms. This required careful prompt engineering and parameter adjustment to maintain the mecha's core design while achieving the desired anime aesthetic.
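One common way to reinforce outlines, in the spirit of the approach described above, is to extract a line map from each CG frame and feed it to a line-conditioned control model so the stylizer cannot drift away from the mechanical silhouette. The sketch below covers only the edge-extraction half; the thresholds are illustrative, and this is a plausible reconstruction rather than the exact production setup.

```python
# Sketch: build a high-contrast line map from a CG frame to emphasize
# panel seams and silhouettes. Thresholds are illustrative values only.
import cv2

def outline_map(frame_path: str, out_path: str) -> None:
    frame = cv2.imread(frame_path, cv2.IMREAD_GRAYSCALE)
    # A light blur suppresses specular noise on metal before edge detection.
    blurred = cv2.GaussianBlur(frame, (5, 5), 0)
    edges = cv2.Canny(blurred, 80, 160)
    cv2.imwrite(out_path, edges)

outline_map("mecha_frame_0001.png", "mecha_lines_0001.png")  # hypothetical paths
```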
The cockpit scenes presented a different set of challenges from the mecha sequences. These scenes contained critical story elements on screens and displays that needed to be preserved through the stylization process. I developed a specialized masking technique in After Effects to protect these elements while allowing the surrounding environment to be fully stylized. This approach required precise rotoscoping and compositing to achieve seamless integration.
Cockpit scene stylization with preserved screen content
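The production mattes were rotoscoped by hand in After Effects, but the compositing math itself is straightforward. This sketch expresses it with OpenCV, assuming a grayscale matte in which white marks the screen regions to protect; all file names are hypothetical.

```python
# Sketch: keep original pixels where the matte is white (screens, displays),
# use the AI-stylized pixels everywhere else.
import cv2
import numpy as np

def protect_screens(original_path, stylized_path, matte_path, out_path):
    original = cv2.imread(original_path).astype(np.float32)
    stylized = cv2.imread(stylized_path).astype(np.float32)
    matte = cv2.imread(matte_path, cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
    matte = matte[..., None]  # broadcast the single channel over BGR
    result = original * matte + stylized * (1.0 - matte)
    cv2.imwrite(out_path, result.astype(np.uint8))

protect_screens("cockpit_raw_0001.png", "cockpit_anime_0001.png",
                "screen_matte_0001.png", "cockpit_final_0001.png")
```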
A persistent challenge in early 2024 AI video processing was frame-to-frame inconsistency, which appears as flicker. I implemented a multi-stage deflickering approach that combined pre-processing of the source footage with post-processing of the AI-generated output, using optimization in DaVinci Resolve and specialized compositing techniques in After Effects to push the limits of the temporal stability achievable at the time.
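The production deflicker ran through DaVinci Resolve and After Effects rather than code, but the core post-processing idea, damping flicker by blending each AI-generated frame toward a running temporal average, can be sketched in a few lines. The blend weight is illustrative and trades smoothing strength against ghosting on fast motion.

```python
# Sketch: exponential temporal smoothing over an AI-stylized image sequence.
import glob
import os

import cv2
import numpy as np

def temporal_deflicker(frame_paths, out_dir, alpha=0.6):
    """alpha weights the current frame; lower values smooth more
    aggressively but risk ghosting on fast motion."""
    os.makedirs(out_dir, exist_ok=True)
    running = None
    for i, path in enumerate(frame_paths):
        frame = cv2.imread(path).astype(np.float32)
        running = frame if running is None else alpha * frame + (1 - alpha) * running
        cv2.imwrite(os.path.join(out_dir, f"frame_{i:04d}.png"),
                    running.astype(np.uint8))

# Hypothetical paths: smooth the frames written by the stylization pass.
temporal_deflicker(sorted(glob.glob("anime_pass/*.png")), "deflickered")
```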
This project represented a significant technical advancement in my AI stylization work, particularly in adapting existing workflows to new aesthetic goals while solving novel challenges related to non-human forms.
My approach to this project built directly on the techniques I developed for "Immersion: Real Meets AI," but with a crucial difference: instead of transforming CG into photorealism, I was now transforming CG into anime style. This required retuning the models, reference images, and processing parameters that had previously been dialed in for photorealistic output.
The project's focus on mechas presented unique technical hurdles, most notably the AI's tendency to reinterpret mechanical forms as human anatomy, which demanded creative solutions in prompting and parameter tuning.
A crucial aspect of this project was integrating AI-processed elements with story-critical content, protecting the cockpit screens and displays that carried narrative information while fully stylizing everything around them.
This project demonstrated my ability to apply cutting-edge AI techniques to transform standard CG animation into compelling anime-style visuals, showcasing early command of stylization at a time when the technology was still in its infancy.
Beyond its measurable results, this project represented several significant qualitative achievements: convincing anime stylization of non-human forms, preservation of story-critical elements, and stable frame-to-frame output, each difficult to achieve with early 2024 tools.
Mecha stylization test showing before and after transformation from CG to anime aesthetic
This project represented an important evolution in my exploration of AI for creative applications, yielding both technical insights and valuable project experience.
This project reinforced my position at the forefront of AI-enhanced visual production, demonstrating my ability to adapt established techniques to new aesthetic challenges. Working with the USC School of Cinematic Arts' production team provided valuable experience in applying cutting-edge technology in a collaborative academic environment, while testing the boundaries of what was technically possible with early 2024 AI capabilities. The project represented a natural evolution of my technical approach, showcasing versatility in adapting my workflows to diverse visual styles.