AI & Computational Systems

Designing AI-assisted pipelines across pre-production, real-time production, and post-production.

Systems-First Approach to AI in Production

My work in AI focuses on building and integrating computational systems into existing production workflows, prioritizing reliability, spatial coherence, and collaboration over novelty. Every experiment I build is evaluated against production constraints: latency, controllability, repeatability, and integration with existing CG and VP pipelines.

OM: AI-Assisted Virtual Production Pipeline

This pipeline explored how AI can reduce iteration time in early look development while still feeding clean, controllable assets into a real-time VP workflow.

Director · AI Researcher

July 2025 - Present
Illustrated by Kira Narog
Production Designs by Sebastian Kapur

Directing and actively building an AI-assisted virtual production pipeline aimed at improving previs and shot planning efficiency using limited real-world reference data. The system integrates Unreal Engine, LiDAR/photogrammetry, and AI-assisted Gaussian splat reconstruction (via tools such as World Labs) to convert small sets of stylized location imagery into metrically grounded, navigable previs environments. This workflow supports early-stage camera blocking, environment validation, and creative alignment across departments when traditional scouting or full photogrammetry is impractical.

This research examines how AI-driven spatial reconstruction can complement traditional previsualization workflows without replacing established CG or VFX pipelines.
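One recurring step in grounding a splat reconstruction metrically is recovering its scale against surveyed distances. As an illustrative sketch (not the pipeline's actual tooling), a hypothetical `estimate_scale` helper can solve for the uniform scale factor that best maps reconstruction units onto LiDAR-measured reference spans:

```python
# Hypothetical helper: estimate a uniform scale factor that aligns a
# reconstruction (e.g. a Gaussian splat scene) to LiDAR-measured
# reference distances. Reconstructions are often metrically ambiguous;
# comparing pairwise landmark distances against surveyed ground truth
# recovers the missing scale.

def estimate_scale(reference_m, reconstructed_u):
    """Least-squares uniform scale mapping reconstruction units to metres.

    reference_m     -- distances between landmark pairs, in metres (LiDAR)
    reconstructed_u -- the same pairs measured in reconstruction units
    """
    num = sum(r * m for r, m in zip(reconstructed_u, reference_m))
    den = sum(r * r for r in reconstructed_u)
    return num / den

# e.g. three doorway/prop spans surveyed on set vs. in the splat scene
scale = estimate_scale([2.0, 3.5, 1.2], [4.1, 7.0, 2.4])
print(round(scale, 3))  # 0.497: multiply reconstruction coordinates by this
```

Once a scale like this is applied, camera heights and lens choices blocked in the previs environment transfer meaningfully to the real set.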

AI Style Transfer & Consistency Research

ComfyUI · Custom LoRA Training · Python-Assisted Workflows

July 2025 - September 2025

Built an AI-assisted stylization system to evaluate stylistic consistency across repeated iterations within a production pipeline.

• Tested custom SDXL LoRAs using ComfyUI workflows across multiple asset contexts

• Identified where AI accelerates early look development without compromising continuity or scale

• Established guardrails for treating AI outputs as directional inputs rather than final assets

These findings informed how AI should remain assistive, supporting ideation and validation while preserving predictability and control in virtual production workflows.
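Consistency across repeated generations can be quantified rather than eyeballed. A minimal sketch, assuming per-image style embeddings are available (e.g. from CLIP; the toy vectors below are stand-ins): score a batch of iterations by mean pairwise cosine similarity, and flag drift when the score falls.

```python
import math
from itertools import combinations

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def consistency_score(embeddings):
    """Mean pairwise cosine similarity across repeated generations.

    A score near 1.0 suggests the LoRA holds its style across iterations;
    a falling score flags stylistic drift before assets move downstream.
    """
    pairs = list(combinations(embeddings, 2))
    return sum(cosine(a, b) for a, b in pairs) / len(pairs)

# Toy vectors standing in for per-image style embeddings:
batch = [[0.9, 0.1, 0.2], [0.88, 0.12, 0.22], [0.91, 0.09, 0.18]]
print(consistency_score(batch) > 0.99)  # near-identical style -> True
```

A threshold on a score like this is one concrete form the "directional inputs, not final assets" guardrail can take.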

Independent research into AI style transfer and consistency, using custom-trained LoRAs and ComfyUI pipelines to understand the limitations of AI outputs in production contexts. The results directly informed how AI outputs can be safely integrated into pipelines without breaking continuity or downstream workflows.

AI-Assisted Interactive Homepage System

Real-Time Scroll-Driven Visual System · AI-Integrated VAD Workflow

Creative Technologist · Virtual Art Department (AI Integration)

August 2023 - December 2024

This project explores how AI-generated imagery can be responsibly integrated into real-time, interactive web experiences without compromising spatial accuracy, performance, or authored intent. Using custom drone footage as the physical environment reference, I integrated a peer-designed, production-ready train asset via Higgsfield AI, ensuring scale, lighting, and motion continuity aligned with the real world. The resulting sequence was translated into a scroll-driven image system that transitions seamlessly into original film footage, reinforcing continuity between AI-assisted content and traditional production pipelines.

Original IP train design by Jesse Van Norman.

Designed a scroll-driven image-sequence system for real-time visual storytelling on the web, optimized for performance, preloading, and perceptual smoothness.
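The heart of a scroll-driven sequence is the mapping from scroll offset to frame index. The shipped system runs in the browser over a preloaded image sequence; the mapping itself is language-agnostic, so here is an illustrative sketch in Python (function name and parameters are hypothetical, not the production code):

```python
def frame_for_scroll(scroll_px, scroll_range_px, frame_count):
    """Map a scroll offset to a frame index in a preloaded sequence.

    Clamping keeps over-scroll (rubber-banding) on the first/last frame,
    and flooring keeps playback monotonic for perceptual smoothness.
    """
    if scroll_range_px <= 0 or frame_count <= 0:
        return 0
    # Normalize scroll progress to [0, 1], then pick the frame.
    t = max(0.0, min(1.0, scroll_px / scroll_range_px))
    return min(frame_count - 1, int(t * frame_count))

print(frame_for_scroll(0, 1000, 120))     # 0   (top of page)
print(frame_for_scroll(500, 1000, 120))   # 60  (halfway down)
print(frame_for_scroll(1500, 1000, 120))  # 119 (clamped past the end)
```

Decoupling this pure mapping from rendering also makes preloading straightforward: frames near the current index can be fetched ahead of the scroll position.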

Higgsfield AI · Virtual Art Department Workflows · Custom IP Integration · Scroll-Driven Interaction Design · Web-Based Real-Time Playback