AI Portfolio Works

AI-Driven Content Creation

Stoic & Self Improvement Content

Experiment Concept
Created a self-improvement channel concept centered on Stoic philosophy, produced through an end-to-end AI automation pipeline with creator oversight retained for quality control. The plan was to build a modular automation system in which discrete Python scripts handled specific editing tasks (captions, titles, content assembly) before being integrated into a unified workflow; a sketch of that structure follows.
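As a minimal sketch of that modular structure (the function names and splitting logic are illustrative, not the production scripts):

```python
# Illustrative orchestrator: each editing task started life as a
# standalone script and was later chained into one workflow.
from pathlib import Path

def generate_captions(script_text: str) -> list[str]:
    # Break the narration script into caption-sized lines.
    return [line.strip() for line in script_text.splitlines() if line.strip()]

def insert_title(captions: list[str], title: str) -> list[str]:
    # Put the episode title on screen before the first caption.
    return [title, *captions]

def assemble_episode(assets_dir: Path, captions: list[str]) -> Path:
    # Stand-in for the MoviePy stitching step (sketched later on this page).
    print(f"Assembling {len(captions)} text elements from {assets_dir}")
    return assets_dir / "episode.mp4"

def run_pipeline(assets_dir: Path, script_text: str, title: str) -> Path:
    # The unified workflow: discrete tasks composed in sequence.
    captions = insert_title(generate_captions(script_text), title)
    return assemble_episode(assets_dir, captions)
```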
Tools
Stable Diffusion
Midjourney
ElevenLabs
Mubert
Python
RunPod Cloud GPU
MoviePy
Process

The first step was to create several episodes using partially automated tasks; this served as the blueprint for the process (coming up with the content idea, writing the script, and editing manually). After that, I began automating the individual tasks. I wrote some of the scripts myself and sometimes collaborated with developers to troubleshoot and optimize them.

Technical Implementation

- Content Generation: Scripts produced via LLM API integration
- Audio: Generated through Text-to-Speech technology
- Music: AI-generated ambient soundtracks
- Visual Elements: Midjourney for base imagery; Stable Diffusion to animate it into motion loops
- Editing Automation: Python scripts for asset stitching, caption generation, and title insertion using a script-based editing technique (filename injection); a minimal sketch follows this list
- Processing: All rendering runs on a cloud GPU, so the full assembly completes in seconds
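As a minimal sketch of the stitching step, using the MoviePy 1.x API (the filenames and the caption-encoding scheme are assumptions; TextClip additionally requires ImageMagick):

```python
from pathlib import Path

from moviepy.editor import (AudioFileClip, CompositeVideoClip, TextClip,
                            VideoFileClip, concatenate_videoclips)

def caption_from_filename(path: Path) -> str:
    # "Filename injection" (assumed encoding): the caption text lives after
    # a double underscore, e.g. loop_001__memento-mori.mp4 -> "memento mori".
    return path.stem.split("__", 1)[1].replace("-", " ")

def build_segment(path: Path) -> CompositeVideoClip:
    # Overlay the injected caption on its matching video loop.
    clip = VideoFileClip(str(path))
    caption = (TextClip(caption_from_filename(path), fontsize=48,
                        color="white", font="Arial")
               .set_position(("center", "bottom"))
               .set_duration(clip.duration))
    return CompositeVideoClip([clip, caption])

# Stitch the loops in filename order and lay the TTS narration underneath.
segments = [build_segment(p) for p in sorted(Path("assets").glob("loop_*.mp4"))]
video = concatenate_videoclips(segments).set_audio(AudioFileClip("narration.mp3"))
video.write_videofile("episode.mp4", fps=30)
```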

Stable Diffusion Animation Experiments

Diverse Visual Techniques: Testing & Evaluation

About
Conducted comprehensive testing of various AI visual generation techniques across different aesthetic styles to evaluate capabilities and limitations for future production implementations.
Tools
Stable Diffusion
LoRAs
ControlNet
ComfyUI
Automatic1111
AnimateDiff
Stable Video Diffusion (SVD)
Disco Diffusion
RunDiffusion
Technical Exploration

- Generation Methods: Image-to-video, text-to-video, and motion-guided generation (an image-to-video sketch follows this list)
- Model Customization: Implementation of LoRAs and motion LoRAs for stylistic control
- Control Mechanisms: Integration of ControlNet for precise output guidance
- Post-Processing: Various upscaling and frame interpolation techniques
- Tools Utilized: Automatic1111 and ComfyUI, with Midjourney images as seed inputs
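Most of this exploration ran through the Automatic1111 and ComfyUI GUIs, but the image-to-video step can be sketched in code. A minimal example with Hugging Face's diffusers library and the public SVD checkpoint (the model ID and parameter values are illustrative defaults, not the exact settings used in these tests):

```python
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import export_to_video, load_image

# Load the public image-to-video SVD checkpoint (fp16 to fit consumer GPUs).
pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.enable_model_cpu_offload()

# Seed image, e.g. a Midjourney render resized to SVD's native resolution.
image = load_image("seed_image.png").resize((1024, 576))

# motion_bucket_id controls how much motion the model adds;
# noise_aug_strength trades fidelity to the seed image for motion freedom.
frames = pipe(
    image,
    decode_chunk_size=8,
    motion_bucket_id=127,
    noise_aug_strength=0.02,
).frames[0]

export_to_video(frames, "loop.mp4", fps=7)
```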

Technical Implementation

Experimented across multiple visual aesthetics including:

- Anime and stylized animation
- Horror and gore elements
- Ancient/historical imagery
- Photorealistic human representations
- Experimental glitch effects
- Abstract artistic compositions

Practical Application

- Incorporated generated clips directly into YouTube content production
- Provided visual assets to support other production teams
- Created detailed technical guides documenting replication methods
- Exported reusable settings files (JSON) to ensure consistent results across teams (example below)
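A minimal sketch of such a settings file being exported and reloaded (the keys are illustrative; ComfyUI can also save full workflows as JSON natively):

```python
import json
from pathlib import Path

# Illustrative generation settings; the real files captured the full
# sampler/model/LoRA configuration needed to reproduce a given look.
settings = {
    "checkpoint": "sd15_anime.safetensors",
    "sampler": "euler_a",
    "steps": 25,
    "cfg_scale": 7.0,
    "loras": [{"name": "motion_lora_pan_left", "weight": 0.8}],
}

Path("anime_loop_settings.json").write_text(json.dumps(settings, indent=2))

# Teammates reload the same file to reproduce the look exactly.
loaded = json.loads(Path("anime_loop_settings.json").read_text())
assert loaded == settings
```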

AI-Generated Visual Samples - 4K 60 FPS

Application Example

Reddit Horror Narration Series

Experiment Concept
Created a horror video series concept transforming Reddit content into narrated videos using an AI-driven pipeline. Screenshots of original posts overlaid with disturbing visuals established authenticity, while TTS narration and captioning maintained viewer engagement throughout each story. Identified this format through targeted research of trending content styles suitable for automation.
Tools
Stable Diffusion
ElevenLabs
Mubert
Python
MoviePy
Reddit
Technical Implementation

- Content Curation: Selected engaging posts from Reddit horror communities (a retrieval sketch follows this list)
- Narration: Applied TTS technology for consistent voice delivery
- Visual Format:
  - Initial frames: Screenshot of original Reddit post/comment overlaid with disturbing visuals
  - Subsequent content: Caption-based storytelling with continued visual elements
- Visual Style: Utilized gore and horror visuals created during the AI generation experimentation phase
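As a minimal sketch of the curation step using the PRAW library (the subreddit, credentials, and thresholds are placeholders, not the actual selection criteria):

```python
import praw

# Placeholder credentials; register an app at reddit.com/prefs/apps.
reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="horror-narration-pipeline/0.1",
)

# Pull top weekly posts from a horror-fiction community and keep
# story-length text posts that cleared an engagement threshold.
candidates = []
for post in reddit.subreddit("nosleep").top(time_filter="week", limit=50):
    if post.is_self and post.score > 500 and len(post.selftext) > 1500:
        candidates.append({
            "title": post.title,
            "text": post.selftext,
            "url": f"https://reddit.com{post.permalink}",
        })

# Shortlist for screenshot capture, TTS narration, and captioning.
for c in candidates[:5]:
    print(c["title"], c["url"])
```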