
# Hi, I'm Tylar

Applied AI engineer who ships production LLM/ML systems for real users.

## 🎯 Focus

Production AI Deployment | Foundation Model Integration | AI Education

I design and validate AI systems that solve real problems: from model evaluation frameworks at Handshake, to teaching AI fundamentals at Columbia, to deploying generative models in production.

## 💼 Current Work

- **AI Instructor @ Columbia University** (Fall 2025 – Spring 2026)
  Teaching Python, AI tooling, and production workflows to justice-impacted engineers through the Justice Through Code program

- **AI Model Validation @ Handshake** (Aug 2025 – Present)
  Designed evaluation frameworks for multimodal AI models, improving accuracy by 30% through systematic prompt engineering and QA protocols

## 🚢 Recent Deployments

### TuneStory ✅ Shipped 2025

Applied AI music product integrating Meta MusicGen via Modal cloud infrastructure.
**Stack:** MusicGen, Modal, Supabase, Gemini, TypeScript
**Focus:** Controllable AI generation with preserved creative intent

Key decisions:

- Chose Modal over ad-hoc GPU hosting for reproducibility and scalability
- Structured generation as modular pipelines for creative iteration
- Designed the system to support future education use cases
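The "modular pipelines" decision above can be sketched as small composable stages. This is an illustrative outline only, not the actual TuneStory code: the stage names, the context-dict shape, and the roughly 50-tokens-per-second MusicGen budget are assumptions.

```python
# Hypothetical sketch of a modular generation pipeline: each stage is a
# pure function over a shared context dict, run in sequence.
from typing import Callable

Stage = Callable[[dict], dict]

def run_pipeline(ctx: dict, stages: list[Stage]) -> dict:
    """Thread a context dict through each stage in order."""
    for stage in stages:
        ctx = stage(ctx)
    return ctx

def write_prompt(ctx: dict) -> dict:
    # Turn structured creative intent into a text prompt for the model.
    ctx["prompt"] = f"{ctx['mood']} {ctx['genre']} track at {ctx['bpm']} bpm"
    return ctx

def add_generation_params(ctx: dict) -> dict:
    # Assumption: MusicGen emits on the order of 50 audio tokens per second,
    # so the token budget scales with the requested duration.
    ctx["max_new_tokens"] = ctx["duration_s"] * 50
    return ctx

result = run_pipeline(
    {"mood": "dreamy", "genre": "lo-fi", "bpm": 80, "duration_s": 10},
    [write_prompt, add_generation_params],
)
```

Keeping each stage a pure function over a shared context makes it cheap to reorder, swap, or re-run individual stages during creative iteration, which is the point of the modular design.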

Demo Video | GitHub

### Other Production Projects

- **Spec_Tracer**: AI-powered UI debugging tool with precision context capture
- **jarvis_voice_agent**: Multimodal voice control system (AssemblyAI + ElevenLabs)
- **audio_transcriber**: Speech-to-text pipeline with timestamps
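As a small illustration of the timestamp handling such a speech-to-text pipeline needs (hypothetical code, not the actual audio_transcriber implementation): transcription APIs such as AssemblyAI report word-level start/end times in milliseconds, which then get rendered into readable timestamps.

```python
# Hypothetical sketch: format word-level timestamps (milliseconds) into
# readable '[start - end] text' lines for a transcript.
def format_ms(ms: int) -> str:
    """Render milliseconds as MM:SS.mmm."""
    minutes, rem = divmod(ms, 60_000)
    seconds, millis = divmod(rem, 1_000)
    return f"{minutes:02d}:{seconds:02d}.{millis:03d}"

def timestamped_lines(words: list[dict]) -> list[str]:
    """One line per transcribed word, with its start and end time."""
    return [
        f"[{format_ms(w['start'])} - {format_ms(w['end'])}] {w['text']}"
        for w in words
    ]

lines = timestamped_lines([
    {"start": 0, "end": 480, "text": "Hello"},
    {"start": 480, "end": 1200, "text": "world"},
])
```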

## 🛠 Technical Stack

**AI/ML:** TensorFlow, Hugging Face, Claude/LLM APIs, MusicGen, prompt engineering
**Backend:** Python, TypeScript/React, Node.js, Modal, Supabase
**Data:** pandas, NumPy, SQL, data validation frameworks
**Tools:** Git, Docker, Notion, Figma

## 📚 Background

PhD candidate in Interactive Arts & Technology @ Simon Fraser University
Research focus: Multimodal AI systems and human-AI interaction
Teaching: AI literacy, Python fundamentals, production ML workflows

## 📫 Connect


💡 **Currently seeking:** AI Product Manager or Applied AI Engineering roles where I can embed with teams to solve customer problems and own the problem-to-deployment cycle.
