Inspiration

Everyday objects spark curiosity, but understanding how they are made usually requires technical knowledge or access to teardown guides. Mak3 Unmak3 was inspired by the idea of turning simple curiosity into an accessible learning experience using AI.

What it does

Mak3 Unmak3 allows users to upload a photo of an everyday object and receive an assembly or disassembly guide. The app generates structured blueprints that include materials, tools, step-by-step instructions, and AI-generated illustrations.
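A generated blueprint might look like the following sketch. The field names here are illustrative assumptions for demonstration, not the app's actual schema:

```python
# Illustrative blueprint structure (field names are assumptions,
# not Mak3 Unmak3's actual schema).
blueprint = {
    "object": "ballpoint pen",
    "mode": "disassembly",
    "materials": ["plastic barrel", "ink cartridge", "spring"],
    "tools": ["none required"],
    "steps": [
        {"number": 1,
         "instruction": "Unscrew the pen tip from the barrel.",
         "illustration_prompt": "exploded view of a pen tip being unscrewed"},
        {"number": 2,
         "instruction": "Slide out the ink cartridge and spring.",
         "illustration_prompt": "ink cartridge and spring beside the barrel"},
    ],
}

# Because the keys are fixed, downstream code can render each step directly.
for step in blueprint["steps"]:
    print(f"Step {step['number']}: {step['instruction']}")
```

Keeping the blueprint machine-readable like this is what lets the same data drive both the instruction list and the per-step illustrations.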

How we built it

The application is built using the Minimax API for multimodal image understanding and structured output generation. The system analyzes images, produces machine-readable JSON blueprints, and generates step-specific illustrations using text-to-image models. The frontend renders this data into an interactive user experience with conversational follow-up support.
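At a high level, that pipeline can be sketched as three stages. The function names and return shapes below are assumptions for illustration, with stubs standing in for the actual Minimax API calls:

```python
import json

def analyze_image(image_bytes: bytes) -> dict:
    """Stub for the multimodal call that identifies the object.
    In the real app this would be a Minimax API request."""
    return {"object": "stapler", "confidence": 0.93}

def generate_blueprint(detection: dict) -> dict:
    """Stub for the structured-output call that returns a JSON blueprint."""
    return {
        "object": detection["object"],
        "steps": [{"number": 1, "instruction": "Open the top cover."}],
    }

def illustrate_step(step: dict) -> str:
    """Stub for the text-to-image call; returns an image path."""
    return f"images/step_{step['number']}.png"

def build_guide(image_bytes: bytes) -> dict:
    """Chain the three stages: detect, generate, then illustrate each step."""
    detection = analyze_image(image_bytes)
    blueprint = generate_blueprint(detection)
    for step in blueprint["steps"]:
        step["illustration"] = illustrate_step(step)
    return blueprint

guide = build_guide(b"fake-image-bytes")
print(json.dumps(guide, indent=2))
```

The frontend would then consume the returned dictionary, and conversational follow-ups can reference the same structured data rather than re-analyzing the image.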

Challenges we ran into

Ensuring consistent and structured outputs from the model was a challenge, especially when generating detailed instructions from varied images. Designing reliable schemas and prompts that worked across many object types required careful iteration.
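One common way to tame inconsistent structured output is to validate each response against the expected schema and retry on failure. The sketch below uses a hypothetical `model_call` stub in place of the real API:

```python
import json

# Keys every blueprint must contain (an assumption for this sketch).
REQUIRED_KEYS = {"object", "materials", "tools", "steps"}

def model_call(prompt: str) -> str:
    """Hypothetical stand-in for the real model request."""
    return json.dumps({
        "object": "toaster", "materials": [], "tools": [],
        "steps": [{"number": 1, "instruction": "Unplug the toaster."}],
    })

def get_blueprint(prompt: str, max_retries: int = 3) -> dict:
    """Request a blueprint, retrying when the response is not valid
    JSON or is missing required keys."""
    for _ in range(max_retries):
        raw = model_call(prompt)
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed JSON: ask again
        if REQUIRED_KEYS <= data.keys():
            return data
    raise ValueError("model never returned a valid blueprint")

bp = get_blueprint("Disassemble this toaster.")
print(bp["object"])  # -> toaster
```

A validate-and-retry loop like this, paired with prompts that spell out the schema explicitly, is one way to make varied images produce uniformly shaped blueprints.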

Accomplishments that we're proud of

We built an end-to-end multimodal pipeline that goes from image upload to a fully illustrated guide. Integrating object detection, structured instruction generation, and conversational interaction into a single workflow is our key achievement.

What's next for Mak3 Unmak3

Future plans include improving object selection accuracy, expanding instruction depth, and adding user customization options. We also aim to explore real-world educational and maker-focused applications.
