VGGT is a transformer-based framework that unifies classic visual geometry tasks (depth estimation, camera pose recovery, point tracking, and correspondence) under a single model. Rather than training a separate network per task, it shares one encoder and attaches geometric heads/decoders that infer structure and motion from images or short clips. The design emphasizes consistent geometric reasoning: outputs from one head (e.g., correspondences or tracks) reinforce others (e.g., pose or depth), improving robustness to challenging viewpoints and textures.

The repository provides inference pipelines for estimating geometry from monocular inputs, stereo pairs, or brief sequences, together with evaluation harnesses for common geometry benchmarks. Training utilities highlight data curation and augmentations that preserve geometric cues while improving generalization across scenes and cameras.
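The shared-encoder, per-task-head pattern described above can be sketched roughly as follows. This is an illustrative toy in numpy, not the repository's actual API: the class names, feature dimensions, and linear "encoder" are all assumptions standing in for the real transformer backbone and decoders.

```python
import numpy as np

rng = np.random.default_rng(0)

class SharedEncoder:
    """Stand-in for the transformer backbone: maps an image to shared features."""
    def __init__(self, feat_dim=16):
        self.proj = rng.normal(size=(3, feat_dim))  # toy linear projection

    def __call__(self, image):          # image: (H, W, 3)
        return image @ self.proj        # (H, W, feat_dim) shared feature map

class DepthHead:
    """Task head regressing a per-pixel depth-like map from shared features."""
    def __init__(self, feat_dim=16):
        self.w = rng.normal(size=(feat_dim,))

    def __call__(self, feats):
        return feats @ self.w           # (H, W)

class PoseHead:
    """Task head pooling shared features into a 6-DoF pose vector."""
    def __init__(self, feat_dim=16):
        self.w = rng.normal(size=(feat_dim, 6))

    def __call__(self, feats):
        pooled = feats.mean(axis=(0, 1))  # global average pool
        return pooled @ self.w            # (6,)

encoder = SharedEncoder()
depth_head, pose_head = DepthHead(), PoseHead()

image = rng.random((8, 8, 3))
feats = encoder(image)        # encoding happens once ...
depth = depth_head(feats)     # ... and every head consumes the same features
pose = pose_head(feats)
print(depth.shape, pose.shape)  # (8, 8) (6,)
```

The point of the pattern is that the expensive backbone runs once per input, and lightweight heads specialize on top of a common representation, which is what lets one head's signal (e.g., correspondences) implicitly support another's (e.g., pose).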
Features
- Unified transformer backbone for multiple geometry tasks (depth, pose, tracking, correspondence)
- Modular heads/decoders that share features while specializing per task
- Inference on single images, pairs, or short clips for broader applicability
- Evaluation scripts for standard geometry benchmarks and metrics
- Data pipelines with geometry-preserving augmentations and sampling strategies
- Checkpoints and configs enabling quick reproduction and fine-tuning
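To make "geometry-preserving augmentations" concrete, here is a generic sketch (not code from this repo) of the standard bookkeeping for a horizontal flip: mirroring the image alone would silently invalidate correspondences and camera intrinsics, so the keypoint x-coordinates and the principal point must be mirrored too.

```python
import numpy as np

def hflip_with_geometry(image, keypoints, K):
    """Horizontally flip an image while keeping its geometry consistent.

    image:     (H, W, C) array
    keypoints: (N, 2) array of (x, y) pixel coordinates
    K:         (3, 3) camera intrinsics matrix
    """
    W = image.shape[1]
    flipped = image[:, ::-1]            # mirror the pixel columns
    kps = keypoints.astype(float).copy()
    kps[:, 0] = (W - 1) - kps[:, 0]     # a pixel at x moves to W-1-x
    K_new = K.astype(float).copy()
    K_new[0, 2] = (W - 1) - K[0, 2]     # mirror the principal point cx
    return flipped, kps, K_new

image = np.arange(4 * 6 * 3).reshape(4, 6, 3)
kps = np.array([[0.0, 1.0], [5.0, 3.0]])
K = np.array([[500.0,   0.0, 2.5],
              [  0.0, 500.0, 1.5],
              [  0.0,   0.0, 1.0]])
img_f, kps_f, K_f = hflip_with_geometry(image, kps, K)
print(kps_f[:, 0])  # [5. 0.]
```

After the flip, each transformed keypoint still lands on the same image content it marked before, so depth, pose, and correspondence supervision all remain valid on the augmented sample.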