Meta Eyeful Tower Dataset Processing
In this tutorial, we’ll walk through downloading and processing data from the Eyeful Tower dataset, rebuilding the scene in Metashape, and then using Jawset Postshot to train Gaussian splats for advanced lighting and view-dependent effects. This guide combines state-of-the-art techniques in 3D scene reconstruction and rendering to generate photorealistic output.
While the Eyeful Tower dataset is a fantastic series of captures, it has limitations. This lesson is far from comprehensive and required far more effort to produce than it should have.
The Eyeful Tower dataset, provided by the VR-NeRF team and hosted on the project’s GitHub page, is a high-quality set of images and metadata used for 3D scene reconstruction. This tutorial focuses on processing Gaussian splats, a technique that allows for efficient and detailed volumetric rendering. Gaussian splats represent points in 3D space as 3D ellipsoidal distributions, ideal for creating intricate and dynamic visual effects in post-production.
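To make the “ellipsoidal distribution” description concrete, here is the standard 3D Gaussian splatting formulation as it appears in the literature (general notation, not anything specific to the Eyeful Tower data): each splat is an anisotropic 3D Gaussian whose covariance is factored into a rotation and per-axis scales, and pixels are formed by alpha-blending the projected splats front to back.

```latex
% Contribution of a splat with mean \mu and covariance \Sigma at a point x,
% with the covariance factored into a rotation R and per-axis scales S:
G(\mathbf{x}) = \exp\!\left(-\tfrac{1}{2}(\mathbf{x}-\boldsymbol{\mu})^{\top}\Sigma^{-1}(\mathbf{x}-\boldsymbol{\mu})\right),
\qquad \Sigma = R\,S\,S^{\top}R^{\top}

% Final pixel color from splats sorted front to back,
% each with color c_i and opacity \alpha_i:
C = \sum_{i} c_i\,\alpha_i \prod_{j<i}\bigl(1-\alpha_j\bigr)
```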
We will begin by downloading the dataset from the Eyeful Tower repository, which provides both images and the corresponding camera poses necessary for reconstructing the scene. Using Agisoft Metashape, we’ll reconstruct the camera views provided by Meta and build the 3D model of the scene, ensuring accurate geometry and texture mapping. Afterward, we’ll export the data for further processing in Jawset Postshot, where we’ll train the Gaussian splats that capture lighting and view-dependent material effects.
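As a rough illustration of the Metashape stage, here is a minimal sketch using Metashape Professional’s Python API, assuming the Eyeful Tower images have already been downloaded locally. The paths and matching parameters are placeholders rather than tuned settings, and you should check which camera formats your Postshot version can import before choosing an export.

```python
# Minimal sketch of the alignment step in Agisoft Metashape Professional.
# Assumes the Metashape Python module is available and the Eyeful Tower
# images are already on disk; paths and parameters are illustrative only.
import glob
import Metashape

images_dir = "eyeful_tower/apartment/images"   # hypothetical local path
doc = Metashape.Document()
doc.save("eyeful_tower_apartment.psx")

chunk = doc.addChunk()
chunk.addPhotos(sorted(glob.glob(images_dir + "/*.jpg")))

# Feature matching and camera alignment (sparse reconstruction).
chunk.matchPhotos(downscale=1, generic_preselection=True)
chunk.alignCameras()
doc.save()

# Export the aligned cameras for the splatting stage; verify which formats
# your Postshot version accepts before settling on one.
chunk.exportCameras("cameras.xml")
```

From here the registered images and exported cameras can be brought into Postshot for splat training.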
By the end of this tutorial, you’ll have a deeper understanding of how to reconstruct and process the Eyeful Tower datasets with a focus on Gaussian splat rendering, and how to combine multiple tools to achieve high-quality, production-ready results.
The Eyeful Tower dataset is released under the terms of the license specified on the GitHub repository and is intended for research and academic purposes. If you use the dataset in your work, you are required to cite the following paper:
Xu, L., Agrawal, V., Laney, W., Garcia, T., Bansal, A., Kim, C., Bulò, S. R., Porzi, L., Kontschieder, P., Božič, A., Lin, D., Zollhöfer, M., & Richardt, C. (2023). VR-NeRF: High-Fidelity Virtualized Walkable Spaces. In Proceedings of SIGGRAPH Asia (DOI: 10.1145/3610548.3618139). The full paper can be found at https://vr-nerf.github.io. Proper citation helps support the continued development of the research and ensures the authors are credited for their contributions.
Addendum
The Eyeful Tower dataset’s COLMAP files are supposed to be compatible with Nerfstudio, and there’s a nice post by one of the paper’s authors that you can find here. I’m not sure what the compatibility issue is at the moment. Also, users seem to report that the dataset consistently produces better output as a NeRF than as Gaussian splats. Obviously, more research is needed on my part.
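For anyone who wants to test that NeRF-versus-splats observation themselves, here is a heavily hedged sketch. It only uses the generic Nerfstudio entry points, and the exact dataparser arguments needed for the Eyeful Tower COLMAP files may well differ (which may be where the compatibility issue lives), so treat it as a starting point rather than a verified recipe.

```python
# Hypothetical sketch: run a NeRF model (nerfacto) and a Gaussian-splat model
# (splatfacto) on the same scene folder to compare output quality. Assumes a
# working Nerfstudio install and a scene directory Nerfstudio can parse; the
# path and any dataset-specific dataparser flags are assumptions.
import subprocess

scene = "eyeful_tower/apartment"   # placeholder path

for method in ("nerfacto", "splatfacto"):
    subprocess.run(["ns-train", method, "--data", scene], check=True)
```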
Useful Links
VR-NeRF: High-Fidelity Virtualized Walkable Spaces
EyefulTower Dataset GitHub Repository
AWS CLI – AWS Command Line Interface
Cultural Heritage Imaging | Gear
Meta Horizon Hyperscape Demo on Meta Quest
Sony RX0 II Digital Camera DSC-RX0M2 (RX0 2) B&H Photo
Bullet-time / photogrammetry multi-camera software – Xangle Camera Server