After working with some more photogrammetry, I am convinced that this is the way to go for creating 3D environments for VR. The level of detail you can get from these scans is incredible, limited only by the resolution of the camera used. If the scans are done correctly and some cleanup is performed on them afterwards, they would be very hard to distinguish from real life if placed into a mixed reality environment.
The only problem is the lighting. Game engines use physically based lighting, meaning light reflects off the surfaces of a 3D object's geometry. Since the polycount has to be kept to a minimum for the game to run smoothly, what happens is that you get a very detailed texture on the object (because of the high pixel density of a photo) but very low-detail shadowing and reflections. Alternatives are image-based lighting (HDR) or just flat lighting, but both lack real-time shadows, and the lack of depth in the detail of the geometry becomes apparent when viewed up close or from different angles.
The way to go for now is to use the traditional approach from flat (non-VR) game development: generate normal, cavity and specular maps in ZBrush, using the high-poly models from Agisoft. That's what I did for my scans, and the end results were good enough, but not ideal. Normal maps produce the illusion of more geometry and more detail, and they worked well for traditional 2D displays, where the user could not physically lean in and get really close to objects, but a different approach is needed for VR.
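To make the normal-map idea concrete: a normal map is just a texture that stores a per-texel surface direction as an RGB colour, which the engine uses for shading instead of the real (low-poly) geometry. Below is a minimal NumPy sketch of that encoding, deriving normals from a height field; this is an illustration of the concept, not the actual ZBrush baking process I used.

```python
import numpy as np

def height_to_normal_map(height, strength=1.0):
    """Convert a 2D height field into an 8-bit tangent-space normal map."""
    # Finite-difference gradients approximate the surface slope at each texel.
    dy, dx = np.gradient(height.astype(np.float64))
    # Per-texel normal: (-dx, -dy, 1/strength), then normalised to unit length.
    nz = np.ones_like(height, dtype=np.float64) / strength
    n = np.stack([-dx, -dy, nz], axis=-1)
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    # Remap components from [-1, 1] into the usual [0, 255] RGB encoding.
    return np.rint((n * 0.5 + 0.5) * 255).astype(np.uint8)

# A perfectly flat surface encodes as the familiar uniform "normal-map blue",
# RGB (128, 128, 255), i.e. a normal pointing straight out of the texture.
flat = height_to_normal_map(np.zeros((4, 4)))
```

In a baking workflow the "height" detail comes from ray-casting the high-poly scan against the low-poly retopologised mesh, but the resulting texture is encoded the same way.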
Below are photos of the place I scanned and eventually used as the main set in the narrative of my final degree show project.
And this is one of the test scans I did of my studio space. Both scans were done using one DSLR camera, Agisoft, and ZBrush for cleanup and retopology.