Photogrammetry for VR workflow

 


An overview of the workflow I use for creating realistic 3D assets.

1. Agisoft

Agisoft takes photos and turns them into 3D models. Its algorithm calculates the position each photo was taken from based on common points between the photographs, builds a point cloud from those matches, and then uses that data to construct a 3D mesh and project the pixel data onto it as a texture.

Taking the photos is the most important part of the process. For small objects (things you can walk all the way around) you would normally place the object on a turntable, with the camera on a tripod. Agisoft doesn't give any recommendation on how many photos to take, as it will differ for each scenario; the important part is to shoot with at least 50% overlap between photos. For the body scanning I did (3D body scanning) I took around 200 photographs. Generally, the more photos the better, but taking too many will just make the whole process slow without any gain in quality. For smaller and simpler objects you can get away with 20-30 photos. Also, do not use a wide-angle lens unless you really have to, as the lens distortion will affect the resulting 3D model.
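
If it helps to plan a turntable session, here is a quick back-of-the-envelope calculator. The 10-15 degree step per shot and the number of camera heights are my own rough rule of thumb for staying well above 50% overlap, not an official Agisoft recommendation.

```python
import math

def turntable_shot_count(step_deg=12.0, height_rings=3):
    """Rough estimate of photos needed for a full turntable capture.

    step_deg:     turntable rotation between consecutive shots
                  (10-15 degrees is a common rule of thumb for keeping
                  well over 50% overlap between neighbouring frames).
    height_rings: how many camera heights you shoot a full revolution at.
    """
    shots_per_ring = math.ceil(360.0 / step_deg)
    return shots_per_ring * height_rings

# e.g. 12 degree steps at 3 heights -> 90 photos, which sits in the
# "more is better, but not hundreds" range mentioned above
print(turntable_shot_count(12, 3))
```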

For isolated objects, you will probably need to do masking as well. Unfortunately, Agisoft's masking tools are not automatic, and manually masking 200 photographs with them is impractical. What I did was import all my images as a sequence in After Effects, colour-key the green screen, create an alpha channel from the key, split the frames back out in Photoshop, and import the separate masks into Agisoft. There might be an easier way of doing this – I am just not aware of one at the moment.
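
If you want to skip the After Effects/Photoshop round trip, a simple batch chroma key can also be done in Python. This is only a rough sketch assuming Pillow and NumPy and a reasonably even green screen; the folder names and threshold values are placeholders you would tune per shoot.

```python
# Minimal batch chroma-key sketch: turns green-screen photos into
# black/white mask images that Agisoft can import as masks.
from pathlib import Path

import numpy as np
from PIL import Image

SRC = Path("photos")   # original JPEGs (hypothetical folder name)
DST = Path("masks")    # output masks (white = keep, black = discard)
DST.mkdir(exist_ok=True)

for photo in sorted(SRC.glob("*.jpg")):
    rgb = np.asarray(Image.open(photo).convert("RGB"), dtype=np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]

    # A pixel counts as "green screen" if green clearly dominates red and
    # blue; the 1.3 ratio and brightness floor are guesses to tune.
    is_green = (g > 1.3 * r) & (g > 1.3 * b) & (g > 60)

    mask = np.where(is_green, 0, 255).astype(np.uint8)
    Image.fromarray(mask).save(DST / (photo.stem + "_mask.png"))
```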

Scanning environments is somewhat simpler. This is how Agisoft recommends capturing the photos:

[Image: Agisoft's recommended photo capture pattern]

I found that in small interiors (a few square metres) it is better to stay in a central position with the camera and only move it up and down and slightly left and right, while rotating it to cover all angles. Again, it is up to you how many photos to take; I took about 150 for this scan.

The settings in Agisoft are somewhat misleading. I found that, at times, the highest setting for pairing the photographs gives a worse result than the medium setting. I generally use medium settings for the pairing, high for building the dense point cloud and high for building the mesh. The more advanced settings I leave on their defaults, occasionally trying different values and comparing the results. Once I have a mesh built, I build the texture (usually 10k) and export everything to ZBrush.
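
For anyone who prefers scripting, the same alignment, dense cloud, mesh and texture sequence can be driven through Agisoft's Python API. Treat the sketch below as a rough outline only: the module, function and enum names follow the newer Metashape releases (the older PhotoScan module differs), and the downscale values standing in for "medium" and "high" should be checked against your version's documentation.

```python
# Rough sketch of the alignment -> dense cloud -> mesh -> texture sequence
# using Agisoft's Python API. Names follow recent Metashape releases and
# may differ in older versions; treat the settings as a starting point.
import glob
import Metashape

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(sorted(glob.glob("photos/*.jpg")))

# Pairing/alignment on medium settings (downscale=2 roughly corresponds
# to "Medium" accuracy in the GUI).
chunk.matchPhotos(downscale=2, generic_preselection=True)
chunk.alignCameras()

# Dense cloud and mesh on high settings.
chunk.buildDepthMaps(downscale=2)
chunk.buildDenseCloud()
chunk.buildModel(surface_type=Metashape.Arbitrary,
                 source_data=Metashape.DenseCloudData,
                 face_count=Metashape.HighFaceCount)

# 10k texture, then export the high-poly model for the ZBrush pass.
chunk.buildUV(mapping_mode=Metashape.GenericMapping)
chunk.buildTexture(blending_mode=Metashape.MosaicBlending, texture_size=10240)
chunk.exportModel("scan_highpoly.obj")
doc.save("scan.psx")
```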

2. ZBrush

I use ZBrush for retopology, clean-up and generating maps. If the 3D model is not going to be animated (in which case you would need custom topology), the automated ZRemesher tool will usually give decent results. The highest-poly object you can bring into Unity is 64k triangles, but you can also import a denser object, which Unity will split into parts on import. The environment in my last project was around 80k; all other objects were 50k or less. Altogether I kept my scene under 200k so it can run smoothly. After doing the retopology and reprojecting the detail from the high-poly model, I export the new 3D model back to Agisoft and reproject the texture; that way you get a lighter 3D model with the same 10k texture density. An important thing to remember is not to move the mesh from its original position while working on it outside of Agisoft, otherwise you will not be able to project the texture back. I repeat this process if I need to do more clean-up or cut out/add missing parts. The last step in ZBrush is generating the normal, cavity and specular maps and exporting them.
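
To keep an eye on that triangle budget before everything goes into Unity, a tiny script can tally the exported meshes. The sketch below simply counts face lines in OBJ files (counting n-gons as their fan triangulation) against a 200k target; the folder name and budget value are just examples.

```python
# Quick triangle-budget check across exported OBJ files. Polygons with
# more than three vertices are counted as the fan triangulation Unity
# would produce (n - 2 triangles per n-gon).
from pathlib import Path

SCENE_BUDGET = 200_000  # total triangles I try to stay under for VR

def obj_triangle_count(path):
    tris = 0
    with open(path) as f:
        for line in f:
            if line.startswith("f "):
                verts = len(line.split()) - 1
                tris += max(verts - 2, 0)
    return tris

total = 0
for obj in sorted(Path("exports").glob("*.obj")):
    count = obj_triangle_count(obj)
    total += count
    print(f"{obj.name}: {count} tris")

print(f"scene total: {total} / {SCENE_BUDGET}")
```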

3. Maya

Going through Maya is necessary for doing any animation, for aligning the 3D models, centering the pivots, and getting the right scale. Unity and Maya work well together and any changes in a Maya scene can be automatically updated in Unity.
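
The pivot and transform clean-up can be scripted with maya.cmds as well. A small sketch, assuming it runs from Maya's script editor with the scanned meshes selected; the centimetre unit choice is simply my habit for Unity-bound assets, not a requirement.

```python
# Small Maya clean-up pass for Unity-bound scans: run from Maya's script
# editor with the scanned meshes selected. Centres pivots, freezes
# transforms and deletes construction history.
import maya.cmds as cmds

# Work in centimetres; Maya's FBX export then maps to Unity's default
# import scale (an assumption/habit, adjust to your own pipeline).
cmds.currentUnit(linear="cm")

for obj in cmds.ls(selection=True, long=True):
    cmds.xform(obj, centerPivots=True)                          # pivot to bounding-box centre
    cmds.makeIdentity(obj, apply=True,
                      translate=True, rotate=True, scale=True)  # freeze transforms
    cmds.delete(obj, constructionHistory=True)                  # drop history
```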

4. Unity

In Unity, all that is left to do is set up the shaders. With Unity 5, I mostly use the Standard shader with normal and specular maps. UPDATE: Valve just released a plugin with shaders specifically designed for VR: link. For some 3D models I use a pre-integrated skin shader, as it gives me more functionality. Opting for HDRI lighting such as Skyshop will give much better results, but with the drawback of no real-time shadows. Make sure the textures are all set to 8k in Unity's import settings. Done!

 

 

 
