Welcome to the Deep 4DCVT project. This page provides all the information necessary to perform multi-view stereo reconstruction of subjects captured on acquisition platforms such as Kinovis, together with the tools needed to compile and run the software. For simplicity, we use the Docker container system to satisfy all dependencies and system requirements. The code, written in C++ and Python, can also be pulled with Git.




  1. Prerequisites without GPU computation:

  2. Prerequisites with GPU computation:

  3. Setup

    Create working directory:

    • installDIR="/path/to/folder"
    • mkdir -p $installDIR/Deep4DCVT
    • cd $installDIR/Deep4DCVT

    Recover and unzip setup files:

    • wget
    • tar xvzf Deep4DCVT.tar.gz -C $installDIR/Deep4DCVT/

    Build Docker image:

    • docker build --build-arg gid=$(id -g $(whoami)) --build-arg uid=$(id -u $(whoami)) --build-arg name=$(whoami) -t vleroy/deep4dcvt:gpu .

    Clean directory:

    • rm $installDIR/Deep4DCVT/Deep4DCVT.tar.gz
  4. Execution

    Parameters Settings:

    • ##### Detailed parameters for the reconstruction are contained in configuration file #####
    • mode='S' # see possible modes below.
    • firstFrame=150
    • lastFrame=150
    • # Path to data folder. Must contain 'Calib', 'UndistortedImages' and 'UndistortedSilhouettes' folders. Must also contain config file
    • dataFolder="/data/folder/path/"
    • # Path to Output folder. (can be empty)
    • outputFolder="/output/folder/path/"
    • # Config File containing detailed parameters for the reconstruction in the different modes. This file must be in dataFolder
    • configFileName=""
    • # For the following two parameters, camera ID ('%03i' in this case) must appear before frame number ('%06i' in this case)
    • # Images data format
    • imgDataFormat="UndistortedImages/cam-%03i/Undist_%06i.png"
    • # Silhouettes data format
    • silhDataFormat="UndistortedSilhouettes/cam-%03i/Undist_%06i.png"
    • # Camera ID should correspond between images/silhouettes and calibration
    • camDataFormat="Calib/%03i.txt"
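As a concrete illustration of how these printf-style patterns resolve, the following sketch expands them for camera 3 and frame 150 (hypothetical example values):

```shell
# Illustration only: expand the data format strings for camera 3, frame 150.
camID=3
frame=150
printf "UndistortedImages/cam-%03i/Undist_%06i.png\n" "$camID" "$frame"
# -> UndistortedImages/cam-003/Undist_000150.png
printf "UndistortedSilhouettes/cam-%03i/Undist_%06i.png\n" "$camID" "$frame"
# -> UndistortedSilhouettes/cam-003/Undist_000150.png
printf "Calib/%03i.txt\n" "$camID"
# -> Calib/003.txt
```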

    Prepare Output Folder:
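No explicit preparation commands are given here; a minimal sketch, assuming the RVDs subfolder names used by the different modes (the binary may also create them itself, in which case an empty output folder is sufficient):

```shell
# Sketch (assumption): pre-create the RVDs output subfolders written to by
# the different modes. An empty $outputFolder may be enough if the binary
# creates them on its own.
outputFolder="/output/folder/path/"
mkdir -p "$outputFolder"/RVDs/{Static,StaticReestim,Temp,Dynamic}
```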

    Launch Computations:

    • # From now on, paths declared here are virtual paths inside the container
    • inputImages="/data/$imgDataFormat"
    • inputSilhouettes="/data/$silhDataFormat"
    • inputCalib="/data/$camDataFormat"
    • configFile="/data/$configFileName"
    • command="/4dcvt-installs/deep4dcvtr/bin/Deep_4DCVT -m $mode -i $inputImages -s $inputSilhouettes -p $inputCalib -o /output/ -f $firstFrame -l $lastFrame --config_file $configFile"
    • ##### Launch Script #####
    • # Without GPU usage
    • docker run --rm -v "$dataFolder":/data/ -v "$outputFolder":/output/ vleroy/deep4dcvt:gpu $command
    • # With GPU usage (only useful in 'S' mode with CNN config)
    • docker run --rm --runtime=nvidia -v "$dataFolder":/data/ -v "$outputFolder":/output/ vleroy/deep4dcvt:gpu $command
    • ##### Post Processing (for 'S' mode, to be adapted when used in other modes) #####
    • folder="$outputFolder/RVDs/Static"
    • for f in "$folder"/*; do meshlabserver -i "$f" -o "$f" -s $installDIR/Deep4DCVT/cleaning_script.mlx; done

    Possible modes:

    • 'S': computes per-frame reconstructions, using either Daisy descriptors or a CNN depending on the parameters. Output: RVDs/Static/%06i.obj
    • 'C': takes RVDs/Temp/%06i.obj files as input and outputs cleaned and colored COFF meshes in RVDs/Temp/
    • 'P': computes per-frame reconstructions, using RVDs/Static/%06i.obj as an approximation of the surface. Depth maps are recomputed around it and the output surfaces are written to RVDs/StaticReestim/%06i.obj
    • 'E': extracts surfaces from precomputed PhotoUtils binaries. Can be used to check the effects of different filtering parameters. Output surfaces in RVDs/Static/%06i.obj (overwritten if already existing)
    • 'D': temporal integration using the MESHHOG_Assocs/%03i_%03i.txt precomputed motion fields between adjacent frames. The output surfaces are RVDs/Dynamic/%06i.obj
    • # Motion field file format: every line contains one 3D-point-to-3D-point match, following the MeshHOG output syntax:
    • # (int)match_index (float)x[t] (float)y[t] (float)z[t] (float)x[t+1] (float)y[t+1] (float)z[t+1]
    • # Typically, motion fields were obtained with the MeshHOG MeshMatching tool using (roughly) the following parameters:
    • ./MeshMatching -op_mode M -src_mesh_file source_mesh.obj -src_mesh_desc_file 0 -dst_mesh_file target_mesh.obj -dst_mesh_desc_file 0 -matches_dst_src_file 0 -gui false -save_output_file assocs_out.txt -detector_type 1 -detector_method 0 -detector_thresh 50000 -scale_space true -no_scales 3 -corner_thresh -10 -non_max_sup true -feature_type 0 -feat_uses_det_scale true -no_rings 5 -rings_as_percentage true -no_bins_centroid 36 -no_bins_groups 4 -spatial_influence 3.0 -matching_2nd_ratio 0.8 -groundtruth 2 -save_output_format sum -noise false -noise_sigma_colour 0 -noise_sigma_colour_shotnoise 0 -noise_sigma_geom_noise 0 -noise_sigma_geom_shotnoise 0 -noise_sigma_geom_rotate 0 -noise_sigma_geom_scale 0 -noise_sigma_geom_local_scale 0 -noise_sigma_geom_sampling 0 -noise_sigma_geom_holes 0 -noise_sigma_geom_micro_holes 0 -noise_sigma_geom_topology 0
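The per-pair motion fields needed by 'D' mode can be generated by running MeshMatching over adjacent frames. A hedged sketch (the frame range and folder layout are illustrative assumptions; the remaining detector/matching flags are those of the full command line above):

```shell
# Sketch (assumption): match each static reconstruction against the next
# frame and write the association file named after both frame numbers.
firstFrame=150; lastFrame=155   # example range
for ((t=firstFrame; t<lastFrame; t++)); do
  src=$(printf "RVDs/Static/%06i.obj" "$t")
  dst=$(printf "RVDs/Static/%06i.obj" $((t+1)))
  out=$(printf "MESHHOG_Assocs/%03i_%03i.txt" "$t" $((t+1)))
  # Plus the detector/matching flags from the full command line above.
  ./MeshMatching -op_mode M -src_mesh_file "$src" -dst_mesh_file "$dst" \
      -save_output_file "$out" -gui false
done
```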



Deep4DCVT is distributed under a dual licensing scheme. The code is provided for non-commercial research purposes only. In this case, you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. If you use this code or part of it in a publication, you agree to cite one or both of the papers in the references. If these terms and conditions prevent you from using Deep4DCVT, please consider obtaining a commercial license for a fee.


Vincent Leroy

  • INRIA Grenoble Rhône-Alpes
  • 655, avenue de l’Europe, Montbonnot
  • 38334 Saint Ismier, France
  • Email: