diff --git a/_projects/renders.md b/_projects/renders.md
new file mode 100644
index 0000000..70313bc
--- /dev/null
+++ b/_projects/renders.md
@@ -0,0 +1,180 @@
---
layout: page
title: Renders
description: Video rendering of LiDAR point clouds and 3D images.
img: /assets/img/republique-480.webp
importance: 3
category: thesis
---

I made lots of videos to illustrate my slides during my PhD! You can see
some of them below, along with how I made them.

## Point clouds

To visualize point clouds and render them to video, I used [CloudCompare].
CloudCompare is a versatile tool for visualizing and manipulating 3D
point clouds, and it comes with a large number of plugins. I used it to
create videos like this one:

[CloudCompare]: https://www.cloudcompare.org/

![Video: 3D point cloud of
Rennes.](/assets/vid/1080_x264_crf35.mp4){.figure-img .img-fluid
.rounded .z-depth-1 loop=true}

In this video you can see a point cloud with both shading and color. The
colors represent the LiDAR intensity, using a color map ranging from
purple for low intensities to yellow for high intensities. The shading
allows a better perception of the 3D depth of the point cloud. I used
*Portion de Ciel Visible* (PCV) to compute this shading. Unfortunately,
CloudCompare allows only one color scalar field to be displayed at a
time, so I rendered the frames once with PCV, then a second time with
intensities. I then produced composite images, with the PCV as luminance
and the intensity as chroma, using this script:

```bash
#!/usr/bin/env bash

LUMA_FRAMES=$1
CHROMA_FRAMES=$2
OUT_DIR=$3

CRF=10
SIZE='1920x1080'

mkdir -p "$OUT_DIR"

# Compose luma and chroma frames: take the luminance from the PCV frame
# and the hue/saturation from the intensity frame

for frame in "$LUMA_FRAMES"/*.png; do
    fname=$(basename "$frame")
    echo -en "\rProcessing $fname..."
    if ! convert "$CHROMA_FRAMES/$fname" "$LUMA_FRAMES/$fname" \
            -compose Luminize -composite "$OUT_DIR/$fname"
    then
        echo "Error while compositing $fname"
        exit 2
    fi
done

# Resize and crop frames to the target size

mogrify -path "$OUT_DIR" \
    -alpha off \
    -resize "$SIZE^" \
    -gravity Center \
    -extent "$SIZE" \
    "$OUT_DIR"/*

# Encode video

ffmpeg -r 50 \
    -i "$OUT_DIR/frame_000%03d.png" \
    -c:v libx264 \
    -crf $CRF \
    -preset veryslow \
    -pix_fmt yuv420p \
    x264_crf${CRF}.mp4
```

## Voxels

I tried a lot of different software to visualize and render voxels, but
nothing really convinced me, so I brought out the big guns and went back
to basics: for the voxel renders I used [Blender] with the Cycles
renderer.

[Blender]: https://www.blender.org/

![Video: 3D image of the same dataset.](/assets/vid/parlement_c20.mp4){.figure-img
.img-fluid .rounded .z-depth-1 loop=true}

Here you can see a voxelization of the previous point cloud. The colors
represent the LiDAR intensity, using the same color map ranging from
purple for low intensities to yellow for high intensities. The voxels
are lit by a virtual sun using realistic ray tracing and a logarithmic
exposure response. The colors appear a little washed out, as they would
on a real camera under similar lighting conditions.

Nothing really provides support for opening voxels in Blender, so I
wrote this Python script to load voxel files from my [Idefix Python
package][idefix] directly into Blender. This script is a quick draft and
would need a good refactoring, but hey! It works!
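For reference, the `voxels.npz` file the script loads is nothing fancy: just an array of integer voxel coordinates and an array of per-voxel LiDAR intensities saved with NumPy. A minimal sketch of such a file (the array names match the script; the values here are made up):

```python
import numpy as np

# Hypothetical toy data: three voxels on an integer grid, each with a
# LiDAR intensity value.
coords = np.array([[0, 0, 0], [1, 0, 0], [0, 2, 1]])
intensity = np.array([12.5, 840.0, 97.3])

# Save under the keys the Blender script expects.
np.savez('voxels.npz', coords=coords, intensity=intensity)
```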

```python
import bpy
import numpy as np
from matplotlib import pyplot as plt

data_in = 'voxels.npz'
data = np.load(data_in)

coords = data['coords']

# Normalize intensities to 0-255, clipping the 1% outliers
vmin, vmax = np.quantile(data['intensity'], (0.01, 0.99))
colors = ((np.clip(data['intensity'], vmin, vmax)
           - vmin)
          / (vmax - vmin)
          * 255).astype(int)


def gen_cmap(mesh, name='viridis'):
    """Create one material per color of the colormap `name`."""
    cm = plt.get_cmap(name)
    for i in range(256):
        mat = bpy.data.materials.new(name)
        mat.diffuse_color = cm(i)
        mesh.materials.append(mat)


def gen_voxels(coords, colors):
    # Vertices and faces of a unit cube
    vertices_base = np.array(((0, 0, 0), (0, 1, 0), (1, 1, 0), (1, 0, 0),
                              (0, 0, 1), (0, 1, 1), (1, 1, 1), (1, 0, 1)))
    faces_base = np.array([(0, 1, 2, 3), (7, 6, 5, 4), (7, 4, 0, 3),
                           (6, 7, 3, 2), (5, 6, 2, 1), (4, 5, 1, 0)])

    vxl_count = coords.shape[0]

    # Build one cube (8 vertices, 6 faces) per voxel
    vertices = (coords[None].repeat(8, axis=0).swapaxes(1, 0)
                + vertices_base).reshape(-1, 3)
    faces = (faces_base[None].repeat(vxl_count, axis=0)
             + (np.arange(vxl_count)
                * 8)[None].repeat(6, axis=0).T[..., None]).reshape(-1, 4)
    colors = colors.repeat(6)

    new_mesh = bpy.data.meshes.new('vxl_mesh')
    new_mesh.from_pydata(vertices.tolist(), [], faces.tolist())
    new_mesh.update()

    # Make an object from the mesh
    new_object = bpy.data.objects.new('vxl_object', new_mesh)

    # Make a collection and add the object to it
    new_collection = bpy.data.collections.new('vxl_scene')
    bpy.context.scene.collection.children.link(new_collection)
    new_collection.objects.link(new_object)

    # Set the colors, one material index per face
    gen_cmap(new_mesh)
    new_object.data.polygons.foreach_set('material_index', colors)


gen_voxels(coords, colors)
```

![](/assets/img/republique.png){.img-fluid .rounded .z-depth-1}

[idefix]: https://github.com/fguiotte/idefix/
diff --git a/_projects/sap.md b/_projects/sap.md
index 811dc06..787540f 100644
--- a/_projects/sap.md
+++ b/_projects/sap.md
@@ -8,7 +8,7 @@ category: thesis
 ---
 
 SAP (for Simple Attribute Profiles) is a Python package to easily
-compute attribute profiles of images. I have developed this package as
+compute *attribute profiles* of images. I have developed this package as
 part of my PhD thesis.
 
 The source code is available on [github][git]. I used this project to
diff --git a/_projects/spectra.md b/_projects/spectra.md
index e6c18d1..7007247 100644
--- a/_projects/spectra.md
+++ b/_projects/spectra.md
@@ -2,7 +2,7 @@
 layout: page
 title: Spectra
 description: Application using the morphological hierarchies and LiDAR data.
-img: /assets/img/spectra.png
+img: /assets/img/spectra-480.webp
 importance: 1
 category: thesis
 ---
diff --git a/assets/img/republique.png b/assets/img/republique.png
new file mode 100644
index 0000000..1497b21
Binary files /dev/null and b/assets/img/republique.png differ
diff --git a/assets/vid/1080_x264_crf35.mp4 b/assets/vid/1080_x264_crf35.mp4
new file mode 100644
index 0000000..06ce932
Binary files /dev/null and b/assets/vid/1080_x264_crf35.mp4 differ
diff --git a/assets/vid/3D_spectrum.mp4 b/assets/vid/3D_spectrum.mp4
new file mode 100644
index 0000000..5260460
Binary files /dev/null and b/assets/vid/3D_spectrum.mp4 differ
diff --git a/assets/vid/parlement_c20.mp4 b/assets/vid/parlement_c20.mp4
new file mode 100644
index 0000000..965e664
Binary files /dev/null and b/assets/vid/parlement_c20.mp4 differ