Add renders with videos

This commit is contained in:
Florent Guiotte 2023-02-14 15:57:50 +02:00
parent c0718c6a66
commit 2c5ec3957b
7 changed files with 182 additions and 2 deletions

_projects/renders.md Normal file

@@ -0,0 +1,180 @@
---
layout: page
title: Renders
description: Video rendering of LiDAR point clouds and 3D images.
img: /assets/img/republique-480.webp
importance: 3
category: thesis
---
I made lots of videos to illustrate my slides during my PhD! You can see
some of them below, and how I made them.
## Point clouds
To visualize point clouds and render videos of them I used
[CloudCompare]. CloudCompare is a very versatile tool for visualizing
and manipulating 3D point clouds, and it comes with a large number of
plugins. I used it to create videos like this one:
[CloudCompare]: https://www.cloudcompare.org/
![Video: 3D point cloud of
Rennes.](/assets/vid/1080_x264_crf35.mp4){.figure-img .img-fluid
.rounded .z-depth-1 loop=true}
In this video you can see a point cloud with shading and color. The
colors represent the LiDAR intensity, using a color map ranging from
purple for low intensities to yellow for high intensities. The shading
allows a better perception of the 3D depth of the point cloud. I used
*Portion de Ciel Visible* (PCV, "portion of visible sky") to compute
this shading. Unfortunately, CloudCompare allows only one color scalar
field to be displayed at a time, so I rendered the frames with PCV, then
a second time with the intensities. I then produced composite images
with the PCV as luminance and the intensity as chroma with this script:
```bash
#!/usr/bin/env bash
LUMA_FRAMES=$1
CHROMA_FRAMES=$2
OUT_DIR=$3
CRF=10
SIZE='1920x1080'

mkdir -p "$OUT_DIR"

# Compose luma and chroma frames
for frame in "$LUMA_FRAMES"/*.png; do
    fname=$(basename "$frame")
    echo -en "\rProcessing $fname..."
    if ! montage "$LUMA_FRAMES/$fname" "$CHROMA_FRAMES/$fname" \
         -geometry +25+0 "$OUT_DIR/$fname"
    then
        echo "Error while compositing $fname" >&2
        exit 2
    fi
done

# Resize and crop the frames to the target size
mogrify -path "$OUT_DIR" \
    -alpha off \
    -resize "$SIZE^" \
    -gravity Center \
    -extent "$SIZE" \
    "$OUT_DIR"/*.png

# Encode the frames into an H.264 video
ffmpeg -r 50 \
    -i "$OUT_DIR/frame_000%03d.png" \
    -c:v libx264 \
    -crf "$CRF" \
    -preset veryslow \
    -pix_fmt yuv420p \
    "x264_crf${CRF}.mp4"
```
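One way to merge two such sets of frames per pixel, with the PCV as luminance and the intensity as chroma, is to work in the CIE Lab color space. Here is a hypothetical sketch (not the script I used for the videos), assuming scikit-image is available and that both frames are RGB float images of the same size:

```python
import numpy as np
from skimage import color


def compose_luma_chroma(luma_rgb, chroma_rgb):
    """Merge two RGB frames: lightness from the first, color from the second.

    Both inputs are float arrays in [0, 1] with shape (H, W, 3).
    """
    luma_lab = color.rgb2lab(luma_rgb)
    chroma_lab = color.rgb2lab(chroma_rgb)
    # Keep the a*/b* (chroma) channels, replace L* with the PCV shading
    chroma_lab[..., 0] = luma_lab[..., 0]
    return color.lab2rgb(chroma_lab)


# Example with synthetic frames
shading = np.random.rand(108, 192, 3)
intensity = np.random.rand(108, 192, 3)
frame = compose_luma_chroma(shading, intensity)
```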
## Voxels
I tried a lot of different software to visualise and render voxels, but
nothing really convinced me. I brought out the big guns and went back to
basics!
For the rendering of the voxels I used [Blender] with the Cycles
renderer.
[Blender]: https://www.blender.org/
![Video: 3D image of the same dataset.](/assets/vid/parlement_c20.mp4){.figure-img
.img-fluid .rounded .z-depth-1 loop=true}
Here you can see a voxelization of the previous point cloud. The colors
represent the LiDAR intensity, using the same color map ranging from
purple for low intensities to yellow for high intensities. The colors
are shaded by a virtual sun, using realistic ray tracing and a
logarithmic exposure sensitivity. The colors appear a little washed out,
as they would on a real camera under similar lighting conditions.
Nothing really provides support for opening voxels in Blender, so I
wrote this Python script to open voxel files from my [Idefix Python
package][idefix] directly in Blender. This script is a quick draft and
would need a good refactoring, but hey! It works!
```python
import bpy
import numpy as np
from matplotlib import pyplot as plt

data_in = 'voxels.npz'
data = np.load(data_in)
coords = data['coords']

# Normalize the intensities to [0, 255], clipping the 1% outliers
vmin, vmax = np.quantile(data['intensity'], (0.01, 0.99))
colors = ((np.clip(data['intensity'], vmin, vmax) - vmin)
          / (vmax - vmin) * 255).astype(int)


def gen_cmap(mesh, name='viridis'):
    """Create one material per color of the colormap."""
    cm = plt.get_cmap(name)
    for i in range(256):
        mat = bpy.data.materials.new(name)
        mat.diffuse_color = cm(i)
        mesh.materials.append(mat)


def gen_voxels(coords, colors):
    # Make the mesh: 8 vertices and 6 faces per voxel
    vertices_base = np.array(((0, 0, 0), (0, 1, 0), (1, 1, 0), (1, 0, 0),
                              (0, 0, 1), (0, 1, 1), (1, 1, 1), (1, 0, 1)))
    faces_base = np.array([(0, 1, 2, 3), (7, 6, 5, 4), (7, 4, 0, 3),
                           (6, 7, 3, 2), (5, 6, 2, 1), (4, 5, 1, 0)])
    vxl_count = coords.shape[0]
    vertices = (coords[None].repeat(8, axis=0).swapaxes(1, 0)
                + vertices_base).reshape(-1, 3)
    faces = (faces_base[None].repeat(vxl_count, axis=0)
             + (np.arange(vxl_count)
                * 8)[None].repeat(6, axis=0).T[..., None]).reshape(-1, 4)
    colors = colors.repeat(6)

    new_mesh = bpy.data.meshes.new('vxl_mesh')
    new_mesh.from_pydata(vertices, [], faces.tolist())
    new_mesh.update()

    # Make an object from the mesh
    new_object = bpy.data.objects.new('vxl_object', new_mesh)

    # Make a collection and add the object to the scene through it
    new_collection = bpy.data.collections.new('vxl_scene')
    bpy.context.scene.collection.children.link(new_collection)
    new_collection.objects.link(new_object)

    # Set one material index per face according to the intensity
    gen_cmap(new_mesh)
    new_object.data.polygons.foreach_set('material_index', colors)


gen_voxels(coords, colors)
```
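The script above expects a `voxels.npz` file with `coords` and `intensity` arrays. My voxel grids came from [Idefix][idefix], but as an illustration, here is a hypothetical NumPy-only sketch of how such a file could be produced from a point cloud (the `voxelize` helper and the 0.5 m resolution are assumptions, not Idefix code):

```python
import numpy as np


def voxelize(points, intensity, resolution=0.5):
    """Quantize a point cloud to a voxel grid.

    points: (N, 3) array of coordinates, intensity: (N,) array.
    Returns integer voxel coordinates and the mean intensity per voxel.
    """
    # Integer grid coordinates of each point
    grid = np.floor((points - points.min(axis=0)) / resolution).astype(int)
    # One entry per occupied voxel
    coords, inverse = np.unique(grid, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    # Average the intensities of the points falling in each voxel
    sums = np.bincount(inverse, weights=intensity)
    counts = np.bincount(inverse)
    return coords, sums / counts


points = np.random.rand(1000, 3) * 10
intensity = np.random.rand(1000)
coords, vxl_intensity = voxelize(points, intensity)
np.savez('voxels.npz', coords=coords, intensity=vxl_intensity)
```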
![](/assets/img/republique.png){.img-fluid .rounded .z-depth-1}
[idefix]: https://github.com/fguiotte/idefix/


@@ -8,7 +8,7 @@ category: thesis
---
SAP (for Simple Attribute Profiles) is a Python package to easily
-compute attribute profiles of images. I have developed this package as
+compute *attribute profiles* of images. I have developed this package as
part of my PhD thesis.
The source code is available on [github][git]. I used this project to


@@ -2,7 +2,7 @@
layout: page
title: Spectra
description: Application using the morphological hierarchies and LiDAR data.
-img: /assets/img/spectra.png
+img: /assets/img/spectra-480.webp
importance: 1
category: thesis
---

BIN
assets/img/republique.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 2.9 MiB


BIN
assets/vid/3D_spectrum.mp4 Normal file

