---
layout: page
title: Renders
description: Video rendering of LiDAR point clouds and 3D images.
img: /assets/img/republique-480.webp
importance: 3
category: thesis
---
I made lots of videos to illustrate my slides during my PhD! You can see some of them below, along with how I made them.
## Point clouds
To visualize and render point cloud videos I used CloudCompare. CloudCompare is a very versatile tool for visualizing and manipulating 3D point clouds, and it comes with a large number of plugins. I used it to create videos like this one:
*(looping video of the point cloud render)*
In this video you can see a point cloud with shading and color. The colors represent the LiDAR intensity, using a color map ranging from purple for low intensities to yellow for high intensities. The shading allows a better perception of the 3D depth of the point cloud; I used the Portion de Ciel Visible (PCV) plugin to compute it. Unfortunately, CloudCompare allows only one color scalar field to be displayed at a time, so I rendered the frames once with PCV, then a second time with intensities. I then produced composite images with the PCV as luminance and the intensity as chroma using this script:
```bash
#!/usr/bin/env bash
LUMA_FRAMES=$1
CHROMA_FRAMES=$2
OUT_DIR=$3
CRF=10
SIZE='1920x1080'

mkdir -p "$OUT_DIR"

# Compose luma and chroma frames
for frame in "$LUMA_FRAMES"/*.png; do
    fname=$(basename "$frame")
    echo -en "\rProcessing $fname..."
    if ! montage "$LUMA_FRAMES/$fname" "$CHROMA_FRAMES/$fname" \
            -geometry +25+0 "$OUT_DIR/$fname"
    then
        echo "Error while composing $fname"
        exit 2
    fi
done
echo

# Resize and crop frames to the target resolution
mogrify -path "$OUT_DIR" \
    -alpha off \
    -resize "$SIZE^" \
    -gravity Center \
    -extent "$SIZE" \
    "$OUT_DIR"/*.png

# Encode the frames into a video
ffmpeg -r 50 \
    -i "$OUT_DIR/frame_000%3d.png" \
    -c:v libx264 \
    -crf $CRF \
    -preset veryslow \
    -pix_fmt yuv420p \
    "x264_crf${CRF}.mp4"
```
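The per-pixel luma/chroma merge described above can also be sketched in Python with Pillow and NumPy. The YCbCr round-trip here is one possible way to implement it, my assumption rather than the exact pipeline used for these frames:

```python
import numpy as np
from PIL import Image

def compose_luma_chroma(luma_img, chroma_img):
    """Take luminance (Y) from the PCV render and the chroma (Cb, Cr)
    channels from the intensity render, and merge them into one frame."""
    y, _, _ = luma_img.convert('YCbCr').split()
    _, cb, cr = chroma_img.convert('YCbCr').split()
    return Image.merge('YCbCr', (y, cb, cr)).convert('RGB')

# Tiny synthetic stand-ins for one PCV frame (grayscale shading ramp)
# and one intensity frame (a flat dark purple, low end of the color map)
pcv = Image.fromarray(np.tile(np.arange(256, dtype=np.uint8), (16, 1)),
                      'L').convert('RGB')
intensity = Image.new('RGB', (256, 16), (68, 1, 84))
frame = compose_luma_chroma(pcv, intensity)
```

In a real run this function would be applied to each pair of frames before the resize and encode steps.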
## Voxels
I tried a lot of different software to visualize and render voxels, but nothing really convinced me. So I brought out the big guns and went back to basics: for the voxel renders I used Blender with the Cycles renderer.
*(looping video of the voxel render)*
Here you can see a voxelization of the previous point cloud. The colors represent the LiDAR intensity, using the same color map ranging from purple for low intensities to yellow for high intensities. The voxels are shaded by a virtual sun using realistic ray tracing and a logarithmic exposure response. The colors appear a little washed out, as they would on a real camera under similar lighting conditions.
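The voxelization step itself is not shown here. As a rough sketch of the idea (the grid size and the per-voxel mean aggregation are my assumptions, not necessarily what Idefix does), voxel coordinates can be obtained by snapping each point to a grid and averaging the intensities that fall in each cell:

```python
import numpy as np

def voxelize(points, intensities, voxel_size=0.5):
    """Snap points to an integer grid and average intensity per voxel."""
    grid = np.floor(points / voxel_size).astype(int)
    # Unique voxel coordinates, plus the voxel index of every point
    coords, inverse = np.unique(grid, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    # Mean intensity per voxel
    sums = np.bincount(inverse, weights=intensities)
    counts = np.bincount(inverse)
    return coords, sums / counts

points = np.array([[0.1, 0.2, 0.0],
                   [0.3, 0.1, 0.2],   # lands in the same voxel as point 0
                   [1.4, 0.0, 0.0]])  # lands in a different voxel
coords, intensity = voxelize(points, np.array([10.0, 20.0, 30.0]))
```

The resulting `coords` and per-voxel intensities are what the Blender script below expects as input.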
Nothing really provides support for opening voxels in Blender, so I wrote this Python script to load voxel files from my Idefix Python package directly into Blender. This script is a quick draft and would need a good refactoring, but hey, it works!
```python
import bpy
import numpy as np
from matplotlib import pyplot as plt

data_in = 'voxels.npz'
data = np.load(data_in)
coords = data['coords']

# Normalize intensities to [0, 255], clipping 1% outliers on each side
vmin, vmax = np.quantile(data['intensity'], (0.01, 0.99))
colors = ((np.clip(data['intensity'], vmin, vmax) - vmin)
          / (vmax - vmin) * 255).astype(int)


def gen_cmap(mesh, name='viridis'):
    """Create one material per color map entry and attach them to the mesh."""
    cm = plt.get_cmap(name)
    for i in range(256):
        mat = bpy.data.materials.new(name)
        mat.diffuse_color = cm(i)  # RGBA
        # mat.specular_color = cm(i)
        mesh.materials.append(mat)


def gen_voxels(coords, colors):
    # Unit cube template: 8 vertices and 6 quad faces
    vertices_base = np.array(((0, 0, 0), (0, 1, 0), (1, 1, 0), (1, 0, 0),
                              (0, 0, 1), (0, 1, 1), (1, 1, 1), (1, 0, 1)))
    faces_base = np.array([(0, 1, 2, 3), (7, 6, 5, 4), (7, 4, 0, 3),
                           (6, 7, 3, 2), (5, 6, 2, 1), (4, 5, 1, 0)])
    vxl_count = coords.shape[0]
    # One translated copy of the cube vertices per voxel
    vertices = (coords[None].repeat(8, axis=0).swapaxes(1, 0)
                + vertices_base).reshape(-1, 3)
    # Offset each cube's face indices by the index of its first vertex
    faces = (faces_base[None].repeat(vxl_count, axis=0)
             + (np.arange(vxl_count)
                * 8)[None].repeat(6, axis=0).T[..., None]).reshape(-1, 4)
    colors = colors.repeat(6)  # one material index per face
    new_mesh = bpy.data.meshes.new('vxl_mesh')
    new_mesh.from_pydata(vertices, [], faces.tolist())
    new_mesh.update()
    # Make an object from the mesh
    new_object = bpy.data.objects.new('vxl_object', new_mesh)
    # Make a collection and link it to the scene
    new_collection = bpy.data.collections.new('vxl_scene')
    bpy.context.scene.collection.children.link(new_collection)
    # Add the object to the collection
    new_collection.objects.link(new_object)
    # Color each face from the normalized intensities
    gen_cmap(new_mesh)
    new_object.data.polygons.foreach_set('material_index', colors)


gen_voxels(coords, colors)
```
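The index arithmetic in `gen_voxels` is the fiddly part. Stripped of the `bpy` calls, the same cube-instancing construction can be checked on its own; this is just a simplified restatement of the geometry above (using plain broadcasting instead of `repeat`/`swapaxes`), not extra functionality:

```python
import numpy as np

# Unit cube template: 8 vertices, 6 quad faces (same as in the script)
vertices_base = np.array(((0, 0, 0), (0, 1, 0), (1, 1, 0), (1, 0, 0),
                          (0, 0, 1), (0, 1, 1), (1, 1, 1), (1, 0, 1)))
faces_base = np.array([(0, 1, 2, 3), (7, 6, 5, 4), (7, 4, 0, 3),
                       (6, 7, 3, 2), (5, 6, 2, 1), (4, 5, 1, 0)])

coords = np.array([[0, 0, 0], [5, 0, 0]])  # two example voxels
n = coords.shape[0]

# One translated cube per voxel, and face indices shifted by 8 per cube
vertices = (coords[:, None, :] + vertices_base[None]).reshape(-1, 3)
faces = (faces_base[None] + 8 * np.arange(n)[:, None, None]).reshape(-1, 4)
```

Each voxel contributes 8 vertices and 6 faces, so the second cube's faces reference vertices 8 through 15.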