Piero V.

RealSense D400 and infrared streams

A while ago, I started working on a dataset I captured a few years ago with a Microsoft Kinect One.

I immediately realized the data looked much cleaner than in the newer datasets I created with my Intel RealSense D435.

I had already noticed that, above a certain distance, the depth data was full of craters. I knew the error grows with the square of the distance, but for me, it was much bigger than expected. Therefore, I calibrated the sensors, and now I stay closer to my targets during acquisitions.
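
This quadratic growth comes from stereo triangulation. With focal length $f$ (in pixels), baseline $B$, and disparity $d$, the depth is $Z = fB/d$; differentiating with respect to $d$ shows that a constant matching error $\Delta d$ produces a depth error of

$$\Delta Z \approx \frac{Z^2}{fB}\,\Delta d.$$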

But for the last dataset I captured, I tried another strategy: I decided to also save the raw IR footage, to process it offline.

Stereo vision

RealSense cameras are RGBD sensors: they simultaneously provide a color (RGB) and a depth (D) stream.

There are several techniques to measure depth. For example, the original Kinect for the Xbox 360 used “structured light”, and the Kinect One included a time-of-flight camera.

The RealSense D400 series is based on stereo vision, which works by matching the same point in frames captured by two different cameras. There is a relation between the displacement of this point between the two frames (the disparity), the relative position of the two cameras, and the depth. … [Read the rest]
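
As a minimal sketch of this relation (the calibration values below are placeholders, not the D435’s actual parameters):

    import numpy as np

    # Placeholder calibration; the real values come from the camera.
    fx = 640.0       # focal length, in pixels
    baseline = 0.05  # distance between the two imagers, in meters

    def disparity_to_depth(disparity):
        """Convert a disparity map (pixels) into a depth map (meters)."""
        disparity = np.asarray(disparity, dtype=np.float64)
        depth = np.zeros_like(disparity)
        valid = disparity > 0  # zero disparity means no match was found
        depth[valid] = fx * baseline / disparity[valid]
        return depth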

My RGBD toy box

In the last few months, in my free time, I have been developing a small application to process RGBD datasets of people. In particular, my goal is to create 3D scans of heads. I do not expect these models to be well-made or even usable without some processing, but I wonder whether I can at least turn this data into starter models.

I first started working with RGBD cameras during my internship at Altair. At the time, I built a small model-reconstruction pipeline based on Kinect Fusion.

Sadly, Kinect Fusion is an online method. The advantage is that it gives immediate feedback if it loses the camera tracking. But the disadvantage is that it needs a pretty powerful GPU. I have one in my desktop, but I captured all my datasets using laptops.

Also, in my experience, getting a usable dataset with Kinect Fusion takes time and several attempts (the acquisition must proceed very slowly), which, generally speaking, is not always compatible with… people 😄️. They might move, lose patience, and so on… … [Read the rest]

Constructive Solid Geometry in JS

From time to time, I like playing with 3D graphics, especially from a programmer’s point of view.

Constructive solid geometry, or CSG, is a modeling technique that combines simpler objects with Boolean operations to obtain more complex ones.

A while ago, I found a JavaScript implementation: CSG.js.

It really is brilliant: it is a complete tool in less than 600 lines, half of which are explanatory comments.

Its foundations are two operations: clipping and inversion. The former removes the parts of a BSP tree that lie inside another BSP tree; the latter swaps solid and empty space.

CSG.js combines them cleverly to implement Boolean union, subtraction, and intersection.
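
To make this composition concrete, here is a toy Python sketch of my own (CSG.js works on BSP trees of polygons; this stand-in uses 1-D boolean occupancy masks instead, but the sequence of operations mirrors CSG.js’s union):

    import numpy as np

    # Toy stand-in for BSP solids: 1-D boolean occupancy masks.
    class Solid:
        def __init__(self, mask):
            self.mask = np.asarray(mask, dtype=bool)

        def invert(self):
            # Swap solid and empty space.
            self.mask = ~self.mask

        def clip_to(self, other):
            # Remove the parts of this solid inside the other one.
            self.mask &= ~other.mask

    def union(a, b):
        # Same sequence of operations as CSG.js's union
        # (CSG.js clones its inputs; this sketch mutates them).
        a.clip_to(b)
        b.clip_to(a)
        b.invert()
        b.clip_to(a)
        b.invert()
        return Solid(a.mask | b.mask)

    a = Solid([1, 1, 1, 0, 0])
    b = Solid([0, 0, 1, 1, 1])
    print(union(a, b).mask)  # -> [ True  True  True  True  True]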

Even from a software engineering perspective, its author made some astute choices. As a result, this project has not needed an update in the last ten years!

For example, library users can implement vertices with custom properties thanks to duck typing. They just need to implement a few methods and have a pos member. In this way, it is possible, for instance, to manage texture coordinates. … [Read the rest]
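
A Python-style transliteration of such a custom vertex could look like this (CSG.js itself is JavaScript; if I recall its documentation correctly, clone(), flip(), and interpolate() are the required methods):

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class TexturedVertex:
        pos: np.ndarray  # the member the library reads
        uv: np.ndarray   # custom property: texture coordinates

        def clone(self):
            return TexturedVertex(self.pos.copy(), self.uv.copy())

        def flip(self):
            # Called when a polygon is inverted; nothing to do here
            # (a vertex carrying normals would negate them instead).
            pass

        def interpolate(self, other, t):
            # Called when a plane splits a polygon: the new vertex is a
            # linear blend, so the UVs are carried through automatically.
            return TexturedVertex(self.pos + (other.pos - self.pos) * t,
                                  self.uv + (other.uv - self.uv) * t)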

PyElas

Recently, I started experimenting with stereo vision.

It is a technique to produce depth maps from images captured from nearby viewpoints. Then, with these maps, it is possible to create 3D representations.

The core of this workflow is the matching algorithm, which takes pairs of post-processed images and creates a “disparity” map. The disparity is the distance between the positions of the same point in the two images. Depth is inversely proportional to disparity, so it is easy to switch from one to the other.

OpenCV contains some stereo matching algorithms, but in my tests they produced a lot of noise. So I looked for another library, and I found libelas.
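
For reference, the OpenCV route looks something like this (hypothetical file names; the images must already be rectified, and the parameters are only a starting point to tune):

    import cv2
    import numpy as np

    # Rectified grayscale stereo pair.
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    block = 5
    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=128,    # search range, must be a multiple of 16
        blockSize=block,
        P1=8 * block * block,  # smoothness penalties
        P2=32 * block * block,
    )
    # compute() returns fixed-point values scaled by 16.
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0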

Libelas is a GPLv3 C++/MATLAB library with many parameters to tune, but I could not find a Python version. My options were to switch to C++ or to port it myself. I chose the latter, hoping that others can benefit from it too 🙂️.

Long story short, I published my first package on PyPI: PyElas.

How to use it

You can install it using pip. Then you just have to do this: … [Read the rest]

On acquiring 3D models of people

For my Master’s degree thesis, I dove into acquiring static objects with an RGBD sensor. Eventually, I decided to use the Kinect Fusion algorithm, which produced decent results.

I found this topic fascinating, so I continued on my own, but in another direction: acquiring people. So, in the last few months, I have been experimenting with scanning myself and my friends with a Microsoft Kinect One.

Influenced by my previous results, I initially tried point clouds.

Deformation graph approach

One approach I found in several papers consists of:

  1. acquiring only a few scans (from 6-8 points of view), with the person staying as still as they can;
  2. performing a rough global alignment;
  3. running ICP to improve the rigid alignment locally;
  4. downsampling the point cloud to build a deformation graph;
  5. solving an optimization problem;
  6. deforming the denser point clouds.

Reaching step 3 is not trivial because people move. ICP can be very unforgiving, and in some cases, you also need some luck to obtain good results at this stage.
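
As an illustration of step 3, a minimal Open3D sketch could be (hypothetical file names; the correspondence threshold depends on the scale of the scans):

    import numpy as np
    import open3d as o3d

    # Two hypothetical scans; `init` would come from the rough global
    # alignment of step 2.
    source = o3d.io.read_point_cloud("scan_front.ply")
    target = o3d.io.read_point_cloud("scan_side.ply")
    for pcd in (source, target):
        pcd.estimate_normals()  # needed by point-to-plane ICP

    init = np.identity(4)  # replace with the rough global alignment
    result = o3d.pipelines.registration.registration_icp(
        source, target,
        max_correspondence_distance=0.02,  # meters
        init=init,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane(),
    )
    source.transform(result.transformation)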

For the downsampling (step 4), I used Open3D’s voxelization, followed by averaging the point coordinates. I do not know how this influences the final results compared to something like a clustering algorithm. … [Read the rest]
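
For reference, Open3D bundles the voxelization and the averaging in a single call (hypothetical file name; the voxel size is an assumed value that sets the resolution of the graph):

    import open3d as o3d

    cloud = o3d.io.read_point_cloud("aligned_scans.ply")
    # voxel_down_sample() replaces the points in each voxel with their
    # average; the survivors become the nodes of the deformation graph.
    nodes = cloud.voxel_down_sample(voxel_size=0.05)  # meters, assumed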