Cognite Reveal: Cannot handle Large Point Clouds

  • 29 March 2023
  • 5 replies
  • 99 views

Userlevel 3

Cognite Reveal

We are being told that our models need to be downsampled, which improves performance in CDF but hurts quality in the viewer. We have consistently found that we cannot keep one large point cloud together in the viewer, and the viewing quality is poor compared to what laser scanning vendors' own solutions show. I would like some attention on this from Product.

Why should we downsample and decrease the density of the point cloud? That does not appear to be the right thing to do.

Why should we chop up the 3D model into pieces?


5 replies

Userlevel 2

Hi Ibrahim, and thanks for reaching out.

As a leader in point cloud visualization, it's crucial for us to take your request seriously. I'd like to clarify a few points to avoid any misunderstandings:

  • While CDF doesn't require models to be downsampled, we do recommend cleaning them for visualization purposes. This means removing bad data points, such as duplicates or areas of very high density (e.g. laser scans with a million points per cubic centimeter in certain regions). Artifacts in the source models can often be cleaned up during post-processing as part of point cloud acquisition (see the first sketch after this list).

  • Splitting models into multiple regions has both pros and cons. Cognite's viewers have a limited point budget, so splitting models into smaller areas makes the point budget more “focused” (the second sketch below shows how this budget is configured). However, this can also cost context, as users will only see a small region at a time. CDF currently recommends splitting point clouds above 20 billion points into multiple regions. Doing so also lets the pre-processor run with settings tuned for specific regions.
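
In practice this kind of cleanup is usually done in the scanning vendor's post-processing software, but the idea is easy to sketch. Below is a minimal, hypothetical voxel filter in TypeScript (it is not part of CDF or Reveal): it keeps at most one point per cell of a given size, which removes exact duplicates and caps density in over-scanned regions while leaving sparse areas untouched.

```ts
// Hypothetical stand-alone helper, not part of CDF or Reveal.
type Point = { x: number; y: number; z: number };

// Keep at most one point per voxel of `voxelSize` metres. Duplicates and
// ultra-dense patches collapse to a single representative point.
function voxelFilter(points: Point[], voxelSize: number): Point[] {
  const seen = new Set<string>();
  const kept: Point[] = [];
  for (const p of points) {
    // Integer voxel coordinates act as a hash key for the cell.
    const key = [
      Math.floor(p.x / voxelSize),
      Math.floor(p.y / voxelSize),
      Math.floor(p.z / voxelSize),
    ].join(',');
    if (!seen.has(key)) {
      seen.add(key);
      kept.push(p); // first point wins; averaging per cell would also work
    }
  }
  return kept;
}

// Example: collapse anything denser than one point per 5 mm.
const raw: Point[] = [
  { x: 0.0, y: 0.0, z: 0.0 },
  { x: 0.001, y: 0.0, z: 0.0 }, // near-duplicate in the same 5 mm voxel
  { x: 1.0, y: 2.0, z: 3.0 },
];
console.log(`${raw.length} -> ${voxelFilter(raw, 0.005).length} points`);
```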
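And as a rough illustration of the point budget, here is a sketch against the @cognite/reveal viewer API. The project name, token, and model/revision IDs are placeholders, and exact names such as pointCloudBudget and addPointCloudModel can vary between Reveal versions, so treat this as a sketch rather than copy-paste code.

```ts
import { CogniteClient } from '@cognite/sdk';
import { Cognite3DViewer } from '@cognite/reveal';

async function setUpViewer(): Promise<void> {
  // Placeholder credentials and IDs: substitute your own project values.
  const sdk = new CogniteClient({
    appId: 'point-cloud-demo',
    project: 'my-project',
    getToken: async () => 'ACCESS_TOKEN',
  });
  const viewer = new Cognite3DViewer({
    sdk,
    domElement: document.getElementById('viewer-container')!,
  });

  // The point budget caps how many points Reveal keeps loaded at once.
  // With the site split into several models, the budget concentrates on
  // whichever region the camera is actually looking at.
  viewer.pointCloudBudget = { numberOfPoints: 3_000_000 };

  // Each region uploaded as its own model/revision in CDF.
  await viewer.addPointCloudModel({ modelId: 1111, revisionId: 2222 });
  await viewer.addPointCloudModel({ modelId: 3333, revisionId: 4444 });
}

setUpViewer().catch(console.error);
```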

We understand that even with state-of-the-art technology, visualization may not always be satisfactory. As we strive to stay at the forefront of the industry, we are investing heavily in improving our processing pipeline. We are currently working on out-of-the-box point filtering mechanisms to clean up common cases of "noise" in source models, and we are also exploring ways to push the limit on the number of points that can be visualized.

Userlevel 3

Please take a look at the attached screenshots:

  1. The right picture is downsampled.
  2. The left picture is full resolution.

This clearly does not work well.

Userlevel 3

Are you using the off-the-shelf Potree SDK?

Userlevel 3

@Lars Moastuen, please let me know if you have had a chance to look at the comment above.

Userlevel 2

Hi Ibrahim. We are basing the visualization on Potree, but have made several customizations, some in the frontend and some in the backend.

As for the specific example: points are loaded dynamically based on the camera position, and in some cases this heuristic doesn't make the optimal choice, as in the image you shared. In general, details such as tags can't be expected to be readable in point clouds, and we recommend pairing point clouds with 360 images for the best fidelity.
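
For reference, pairing a point cloud with 360 images can be scripted against the viewer. The sketch below reuses a configured Cognite3DViewer (`viewer`) like the one further up this thread, and assumes the 360 images have been ingested as CDF events tagged with a site_id; the filter value is a placeholder, and add360ImageSet availability depends on the Reveal version.

```ts
// Sketch only, inside an async context; `viewer` is a Cognite3DViewer
// configured as shown earlier in this thread.
// Assumes the 360 images were ingested as CDF events carrying a
// `site_id` metadata field; 'my-site' is a placeholder value.
const collection = await viewer.add360ImageSet('events', { site_id: 'my-site' });

// Entering one of the loaded 360 images gives photographic fidelity where
// the point cloud alone is too sparse to read small details such as tags.
console.log('Loaded 360 image collection:', collection);
```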

Note that we are looking into the ability to let users control fidelity settings to alleviate some of the situations you are referring to.
