CMSC

The Dangers of Polygonization

Published: Thursday, November 17, 2016 - 16:17

The advancement of dimensional scanning technology has made it possible to capture massive amounts of point cloud data quickly. These data are used for analysis and reverse-engineering functions. Generally, once the data are collected, they are polygonized into a mesh, which allows for easier analysis and noise reduction. However, polygonization can alter the true representation of the data, and the alteration can be significant if you are trying to understand the true characteristics of the part being measured. This article discusses some of the pitfalls that come with polygonization of scanned data.

Introduction

Because data analysis can be labor-intensive, collection of substantial amounts of data via a scanner was originally reserved for reverse engineering. Scanning a small, noncomplex part can easily produce a point cloud with tens of thousands of points, and the point cloud grows rapidly as the size and complexity of the part increase. Often this makes the point cloud too cumbersome to work with in a meaningful way. However, with advancements in computing hardware and software, these large point clouds are becoming manageable. Today, point clouds captured by scanners are used for as-built verification of parts, predictive assembly, and (especially in the aerospace industry) predictive shimming.

The common practice when collecting data from a scanner is to turn the resulting point cloud into a polygonized mesh. This process was developed for reverse engineering, as a polygonized mesh can be used to create a non-uniform rational basis spline (NURBS) computer-aided design (CAD) model. However, creating a polygonized mesh for analysis of parts can lead to erroneous results due to the nature of polygonization. This article will explain the nature of polygonization and discuss a phenomenon that was discovered during a common measurement and data-analysis investigation.

Polygonization

The generation of polygonal meshes for a three-dimensional representation of complex geometric objects has become a preferred approach to visualize and measure large data sets. Some of the main advantages of using a mesh are the ability to sort outlier data out of the original point cloud and to remove overlapping data. The technique also allows for a more realistic representation of the measured data by creating a normalized surface with, in most cases, graphical shading to aid viewer recognition and appreciation of the surface.

There are many differing approaches to creating a polygonal mesh. Most of the differences are based upon algorithmic decisions regarding aspects such as processing speed and point-value optimization. Because scanning devices and point-processing tools vary widely, this article does not aim to compare these processes. Rather, it focuses on a single (undisclosed) software and scanning system, looking past the mathematical and algorithmic dependencies involved in surface scanning and mesh creation. This allows a concentration on the differences that can arise, from the user's point of view, in going from the point phase to a polygonal mesh.

In simplest terms, polygonization is the creation of a polygonal mesh from point cloud data—the joining of points with known coordinate values to create triangles, and thus planar facets, from point to point. Mesh generation from these triangles is achieved by joining three kinds of elements: vertices, edges, and faces. These elements are interpreted together by their connectivity, or topological relationship to one another, and by the mesh's intrinsic connectivity. Because meshes are usually large and complex, multiple operations must be performed on the mesh to create compact data structures. Thus, a mesh may become an interpretation of a point cloud dataset constrained by the settings used to generate the surface and the algorithmic functions employed by the software.
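The vertex/edge/face structure described above can be sketched in a few lines. This is a minimal illustration, not any particular scanning package's data structure: vertices are coordinate triples, faces are index triples, and the edge set (the topological connectivity) is derived from the faces.

```python
# Minimal sketch of a triangle mesh as vertices, edges, and faces.
# All names are illustrative, not taken from any specific software.

vertices = [
    (0.0, 0.0, 0.0),  # index 0
    (1.0, 0.0, 0.0),  # index 1
    (0.0, 1.0, 0.0),  # index 2
    (1.0, 1.0, 0.0),  # index 3
]

# Faces are triples of vertex indices; connectivity is implicit in the indices.
faces = [(0, 1, 2), (1, 3, 2)]

def edges_of(faces):
    """Derive the unique edge set (topological connectivity) from the faces."""
    edges = set()
    for a, b, c in faces:
        for u, v in ((a, b), (b, c), (c, a)):
            edges.add((min(u, v), max(u, v)))
    return sorted(edges)

print(edges_of(faces))  # five edges; the shared edge (1, 2) is stored once
```

Note that the shared edge between the two triangles appears only once in the edge set: this shared connectivity is what distinguishes a mesh from a bag of independent triangles.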

According to Botsch,1 “Representing a given (real or virtual) surface geometry by a polygonal mesh is usually an approximation process. Hence there is no unique polygonal 3D model, but the density and distribution of sample points and the specific way how these samples are connected by triangles provide many degrees of freedom.”

As seen in figure 1, the quality of a mesh depends on the intended use of the output. A mesh's geometric characteristics can be controlled through settings such as edge length, outlier removal, and smoothing of the data. In this way, the user may affect the measured data by overconstraining a result and oversimplifying the polygonal mesh output. Mesh smoothing itself reduces noise in scanned surfaces by applying generalized signal-processing techniques to irregular or overly complex sections of the triangle mesh, blending the triangles in question together and changing the surface contour.
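The smoothing effect described above can be illustrated with a hedged one-dimensional sketch: a single pass of Laplacian-style smoothing on a cross-section profile, assuming uniform weights. Real mesh smoothers operate on full 3D vertex neighborhoods, but the principle is the same: each value is pulled toward the average of its neighbors, which reduces noise and also changes the contour.

```python
# Sketch of one pass of uniform-weight Laplacian smoothing on a 1D profile.
# The values and the damping factor lam are illustrative assumptions.

def laplacian_smooth(values, lam=0.5):
    """Move each interior value toward the average of its two neighbors."""
    out = list(values)
    for i in range(1, len(values) - 1):
        avg = (values[i - 1] + values[i + 1]) / 2.0
        out[i] = values[i] + lam * (avg - values[i])
    return out

noisy = [0.0, 0.4, -0.3, 0.5, -0.2, 0.3, 0.0]
smoothed = laplacian_smooth(noisy)
# the jagged excursions shrink, changing the represented contour
```

Running several passes shrinks the excursions further, which is exactly why aggressive smoothing can hide real surface variation.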

For this project, the data were collected as raw xyz point data and imported as a simple ASCII file. Because this format carries no surface information, a normal had to be assigned to each coordinate (based on a nominal model), and multiple meshes were generated using a variety of settings, which are compared below.
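The import-and-orient step can be sketched as follows. This is a simplified illustration under stated assumptions: the file format is one "x y z" triple per line, and the nominal model is stood in for by a flat plane (so every assigned normal is +Z), which is not the actual CAD surface used in the study.

```python
# Sketch: import raw xyz points from a simple ASCII file and assign each
# point a normal from a "nominal model" (here, an assumed flat plane).
import io

ascii_data = """0.0 0.0 0.01
1.0 0.0 -0.02
2.0 0.0 0.00"""

def load_xyz(stream):
    """Parse one 'x y z' triple per line into float tuples."""
    return [tuple(float(v) for v in line.split()) for line in stream if line.strip()]

def nominal_normal(point):
    """Assumed nominal surface is a flat plane, so every normal is +Z."""
    return (0.0, 0.0, 1.0)

points = load_xyz(io.StringIO(ascii_data))
oriented = [(p, nominal_normal(p)) for p in points]
```

In practice the normal lookup would query the nominal CAD model at each point's closest surface location; the flat-plane stand-in just shows where that assignment happens in the pipeline.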


Figure 1: The original object: a) bunny; b) bunny representation after phase 1; c) bunny representation after phase 2.2

Discovered issue

We were tasked with measuring a part and creating a report on its final installation state for end-item acceptance. Because the part in question is measured at the manufacturer in a locked state similar to its final state, large deviations were not expected. However, because this is a critical part and the final state is a completed assembly, an end-item survey is required. The main concerns for the part were contour deviation, cross-sectional deviation, and a floating deviation over portions of the cross section. After the data were collected and the analysis performed, the comparison with the report from the vendor showed massive differences in the floating deviations (as seen in figure 2).


Figure 2: Floating deviation differentiation from supplier to receiver.

Figure 2 shows that the plotted curve for the vendor data is smooth and the transition from one direction to another is more gradual. The jagged curve from the assembled data also causes more portions of the curve to be out of tolerance in comparison to the vendor data. This was an unexpected result, as the contour for each data set was similar and did not show any massive variations. The vendor was not required to provide cross-sectional data, so no comparison was made at that time with what they had.

Root cause analysis

Once the discrepancy was discovered, we brainstormed and created a list of possible factors. The initial thought was that the scanner being used on the assembled part was creating more noise in the data than the scanner used by the vendor, even though the accuracy of each scanner was within 0.002 in. To help determine whether this was the issue, we measured a section of the same part in the same setup with each instrument and then performed contour, cross-section, and floating-deviation analyses on each measurement set using the same software. Figures 3, 4, and 5 are derived from the results of this analysis and show that the comparison of the instruments did not reveal variations large enough to cause the differences observed between the vendor data and the assembled data. However, the data did show that when we performed the analysis, the floating-deviation results from both scanners were jagged, resembling the analysis of the data collected in the final assembled state.
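The instrument-to-instrument check described above amounts to asking, for each point of one scan, how far it sits from the closest point of the other scan, and whether that distance stays within the instruments' stated accuracy. A brute-force sketch of that comparison (with small made-up clouds; real clouds would use a spatial index rather than an exhaustive search):

```python
# Sketch of comparing two scans of the same section: for each point of
# scan A, find the distance to the nearest point of scan B. The point
# values here are illustrative, not the project's measurement data.
import math

def nearest_distance(p, cloud):
    """Brute-force nearest-neighbor distance from point p to a cloud."""
    return min(math.dist(p, q) for q in cloud)

scan_a = [(0.0, 0.0, 0.000), (1.0, 0.0, 0.001), (2.0, 0.0, -0.001)]
scan_b = [(0.0, 0.0, 0.001), (1.0, 0.0, -0.001), (2.0, 0.0, 0.000)]

deviations = [nearest_distance(p, scan_b) for p in scan_a]
max_dev = max(deviations)
# here the two clouds agree to within the 0.002 in. instrument accuracy
```

If the maximum deviation between the two instruments' clouds stays within the combined instrument accuracy, scanner noise can be ruled out as the source of a much larger discrepancy, which is the conclusion the figures support.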


Figure 3: Contour deviation analysis of vendor and assembled data.


Figure 4: Cross-sectional deviation of vendor and assembled data.


Figure 5: Floating deviation of vendor and assembled data.

Because the instruments were producing common results similar to the analysis of the data collected for the assembled part, it seemed clear that something was vastly different with the analysis routine for the assembled parts. To try to determine the issue, we looked into the process the vendor was performing. The vendor sent us a copy of the complete job folder. We looked at the alignment techniques and analysis routines for the vendor measurement and found no differences there. However, we did discover one major issue between the data sets: The vendor was polygonizing the point cloud captured by their scanning system, whereas we were using the raw point-cloud data.

As discussed above, polygonization changes the data, and post-processing performed on the polygonized data, such as smoothing, can change the data set substantially more. This is why we were not polygonizing the assembled-part data: We were trying to capture the true representation of the data. With this discovery, we went back and polygonized the data we had collected in our testing and found that this process changed the results of the floating-deviation analysis dramatically. The floating deviation was now closer to what had been provided by the vendor in that it was smoother (as seen in figure 6).
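A hedged toy example shows why the polygonized results could look so much better than the raw ones. Here a simple moving average stands in for mesh smoothing, and the deviation values and tolerance band are invented for illustration: excursions that the raw data report as out of tolerance are pulled back inside the band by smoothing.

```python
# Sketch: smoothing (standing in for polygonization post-processing) can
# pull jagged deviation excursions back inside a tolerance band.
# TOL and the raw profile are hypothetical values for illustration.

TOL = 0.3  # hypothetical tolerance band

raw = [0.0, 0.35, -0.32, 0.4, -0.1, 0.31, 0.0]

def moving_average(values):
    """Three-point moving average over interior samples."""
    out = list(values)
    for i in range(1, len(values) - 1):
        out[i] = (values[i - 1] + values[i] + values[i + 1]) / 3.0
    return out

smoothed = moving_average(raw)
raw_oot = sum(abs(v) > TOL for v in raw)       # out-of-tolerance count, raw
smooth_oot = sum(abs(v) > TOL for v in smoothed)
```

The smoothed profile reports no out-of-tolerance points even though the raw data contain several, which mirrors the smoother, more compliant vendor curves in figure 6.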


Figure 6: Floating deviation polygonized data of vendor and assembled data.

Conclusion

Polygonization is not a bad thing and has its place in data analysis. This article is not intended to discredit polygonization as an analysis tool, but to provide insight into some of the factors that need to be considered when analyzing point clouds. Much of the software currently used to collect and post-process scan data steers the user toward creating polygonized meshes from the collected data. This may suit the intention of the measurements, but if you are looking for the true representation of, or variations in, the data, then polygonization may not be the appropriate post-processing method.

Figure 7 shows the difference between a point cloud that was simply polygonized and one that had post-processing techniques applied to it. If you use polygonization, you need to be cognizant of the effects that polygonization post-processing techniques and parameters can have on the data set. They can drastically change the data set and therefore change the data being represented in your report.


Figure 7: Topological differences between smoothed and unsmoothed datasets.

References

1 Botsch, M., Pauly, M., Kobbelt, L., Alliez, P., Levy, B., Bischoff, S., and Rossl, C., “Geometric Modeling Based on Polygonal Meshes,” SIGGRAPH course, 2007.
2 Boissonnat, J.-D., and Cazals, F., “Smooth Surface Reconstruction via Natural Neighbour Interpolation of Distance Functions,” Proc. 16th Annual Symposium on Computational Geometry, pp. 223–232, ACM Press, 2000.


About The Authors


Ben Rennison

Ben Rennison has been actively involved in metrology for more than eight years focusing on reverse engineering practices using structured light scanning and photogrammetry. His past research as faculty at Clemson University included developing pathways for entry into the metrology field and measuring the associated learning outcomes in order to become a professional metrologist. Rennison’s research interests include SDK programming for worker assistance, self-built structured light scanning methods, and texture-mapped heritage documentation.


Chris Greer

Chris Greer is the lead engineer and technical lead engineer (TLE) for the Boeing Research and Technology Measurement Technology group in Charleston, SC. Greer holds a BS in computer science with a minor in mathematics and an MS in information technology. He is currently working on his doctorate in computer science, with a completion date of spring 2019; his areas of emphasis are computer vision and image processing.