Stuart Robson, et al

Developing the Light-Controlled Factory

Published: Wednesday, November 15, 2017 - 14:45

The light-controlled factory (LCF) is a UK development project running for five years, from July 2013 to July 2018. It is directed by the University of Bath and supported by University College London (UCL) and Loughborough University. This ambitious project aims to demonstrate a “ubiquitous” seven-dimensional (7D) measurement environment across the factory space and integrated with the production and assembly processes. UCL’s contribution to this will be a multi-camera system which can track and align multiple manufactured components across this large manufacturing environment.

Multi-camera systems already have commercial application, for example at lower accuracies as motion-capture systems for entertainment industries, and at metrology accuracies in automotive design applications. The challenge for UCL is to make metrology-level tracking work over wide spaces using systems which, when commercialized, will be affordable. Vision metrology, based on multi-camera, real-time photogrammetry, and ideally using low-cost cameras, is taken here as the starting point. Challenges addressed include optimized camera calibration and the effects of refraction on measurement lines of sight.

To deliver a comprehensive overview of the LCF project, this article also looks briefly at the work of the other partners. Lead partner Bath University is evaluating how the object itself deforms under the influence of the environment and gravitational loading, and how that can be compensated. Loughborough University is investigating scanning systems for local part identification, location, and pose. The Loughborough and UCL solutions must interact to ensure complete coverage of part movement and assembly across the factory space.


The LCF project is intended to develop networks of light-based measurement systems which enable increased automation in manufacturing and provide flexibility to evolve and adapt to changing demands.

The project’s work is divided into five themes:
1. Measurement-assisted assembly integrated with processing technology. Technology integration coupled with actuation for correcting dimensional errors.
2. The spatial uncertainty of complex objects due to thermal and gravitational effects. Model the resultant object distortions to establish a reference object shape.
3. A ubiquitous 7D measurement system environment for the entire factory space. An integrated system for tracking, positioning, and assembling tools and parts.
4. A technology demonstrator with integrated experimental validation. Test and verify themes 1–3 and develop a route to theme 5.
5. An operating LCF network. Promote the project’s results to the widest possible industry and academic audience.

On a practical level, this development should provide metrology solutions which will increase production capacity and drive down costs. From a technological perspective, adopting the project’s direction and output would connect and embed metrology at different accuracy levels into all aspects of production and assembly processes. This is the LCF of the future.

In its five-year lifespan, this project will address many issues, not all of which are covered in this article. Examples of tasks not described in detail here include:
• Machine control by laser tracker.1 Poster presentation of a FARO tracker used to compensate errors in a 3-axis machine tool.
• Optimized laser tracker networks.2 A process for positioning laser trackers in a network for maximum accuracy.
• High-accuracy reference lengths.3 Spheres connected as scale bars, and artefacts with interferometrically measured separations.
• Errors in hole scanning. Detection and mitigation of edge errors due to surface scanning.4

For the project overview, this paper will briefly present the following topics:
• Optimized camera calibration
• Refraction correction
• Scanning for object ID
• Six degree of freedom (6DoF) “throw and catch”
• The deforming object
• Large-volume robotics demonstrator

The article will conclude with comments on public impact. There is an opportunity to achieve this partly through a separate UCL project to develop an online knowledge base and industry contact point for 3D metrology. This aims to be a self-sustaining resource and so would provide a continuing “shop window” for the LCF after the project concludes.

Optimized camera calibration (UCL)

The LCF will require a network of cameras operating across a wide factory space. Well-calibrated cameras give the best performance and monochromatic illumination enables the use of a camera model tuned to a particular frequency of light.5

Figure 1: Variation of focal length with color and chromatic aberration

The focal length of a lens depends on color, and the different focal lengths at the blue and red extremes of the spectrum give rise to chromatic aberration in an image, and hence to errors in locating image positions in color images.

Figure 2: Variation of principal distance with wavelength (color) of light

One of the critical parameters of a camera is its principal distance (PD). This is the distance between the lens perspective center and the image focal plane, and it is roughly equivalent to the focal length at infinity focus.

Figure 3: Distortion introduced by use of the wrong principal distance

Real cameras are modeled as pinhole cameras, and the PD is critical in determining how measurement rays from image positions of target features are projected back out into the measurement space. Here they are typically intersected with corresponding rays from other camera locations to determine the 3D locations of the target points.

As figure 3 shows, a shorter PD will cause the measurement rays to be spread more widely in angle, hence leading to distortions in the object and the need for accurate parameters.
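The back-projection and intersection described above can be sketched numerically. The following is a minimal illustration of a pinhole model with an invented two-camera geometry and a midpoint-method ray intersection, not the LCF system's actual algorithm:

```python
import numpy as np

def back_project(image_xy, principal_distance):
    """Turn a 2D image point into a unit ray leaving the
    perspective center of a pinhole camera."""
    ray = np.array([image_xy[0], image_xy[1], principal_distance])
    return ray / np.linalg.norm(ray)

def triangulate(c0, r0, c1, r1):
    """Midpoint of the closest approach between two rays starting
    at camera centers c0, c1 with unit directions r0, r1."""
    a = np.array([[r0 @ r0, -r0 @ r1],
                  [r0 @ r1, -r1 @ r1]])
    b = np.array([(c1 - c0) @ r0, (c1 - c0) @ r1])
    t0, t1 = np.linalg.solve(a, b)
    return 0.5 * ((c0 + t0 * r0) + (c1 + t1 * r1))

# Invented geometry: two cameras 2 m apart, target 5 m away (units: m)
c0, c1 = np.array([0.0, 0.0, 0.0]), np.array([2.0, 0.0, 0.0])
target = np.array([1.0, 0.5, 5.0])
pd = 0.05  # 50 mm principal distance

# Ideal image points from perspective projection onto each sensor
xy0 = pd * (target - c0)[:2] / (target - c0)[2]
xy1 = pd * (target - c1)[:2] / (target - c1)[2]

good = triangulate(c0, back_project(xy0, pd), c1, back_project(xy1, pd))
# Re-intersect the same image points using a PD that is 1% too short
bad = triangulate(c0, back_project(xy0, 0.99 * pd), c1, back_project(xy1, 0.99 * pd))
print(good.round(4).tolist())  # [1.0, 0.5, 5.0]
print(bad.round(4).tolist())   # [1.0, 0.5, 4.95]
```

With the correct PD, both rays reconstruct the target exactly; underestimating the PD by 1% in this geometry spreads the rays more widely and pulls the intersected point about 50 mm closer at 5 m range, illustrating why accurate calibration parameters matter.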

Refraction correction (UCL)

In optically based 3D measurement, it is assumed that light travels in straight lines. However, if there is a variation in refractive index, most typically caused by a variation in temperature in the workspace, then the measurement lines of sight will bend and 3D errors will result. These may not be significant over short distances (e.g., 5m–10m) but may well be significant over longer ranges which are still typical of large objects and the extended factory spaces considered by the LCF project (e.g., 15m–30m).

With MathCAD as an analysis tool, LUMINAR, a now-completed partner project, gave UCL the chance to simulate refraction errors and explore ways of mitigating them. The LUMINAR work is currently being taken forward into the LCF and is being further explored with a partner in the accelerator alignment community that has similar requirements.6

To put refraction in an aerospace perspective, Palmateer7 quotes the following for vertical temperature gradients typical of assembly halls at Boeing:
• In relatively small areas, refraction errors are not significant.
• For full aircraft measurement, effects can be significant, e.g., 0.26 mm.

The UCL analysis offers some specific figures for vertical temperature gradients:
• A linear vertical temperature gradient of 0.5° C per m will cause a target deflection of around 200 μm on a 30 m horizontal line (10x more).
• A linear vertical temperature gradient of 1.5° C per m will cause a target deflection of around 175 μm at 15 m horizontally and 6 m vertically.

A solution to eliminate the error, based on a dispersometer, has been successfully evaluated in the past for geodetic measurements. Geodesy demands very high accuracies over long ranges, and by measuring the angular pointing difference to a target using blue and red light, the error angle itself can be determined. This angular difference is due to dispersion, i.e., the difference in refraction of light at different wavelengths. For the typical blue and red wavelengths used at the time, the dispersion angle is some 42 times smaller than the error angle. Huiser and Gächter8 at Wild Leitz (now part of Hexagon Manufacturing Intelligence) successfully built a working dispersometer for refraction correction at ranges of 100 m.

However, dispersion measurement in photogrammetry is challenging. In the earlier example of a 1.5° C per m temperature gradient, dispersion at the target corresponds to around 5 μm, which is a small value to detect with a camera approximately 16 m away. However, dispersion measurement has the advantage that only the dispersion is measured at the instrument, with no need to determine the environmental state along the line.
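The magnitudes involved can be checked with a back-of-envelope ray-bending model: for a constant transverse refractive-index gradient dn/dy, the lateral deflection over a path of length L is approximately ½·|dn/dy|·L², and for air |dn/dT| is roughly 1×10⁻⁶ per °C. These are textbook approximations, not values taken from the LCF analysis:

```python
# Back-of-envelope refraction deflection (assumed constants, not LCF's model)
DN_DT = 1.0e-6  # |dn/dT| for air near room conditions, per deg C (approx.)

def deflection_um(grad_c_per_m, path_m):
    """Lateral ray deflection (micrometres) for a constant vertical
    temperature gradient acting over a path of length path_m."""
    dn_dy = DN_DT * grad_c_per_m  # refractive-index gradient per metre
    return 0.5 * dn_dy * path_m ** 2 * 1e6

print(round(deflection_um(0.5, 30)))  # 225: close to the ~200 um quoted
print(round(deflection_um(1.5, 15)))  # 169: close to the ~175 um quoted
print(round(deflection_um(1.5, 15) / 42, 1))  # dispersion is ~42x smaller
```

The estimates land within tens of micrometres of the figures quoted above, and dividing by the factor of 42 shows why the dispersion signal a camera must resolve is only a few micrometres.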

Figure 4: Refraction analysis for a 5 m diameter tunnel with approximately 2° C temperature difference between center and wall

Figure 4 shows the MathCAD analysis tool applied to a particle accelerator tunnel of some 5 m diameter, with an approximate 2° C temperature difference between tunnel center and wall. Over a 30 m long laser tracker line there would be a 3D error of approximately 175 μm. If the thermal state of the environment could be captured in sufficient detail, then refraction bending could be calculated along the line and a correction applied. This is still under investigation.

Scanning for object ID (Loughborough)

Object recognition and localization is a fundamental problem in automated assembly. In the absence of markers on the object of interest, algorithms must utilize object features. Here these features are extracted from a dense 3D point cloud of the scene, which is generated by a fringe projection (FP) scanner whose measurement resolution can approach 100 μm.

Current technology identifies interest points (“key-points”) in the 3D point clouds of the measured scene, and an accurately scanned model of the object of interest.9 These are generally points of high curvature on the surface of the object. Each key-point is coded by a descriptor that uniquely describes the distribution of the neighborhood points. A random search method matches key-point descriptors between the scene and model. From this match, the corresponding location and pose of the object is determined. Refinement is possible, although the standard algorithms used can be time-consuming.
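Descriptor matching of this kind can be sketched as follows. This is a generic nearest-neighbor search with a Lowe-style ratio test on random stand-in descriptors, not the random search method or any specific descriptor from the cited work:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in descriptors: 20 model key-points, 33-D vectors (FPFH-like length)
model = rng.normal(size=(20, 33))
# Scene: noisy copies of the first 5 model key-points plus 10 clutter points
scene = np.vstack([model[:5] + 0.05 * rng.normal(size=(5, 33)),
                   rng.normal(size=(10, 33))])

def match(scene_desc, model_desc, ratio=0.8):
    """Nearest-neighbor matching with a ratio test: accept a match
    only when the best distance clearly beats the second best."""
    pairs = []
    for i, d in enumerate(scene_desc):
        dist = np.linalg.norm(model_desc - d, axis=1)
        best, second = np.partition(dist, 1)[:2]
        if best < ratio * second:
            pairs.append((i, int(np.argmin(dist))))
    return pairs

matches = match(scene, model)
# The five planted correspondences (0,0)...(4,4) should all be recovered,
# while most clutter points fail the ratio test
print(all((i, i) in matches for i in range(5)))  # True
```

From such putative correspondences, the object's location and pose can then be estimated and refined.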

Unfortunately, key-point descriptors are not readily found in manufactured components due to the frequent presence of areas of smoothly varying curvature. When found, descriptors are likely to be unreliable because FP scanner errors are highest in high-curvature regions.12 The Loughborough team has therefore used an alternative technique based on surfaces and curves that are abundant in typical manufactured objects, in combination with a new algorithm based on maximum likelihood estimation for both rough pose estimation and pose refinement.10

Figure 5: Detected and segmented surfaces and curves of a CRT monitor

Figure 6: 3D surface (l) and curve model (r) obtained by scanning multiple views of the objects

Reference models are derived from point clouds by grouping surface normals to create surfaces, as seen in the left-hand side of figure 5, and principal directions of curvature to create curves, as seen in the right-hand side of figure 5. From multiple views these can be merged into full models, as seen in figure 6. Multiple surface patches in the models are then matched to scene patches using a probabilistic technique. The required object location and pose is the one which maximizes the likelihood of the matching process.
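The "maximize the likelihood" step can be illustrated with a drastically simplified, translation-only version of the idea: score candidate poses by the Gaussian likelihood of the scene-to-model residuals and keep the best. The data and search grid here are invented; the cited method handles full pose and real segmented patches:

```python
import numpy as np

rng = np.random.default_rng(1)

# Model: points on a flat patch; scene: the same patch shifted, plus noise
model = np.column_stack([rng.uniform(0, 1, 50), rng.uniform(0, 1, 50),
                         np.zeros(50)])
true_shift = np.array([0.30, -0.10, 0.0])
scene = model + true_shift + 0.002 * rng.normal(size=model.shape)

def log_likelihood(shift, sigma=0.002):
    """Gaussian log-likelihood of the scene points given a candidate
    translation of the model (rotation omitted for brevity)."""
    resid = scene - (model + shift)
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

# Coarse grid search over candidate shifts; the maximum-likelihood
# estimate should land on the grid cell nearest the true offset
candidates = [(x, y) for x in np.arange(0.0, 0.5, 0.05)
              for y in np.arange(-0.3, 0.2, 0.05)]
best = max(candidates,
           key=lambda c: log_likelihood(np.array([c[0], c[1], 0.0])))
print((round(best[0], 2), round(best[1], 2)))  # (0.3, -0.1)
```

A coarse search of this kind gives the rough pose; refinement then continues from the best-scoring candidate.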

Figure 7: Pose error estimation map of CRT monitor

Figure 7 shows the results of matching a CRT monitor model with a cluttered scene. The blue areas show a good fit where the monitor is located and a bad fit elsewhere, as expected. In this case the mean alignment error on the object of interest was 0.6 mm.

6DoF “throw and catch” (UCL and Loughborough)

Multiple types of measurement system are expected to operate in the future LCF, and they will need to communicate and exchange information on a robot's, tool's, or part's current 6DoF as it moves through the factory space. For example, a local surface scanner may hand over an object's current 6DoF and associated uncertainty values to a robot, which presents the object to a vision metrology system, which in turn hands it on to a laser tracking system.

This is a critical element of the LCF project, as all the measuring systems must be integrated into a seamless network if the “ubiquitous” 7D measurement environment is to be realized.

Figure 8: Localization data exchange (6DoF "throw and catch") between systems as a part is processed

There are a number of challenges to be resolved here. One is to reconcile the continuous 6DoF tracking of camera systems with the intermittent analysis provided by FP scanners. Another is due to the different targeting techniques used by different systems, e.g., reflective discs for photogrammetry, retroreflectors for laser trackers, and natural features for surface scanners. FP scanners can, of course, also use photogrammetric-style markers, which is one possible resolution. In fact, there is potential to fix objects within frames to which markers are attached.

This concept can be extended to obtain dense point clouds of large objects. FP scanners could capture part of the object and frame as a dense point cloud and locate the object relative to the frame. Single scanners in multiple positions or multiple scanners in fixed or multiple positions could create more complete object scans.
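The hand-over amounts to passing a pose estimate, together with its uncertainty, from one system to the next. A minimal sketch of such an exchange record follows; the names, frames, and tolerance are invented for illustration, not taken from the project:

```python
from dataclasses import dataclass

import numpy as np

@dataclass
class Pose6DoF:
    """A part's pose handed between measuring systems: position (m),
    orientation (roll/pitch/yaw, rad), and a 6x6 covariance expressing
    the producing system's uncertainty."""
    frame: str             # common factory coordinate frame
    position: np.ndarray   # shape (3,)
    rpy: np.ndarray        # shape (3,)
    covariance: np.ndarray # shape (6, 6)
    source: str            # which system produced the estimate

def accept_handover(pose: Pose6DoF, max_sigma_m: float = 1e-3) -> bool:
    """Receiving system checks the positional 1-sigma uncertainty
    (metres) before trusting the incoming estimate."""
    pos_sigma = np.sqrt(np.diag(pose.covariance)[:3])
    return bool(np.all(pos_sigma < max_sigma_m))

# An FP scanner hands a part pose to a camera network: 0.1 mm sigma
p = Pose6DoF(frame="factory", position=np.array([12.0, 3.5, 1.2]),
             rpy=np.zeros(3), covariance=np.diag([1e-8] * 6),
             source="FP scanner")
print(accept_handover(p))  # True
```

Carrying the covariance with the pose lets each receiving system decide whether the incoming estimate is accurate enough for its task, which is the essence of the "throw and catch" exchange in figure 8.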

The deforming object (Bath University)

Thermal changes not only cause variations in atmospheric refractive index, and hence potentially the unacceptable bending of light rays used for measurement as described earlier, but they also cause the measured object to deform through differential expansion and contraction. (In fact, a similar situation arises with an object’s loading configuration. A loading analysis similar to the thermal one described here can also be applied.)

To return a measurement situation to a reference one, it is common to make some limited temperature measurements, average them, and then apply a simple scaling factor to convert the object’s measured dimensional state to one equivalent to its shape at 20° C. This assumes an object is a uniform construction from a single material with a known coefficient of thermal expansion, a situation which does not occur in reality. A number of materials may make up an object, the subcomponents may be connected in different ways (welds, bolts, rivets, etc.), and thermal effects will be differential, e.g., one side may be warmer than the other.
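Under its single-material assumption, the simple scaling correction described above is just a linear rescale. A sketch using a typical handbook CTE for aluminium (an illustrative value, not project data):

```python
ALPHA_ALUMINIUM = 23e-6  # typical CTE for aluminium, per deg C (assumed)

def to_reference_20c(measured_mm, mean_temp_c, alpha=ALPHA_ALUMINIUM):
    """Rescale a measured length to its equivalent at the 20 deg C
    reference temperature, assuming one uniform, isotropic material."""
    return measured_mm / (1.0 + alpha * (mean_temp_c - 20.0))

# A nominally 2 m aluminium part measured at 26 deg C appears
# about 0.28 mm too long; the correction removes that growth
print(round(to_reference_20c(2000.276, 26.0), 3))  # 2000.0
```

The point of the paragraph above is precisely that this one-line correction breaks down for multi-material, differentially heated assemblies, which is why finite element prediction is investigated instead.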

Figure 9: Thermal deformation of a simple object evaluated by finite element analysis (l) and photogrammetry (r)

Lead partner Bath University has evaluated this situation, using photogrammetry as a measurement tool against which a finite element prediction of an object's deformation due to thermal (and gravitational) effects can be compared (as seen in figure 9).

This basic concept has been tested with relatively simple objects to which local heating has been applied. Initial results show that even simplified finite element models can sensibly predict deformations, and hence corrections back to a reference state.11

However, this remains a challenging task, and at an even more basic level the research shows that even a simple linear correction may itself carry considerable uncertainty, due to uncertainties in the coefficient of linear expansion.12

Large-volume robotics demonstrator (UCL)

Robotics will be an integral element of future factories, and UCL's new test facilities, soon to be operational on the former Olympics site in East London, will allow experimental and development work on integrated robotics and 3D metrology at factory scale.

In the existing, smaller laboratory space at UCL there is a network of low-cost cameras which can perform real-time photogrammetric 6DoF tracking over a volume of 8 m³ to an accuracy approaching 100 μm. Within this space is a highly flexible 2.5 m snake-arm robot from OC Robotics, which has targeted links with integrated cameras along the arm.

Figure 10: Instrumented snake-arm robot in test lab (top) and concept application (bottom)

The external cameras can locate the elements of the snake-arm. The integrated cameras, or other sensors, allow the snake-arm robot to act as a sensor platform, its flexible design allowing optimal placement for close-up measurement of diverse objects (as seen in figure 10).

By combining the small-scale measurements made by the robot with the large-scale 6DoF tracking of the robot itself, it is possible to achieve large-volume, high-accuracy, and high-resolution measurement. This approach avoids the stitching and alignment errors often seen when aiming to achieve these types of measurement with a single sensor system.
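Combining the two scales is a frame composition: points measured in the on-arm sensor's local frame are mapped into the factory frame through the externally tracked 6DoF pose. A minimal homogeneous-transform sketch with an invented, yaw-only pose (a full 6DoF pose would use a complete 3D rotation):

```python
import numpy as np

def pose_to_matrix(yaw, position):
    """4x4 homogeneous transform from a yaw rotation (rad) and a
    translation (m); yaw-only is a simplification for illustration."""
    c, s = np.cos(yaw), np.sin(yaw)
    t = np.eye(4)
    t[:3, :3] = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    t[:3, 3] = position
    return t

# Tracked pose of the snake-arm's sensor head in the factory frame
sensor_in_factory = pose_to_matrix(np.pi / 2, np.array([10.0, 2.0, 1.5]))

# A point measured locally by the on-arm sensor, 0.2 m ahead of it
p_local = np.array([0.2, 0.0, 0.0, 1.0])

p_factory = sensor_in_factory @ p_local
print(np.round(p_factory[:3], 3).tolist())  # [10.0, 2.2, 1.5]
```

The accuracy of the combined measurement then depends on both the local sensor and the large-volume tracking, but it avoids chaining many scan-to-scan alignments.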

Further work will see the operating volume of this system expanded through combining the snake-arm robot with a KUKA KR500 arm mounted on a 10 m KUKA track.

Public impact (UCL)

Because this research and development is intended for the real world, it must be promoted to potential end users in an accessible way. One of the many options to be explored is to utilize the output of another project at UCL as a “shop window” for the LCF project.

Figure 11: Adding an innovation zone to 3DIMPact-online.com

This second project is the development of an online knowledge base and industrial contact point for 3D measurement and metrology. Following 12 months of seed funding (Mar. 2016–Mar. 2017) a demonstrator website is now active at www.3dimpact-online.com. Although not yet open to public access, due to the need for further content, facilities, and usage permissions, bona fide researchers can request access and join the wider development support team.

Figure 11 shows the current website landing page (left) and how an “Innovation Zone” might be added. Amongst other features, this zone would bring together a worldwide network of research groups with interests in 3D metrology. Here, for example, they could present their groups’ profiles alongside a selection of their projects using videos, slideshows, articles, etc.

When operational, the website is intended to be self-sustaining, for example, generating revenue through sponsorship. This will give it long-term continuity, in contrast to the limited lifespan of most research projects. With a presence on a resource which aims to attract a large, worldwide audience and the interest of commercial end users, this should expose the showcased projects to industries seeking collaboration and commercialization.


The LCF project is ambitious in its scope, but this is a requirement if it is to provide the basis for the highly automated factories of the future, in particular ensuring that 3D metrology is embedded in the manufacturing and assembly processes.

It is addressing and progressing with key issues such as the effect of the environment on both measurements and measured objects, the extraction of critical object information from point clouds, and the integration of these diverse elements in a single, factory-wide, measurement environment.


The LCF is funded by the UK’s Engineering and Physical Sciences Research Council (EPSRC) through grant number EP/K018124/1.


1 Wang, Z. and Maropoulos, P., Real-Time Laser Tracker Compensation of a 3-Axis Positioning System, poster presentation at LVMC (now EPMC). Available from online archives at www.epmc.events, 2014.

2 Wang, Z., Forbes, A., and Maropoulos, P., “Laser Tracker Position Optimization,” The Journal of the CMSC, Vol. 9, No. 1, 2014.

3 Muelaner, J., Wadsworth, W., Azini, M., Mullineux, G., Hughes, B., and Reichold, A., “Absolute Multilateration Between Spheres,” Measurement Science and Technology, Vol. 28, No. 4, 045005, 2017.

4 Yuxiang, W., Dantanarayana, H. G., Huimin, Y., and Huntley, J. M., “Accurate Characterization of Hole Geometries by Fringe Projection Profilometry,” SPIE Optical Metrology 2017 (accepted).

5 Robson, S., MacDonald, L., Kyle, S., Boehm, J., and Shortis, M., “Optimized Multi-Camera Systems for Dimensional Control in Factory Environments,” Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture, 0954405416654936, 2016.

6 Kyle, S., Robson, S., MacDonald, L., and Shortis, M., Compensating for the Effects of Refraction in Photogrammetric Metrology, International Workshop on Accelerator Alignment (IWAA), Grenoble, 2016.

7 Palmateer, J., Effect of Stratified Thermal Gradients on Measurement Accuracy with Application to Tracking Interferometer and Theodolite Measurement, 7 Congrès de Métrologie, 1995.

8 Huiser, A. M. J. and Gächter, B. F., “A Solution to Atmospherically Induced Problems in Very High-Accuracy Alignment and Levelling,” Journal of Physics D: Applied Physics, Vol. 22, pp. 1630–1638, 1989.

9 Andreopoulos, A. and Tsotsos, J. K., “50 Years of Object Recognition: Directions Forward,” Computer Vision and Image Understanding, Vol. 117, No. 8, pp. 827–891, 2013.

10 Dantanarayana, H. G. and Huntley, J.M., “Object Recognition in 3D Point Clouds with Maximum Likelihood Estimation,” SPIE Optical Metrology, pp. 95300F–95300F, International Society for Optics and Photonics, 2015.

11 Ross-Pinnock, D. and Mullineux, G., “Thermal Compensation of Photogrammetric Dimensional Measurements in Non-Standard Anisothermal Environments,” Procedia CIRP, Vol. 56, pp. 416–21, 2016.

12 Muelaner, J. E., Ross-Pinnock, D., Mullineux, G., and Keogh, P. S., “Uncertainties in Dimensional Measurements Due to Thermal Expansion,” Laser Metrology and Machine Performance XII (Lamdamap 2017), Renishaw Innovation Centre, UK, 2017.


About The Author

Stuart Robson, et al

Stuart Robson is the head of civil, environmental, and geomatic engineering (CEGE), University College London (UCL); Stephen Kyle is senior research fellow, CEGE, UCL; Jan Boehm is senior lecturer, CEGE, UCL; Ben Sargeant is a Ph.D. candidate, CEGE, UCL; Mark Shortis is professor of measurement science, RMIT University, Melbourne, Australia; Patrick Keogh is professor in machine systems, University of Bath; Glen Mullineux is professor of design technology, University of Bath; Jody Muelaner is a research fellow, University of Bath; David Ross-Pinnock is a research engineer, Manufacturing Technology Centre (MTC); Jonathan M. Huntley is professor of applied mechanics, Loughborough University; and Harshana G. Dantanarayana is a postdoctoral research fellow, Loughborough University.