
New data processing pipeline crucial for plant phenotyping

3 June 2024

That data is not the same as information, and that noise can literally cloud the vision of researchers and plant breeders, is underlined by the so-called Photoneo system, named after the unique camera technology it uses. Bart van Marrewijk, who is working on the system as part of his PhD research, tells us more.

With the Netherlands Plant Eco-phenotyping Centre, NPEC for short, Utrecht University and Wageningen University & Research (WUR), and thus Vision+Robotics, have access to state-of-the-art facilities for monitoring, analysing and managing crop growth in a fully controlled greenhouse horticulture environment. During tests and research, mainly for external clients, plants are ‘put on transport’ at a pre-set frequency to be photographed from top to bottom. Via an automatic conveyor system, they move to a closed cabinet where each individual plant is photographed, still-life style, by 15 RGB cameras. Using the images and data, each plant is then digitally and automatically dissected to determine its architecture, or composition. That architecture is in turn used to determine the phenotype of each plant: what effect do environmental factors and genetics have on the shape, growth and production of tomato plants, for example? This information is relevant for plant breeders, who are keen to reap the fruits of their breeding as soon as possible. That will soon be possible using the automatic data processing pipeline developed by Bart van Marrewijk.

Plant reconstruction with voxel carving

The above cabinet with 15 RGB cameras is part of what Vision + Robotics calls the ‘Maxi-MARVIN’. “The Maxi-MARVIN is a neat piece of technology developed at Vision + Robotics that allows plants to be scanned in 3D in a matter of seconds. With other techniques, like 3D scanners, the plant needs to be moved in order to photograph it from all angles. Because so many plants are scanned daily at the NPEC, it is also important that scanning doesn’t take too much time. That rules out many techniques and methods.”


Maxi-MARVIN at Netherlands Plant Eco-phenotyping Centre in Wageningen

The technology on which the Maxi-MARVIN is based is what we in the business call voxel carving. “This is a technique that uses ordinary RGB cameras to reconstruct a plant in 3D by carving away ‘voxels’. In other words: if a camera observes a pixel that is not part of the plant, you can remove all the voxels that project onto that pixel. If you do this for all 15 cameras, you have a really good 3D point cloud of a plant within seconds. For open plant structures in particular, such as tomato, cucumber or pepper plants, this 3D point cloud is of very high quality: these plants have enough open spaces to carve the voxels away. For compact plants like a dwarf tomato or a lettuce plant, however, there are few or no open spaces, so voxel carving works less well. A head of lettuce, for example, becomes one big blob, which is useless to us.”
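The carving idea described above can be sketched in a few lines of Python. This is a toy illustration, not the Maxi-MARVIN code: it assumes two orthographic “cameras” looking along the x- and y-axes and hand-made silhouette masks, whereas the real system uses 15 calibrated RGB cameras with perspective projection.

```python
import numpy as np

N = 32  # voxel grid resolution (assumed for this toy example)

# Start with a fully occupied N x N x N voxel grid.
occupied = np.ones((N, N, N), dtype=bool)

def make_silhouette(n):
    """Toy silhouette: a vertical 'stem' column with a circular 'leaf' blob."""
    mask = np.zeros((n, n), dtype=bool)
    mask[:, n // 2 - 2 : n // 2 + 2] = True                            # stem
    yy, xx = np.ogrid[:n, :n]
    mask |= (yy - n // 4) ** 2 + (xx - n // 2) ** 2 < (n // 6) ** 2    # leaf
    return mask

sil_x = make_silhouette(N)  # camera looking along the x-axis, image indexed [y, z]
sil_y = make_silhouette(N)  # camera looking along the y-axis, image indexed [x, z]

# Carve: a voxel survives only if every camera sees its projected pixel as
# plant. Any voxel projecting onto a background pixel in either view is removed.
occupied &= sil_x[np.newaxis, :, :]   # broadcast over the x-axis
occupied &= sil_y[:, np.newaxis, :]   # broadcast over the y-axis

# The surviving voxel coordinates form the 3D point cloud.
points = np.argwhere(occupied)
```

With more cameras at more angles, the intersection of silhouettes tightens around the true plant shape; this is also why closed shapes like a head of lettuce cannot be carved out and remain a blob.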

Switch to 3D cameras with depth

To tackle the disadvantages of the Maxi-MARVIN imaging and analysis, we are now working on alternative hardware and an automatic data processing pipeline. “Previously, the data processing never really got off the ground because the models and algorithms were too limited and made very little use of artificial intelligence. As a result, the models were unable to deal adequately with variation.” To analyse plants better, Vision + Robotics currently has a setup with two 3D RGB cameras manufactured by Photoneo. Thanks to stereo vision and parallel structured light, these cameras can see depth. They are mounted on a frame that rotates 360 degrees around a plant in a closed cabinet at a constant speed. “So, fewer cameras than in the Maxi-MARVIN, which, thanks to the 360-degree recording, still provide a complete and even better image of a plant, because the limitation of voxel carving is solved: with 3D cameras, it doesn’t matter how compact a plant is. A unique feature of the system is that it is much faster than was previously possible with 3D cameras. It is still slightly slower than the Maxi-MARVIN, but the benefits outweigh that. The setup directly generates a 3D point cloud and, despite being slower, still falls within the time margins needed to scan a whole greenhouse of plants.”

Another 3D scan platform that will soon become available at the NPEC is the BABETTE PRO (Bucher And Brouwer’s Environment for Time Traversing Experiments). BABETTE has been present in the NPEC greenhouse since 2020 and will soon be replaced by this more professional version, developed and built in Johan Bucher’s EngD project. The platform allows a single plant or plant organ to grow and be scanned over a longer period, because it is a closed cabinet with growth LEDs (white, blue and far-red) and a water and nutrient supply system. The irrigation system was developed with Rick Hendriksen (TUPOLA/WDS). This customised platform is currently in the final phase of construction at the WUR workshop TUPOLA. The latest improvements to the MARVIN were inspired, among other things, by the combination of features in BABETTE. The objective of Johan Bucher’s EngD project is to develop 4D models, consisting of time lapses of a 3D-scanned plant, that are available for research and education. Augmented reality viewing of the 3D scans is one of the features currently being implemented.


An impression of BABETTE PRO and a rendered 3D broccoli.

Dissecting point cloud to plant architecture

“One hypothesis from my PhD research is that flattening 3D images into 2D images makes it easier to distinguish between main and lateral stems, leaves and stalk. This flattening process is also called 3D-to-2D reprojection. One of its many advantages is that you assess your plant from several angles, resulting in greater accuracy. Furthermore, the algorithms in 2D are much further developed. That said, 3D algorithms are improving, and the latest ones are approaching the performance of 3D-to-2D reprojection. It is possible that 3D algorithms will surpass reprojection in the future, but for now we want to determine the plant phenotype with the highest possible quality. For that reason, reprojection is currently the best solution for our applications.”
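The reprojection step can be sketched as follows. This is a hypothetical Python illustration, not the pipeline’s actual implementation: it uses a toy point cloud (a helix standing in for a plant), a rotation about the vertical axis, and a simple orthographic projection to produce the 2D views that 2D algorithms can then segment.

```python
import numpy as np

def reproject(points, angle_deg, img_size=64):
    """Rotate a 3D point cloud about the vertical (z) axis by angle_deg,
    then orthographically project it onto a 2D image (x = width, z = height)."""
    theta = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                    [np.sin(theta),  np.cos(theta), 0.0],
                    [0.0,            0.0,           1.0]])
    p = points @ rot.T
    xs, zs = p[:, 0], p[:, 2]          # drop the depth (y) axis
    # Normalise coordinates into pixel indices.
    u = ((xs - xs.min()) / np.ptp(xs) * (img_size - 1)).astype(int)
    v = ((zs - zs.min()) / np.ptp(zs) * (img_size - 1)).astype(int)
    img = np.zeros((img_size, img_size), dtype=np.uint8)
    img[img_size - 1 - v, u] = 255     # flip so z points up in the image
    return img

# Toy point cloud: a helix standing in for a plant.
t = np.linspace(0, 4 * np.pi, 500)
cloud = np.stack([np.cos(t), np.sin(t), t / (4 * np.pi)], axis=1)

# Reproject the cloud from several viewing angles.
views = [reproject(cloud, a) for a in (0, 45, 90, 135)]
```

Each 2D view can then be fed to a mature 2D segmentation model, and the per-view labels mapped back onto the 3D points; assessing the plant from several angles is what yields the accuracy gain mentioned above.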
“To come back to the point cloud: we want to dissect it so that the plant architecture, the structure of a plant, becomes visible. In most plants, that means the stalk, the main and lateral stems, the branches, the distances between them, the leaf surface and any fruit. However, we found that existing algorithms for dissecting plants did not properly distinguish between the main stem, the lateral stems and the stalk. Fortunately, our reprojection method resolved this.”

Example reprojection making plant architecture visible, distinguishing between leaves, main stem, side stems of the plant as well as the stick and the pot.

Pipeline deconstructs plant

When determining that plant architecture, Bart’s data processing pipeline comes in useful. “The pipeline digitally dissects a plant and divides it into the parts I just mentioned. This gives us a digital 3D image of the plant’s skeleton. Just like in vertebrates, that skeleton consists of a spine (the main stem), lateral stems and the so-called internode lengths: the distances between branches. We also analyse the angle of each lateral stem relative to the main stem, as well as the stem radius. All this information is important for assessing the effect of breeding and of adapting genetics and genotypes. Furthermore, we can conduct simulations with the digital plants, or digital twins as we now call them. At the moment, however, such a skeleton still contains too much noise. That is something that has surprised us as researchers. In the coming years, I want to resolve that as part of my PhD research, using artificial intelligence.”
Another thing that surprises Bart is the fact that the environment is changing so fast. “The speed at which algorithms are developing is incredible. At the same time, the availability of good 3D datasets is poor.”
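The skeleton traits mentioned above, internode lengths and branch angles, can be illustrated with a small Python sketch. The skeleton representation and all coordinates here are hypothetical, chosen only to show the geometry; the pipeline’s real skeletons are extracted from the point cloud.

```python
import numpy as np

# Hypothetical skeleton: 3D branch points along the main stem (the "spine")
# and the first skeleton point of each attached lateral stem.
main_stem = np.array([[0.0, 0.0,  0.0],   # base
                      [0.0, 0.1,  5.0],   # branch point 1
                      [0.1, 0.0,  9.0],   # branch point 2
                      [0.0, 0.0, 14.0]])  # branch point 3

laterals = np.array([[ 3.0, 0.1,  6.0],   # lateral attached at branch point 1
                     [-2.5, 0.0, 10.5],   # lateral attached at branch point 2
                     [ 2.0, 1.0, 15.0]])  # lateral attached at branch point 3

# Internode length: distance between consecutive branch points on the spine.
internode_lengths = np.linalg.norm(np.diff(main_stem, axis=0), axis=1)

# Branch angle: angle between each lateral's direction and the local
# main-stem direction leading into its branch point.
stem_dirs = np.diff(main_stem, axis=0)        # direction into branch points 1-3
lat_dirs = laterals - main_stem[1:]           # direction of each lateral stem
cosines = np.sum(stem_dirs * lat_dirs, axis=1) / (
    np.linalg.norm(stem_dirs, axis=1) * np.linalg.norm(lat_dirs, axis=1))
branch_angles = np.degrees(np.arccos(np.clip(cosines, -1.0, 1.0)))
```

On a noisy skeleton, spurious branch points would corrupt exactly these measurements, which is why reducing that noise is a focus of the PhD research.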

The pipeline digitally dissects the plant. The result is a 3D image of the plant skeleton as seen in this example.

Very strong partner for phenotyping

The combination of advanced camera techniques and automated data processing should soon enable Vision + Robotics to offer extremely reliable and fast phenotyping to other researchers and breeding companies, for fruit-bearing plants like tomatoes, cucumbers and maize as well as potted ornamental plants at the NPEC. By working together with companies that implement camera and analysis techniques in machines and installations, locations outside the NPEC, and even open-field crops and fish farms, will come within reach. “Overall, this will create a very strong partner for phenotyping in numerous agrifood applications.”


BM (Bart) van Marrewijk MSc

Researcher

Contact Bart van Marrewijk