Deep learning now permeates many fields, and plant science is no exception. 3D plant shoot segmentation in particular has progressed significantly by integrating deep learning techniques with point clouds. Traditional 2D methods struggled with depth perception and with resolving plant structure; 3D imaging addresses these limitations and enables richer phenotypic trait extraction. However, 3D imaging brings its own challenge: every point in a cloud must be carefully labeled, which is an expensive and time-consuming operation. Researchers have therefore been investigating weakly supervised learning models that require far fewer labeled points.
Consequently, in a recent study titled "Eff-3DPSeg: 3D Organ-Level Plant Shoot Segmentation Using Annotation-Efficient Deep Learning," researchers introduced Eff-3DPSeg, a weakly supervised deep learning framework for plant organ segmentation. The framework uses a Multi-view Stereo Pheno Platform (MVSP2) to acquire point clouds of individual plants, which are then annotated with a MeshLab-based Plant Annotator (MPA).
The researchers proceeded in two steps. First, they reconstructed high-resolution point clouds of soybean plants using a low-cost photogrammetry system and developed the MeshLab-based Plant Annotator for plant point cloud annotation. Second, they applied a weakly supervised deep learning method for plant organ segmentation: the model was first pretrained with a Viewpoint Bottleneck loss to learn meaningful intrinsic structure representations from raw point clouds, then fine-tuned using only approximately 0.5 percent of the points as labels. Finally, three phenotypic traits were extracted: leaf length, leaf width, and stem diameter.
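The Viewpoint Bottleneck idea belongs to a family of self-supervised objectives that decorrelate feature dimensions across two views of the same data (in the style of Barlow Twins). The sketch below is not the paper's implementation; it is a minimal NumPy illustration of such a decorrelation loss, assuming two feature matrices (one per viewpoint) with one row per point:

```python
import numpy as np

def viewpoint_bottleneck_loss(z1, z2, lam=0.005):
    """Illustrative Barlow Twins-style decorrelation loss between
    point features from two views of the same cloud (shape N x D).

    Pushes the D x D cross-correlation matrix toward the identity:
    matching dimensions should correlate, distinct ones should not.
    """
    # Standardize each feature dimension across all points.
    z1 = (z1 - z1.mean(axis=0)) / (z1.std(axis=0) + 1e-8)
    z2 = (z2 - z2.mean(axis=0)) / (z2.std(axis=0) + 1e-8)
    n = z1.shape[0]
    c = z1.T @ z2 / n                       # D x D cross-correlation
    on_diag = np.sum((1.0 - np.diag(c)) ** 2)
    off_diag = np.sum(c ** 2) - np.sum(np.diag(c) ** 2)
    return on_diag + lam * off_diag         # low when views agree
```

In a real pipeline these features would come from a 3D backbone applied to two renderings or augmentations of the same point cloud; the loss needs no labels, which is what makes the subsequent fine-tuning stage so annotation-efficient.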
Next, the researchers tested the framework's performance across various growth stages on a large soybean spatiotemporal dataset, comparing it with fully supervised methods on tomato and soybean plants. The stem-leaf segmentation results were accurate, with only minor misclassifications at junctions and leaf edges. The approach performed better on less complex plant structures and attained greater accuracy with larger training sets, and quantitative results showed notable gains over baseline techniques, particularly under sparser supervision.
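Segmentation quality in studies like this is typically reported as mean intersection-over-union (mIoU) over organ classes such as stem and leaf. The snippet below is not the authors' evaluation code, just a standard mIoU computation for per-point predictions:

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean IoU over classes present in prediction or ground truth.

    pred, gt: integer label arrays of equal length (one label per point).
    """
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        if union > 0:                 # skip classes absent from both
            ious.append(inter / union)
    return float(np.mean(ious))
```

For example, with classes 0 = stem and 1 = leaf, `mean_iou(np.array([0, 0, 1, 1]), np.array([0, 1, 1, 1]), 2)` averages a stem IoU of 0.5 and a leaf IoU of about 0.67.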
However, the study also faced certain limitations, including gaps in the data and the need to train separately for different segmentation tasks. The researchers plan to refine the framework, extend it to a wider range of plant species and growth stages, and improve the method's generality.
In conclusion, the Eff-3DPSeg framework could prove to be a significant step forward in 3D plant shoot segmentation. Its efficient annotation process and accurate segmentation capabilities hold great potential for high-throughput plant phenotyping, and its weakly supervised deep learning and annotation techniques overcome the expense and tedium of fully labeling 3D point clouds.