Article

Integrating Drone-Based LiDAR and Multispectral Data for Tree Monitoring

1 Department of Earth and Environmental Sciences, University of Milano-Bicocca, Piazza della Scienza 1, 20126 Milan, Italy
2 National Biodiversity Future Center (NBFC), Piazza Marina 61, 90133 Palermo, Italy
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Drones 2024, 8(12), 744; https://doi.org/10.3390/drones8120744
Submission received: 31 October 2024 / Revised: 3 December 2024 / Accepted: 5 December 2024 / Published: 10 December 2024

Abstract

Forests are critical for providing ecosystem services and contributing to human well-being, but their health and extent are threatened by climate change, requiring effective monitoring systems. Traditional field-based methods are often labour-intensive, costly, and logistically challenging, limiting their use for large-scale applications. Drones offer advantages such as low operating costs, versatility, and rapid data collection. However, challenges remain in optimising data processing and methods to effectively integrate the acquired data for forest monitoring. This study addresses this challenge by integrating drone-based LiDAR and multispectral data for forest species classification and health monitoring. We developed the methodology in Ticino Park (Italy), where intensive field campaigns were conducted in 2022 to collect tree species compositions, the leaf area index (LAI), canopy chlorophyll content (CCC), and drone data. Individual trees were first extracted from LiDAR data and classified using spectral and textural features derived from the multispectral data, achieving an accuracy of 84%. Key forest traits were then retrieved from the multispectral data using machine learning regression algorithms, which showed satisfactory performance in estimating the LAI (R2 = 0.83, RMSE = 0.44 m2 m−2) and CCC (R2 = 0.80, RMSE = 0.33 g m−2). The retrieved traits were used to track species-specific changes related to drought. The results obtained highlight the potential of integrating drone-based LiDAR and multispectral data for cost-effective and accurate forest health monitoring and change detection.

1. Introduction

Maintaining and expanding forest resources is crucial for sustainable development, biodiversity conservation and the provision of essential ecosystem services, including climate mitigation through carbon storage and the provision of goods essential for human well-being [1]. However, forest ecosystems face increasing threats from economic pressures, natural hazards, and human-induced disturbances, exacerbated by climate change [2,3]. In this context, accurate data are essential for the development of targeted forest management strategies aimed at the improvement and long-term conservation of forest ecosystems. Traditional field-based monitoring methods involving in situ data collection are often labour-intensive, costly, logistically challenging, and generally limited to small-scale applications [4]. As a result, remote sensing has proven to be a viable way to implement forest monitoring on a large scale, enabling the cost- and time-effective acquisition of different vegetation properties, especially structural data [5,6]. Among the remote sensing technologies, drones are gaining popularity due to their ability to efficiently collect high-spatial-resolution data, their ease of use, low operational cost, versatility in hosting different sensors, and high-intensity data collection [5,7]. Drones can be employed in a wide range of applications, like forest inventory, species mapping, and biophysical properties estimation at different scales, from forest landscapes to individual trees and leaves [8,9]. However, despite the advances in drone technology, challenges remain in optimising data processing techniques and methodologies for effective forest monitoring [5,10,11].
This study aims to advance methods and data processing techniques for drone-based forest monitoring by integrating Light Detection and Ranging (LiDAR) and optical sensors, which together can provide complementary information on forest structure and biophysical properties.
LiDAR technology has become an important tool for assessing and characterising forest ecosystems [12,13]. LiDAR generates high-resolution three-dimensional (3D) point clouds that can provide detailed information on forest structure, including canopy height models, crown density, biomass, and other morpho-functional parameters essential for effective forest management, e.g., [11,14].
In parallel, multispectral optical cameras can provide other valuable data by measuring vegetation reflectance at different wavelengths. Traditionally, vegetation indices such as the NDVI (Normalised Difference Vegetation Index) and the EVI (Enhanced Vegetation Index) are frequently utilised as proxies of plant health [15]. However, these do not provide a direct measurement of physiological changes in plants; rather, they evaluate variations in tree greenness as an indicator of vegetation health, which does not allow for the immediate or direct identification of plants' physiological states [16,17]. To overcome this limitation, the retrieval and monitoring of specific plant traits, such as the leaf area index (LAI) and canopy chlorophyll content (CCC), are essential in forest ecosystem monitoring, as they can provide a more direct insight into the physiological status of trees [18,19,20]. The LAI, which measures the total leaf area per unit ground area, expresses the forest photosynthetic capacity, canopy structure, and tree productivity [21]. Similarly, CCC reflects the chlorophyll content of plants, which is essential for photosynthesis and provides an indication of plant health and nutrient status [22]. Both the LAI and CCC are directly related to tree structure and functioning, making them essential metrics for assessing vegetation health and understanding how vegetation responds to environmental change and stressors [19,20,23,24]. Despite their importance, only a few studies have investigated the possibility of retrieving the LAI and CCC from drone-based multispectral sensors in forest ecosystems, e.g., [25,26].
With the overall goal of demonstrating how drones can properly support effective forest monitoring, we conducted a study in 2022 in the Ticino Forest (Northern Italy) aiming to achieve the following objectives:
- Detect individual trees within a natural broadleaf forest using LiDAR drone point cloud data;
- Identify tree species by applying object-based classification techniques to multispectral drone imagery;
- Retrieve plant traits (i.e., LAI and CCC) from multispectral imagery using machine learning algorithms;
- Analyse vegetation trait changes in the Ticino Regional Park Forest during the 2022 summer drought by examining the multifactorial interactions between species-specific responses and microclimatic variability.
By addressing these objectives, this study will provide insights into how drone-based technologies can advance forest monitoring and management practices.

2. Materials and Methods

2.1. Study Area

This study was carried out in the “La Fagiana” Nature Reserve in Magenta (MI), located in Ticino Regional Park, Italy (Figure 1). The park has a total area of 91,800 ha, of which 22,000 ha is the natural forest. It belongs to the Ticino Val Grande Verbano Biosphere Reserve (UNESCO—Man and Biosphere Programme), one of the largest natural riverside parks in Europe. Ticino Regional Park plays a key role in preserving biodiversity in one of the most urbanised areas in Europe. It represents a unique refuge for autochthonous vegetation and constitutes a fundamental ecological corridor between the Alps and the Apennines. The Ticino temperate mixed forests are dominated by oaks (typically English oak, Quercus robur L.) and white hornbeam (Carpinus betulus L.), with a more scattered presence of black alder (Alnus glutinosa L.), sweet chestnut (Castanea sativa Mill.), pines (Pinus spp.), and allochthonous invasive species such as black cherry (Prunus serotina Ehrh.) and black locust (Robinia pseudoacacia L.). The forest of “La Fagiana” is largely dominated by the presence of English oak, white hornbeam, and black locust. The reserve includes areas characterised by different microclimatic conditions: xerophilic (red shaded area in Figure 1), mesophilic (yellow shaded area in Figure 1), and meso-hygrophilic (green shaded area in Figure 1). Each of them was classified based on plant species composition, density, and tree habits. The xerophilic area is characterised by the sporadic presence of English oak, which, due to the low soil moisture availability, exhibits shorter heights and different growth habits compared to trees of similar age in the mesophilic and meso-hygrophilic areas. The mesophilic area is characterised by the prevalence of English oak, while the meso-hygrophilic one is composed primarily of the English oak–white hornbeam association [27]. The Ticino Forest experienced an unprecedented severe and prolonged drought in the summer of 2022, with detrimental impacts on vegetation health [28,29].

2.2. Field Plant Trait Data Collection

In the summer of 2022, plant traits were sampled within two time windows: 21–22 June (1st campaign) and 7–8 September (2nd campaign). The field measurements were collected at five sampling sites of ~30 m × 30 m. The sampling sites were geo-located with a Garmin GPSMAP 66sr (Garmin Ltd., Schaffhausen, Switzerland) and are shown as red squares in Figure 1a. The LAI was estimated through digital hemispherical photos at 4 subplots (~15 m × 15 m) for each sampling site. In each subplot, we acquired 5 up-looking digital hemispherical photos (i.e., at the centre and corners of each subplot) with a Laowa 4 mm f/2.8 fish-eye lens (Venus Optics, Hefei, China) mounted on a Canon EOS M50 Mark II (Canon Inc., Tokyo, Japan). The LAI of the overstory was then calculated using the CAN-EYE software v6.4.95 (https://www6.paca.inrae.fr/can-eye, accessed on 10 October 2024). We processed one subplot at a time and obtained a mean and standard deviation for each subplot, for a total of 20 LAI values in both June and September 2022 (n = 40).
The LCC was obtained from destructive measurements conducted on leaf disc samples. In each site, for each species recognised as dominant, we collected samples from three different trees. From each tree, we sampled 12 leaves from different sunlit branches pulled down with a slingshot from the top of the canopy. Three pigment extractions, i.e., twelve 0.635 cm diameter leaf discs each, were conducted from twelve leaves. In total, we sampled 56 trees, which included 31 English oaks (372 leaves, 93 pigment extractions), 13 white hornbeams (156 leaves, 39 pigment extractions), and 12 black locusts (144 leaves, 36 extractions). The methodology used for the sample preparation is described in [30]. The concentrations of chlorophyll a (Chla) and b (Chlb) were then determined by spectrophotometry (V-630 UV-VIS, Jasco, Pfungstadt, Germany) in a 100% methanol extract at 665.2 and 652.4 nm, respectively, while turbidity was checked by measuring absorbance at 750 nm. Chla and Chlb concentrations were calculated using the extinction coefficients proposed by [31]. The LCC (μg cm−2) was calculated according to Equation (1):
$$\mathrm{LCC} = \frac{\mathrm{Chl}_a + \mathrm{Chl}_b}{\mathrm{Area}_{\mathrm{tot}}} \quad (1)$$
The LCC values of the single trees were aggregated at the sampling site level by weighting for the abundance of the corresponding species. The CCC of each sampling site was then calculated according to Equation (2):
$$\mathrm{CCC} = \mathrm{LAI} \times \mathrm{LCC} \quad (2)$$
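To make Equations (1) and (2) concrete, the following minimal sketch in R (the environment used for the rest of our processing) aggregates leaf-level LCC to the site level by species abundance and converts the resulting CCC to g m−2; the data frame and all values are purely illustrative, not taken from the field dataset.

```r
# Minimal sketch (R): site-level CCC from leaf-level LCC and the LAI.
# The example data are hypothetical. Note the unit conversion:
# LCC is in ug cm-2, CCC is reported in g m-2, and 1 ug cm-2 = 0.01 g m-2.
leaf <- data.frame(
  species   = c("English oak", "white hornbeam", "black locust"),
  lcc_ugcm2 = c(38.2, 41.5, 30.1),  # (Chla + Chlb) / total disc area
  abundance = c(0.6, 0.3, 0.1)      # fractional abundance at the site
)

# Abundance-weighted site-level LCC (Equation (1) aggregated by species)
lcc_site <- with(leaf, sum(lcc_ugcm2 * abundance) / sum(abundance))

# Equation (2): CCC = LAI x LCC, expressed in g m-2
lai_site <- 3.1                     # m2 m-2, from hemispherical photos
ccc_gm2  <- lai_site * lcc_site * 0.01
ccc_gm2
```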

2.3. Drone Data Collection

The drone surveys were conducted using a DJI Matrice 300 RTK (DJI Ltd., Shenzhen, China) mounting different payloads (Table 1): a LiDAR DJI Zenmuse L1 sensor (DJI Ltd., Shenzhen, China), a high-resolution RGB camera Zenmuse P1 (DJI Ltd., Shenzhen, China), and a multispectral MAIA S2 camera (SAL Engineering S.r.l., Russi, Italy; EOPTIS, Trento, Italy; Fondazione Bruno Kessler, Trento, Italy).
The DJI Zenmuse L1 laser scanner combines data from an RGB sensor and the IMU unit in a stabilised 3-axis gimbal, providing a true-colour point cloud from the RGB sensor. The L1 laser scanner has a beam divergence of 0.28° (vertical) × 0.03° (horizontal) and a maximum of 3 registered reflections. It can operate at a maximum distance of 450 m at 80% reflectivity (190 m at 10% reflectivity) with a recording speed of 480,000 points/second for multiple return acquisition (240,000 points/second for single return). It has a horizontal and vertical system accuracy of 10 cm and 5 cm per 50 m, respectively, and a distance measurement accuracy of 3 cm per 100 m. The DJI Zenmuse P1 camera was used to acquire data for photogrammetric processing. The sensor is a 45 Mpixel CMOS with a size of 35.9 × 24 mm and a pixel size of 4 µm, capable of taking photos with a resolution of 8192 × 5460 pixels. In this study, the camera was mounted with the DL 35 mm F2.8 LS ASPH lens. Detailed information on the L1 and P1 cameras can be found in [32]. L1 and P1 flights were performed with a vertical gimbal pitch of −90° (i.e., nadiral) at an altitude of 80 m and 120 m above ground, respectively. The LiDAR acquisition resulted in a point cloud density of 459 points/m2, while a ground sampling distance of 1.5 cm was obtained for RGB imagery. DJI Pilot software was used for the acquisition, following a single-grid flight pattern with a constant height relative to the take-off point. The flat terrain ensured a constant pixel size and point density in the model without the use of a terrain-adaptive flight.
MAIA S2 is composed of an array of nine monochromatic sensors, each with a 1.2 Mpixel resolution (pixel resolution: 1280 × 960, pixel size: 3.75 × 3.75 µm) and their relative pass-band filters. The sensors have the same central wavelength and bandwidth as the first nine bands of the ESA Sentinel-2 multispectral instruments [33] and acquire data simultaneously using global shutters, allowing synchronised multi-band measurements in a single shot. The sensors have a horizontal and vertical field of view of 35 and 26 degrees, respectively, with a fixed focal length of 7.6 mm. The system is equipped with a GNSS receiver to record the position and time of each camera shutter activation. The shutter speed was set to automatic mode to minimise motion blur, aiming for 20% exposure. To accurately estimate the reflectance factor, the incoming radiation was measured simultaneously with the multispectral acquisition and in the same bands using a cosine incident light sensor (ILS) mounted on top of the DJI Matrice 300 RTK. The ILS features a GNSS receiver which allowed the ILS and MAIA S2 shots to be synchronised using the GPS time to produce reflectance images with the incoming radiation measured at the exact time of each MAIA S2 acquisition. The images were acquired following a single-grid flight pattern at a constant altitude of 110 m above ground, resulting in a ground sampling distance of 5.5 cm.
The DJI Matrice 300 RTK has a vertical and horizontal hovering accuracy (i.e., manufacturer's declared values) of ±0.1 m in D-RTK mode [34]. Drone GNSS receivers implementing the Network Real-Time Kinematic (NRTK) technique were used in the study areas as GNSS signals and mobile network coverage were available. Ref. [32] conducted full-scale tests on the DJI Zenmuse L1 sensor, demonstrating a positioning accuracy better than the manufacturer's claim, with a precision of 3.5 cm in all directions. Reference [35] combined the use of LiDAR and multispectral data for forest biodiversity measurements by using the initial georeferencing provided by the GNSS systems. LiDAR and RGB drones with Real-Time Kinematic (RTK) or Post-Processing Kinematic (PPK) georeferencing systems were tested by [36]. These authors showed that these technologies have potential in hard-to-reach areas (e.g., forests) and produce unbiased point clouds while being the most cost-effective method. In line with these previous studies, we considered the georeferencing accuracy provided by the NRTK systems to be adequate for our study, and the co-registration or manual alignment of the drone products (i.e., LiDAR point cloud, RGB and MAIA orthophotos) was not necessary.

2.3.1. LiDAR Data Processing

The LiDAR point cloud data were acquired with a DJI Zenmuse L1 mounted on a DJI Matrice 300 RTK in April 2022 for individual tree detection (ITD) and segmentation (ITS). The flight plan settings are shown in Table 1.
Raw point cloud (Figure 2a) data were first optimised using DJI Terra software (DJI Ltd., Shenzhen, China), manually cleaned from noise and outliers with CloudCompare [37], and then processed in the R environment using the "lidR", "rLidar", and "ForestTools" packages [38,39,40]. These packages are widely used to process point cloud data in forestry applications [41]. The ground point classification (Figure 2b) was performed using the progressive morphological filter (PMF) described by [42]. After ground point classification, a 1 m resolution digital terrain model (DTM) was generated (Figure 2c) using an inverse distance weighting (IDW) method. The point cloud was normalised (Figure 2d) to create a canopy height model (hereafter CHM). The CHM was generated using a point-to-raster (P2R) algorithm, which consists of creating a grid and assigning each pixel the elevation of the highest point within it; in our case, we set the CHM grid resolution to 0.2 m. An IDW method was used to interpolate "empty pixels". Although some authors suggest smoothing the CHM with filters (e.g., Gaussian filter, median filter) prior to crown delineation, our preliminary test showed that smoothing decreased the accuracy of delineation; this issue was also noted by [43]. In addition, CHM smoothing could also affect tree height estimation [44]. For these reasons, we decided not to smooth the CHM. For the ITD (Figure 2e), a local maximum filter (LMF) described by [45] was applied to the CHM. In line with previous studies, we chose a circular-shaped window [46] with a 5 m diameter and a minimum height of 2 m.
For the ITS, many segmentation algorithms are described in the literature [43,47], showing high accuracy especially for coniferous forests; for deciduous forests, there is still a need for further research, as no widely accepted method has yet been established. We used a marker-controlled watershed (MCWS) method for the ITS [48]. The markers used to guide the segmentation process were the treetops determined by the previous ITD. To avoid overlapping tree edges, shadowing, and possible geometric shifts between the LiDAR and MAIA products, we decided to use circular regions of interest (ROIs) of 4 m diameter centred on the treetops, instead of crown polygons, for the classification step, which required the most rigorous definition of the training data. Crown polygons were used only for mapping purposes.
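For illustration, the main steps of this workflow can be sketched in R with the lidR and ForestTools packages used in this study. The parameter values follow the text (1 m DTM, 0.2 m CHM, 5 m circular local-maximum window, 2 m minimum height); the file path and the PMF window/threshold values are placeholders, and minor adaptations may be needed depending on package versions.

```r
# Minimal sketch of the LiDAR processing chain described above.
library(lidR)
library(ForestTools)

las <- readLAS("fagiana_L1.las")  # placeholder path

# Ground classification with a progressive morphological filter (PMF);
# window size and threshold here are illustrative
las <- classify_ground(las, algorithm = pmf(ws = 5, th = 3))

# 1 m DTM by inverse distance weighting, then height normalisation
dtm      <- rasterize_terrain(las, res = 1, algorithm = knnidw(k = 10, p = 2))
las_norm <- normalize_height(las, dtm)

# 0.2 m CHM with the point-to-raster (P2R) algorithm;
# empty pixels are filled by IDW interpolation
chm <- rasterize_canopy(las_norm, res = 0.2,
                        algorithm = p2r(na.fill = knnidw(k = 10, p = 2)))

# ITD: local maximum filter, 5 m circular window, 2 m minimum height
ttops <- locate_trees(las_norm, lmf(ws = 5, hmin = 2, shape = "circular"))

# ITS: marker-controlled watershed (MCWS) seeded with the detected treetops
crowns <- mcws(treetops = ttops, CHM = chm, minHeight = 2)
```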
Reference data for the ITD included field observations and photo-interpretations of 15 square 30 m × 30 m plots distributed across the study area (Figure 1a), as in [49]. A detected tree was considered matched if the distance from the detected treetop to the reference tree (treetop/trunk) was less than 2.5 m. To evaluate the detection accuracy, which is assumed to be point accuracy [50], the recall (r), precision (p), and F-score (F) were calculated following [51,52], according to Equations (3)–(5):
$$r = \frac{TP}{TP + FN} \quad (3)$$
$$p = \frac{TP}{TP + FP} \quad (4)$$
$$F = \frac{2rp}{r + p} \quad (5)$$
where TP is the number of treetops correctly detected, FN is the number of trees not detected (omission error), and FP is the number of extra trees (commission error). r measures the proportion of reference trees detected (recall), p measures the correctness of the detected trees (precision), and F is the harmonic mean of r and p, thus representing the overall accuracy. All these indices range from 0 to 1; the higher the F value, the better the accuracy of tree detection.
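For illustration, these metrics can be computed directly from the matched counts; the TP, FN, and FP values below are hypothetical.

```r
# Detection accuracy metrics from Equations (3)-(5); inputs are illustrative.
detection_scores <- function(TP, FN, FP) {
  r <- TP / (TP + FN)        # recall: proportion of reference trees found
  p <- TP / (TP + FP)        # precision: proportion of correct detections
  f <- 2 * r * p / (r + p)   # F-score: harmonic mean of r and p
  c(recall = r, precision = p, F_score = f)
}
detection_scores(TP = 172, FN = 64, FP = 60)
```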

2.3.2. RGB and Multispectral Imagery Processing

Drone-based RGB imagery of our study area was acquired on the same date as the LiDAR data with a DJI Zenmuse P1 sensor (DJI Ltd., Shenzhen, China) mounted on a DJI Matrice 300 RTK. Sensor characteristics are shown in Table 1. The high-resolution imagery was used to verify the position of trees and to identify tree species. The flight patterns were designed in the DJI Pilot GUI (see Table 1 for the main settings). All acquired imagery was processed using Drone2Map software (ESRI, Redlands, CA, USA) to generate two orthomosaics (i.e., one for each flight) with a spatial resolution of 1.5 cm. Drone-based multispectral imagery was acquired using a MAIA S2 multispectral camera. Multispectral image processing followed [53]. The raw MAIA S2 imagery underwent geometric and radial distortion correction using MultiCam Stitcher Pro (SAL Engineering S.r.l., Russi, Italy). The software, which is integrated with the MAIA, allows the images from each band to be co-registered into a single multispectral image with precise pixel-to-pixel alignment. Pseudo-reflectance was calculated for each pixel as the ratio of the radiance measured by the MAIA S2 to the incident solar radiation measured by the ILS in each spectral channel. The pseudo-reflectance images were then imported into Agisoft Metashape v1.7.2 (Agisoft, St. Petersburg, Russia) to produce the multispectral orthomosaic. The empirical line method [54] was finally applied to the pseudo-reflectance orthomosaic to obtain at-ground multispectral reflectance using the calibration coefficients described in [53].
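As an aside, the empirical line step amounts to a per-band linear regression between the pseudo-reflectance of reference targets and their known at-ground reflectance; the resulting gain and offset are then applied to the whole orthomosaic. A minimal sketch in R is given below, with hypothetical target values (the actual calibration coefficients are those described in [53]).

```r
# Empirical line method, sketched for a single band; values are illustrative.
targets <- data.frame(
  pseudo = c(0.08, 0.26, 0.55),  # pseudo-reflectance of reference targets
  known  = c(0.05, 0.23, 0.52)   # field-measured target reflectance
)
fit <- lm(known ~ pseudo, data = targets)  # per-band gain and offset

# Apply the calibration to pseudo-reflectance values of the same band
band_pseudo <- c(0.12, 0.31, 0.47)
band_refl   <- coef(fit)[1] + coef(fit)[2] * band_pseudo
```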

2.4. Dataset Preparation for Classification

An object-based classification was performed on the multispectral image collected in July 2022. In total, 190 trees were selected, representing the most common tree species in the study area: 80 English oaks, 80 white hornbeams, and 30 black locusts. The number of samples per species reflects the proportion of each species in the study area. The locations of trees in this area were originally collected by GPS and manually improved using the high-resolution RGB orthomosaics.
For each treetop location, a circular ROI of 4 m in diameter was defined and used to extract the spectral and textural features used as input to the random forest classifier.
Regarding the texture features, a grey level co-occurrence matrix (GLCM) was calculated in ENVI 5.6.1 (NV5 Geospatial Solutions Inc., Broomfield, CO, USA) on the red and NIR bands of the multispectral orthomosaic using a 5 × 5 pixel-size window, and the following features were extracted: mean, variance, homogeneity, contrast, dissimilarity, entropy, second moment, and correlation.
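Although we computed the texture features in ENVI, an equivalent computation can be sketched in R with the glcm package, which provides the same eight GLCM statistics over a moving window; the band file name below is a placeholder.

```r
# GLCM texture features over a 5 x 5 window, sketched with the glcm package.
library(raster)
library(glcm)

nir <- raster("maia_s2_nir.tif")  # placeholder; repeat for the red band
textures <- glcm(nir,
                 window = c(5, 5),
                 statistics = c("mean", "variance", "homogeneity",
                                "contrast", "dissimilarity", "entropy",
                                "second_moment", "correlation"))
```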
The classification was performed with the Random Forest (RF) classifier, an ensemble classifier that utilises a set of classification and regression trees (CARTs) to make a prediction [55]. The classification result (the response) is determined by a majority vote across the trees [56]. In the last few years, RF has become widely used in several remote sensing applications [56,57,58]. Therefore, the RF classifier was chosen to classify the tree species in the "La Fagiana" Forest in Ticino Regional Park. The training set was composed of 133 randomly selected trees. After the training phase, the RF algorithm was applied to the output of the segmentation, consisting of a polygon per tree crown with associated average spectral and textural features extracted from the MAIA S2 data.
The RF classification was performed using the R package “randomForest” [59], one of the most used RF implementations [56]. According to [60], the parameter Ntree was set to 500, and Mtry used for each node was the square root of the total number of input variables. The accuracy of the tree species classification using MAIA multispectral images was assessed through the confusion matrix (CM), by calculating the overall accuracy (OA), Kappa coefficient (k), producer’s accuracy (PA), and user’s accuracy (UA) [61].
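A minimal R sketch of this classification step is given below, using the settings reported above; the feature tables train_df and test_df (per-crown average spectral and textural features plus a species factor) are assumed to exist.

```r
# Random forest species classification with the settings used in this study.
library(randomForest)

n_features <- ncol(train_df) - 1        # all columns except the species label
rf <- randomForest(species ~ ., data = train_df,
                   ntree = 500,                      # Ntree = 500
                   mtry  = floor(sqrt(n_features)),  # Mtry = sqrt(no. of inputs)
                   importance = TRUE)

# Predict the validation crowns and build the confusion matrix
pred <- predict(rf, newdata = test_df)
cm   <- table(reference = test_df$species, predicted = pred)
```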

2.5. Plant Trait Retrieval Workflow

2.5.1. Machine Learning

Plant trait retrieval was performed using a machine learning approach developed using a broader forest dataset of LAI and CCC field measurements coupled with hyperspectral data acquired by the PRISMA spaceborne sensor [62]. The dataset consists of 50 paired field and spectral data (n = 50) and was collected in Ticino Park over a considerably larger area than could be covered by the drone flights [63]. This allowed a wide range of forest microclimates, structures, and conditions to be included in the training dataset. The PRISMA spectra were resampled to the MAIA S2 bands (Figure 3), and several state-of-the-art machine learning regression algorithms (MLRAs) were tested within the Automated Radiative Transfer Model Operator (ARTMO) machine learning regression algorithm toolbox [64,65]. The MLRAs tested included Gaussian Processes Regression (GPR), Support Vector Regression (SVR), Partial Least Squares Regression (PLSR), Neural Networks (NNs), and Random Forest (RF).
The MLRA models were cross-validated using a k-fold cross-validation strategy (k = 6), and the model performance was evaluated in terms of standard goodness-of-fit statistics between measured and estimated values: coefficient of determination (R2), root mean square error (RMSE), normalised RMSE (nRMSE) (i.e., RMSE/range of measured values), bias (i.e., mean of estimated values minus mean of measured values), and relative bias (rbias) (i.e., bias/mean of measured values). The developed models were then applied to the real MAIA S2 spectra collected in Ticino Park and validated against the independent field dataset collected in the Fagiana Forest near-simultaneously with the drone overpasses, as described above (n = 40). To produce the drone-based maps, the MLRA models that provided the best results for the LAI and CCC were applied to the segmentation output, consisting of a polygon per tree crown with the associated average MAIA S2 spectrum. The maps were generated for the MAIA S2 images collected on both 1 July 2022 and 31 August 2022.
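The retrieval models themselves were built in the ARTMO toolbox; purely for illustration, the R sketch below reproduces the k-fold cross-validation (k = 6) and the goodness-of-fit statistics defined above, with a Gaussian process regression from the kernlab package standing in for the MLRA. The spectra matrix X and the trait vector y are assumed to exist.

```r
# 6-fold cross-validation with GPR and the goodness-of-fit statistics used here.
library(kernlab)

set.seed(1)
k     <- 6
folds <- sample(rep(1:k, length.out = length(y)))
pred  <- numeric(length(y))

for (i in 1:k) {
  test       <- folds == i
  model      <- gausspr(x = X[!test, ], y = y[!test])  # GPR on training folds
  pred[test] <- predict(model, X[test, ])
}

rmse  <- sqrt(mean((pred - y)^2))
r2    <- cor(pred, y)^2
nrmse <- 100 * rmse / diff(range(y))   # RMSE / range of measured values (%)
bias  <- mean(pred) - mean(y)
rbias <- 100 * bias / mean(y)          # bias / mean of measured values (%)
```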

2.5.2. Analysis of Plant Traits

A statistical analysis was carried out to test whether the retrieved traits (i.e., the LAI and CCC) differed based on three factors: forest microclimatic condition, species, and time. The latter was tested to analyse whether the drought that occurred in the summer of 2022 had an effect on the LAI and CCC, considering that plants in the Ticino Park normally enter the senescence phase towards late October. Instead, here, we tested the differences between the values obtained from the drone images taken in early (1 July 2022) and late (31 August 2022) summer. The analysis was performed using a three-way analysis of variance (ANOVA). Where results were significant, a post hoc Tukey test was performed to test the significance of the differences between pairs of group means. Both tests were performed in R 3.6.3.
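For reference, this analysis can be expressed in a few lines of R; the data frame traits, with one trait value per crown and the three factors, is hypothetical.

```r
# Three-way ANOVA on a retrieved trait, followed by a Tukey HSD post hoc test.
fit <- aov(LAI ~ time * microclimate * species, data = traits)
summary(fit)                             # main effects and interactions

# Pairwise comparisons for a significant interaction term
TukeyHSD(fit, which = "time:species")
```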

3. Results and Discussion

3.1. Individual Tree Detection

The implementation of ITD from LiDAR data gave satisfactory results for the 236 trees analysed. The overall recall rate (r) was 0.73, ranging from 0.53 to 1, the overall precision rate (p) was 0.74, also ranging from 0.53 to 1, and the overall F-score was 0.74, with a range from 0.55 to 1 (Table 2). These metrics indicate a robust performance for tree detection. Higher F-scores were obtained in stands with lower tree density and reduced canopy overlap, while denser canopies and greater crown overlap resulted in lower F-scores. This trend is common in deciduous mixed forests, where complex crown shapes and vertical structures can reduce the detection accuracy [43,50]. The lower accuracy of some plots may be linked to the use of an LMF to define the treetop, which can lead to overcounting in trees with complex canopy structures, such as in oak–hornbeam forests, where it is often not straightforward to identify a single crown centre treetop [66,67]. Furthermore, in dense forest stands with a closed canopy, crown overlap further complicates the search for and the correct identification of local maxima [68,69].
Our accuracy results were achieved by testing many processing parameter combinations, such as local maxima window size (3, 5, and 7 m diameter) and filtering (e.g., Gaussian). In fact, the choice of point cloud processing parameters, such as CHM resolution [70], filtering method [44], size and shape of the local maxima window [46,71], and segmentation algorithm used [47,72] strongly influence the ITD accuracy.
Our results are consistent with similar studies conducted in structurally complex mixed forests. Reference [12] reported an average F-score of 0.66 ± 0.01 using a similar algorithm in a mixed broadleaved forest. The higher accuracy achieved in our study might have been determined by the use of an unprocessed CHM, which, according to [44], is more reliable than the smoothed CHM because of the more accurate reproduction of the crown shapes. Ref. [47] achieved, in mixed natural forest plots, an r of 0.72–0.85, a p of 0.58–0.51, and an F-score of about 0.64 using a local maximum algorithm. Notably, the stands analysed by [47] are made up of a mixture of deciduous and coniferous species, a feature that simplifies the treetop identification process and therefore probably determines the higher precision of tree identification in some stands compared to our results. This was also the case of [49]. They applied a similar workflow to mixed plots dominated by longleaf pine (Pinus palustris) and turkey oak (Quercus laevis Walter), achieving an F-score of 0.86. Their higher tree detection performance is likely due to the inclusion of both deciduous and evergreen coniferous species. In fact, coniferous species are generally easier to detect and distinguish due to their characteristic conical crown shape, which makes treetop identification easier compared to broadleaved trees [43,47]. Moreover, as highlighted by [50], most segmentation algorithms assume a conical crown structure, thus ensuring better results when applied to coniferous species. On the contrary, tree identification for deciduous forests is still an open challenge, as there is no widely accepted and accurate method.
In addition to the species composition, variations in ITD accuracy can be linked to other ecological factors such as canopy closure [46,47,73]. Reference [46] tested different LM window sizes in forest types with different density conditions, finding that the optimal LM window size is highly species- and density-specific, significantly influencing detection accuracy (F-scores = 0.82–0.91). In our study, we used a single LM window size across all species and canopy density conditions due to the presence of mixed-species plots in our study area, which likely explains our comparatively lower accuracy. Ref. [72] achieved an r of 0.85, a p of 0.70, and an F-score of 0.77 in a mixed plot dominated by sycamore (Acer pseudoplatanus L.) and English oak using an MCWS algorithm. Overall, the better F-score value may be related to the number of reference trees used, which was significantly lower than ours.

3.2. Classification of Forest Species

The random forest classification achieved a high overall accuracy of 84% and a Kappa coefficient of 0.74 (Table 3). The algorithm was trained with 133 tree crown objects and validated with 57 tree crown objects. Black locust was classified with 100% accuracy, while some misclassification occurred between white hornbeam and English oak (PA = 75% and 88%, respectively) due to their spectral similarity. The resulting drone-based species map (Figure 4) illustrates the spatial distribution of English oak, white hornbeam, and black locust.
Comparing our results with other studies is not so straightforward due to differences in the number of species classified, training size, and the different features taken into consideration. However, examining comparable research on object-based tree species classification reveals several factors likely contributing to the high accuracy in our study.
Firstly, the inclusion of textural metrics in addition to spectral features in our classification process likely enhanced accuracy. For instance, our classification accuracy was slightly higher than that of similar studies that did not incorporate textural metrics, such as [74], who reported 78% accuracy with four species. Similar findings were reported by [75] (73% accuracy), [76] (64.85% accuracy), [77] (77% accuracy), and [7] (78% accuracy), who consistently observed improved results when texture-based metrics were included in the classification process. Secondly, the fact that MAIA S2 is one of the multispectral sensors with the highest number of spectral channels likely contributed to our improved classification performance compared to studies that used lower-resolution sensors, such as [74,75].
However, our accuracy was lower compared to studies focusing on conifer species. Ref. [74] reported a user's accuracy of 87% in a coniferous forest using multispectral drone images. Ref. [78] achieved 95% accuracy and a Kappa coefficient of 0.95 in an RF classification in a temperate forest, where two of the four species were coniferous, using the eight spectral bands of the WorldView-2 satellite (comparable to those of the MAIA sensor used in this study). In general, conifers are characterised by easier treetop identification and greater spectral differentiation between species [50]. These findings were also confirmed by [79], who achieved an overall species classification accuracy of 90% for conifers and 80% for broadleaves when analysing multi-temporal datasets of Sentinel-2 images.
Concerning the species distribution in our study area, Figure 4 shows how English oak is the dominant species in the xerophilic area of the Fagiana Forest (highlighted in red in Figure 1), although it is also evident that English oak density is relatively low. This likely reflects the trees’ adaptation to the limited water availability. Indeed, reduced forest density increases the trees’ resistance to water scarcity by minimising competition for scarce resources [80].
In the mesophilic areas of the Fagiana Forest (highlighted in yellow in Figure 1), a noticeably denser English oak canopy cover can be observed (Figure 4), indicating more favourable soil moisture conditions, which also support the sporadic presence of white hornbeam [81]. The meso-hygrophilic areas of the Fagiana Forest (highlighted in green in Figure 1) are dominated by white hornbeam trees, which, mixed with English oaks, form an oak–hornbeam association typical of the Ticino Forest. This association is indicative of moister soil conditions, as evidenced by the taller trees and denser foliage compared to the xerophilic zones. In contrast, black locust is the least represented species, confined to distinct patches mainly along the edges of the mapped area, with few individual trees scattered throughout the different forest microclimatic conditions (Figure 4).

3.3. Plant Trait Retrieval

The cross-validation statistics of the models calculated between the measured field data and the estimated data retrieved from the PRISMA dataset resampled to the MAIA S2 spectral configuration (n = 50) are shown in Table 4. Overall, both the LAI and CCC showed very high performance in cross-validation. More specifically, the LAI was estimated with high accuracy with all the investigated MLRA (R2 = 0.84–0.90, nRMSE = 8.66–11.01%), whereas CCC showed a more variable performance depending on the MLRA used (R2 = 0.68–0.83, nRMSE = 9.17–13.08%). Among the MLRA, the kernel-based algorithms (i.e., SVR, GPR) and PLSR showed the highest predictive capacity.
The goodness-of-fit metrics calculated between the field dataset collected near-simultaneously with the drone acquisitions and the drone-based retrievals obtained by applying the developed MLRA models to the MAIA S2 spectra (n = 40) are shown in Table 5. As expected, the results obtained using an independent validation dataset are slightly worse than those obtained in cross-validation on the PRISMA dataset resampled to the MAIA S2 spectral resolution. Still, both the LAI and CCC were accurately estimated. GPR, SVR, and PLSR showed the highest predictive capacity for both the LAI (R2 = 0.81–0.83, nRMSE = 14.18–16.39%) and CCC (R2 = 0.79–0.80, nRMSE = 22.45–27.7%). Using the fully independent dataset and the actual MAIA S2 data, all models showed a slight to moderate tendency to overestimate compared to the field data (rbias = 4.8–43.23%), especially for CCC. The scatter plots showing the measured and estimated values obtained from the MAIA S2 sensor for the LAI and CCC are shown in Figure 5 and Figure 6, respectively.
Overall, the results obtained in this study are promising towards the development of effective and consolidated retrieval schemes for the estimation of forest traits using drone-based sensors. The LAI and CCC were accurately estimated using an effective approach based on an MLRA trained on a reasonable amount of data, which could be easily applied to similar conditions or updated by adding more training samples to extend its applicability. Previous studies using drone data to estimate plant traits have mainly focused on crops, e.g., [82,83,84,85], while only a few studies have dealt with forests, e.g., [25,26]. The literature reports promising results in the retrieval of leaf or canopy chlorophyll content using a look-up table (LUT) or MLRA-based approaches. Ref. [86] used a hybrid approach based on radiative transfer simulations coupled with an artificial neural network to estimate the LCC and CCC of apple orchards from DJI Phantom 4 multispectral data, achieving an R2 of 0.73 and 0.79 and RMSE of 6.63 and 28.48 μg cm−2, respectively. Ref. [84] used the same sensor to estimate the LCC of sugarcane using the MLRA applied on vegetation indices, with R2 = 0.68–0.98. Ref. [82] used MicaSense Dual multispectral data to retrieve the LCC and CCC of maize using an LUT-based approach, obtaining RMSE = 3.74–4.92 μg cm−2 and RMSE = 33.1 μg cm−2, respectively. Such results are in line with ours (R2 = 0.80, RMSE = 0.33 g m−2, nRMSE = 24.02%), though we targeted a forest canopy which adds complexity to the retrieval because of the structure. In these ecosystems, previous studies have found contrasting results: [26] estimated the LCC of Norway spruce from Parrot Sequoia data with moderate accuracy (R2 = 0.45–0.49), while [25] achieved very good results in retrieving the LCC of Himalayan pine from MicaSense RedEdge data using an LUT-based approach (R2 = 0.94, RMSE = 6.20 μg cm−2).
The retrieval of the LAI is often reported to be more challenging, especially in complex canopies with mixed sunlit and shaded pixels [82,86]. The presence of shadows, rows, and varying leaf angles in fact confounds the signal, posing difficulties in the accurate quantification of the LAI. Ref. [82] obtained an RMSE of 0.61–0.7 m2 m−2 in the estimation of the LAI of maize, which has a more complex geometry compared to turbid medium crops, and [86] achieved R2 = 0.74 and RMSE = 0.28 m2 m−2 in the retrieval of the LAI of apple orchards. In beech forests, [87] estimated the LAI from a drone-based RGB camera with R2 = 0.59–0.7. In our study, the good results obtained (R2 = 0.83, RMSE = 0.44 m2 m−2) indicate that the problem of different lighting conditions at the high spatial resolution of the drones is probably mitigated by using the average crown reflectance instead of a pixel-based retrieval, which is in line with the findings of [26].
Figure 7 shows the LAI and CCC maps of the Fagiana Forest obtained by applying the best-performing MLRA to the MAIA S2 segmented images acquired on 1 July 2022 and 31 August 2022. A significant reduction in the LAI and CCC (indicative of biomass loss and chlorosis) is evident between these two dates (Figure 7e,f), likely as a result of the persistent drought of summer 2022 [28,29]. In the Fagiana Forest, CCC exhibited a stronger decline (Figure 7f) compared to the LAI (Figure 7e). This suggests that CCC may be a more sensitive indicator for detecting the effects of water shortage on vegetation functionality compared to the LAI. This result is consistent with previous studies showing the higher sensitivity of chlorophyll content compared to the LAI for assessing the condition of English oak trees in Ticino Park [88,89].

3.4. Functional Trait Analysis

The ANOVA results showed that LAI values were significantly influenced by time, forest microclimatic conditions, and species (Table 6). The pairwise interaction between time and forest microclimatic conditions was only weakly significant, suggesting that the LAI varied consistently through time across the three different microclimatic conditions. In contrast, the pairwise interaction analysis indicated a highly significant interaction between time and species, suggesting that temporal changes in the LAI differ considerably across species (Table 6, Figure 8a). Specifically, the Tukey post hoc test revealed a significant decrease in the LAI between 1 July 2022 and 31 August 2022 for both white hornbeam (−16%) and English oak trees (−12%). In contrast, no statistically significant change was observed for black locust during this period (Figure 8a). Notably, black locust already had a low mean LAI value in early July compared to the other two species.
The ANOVA analysis also revealed a statistically significant interaction between forest microclimatic conditions and species, indicating that the LAI differs significantly across various forest microclimatic conditions for the same species (Table 6, Figure 8b). For the black locust species, the LAI of the mesophilic forest is 22% lower than the meso-hygrophilic one and 55% lower when comparing the meso-hygrophilic and xerophilic environments (Figure 8b). In the case of white hornbeam, only the differences in the LAI between meso-hygrophilic and xerophilic forest were significant (with a reduction of 20%) (Figure 8b). For English oak, the differences in the LAI among the different forest microclimatic conditions were also significant, with a reduction of 7% between meso-hygrophilic and mesophilic, and a reduction of 18% when comparing the meso-hygrophilic and xerophilic forest.
The ANOVA results for CCC likewise indicated that CCC values were significantly influenced by time, forest microclimatic condition, and species (Table 7). A significant interaction between time and species was observed, suggesting that temporal changes in CCC differ substantially between species (Table 7, Figure 9a). The Tukey post hoc test revealed a significant decrease in CCC between 1 July 2022 and 31 August 2022 for black locust (−14%), white hornbeam (−21%), and English oak trees (−18%). Similar to the LAI results, black locust had a lower initial CCC compared to the other two species (Figure 9a). However, in this case, the decrease in CCC for black locust between July and the end of August was statistically significant.
This suggests that CCC may be a more sensitive indicator of drought-induced vegetation stress compared to the LAI. In fact, as the product of LCC and the LAI, CCC captures information on both plant chlorosis and canopy biomass, thus providing a deeper insight into the physiological status of plants.
The ANOVA analysis also revealed a statistically significant interaction between forest microclimatic conditions and species, indicating that CCC differs significantly for the same species across various forest microclimatic conditions (Table 7, Figure 9b). The Tukey post hoc test confirmed a statistically significant difference in CCC between the meso-hygrophilic and xerophilic environments for the three species analysed (Figure 9b). In the case of black locust, the reduction in CCC between the meso-hygrophilic and xerophilic forest was −56%, whereas, for white hornbeam and English oak, it was −18%.
Overall, both white hornbeam and English oak showed a decrease in the LAI and CCC when comparing the values obtained on 1 July 2022 and 31 August 2022, thus confirming the efficiency of the drone-based estimation of functional traits to detect a drought-induced variation. This result is in line with what was observed by PRISMA in the Ticino Forest between June and early September [63]. Moreover, in the case of English oak, our results align with [18], who described discolouration episodes in 10 different English oak stands in Ticino Park during the 2003 summer heatwave, highlighting the susceptibility of English oak to water scarcity and high temperatures and the effectiveness of remotely sensed CCC in detecting English oak stress.
In contrast, black locust showed a less pronounced decline in functional traits between early July and the end of August 2022 compared to the other two species analysed. However, it already exhibited an unusually low LAI in early July, with a mean value of 1.9 m2 m−2, for a broadleaf species at the peak of the vegetative season. Field observations during the sampling campaign confirmed that black locust trees had relatively small canopies and appeared to be in poor health. This pre-existing condition could explain the lack of a significant reduction in the LAI over time, as the species was already under stress. Thus, although black locust is generally recognised for its stress tolerance, its non-significant reduction in functional traits under drought conditions in the Fagiana Forest could be due to its compromised health rather than a distinctive tolerance to the lack of water. Ref. [90] reported the decline of this species in the Ticino Forest, underlining the species' vulnerability to climate change in this region. Reference [91] further supports this by suggesting that black locust distribution models predict its decline in Southern Europe, potentially leading to a northward range shift favoured by future warmer climatic conditions.
The drone-based functional trait retrieval effectively captured the variation in the LAI and CCC among the three analysed species within the different forest microclimatic conditions. Both LAI and CCC values were higher in the meso-hygrophilic and mesophilic forest areas compared with the xerophilic one. This reflects the generally smaller leaf size and reduced canopies in the drier forest area, a form of adaptation to the lower soil moisture availability [92].

3.5. Strengths and Limitations

The results of this study highlighted the importance of integrating multiple drone-based sensors, which together provided a comprehensive and accurate analysis of the Fagiana Forest ecosystem. On the one hand, we obtained a precise spatial reconstruction of both forest structure and species composition. On the other hand, the proposed integrated approach allowed the accurate quantification of forest functional traits (the LAI and CCC). The drone-based high-resolution tree-level data obtained offer valuable and detailed insights into forest structure and ecological processes, accounting for variations related to species and forest microclimatic conditions. The processing workflow was optimised for automation and largely based on open-source software, ensuring both efficiency and accessibility. Therefore, this approach can be applied in similar contexts.
The developed methodology, by providing an automated workflow for the accurate reconstruction of forest structure and plant trait retrieval, represents an important step forward in the understanding and parameterisation of process-based ecological models for estimating the gross primary productivity of forest ecosystems, which is fundamental for assessing and predicting fluctuations in carbon storage due to inter- and intra-seasonal variations in climate variables.
Despite the promising results, certain limitations were encountered in the proposed processing workflow. Firstly, the tree detection accuracy was not always high due to challenges in segmenting complex canopies and correctly identifying treetops. To address this, advances in segmentation algorithms for broadleaf species are needed to improve the automatic identification of treetops and tree crowns. Moreover, varying the processing parameters (e.g., CHM resolution, CHM smoothing, and LM window size) according to canopy conditions (dense or sparse) and species types could further improve detection accuracy [46]. Secondly, the species classification accuracy could potentially be enhanced through the use of multitemporal multispectral data [93,94]. Ref. [93] demonstrated that utilising data from three different acquisition dates significantly improved species classification accuracy compared to relying on data from a single date (no further benefits were observed beyond three dates). Finally, although the plant traits were accurately quantified, the retrieval approach used has some limitations that may hinder the application of the workflow in different contexts. Data-driven approaches based on machine learning regression algorithms, although remarkably powerful, typically come at the expense of transferability [95,96]. In addition, they can be biased by the characteristics of the sensor used in the training phase, in this case, the PRISMA data. To broaden the applicability of the developed methodology, the use of hybrid approaches based on the combination of radiative transfer simulations and machine learning regression could be explored. However, this task is not trivial in forest ecosystems and at the high spatial resolution of drones due to the complexity of the canopy structure, which requires the use of geometric models that are difficult to parameterise and computationally expensive [97,98]. The proposed solution provides a relatively simple retrieval workflow, which represents a trade-off between generalisability and operability.

4. Conclusions

In this study, we demonstrated the effectiveness of integrating drone-based LiDAR and multispectral cameras to support forest monitoring. Specifically, our proposed method successfully detected individual trees within a dense broadleaf forest using drone-based LiDAR point cloud data, achieving high accuracy despite the inherent challenges of distinguishing broadleaf species, which are generally harder to differentiate than conifers.
The high tree detection accuracy also contributed to the strong performance of the object-based classification techniques applied to the multispectral MAIA S2 images. The classification accuracy was slightly higher than in similar studies, likely due to the inclusion of textural metrics and the higher resolution of the MAIA S2 sensor. Black locust was classified with greater accuracy compared to white hornbeam and English oak, likely due to spectral similarities between the latter species.
The retrieval of plant traits such as the LAI and CCC from multispectral imagery using machine learning models also performed well, showing high accuracy when compared to field data. The best-performing algorithm for both the LAI and CCC was GPR. The functional trait maps obtained revealed a significant reduction in the LAI and CCC in the Fagiana Forest between the acquisitions of 1 July and 31 August 2022, as a consequence of the severe drought in the summer of 2022. English oak and white hornbeam experienced marked reductions in both traits, while black locust showed less pronounced changes, possibly due to its pre-existing poor health in Ticino Regional Park.
These results underline the effectiveness of the proposed workflow in effectively collecting information on forest structure and species composition, as well as quantitatively estimating forest functional traits. This method provides a powerful tool for forest managers to track and respond to climate impacts on diverse forest species.

Author Contributions

Conceptualization, C.P. and M.R.; methodology, G.T., L.V., R.G. (Rodolfo Gentili), C.P. and M.R.; software, L.V. and R.G. (Roberto Garzonio); validation, G.T. and L.V.; formal analysis, L.V., G.T. and R.G. (Roberto Garzonio); investigation, B.S., R.G. (Rodolfo Gentili), L.V. and C.P.; data curation, L.V. and G.T.; writing—original draft preparation, B.S., G.T. and L.V.; writing—review and editing, B.S., G.T., L.V., R.G. (Roberto Garzonio), R.G. (Rodolfo Gentili), C.P. and M.R.; visualisation, L.V. and G.T.; supervision, C.P. and M.R.; project administration, M.R. and C.P.; and funding acquisition, M.R. All authors have read and agreed to the published version of the manuscript.

Funding

Field data collection was partially conducted within the Contract “PRIS4VEG” n. 2022-5-U.0 (CU 43C22000000005). L. Vignali was supported by the Italian National Recovery and Resilience Plan (NRRP), Mission 4 Component 2 Investment 1.4—Call for tender No. 3138 of 16 December 2021, rectified by Decree n.3175 of 18 December 2021 of the Italian Ministry of University and Research, funded by the European Union—NextGenerationEU. Award Number: Project code CN_00000033, Concession Decree No. 1034 of 17 June 2022, adopted by the Italian Ministry of University and Research, CUP, H43C22000530001, Project title “National Biodiversity Future Center—NBFC”. The drones are part of the GEMMA (Geo-Environmental Measuring and Monitoring from multiple plAtforms) laboratory of the Department of Earth and Environmental Sciences (UNIMIB), financed by MIUR—Dipartimenti di Eccellenza 2023–2027, Project TECLA, Department of Earth and Environmental Sciences, University of Milano-Bicocca.

Data Availability Statement

Dataset available upon request from the authors.

Acknowledgments

We thank R. Castrovinci (Ticino Regional Park) and L. Gallia (UNIMIB) for the support in the field data collection, R. Darvishzadeh Varchehi (UT) for the suggestions on leaf sampling in the forest, V. Picchi and A. Calzone (CREA-IT) for the leaf chlorophyll content extractions, and V. Parco and F. Caronni (Ticino Regional Park) for the support and scientific exchange in Ticino Park.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Krieger, D.J. Economic Value of Forest Ecosystem Services: A Review; The Wilderness Society: Washington, DC, USA, 2001. [Google Scholar]
  2. Achard, F.; Hansen, M.C. Global Forest Monitoring from Earth Observation; CRC Press Taylor & Francis: Boca Raton, FL, USA, 2012. [Google Scholar]
  3. Seidl, R.; Thom, D.; Kautz, M.; Martin-Benito, D.; Peltoniemi, M.; Vacchiano, G.; Reyer, C.P. Forest disturbances under climate change. Nat. Clim. Chang. 2017, 7, 395–402. [Google Scholar] [CrossRef] [PubMed]
  4. Guo, Q.; Zhang, J.; Guo, S.; Ye, Z.; Deng, H.; Hou, X.; Zhang, H. Urban tree classification based on object-oriented approach and random forest algorithm using unmanned aerial vehicle (UAV) multispectral imagery. Remote Sens. 2022, 14, 3885. [Google Scholar] [CrossRef]
  5. Tang, L.; Shao, G. Drone remote sensing for forestry research and practices. J. For. Res. 2015, 26, 791–797. [Google Scholar] [CrossRef]
  6. Wang, R.; Gamon, J.A. Remote sensing of terrestrial plant biodiversity. Remote Sens. Environ. 2019, 231, 111218. [Google Scholar] [CrossRef]
  7. Gini, R.; Sona, G.; Ronchetti, G.; Passoni, D.; Pinto, L. Improving tree species classification using UAS multispectral images and texture measures. ISPRS Int. J. Geo-Inf. 2018, 7, 315. [Google Scholar] [CrossRef]
  8. Guimarães, N.; Pádua, L.; Marques, P.; Silva, N.; Peres, E.; Sousa, J.J. Forestry remote sensing from unmanned aerial vehicles: A review focusing on the data, processing and potentialities. Remote Sens. 2020, 12, 1046. [Google Scholar] [CrossRef]
  9. Torresan, C.; Berton, A.; Carotenuto, F.; Di Gennaro, S.F.; Gioli, B.; Matese, A.; Miglietta, F.; Vagnoli, C.; Zaldei, A.; Wallace, L. Forestry applications of UAVs in Europe: A review. Int. J. Remote Sens. 2017, 38, 2427–2447. [Google Scholar] [CrossRef]
  10. Shahbazi, M.; Théau, J.; Ménard, P. Recent applications of unmanned aerial imagery in natural resource management. GIScience Remote Sens. 2014, 51, 339–365. [Google Scholar] [CrossRef]
  11. Wallace, L.; Lucieer, A.; Malenovský, Z.; Turner, D.; Vopěnka, P. Assessment of forest structure using two UAV techniques: A comparison of airborne laser scanning and structure from motion (SfM) point clouds. Forests 2016, 7, 62. [Google Scholar] [CrossRef]
  12. van Leeuwen, M.; Nieuwenhuis, M. Retrieval of forest structural parameters using LiDAR remote sensing. Eur. J. Forest Res. 2010, 129, 749–770. [Google Scholar] [CrossRef]
  13. Toivonen, J.; Kangas, A.; Maltamo, M.; Kukkonen, M.; Packalen, P. Assessing biodiversity using forest structure indicators based on airborne laser scanning data. For. Ecol. Manag. 2023, 546, 121376. [Google Scholar] [CrossRef]
  14. Chen, Q.; Gao, T.; Zhu, J.; Wu, F.; Li, X.; Lu, D.; Yu, F. Individual tree segmentation and tree height estimation using leaf-off and leaf-on UAV-LiDAR data in dense deciduous forests. Remote Sens. 2022, 14, 2787. [Google Scholar] [CrossRef]
  15. Rogers, B.M.; Solvik, K.; Hogg, E.H.; Ju, J.; Masek, J.G.; Michaelian, M.; Goetz, S.J. Detecting early warning signals of tree mortality in boreal North America using multiscale satellite data. Glob. Chang. Biol. 2018, 24, 2284–2304. [Google Scholar] [CrossRef] [PubMed]
  16. Anderegg, W.R.L.; Anderegg, L.D.L.; Huang, C.W. Testing early warning metrics for drought-induced tree physiological stress and mortality. Glob. Chang. Biol. 2019, 25, 2459–2469. [Google Scholar] [CrossRef] [PubMed]
  17. Le, T.S.; Harper, R.; Dell, B. Application of remote sensing in detecting and monitoring water stress in forests. Remote Sens. 2023, 15, 3360. [Google Scholar] [CrossRef]
  18. Panigada, C.; Rossini, M.; Busetto, L.; Meroni, M.; Fava, F.; Colombo, R. Chlorophyll concentration mapping with MIVIS data to assess crown discoloration in the Ticino Park oak forest. Int. J. Remote Sens. 2010, 31, 3307–3332. [Google Scholar] [CrossRef]
  19. Darvishzadeh, R.; Skidmore, A.; Abdullah, H.Y.; Cherenet, E.; Ali, A.M.; Nieuwenhuis, W.; Paganini, M. Mapping leaf chlorophyll content from sentinel-2 and rapideye data in spruce stands using the invertible forest reflectance model. Int. J. Appl. Earth Obs. Geoinf. 2019, 79, 58–70. [Google Scholar] [CrossRef]
  20. Campos-Taberner, M.; Moreno-Martínez, Á.; García-Haro, F.; Camps-Valls, G.; Robinson, N.; Kattge, J.; Running, S. Global estimation of biophysical variables from Google Earth Engine platform. Remote Sens. 2018, 10, 1167. [Google Scholar] [CrossRef]
  21. Chen, J.M.; Black, T.A. Defining leaf area index for non-flat leaves. Plant Cell Environ. 1992, 15, 421–429. [Google Scholar] [CrossRef]
  22. Gitelson, A.A.; Viña, A.; Ciganda, V.; Rundquist, D.C.; Arkebauer, T.J. Remote estimation of canopy chlorophyll content in crops. Geophys. Res. Lett. 2005, 32, L08403. [Google Scholar] [CrossRef]
  23. Myneni, R.B.; Nemani, R.R.; Running, S.W. Estimation of global leaf area index and absorbed PAR using radiative transfer models. IEEE Trans. Geosci. Remote Sens. 1997, 35, 1380–1393. [Google Scholar] [CrossRef]
  24. Wang, L.; Chang, Q.; Li, F.; Yan, L.; Huang, Y.; Wang, Q. Effects of growth stage development on paddy rice leaf area index prediction models. Remote Sens. 2019, 11, 361. [Google Scholar] [CrossRef]
  25. Singh, P.; Srivastava, P.K.; Verrelst, J.; Mall, R.K.; Rivera, J.P.; Dugesar, V.; Prasad, R. High resolution retrieval of leaf chlorophyll content over Himalayan pine forest using Visible/IR sensors mounted on UAV and radiative transfer model. Ecol. Inform. 2023, 75, 102099. [Google Scholar] [CrossRef]
  26. Kopačková-Strnadová, V.; Koucká, L.; Jelének, J.; Lhotáková, Z.; Oulehle, F. Canopy top, height and photosynthetic pigment estimation using parrot sequoia multispectral imagery and the unmanned aerial vehicle (UAV). Remote Sens. 2021, 13, 705. [Google Scholar] [CrossRef]
  27. Del Favero, R. I Tipi Forestali Della Lombardia; Cierre: Verona, Italy, 2002. [Google Scholar]
  28. Faranda, D.; Pascale, S.; Bulut, B. Persistent anticyclonic conditions and climate change exacerbated the exceptional 2022 European-Mediterranean drought. Environ. Res. Lett. 2023, 18, 034030. [Google Scholar] [CrossRef]
  29. Gharun, M.; Shekhar, A.; Xiao, J.; Li, X.; Buchmann, N. Effect of the 2022 Summer Drought across Forest Types in Europe. EGUsphere 2024. [Google Scholar] [CrossRef]
  30. Tagliabue, G.; Boschetti, M.; Bramati, G.; Candiani, G.; Colombo, R.; Nutini, F.; Pompilio, L.; Rivera-Caicedo, J.P.; Rossi, M.; Rossini, M.; et al. Hybrid retrieval of crop traits from multi-temporal PRISMA hyperspectral imagery. ISPRS J. Photogramm. Remote Sens. 2022, 187, 362–377. [Google Scholar] [CrossRef]
  31. Lichtenthaler, H.K.; Buschmann, C. Chlorophylls and Carotenoids Measurement and UV-VIS characterization by UV-VIS Spectroscopy. Curr. Protoc. Food Anal. Chem. 2001, 3, 1–8. [Google Scholar] [CrossRef]
  32. Štroner, M.; Urban, R.; Línková, L. A New Method for UAV Lidar Precision Testing Used for the Evaluation of an Affordable DJI ZENMUSE L1 Scanner. Remote Sens. 2021, 13, 4811. [Google Scholar] [CrossRef]
  33. Nocerino, E.; Dubbini, M.; Menna, F.; Remondino, F.; Gattelli, M.; Covi, D. Geometric calibration and radiometric correction of the MAIA multispectral camera. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 42, 149–156. [Google Scholar] [CrossRef]
  34. Czyża, S.; Szuniewicz, K.; Kowalczyk, K.; Dumalski, A.; Ogrodniczak, M.; Zieleniewicz, Ł. Assessment of Accuracy in Unmanned Aerial Vehicle (UAV) Pose Estimation with the Real-Time Kinematic (RTK) Method on the Example of DJI Matrice 300 RTK. Sensors 2023, 23, 2092. [Google Scholar] [CrossRef] [PubMed]
  35. Karolos, I.A.; Bellos, K.; Alexandridis, V.; Chrysafis, I.; Georgiadis, H.; Pikridas, C.; Mallinis, G. Advancing forest biodiversity conservation with the EL-BIOS digital twin: An integration of LiDAR and multispectral earth observation data. In Proceedings of the Tenth International Conference on Remote Sensing and Geoinformation of the Environment (RSCy2024), SPIE, Paphos, Cyprus, 8–9 April 2024; Volume 13212, pp. 361–375. [Google Scholar]
  36. Gómez-Gutiérrez, Á.; Sánchez-Fernández, M.; de Sanjosé-Blasco, J.J.; Gudino-Elizondo, N.; Lavado-Contador, F. Is it possible to generate accurate 3D point clouds with UAS-LIDAR and UAS-RGB photogrammetry without GCPs? A case study on a beach and rocky cliff. Landsc. Ecol. 2024, 39, 191. [Google Scholar] [CrossRef]
  37. Girardeau-Montaut, D. CloudCompare; EDF R&D Telecom ParisTech: Paris, France, 2016; p. 11. [Google Scholar]
  38. Silva, C.A.; Crookston, N.L.; Hudak, A.T.; Vierling, L.A.; Klauberg, C.; Silva, M.C.A. Package ‘rLiDAR’. The CRAN Project. 2017. Available online: https://github.com/carlos-alberto-silva/rLiDAR (accessed on 2 February 2023).
  39. Plowright, A. R Package ‘ForestTools’. 2018. Available online: https://github.com/andrew-plowright/ForestTools (accessed on 31 January 2023).
  40. Roussel, J.R.; Auty, D.; Coops, N.C.; Tompalski, P.; Goodbody, T.R.H.; Meador, A.S.; Bourdon, J.F.; de Boissieu, F.; Achim, A. lidR: An R package for analysis of Airborne Laser Scanning (ALS) data. Remote Sens. Environ. 2020, 251, 112061. [Google Scholar] [CrossRef]
  41. Hardenbol, A.A.; Korhonen, L.; Kukkonen, M.; Maltamo, M. Detection of standing retention trees in boreal forests with airborne laser scanning point clouds and multispectral imagery. Methods Ecol. Evol. 2023, 14, 1610–1622. [Google Scholar] [CrossRef]
  42. Zhang, K.; Chen, S.C.; Whitman, D.; Shyu, M.L.; Yan, J.; Zhang, C. A progressive morphological filter for removing nonground measurements from airborne LIDAR data. IEEE Trans. Geosci. Remote Sens. 2003, 41, 872–882. [Google Scholar] [CrossRef]
  43. Hastings, J.H.; Ollinger, S.V.; Ouimette, A.P.; Sanders-DeMott, R.; Palace, M.W.; Ducey, M.J.; Sullivan, F.B.; Basler, D.; Orwig, D.A. Tree species traits determine the success of LiDAR-based crown mapping in a mixed temperate forest. Remote Sens. 2020, 12, 309. [Google Scholar] [CrossRef]
  44. Mielcarek, M.; Stereńczak, K.; Khosravipour, A. Testing and evaluating different LiDAR-derived canopy height model generation methods for tree height estimation. Int. J. Appl. Earth Obs. Geoinf. 2018, 71, 132–143. [Google Scholar] [CrossRef]
  45. Popescu, S.C.; Wynne, R.H. Seeing the Trees in the Forest. Photogramm. Eng. Remote Sens. 2004, 70, 589–604. [Google Scholar] [CrossRef]
  46. Gao, T.; Gao, Z.; Sun, B.; Qin, P.; Li, Y.; Yan, Z. An integrated method for estimating Forest-canopy closure based on UAV LiDAR data. Remote Sens. 2022, 14, 4317. [Google Scholar] [CrossRef]
  47. Ma, K.; Chen, Z.; Fu, L.; Tian, W.; Jiang, F.; Yi, J.; Du, Z.; Sun, H. Performance and Sensitivity of Individual Tree Segmentation Methods for UAV-LiDAR in Multiple Forest Types. Remote Sens. 2022, 14, 298. [Google Scholar] [CrossRef]
  48. Meyer, F.; Beucher, S. Morphological segmentation. J. Vis. Commun. Image Represent. 1990, 1, 21–46. [Google Scholar] [CrossRef]
  49. Mohan, M.; Leite, R.; Broadbent, E.; Wan Mohd Jaafar, W.; Srinivasan, S.; Bajaj, S.; Dalla Corte, A.; do Amaral, C.; Gopan, G.; Saad, S.; et al. Individual tree detection using UAV-lidar and UAV-SfM data: A tutorial for beginners. Open Geosci. 2021, 13, 1028–1039. [Google Scholar] [CrossRef]
  50. Zhen, Z.; Quackenbush, L.J.; Zhang, L. Trends in automatic individual tree crown detection and delineation—Evolution of LiDAR data. Remote Sens. 2016, 8, 333. [Google Scholar] [CrossRef]
  51. Goutte, C.; Gaussier, E. A probabilistic interpretation of precision, recall and F-score, with implication for evaluation. In European Conference on Information Retrieval; Springer: Berlin/Heidelberg, Germany, 2005; pp. 345–359. [Google Scholar]
  52. Sokolova, M.; Japkowicz, N.; Szpakowicz, S. Beyond accuracy, F-score and ROC: A family of discriminant measures for performance evaluation. In Australasian Joint Conference on Artificial Intelligence; Springer: Berlin/Heidelberg, Germany, 2006; pp. 1015–1021. [Google Scholar]
  53. Rossini, M.; Garzonio, R.; Panigada, C.; Tagliabue, G.; Bramati, G.; Vezzoli, G.; Cogliati, S.; Colombo, R.; Di Mauro, B. Mapping Surface Features of an Alpine Glacier through Multispectral and Thermal Drone Surveys. Remote Sens. 2023, 15, 3429. [Google Scholar] [CrossRef]
  54. Smith, G.M.; Milton, E.J. The Use of the Empirical Line Method to Calibrate Remotely Sensed Data to Reflectance. Int. J. Remote Sens. 1999, 20, 2653–2662. [Google Scholar] [CrossRef]
  55. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  56. Belgiu, M.; Drăguţ, L. Random forest in remote sensing: A review of applications and future directions. ISPRS J. Photogramm. Remote Sens. 2016, 114, 24–31. [Google Scholar] [CrossRef]
  57. Ma, M.; Liu, J.; Liu, M.; Zeng, J.; Li, Y. Tree species classification based on sentinel-2 imagery and random forest classifier in the eastern regions of the qilian mountains. Forests 2021, 12, 1736. [Google Scholar] [CrossRef]
  58. Liu, M.; Liu, J.; Atzberger, C.; Jiang, Y.; Ma, M.; Wang, X. Zanthoxylum bungeanum Maxim mapping with multi-temporal Sentinel-2 images: The importance of different features and consistency of results. ISPRS J. Photogramm. Remote Sens. 2021, 174, 68–86. [Google Scholar] [CrossRef]
  59. Liaw, A.; Wiener, M. Classification and Regression by randomForest. R News 2002, 2, 18–22. [Google Scholar]
  60. Lawrence, R.L.; Wood, S.D.; Sheley, R.L. Mapping invasive plants using hyperspectral imagery and Breiman Cutler classifications (randomForest). Remote Sens. Environ. 2006, 100, 356–362. [Google Scholar] [CrossRef]
  61. Story, M.; Congalton, R.G. Remote Sensing Brief Accuracy Assessment: A User’s Perspective. Photogramm. Eng. Remote Sens. 1986, 52, 397–399. [Google Scholar]
  62. Cogliati, S.; Sarti, F.; Chiarantini, L.; Cosi, M.; Lorusso, R.; Lopinto, E.; Miglietta, F.; Genesio, L.; Guanter, L.; Damm, A.; et al. The PRISMA imaging spectroscopy mission: Overview and first performance analysis. Remote Sens. Environ. 2021, 262, 112449. [Google Scholar] [CrossRef]
  63. Tagliabue, G.; Panigada, C.; Savinelli, B.; Vignali, L.; Gallia, L.; Gentili, R.; Picchi, V.; Calzone, A.; Colombo, R.; Rossini, M. Exploitation of PRISMA spaceborne hyperspectral observations for improved functional trait retrievals in mid-latitude forest ecosystems. In Proceedings of the IGARSS 2023 IEEE International Geoscience and Remote Sensing Symposium, Pasadena, CA, USA, 16–21 July 2023; pp. 1261–1264. [Google Scholar] [CrossRef]
  64. Caicedo, J.P.R.; Verrelst, J.; Munoz-Mari, J.; Moreno, J.; Camps-Valls, G. Toward a semiautomatic machine learning retrieval of biophysical parameters. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 1249–1259. [Google Scholar] [CrossRef]
  65. Verrelst, J.; Rivera, J.P.; Alonso, L.; Moreno, J. ARTMO: An Automated Radiative Transfer Models Operator toolbox for automated retrieval of biophysical parameters through model inversion. In Proceedings of the EARSeL 7th SIG-Imaging Spectroscopy Workshop, Edinburgh, UK, 11–13 April 2011; pp. 11–13. [Google Scholar]
  66. Tanhuanpää, T.; Saarinen, N.; Kankare, V.; Nurminen, K.; Vastaranta, M.; Honkavaara, E.; Hyyppä, J. Evaluating the performance of high-altitude aerial image-based digital surface models in detecting individual tree crowns in mature boreal forests. Forests 2016, 7, 143. [Google Scholar] [CrossRef]
  67. Huang, H.; Li, X.; Chen, C. Individual tree crown detection and delineation from very-high-resolution UAV images based on bias field and marker-controlled watershed segmentation algorithms. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 2253–2262. [Google Scholar] [CrossRef]
  68. Vauhkonen, J.; Ene, L.; Gupta, S.; Heinzel, J.; Holmgren, J.; Pitkänen, J.; Maltamo, M. Comparative testing of single-tree detection algorithms under different types of forest. Forestry 2012, 85, 27–40. [Google Scholar] [CrossRef]
  69. Yang, Q.; Su, Y.; Jin, S.; Kelly, M.; Hu, T.; Ma, Q.; Guo, Q. The influence of vegetation characteristics on individual tree segmentation methods with airborne LiDAR data. Remote Sens. 2019, 11, 2880. [Google Scholar] [CrossRef]
  70. Miraki, M.; Sohrabi, H.; Fatehi, P.; Kneubuehler, M. Individual tree crown delineation from high-resolution UAV images in broadleaf forest. Ecol. Inform. 2021, 61, 101207. [Google Scholar] [CrossRef]
  71. Lisiewicz, M.; Kamińska, A.; Kraszewski, B.; Stereńczak, K. Correcting the results of CHM-based individual tree detection algorithms to improve their accuracy and reliability. Remote Sens. 2022, 14, 1822. [Google Scholar] [CrossRef]
  72. Zaforemska, A.; Xiao, W.; Gaulton, R. Individual tree detection from UAV LiDAR data in a mixed species woodland. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, 42, 657–663. [Google Scholar] [CrossRef]
  73. Stereńczak, K. Factors Influencing Individual Tree Crowns Detection Based on Airborne Laser Scanning Data. For. Res. Pap. 2013, 74, 323–333. [Google Scholar] [CrossRef]
  74. Franklin, S.E.; Ahmed, O.S. Deciduous tree species classification using object-based analysis and machine learning with unmanned aerial vehicle multispectral data. Int. J. Remote Sens. 2017, 39, 5236–5245. [Google Scholar] [CrossRef]
  75. Mishra, N.B.; Mainali, K.P.; Shrestha, B.B.; Radenz, J.; Karki, D. Species-level vegetation mapping in a Himalayan treeline ecotone using unmanned aerial system (UAS) imagery. ISPRS Int. J. Geo-Inf. 2018, 7, 445. [Google Scholar] [CrossRef]
  76. Xu, Z.; Shen, X.; Cao, L.; Coops, N.C.; Goodbody, T.R.; Zhong, T.; Wu, X. Tree species classification using UAS-based digital aerial photogrammetry point clouds and multispectral imageries in subtropical natural forests. Int. J. Appl. Earth Obs. Geoinf. 2020, 92, 102173. [Google Scholar] [CrossRef]
  77. Sivanandam, P.; Lucieer, A. Tree Detection and Species Classification in a Mixed Species Forest Using Unoccupied Aircraft System (UAS) RGB and Multispectral Imagery. Remote Sens. 2022, 14, 4963. [Google Scholar] [CrossRef]
  78. Immitzer, M.; Atzberger, C.; Koukal, T. Tree species classification with random forest using very high spatial resolution 8-band WorldView-2 satellite data. Remote Sens. 2012, 4, 2661–2693. [Google Scholar] [CrossRef]
  79. Immitzer, M.; Neuwirth, M.; Böck, S.; Brenner, H.; Vuolo, F.; Atzberger, C. Optimal Input Features for Tree Species Classification in Central Europe Based on Multi-Temporal Sentinel-2 Data. Remote Sens. 2019, 11, 2599. [Google Scholar] [CrossRef]
  80. Hille Ris Lambers, J.; Clark, J.; Beckage, B. Density-dependent mortality and the latitudinal gradient in species diversity. Nature 2002, 417, 732–735. [Google Scholar] [CrossRef]
  81. Neuhäusl, R. Comparative ecological study of European oak-hornbeam forests. Nat. Can. 1977, 104, 109–117. [Google Scholar]
  82. Chakhvashvili, E.; Siegmann, B.; Muller, O.; Verrelst, J.; Bendig, J.; Kraska, T.; Rascher, U. Retrieval of crop variables from proximal multispectral UAV image data using PROSAIL in maize canopy. Remote Sens. 2022, 14, 1247. [Google Scholar] [CrossRef] [PubMed]
  83. Singhal, G.; Bansod, B.; Mathew, L.; Goswami, J.; Choudhury, B.U.; Raju, P.L.N. Chlorophyll estimation using multi-spectral unmanned aerial system based on machine learning techniques. Remote Sens. Appl. Soc. Environ. 2019, 15, 100235. [Google Scholar] [CrossRef]
  84. Narmilan, A.; Gonzalez, F.; Salgadoe, A.S.A.; Kumarasiri, U.W.L.M.; Weerasinghe, H.A.S.; Kulasekara, B.R. Predicting canopy chlorophyll content in sugarcane crops using machine learning algorithms and spectral vegetation indices derived from UAV multispectral imagery. Remote Sens. 2022, 14, 1140. [Google Scholar] [CrossRef]
  85. Abdelbaki, A.; Schlerf, M.; Retzlaff, R.; Machwitz, M.; Verrelst, J.; Udelhoven, T. Comparison of crop trait retrieval strategies using UAV-based VNIR hyperspectral imaging. Remote Sens. 2021, 13, 1748. [Google Scholar] [CrossRef] [PubMed]
  86. Zhang, C.; Chen, Z.; Yang, G.; Xu, B.; Feng, H.; Chen, R.; Yang, H. Removal of canopy shadows improved retrieval accuracy of individual apple tree crowns LAI and chlorophyll content using UAV multispectral imagery and PROSAIL model. Comput. Electron. Agric. 2024, 221, 108959. [Google Scholar] [CrossRef]
  87. Chianucci, F.; Disperati, L.; Guzzi, D.; Bianchini, D.; Nardino, V.; Lastri, C.; Corona, P. Estimation of canopy attributes in beech forests using true colour digital images from a small fixed-wing UAV. Int. J. Appl. Earth Obs. Geoinf. 2016, 47, 60–68. [Google Scholar] [CrossRef]
  88. Savinelli, B.; Panigada, C.; Tagliabue, G.; Vignali, L.; Gentili, R.; Fassnacht, F.E.; Rossini, M. Monitoring functional traits of complex temperate forests using Sentinel-2 data during a severe drought period. Sci. Total Environ. 2024, 957, 177428. [Google Scholar] [CrossRef]
  89. Rossini, M.; Panigada, C.; Meroni, M.; Busetto, L.; Castrovinci, R.; Colombo, R. Monitoraggio delle condizioni della farnia (Quercus robur L.) nel Parco del Ticino mediante tecniche di telerilevamento iperspettrale. For. J. Silvic. For. Ecol. 2007, 4, 194. [Google Scholar]
  90. Colangelo, M.; Camarero, J.J.; Ripullone, F.; Gazol, A.; Sánchez-Salguero, R.; Oliva, J.; Redondo, M.A. Drought decreases growth and increases mortality of coexisting native and introduced tree species in a temperate floodplain forest. Forests 2018, 9, 205. [Google Scholar] [CrossRef]
  91. Puchałka, R.; Dyderski, M.K.; Vítková, M.; Sádlo, J.; Klisz, M.; Netsvetov, M.; Jagodziński, A.M. Black locust (Robinia pseudoacacia L.) range contraction and expansion in Europe under changing climate. Glob. Chang. Biol. 2021, 27, 1587–1600. [Google Scholar] [CrossRef]
  92. Meier, I.C.; Leuschner, C. Leaf size and leaf area index in Fagus sylvatica forests: Competing effects of precipitation, temperature, and nitrogen availability. Ecosystems 2008, 11, 655–669. [Google Scholar] [CrossRef]
  93. Grybas, H.; Congalton, R.G. A comparison of multi-temporal RGB and multispectral UAS imagery for tree species classification in heterogeneous New Hampshire Forests. Remote Sens. 2021, 13, 2631. [Google Scholar] [CrossRef]
  94. Lisein, J.; Michez, A.; Claessens, H.; Lejeune, P. Discrimination of deciduous tree species from time series of unmanned aerial system imagery. PLoS ONE 2015, 10, e0141006. [Google Scholar] [CrossRef] [PubMed]
  95. Wan, L.; Ryu, Y.; Dechant, B.; Lee, J.; Zhong, Z.; Feng, H. Improving retrieval of leaf chlorophyll content from Sentinel-2 and Landsat-7/8 imagery by correcting for canopy structural effects. Remote Sens. Environ. 2024, 304, 114048. [Google Scholar] [CrossRef]
  96. Wang, Z.; Féret, J.B.; Liu, N.; Sun, Z.; Yang, L.; Geng, S.; Townsend, P.A. Generality of leaf spectroscopic models for predicting key foliar functional traits across continents: A comparison between physically-and empirically-based approaches. Remote Sens. Environ. 2023, 293, 113614. [Google Scholar] [CrossRef]
  97. Qi, J.; Xie, D.; Guo, D.; Yan, G. A Large-Scale Emulation System for Realistic Three-Dimensional (3-D) Forest Simulation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 4834–4843. [Google Scholar] [CrossRef]
  98. Miraglio, T.; Adeline, K.; Huesca, M.; Ustin, S.; Briottet, X. Joint use of PROSAIL and DART for fast LUT building: Application to gap fraction and leaf biochemistry estimations over sparse oak stands. Remote Sens. 2020, 12, 2925. [Google Scholar] [CrossRef]
Figure 1. (a) RGB image of the “La Fagiana” nature reserve. The red dots indicate the centre of the sites (15 m × 15 m) where the plant traits were sampled, and the yellow dots are the centre of the validation sites (30 m × 30 m) for the individual tree detection. The shaded areas indicate the three main forest areas classified according to the microclimatic condition of the forest: meso-hygrophilic (green), mesophilic (yellow), and xerophilic (red). The Google satellite image of the area in grey scale is used as the basemap. (b) The extension of Ticino Park in Northern Italy (green polygon) and the location of the Fagiana area (red polygon).
Figure 2. Illustration of the LiDAR data processing workflow. DTM = digital terrain model; ITD = individual tree detection.
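For readers who want to reproduce the workflow in Figure 2, the steps map directly onto functions of the lidR package [40]. The snippet below is a minimal sketch rather than the study's exact script: the file name and all parameter values (filter thresholds, raster resolutions, window sizes) are illustrative assumptions.

```r
# Minimal sketch of the Figure 2 workflow with lidR; parameters are placeholders.
library(lidR)

las <- readLAS("fagiana_zenmuse_l1.las")  # hypothetical LiDAR file

# 1. Ground classification with a progressive morphological filter [42]
las <- classify_ground(las, algorithm = pmf(ws = 5, th = 3))

# 2. Digital terrain model (DTM) and height normalisation
dtm      <- rasterize_terrain(las, res = 1, algorithm = tin())
las_norm <- normalize_height(las, algorithm = tin())

# 3. Canopy height model (CHM)
chm <- rasterize_canopy(las_norm, res = 0.5, algorithm = p2r(subcircle = 0.15))

# 4. Individual tree detection (ITD) via local maxima on the normalised cloud
ttops <- locate_trees(las_norm, algorithm = lmf(ws = 5, hmin = 2))

# 5. Crown delineation with watershed segmentation [48] (requires EBImage)
las_seg <- segment_trees(las_norm, algorithm = watershed(chm, th_tree = 2))
```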
Figure 3. (a) Hyperspectral reflectance spectra collected by the PRISMA satellite over the sampling sites where the field data were collected (n = 50); (b) PRISMA spectra resampled to the MAIA S2 spectral bands and used to train the machine learning regression algorithms (n = 50).
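The resampling of the PRISMA spectra to the MAIA S2 bands shown in Figure 3b amounts to weighting each hyperspectral channel by the spectral response of the target band. The sketch below assumes Gaussian response functions; the band centres and widths are Sentinel-2-like placeholders, not the exact MAIA S2 specification.

```r
# Resample a hyperspectral spectrum to broader bands using Gaussian spectral
# response functions (SRFs). Band centres/FWHMs below are placeholders.
resample_to_bands <- function(wl, refl, centres, fwhm) {
  sapply(seq_along(centres), function(i) {
    sigma <- fwhm[i] / 2.355                  # convert FWHM to standard deviation
    w <- dnorm(wl, mean = centres[i], sd = sigma)
    sum(w * refl) / sum(w)                    # SRF-weighted mean reflectance
  })
}

centres <- c(443, 490, 560, 665, 705, 740, 783, 842, 865)  # nm, Sentinel-2-like
fwhm    <- c(20, 65, 35, 30, 15, 15, 20, 115, 20)          # nm, approximate
# refl_maia <- resample_to_bands(prisma_wl, prisma_refl, centres, fwhm)
```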
Figure 4. Drone-based classification of the tree species in the Fagiana Forest obtained from the MAIA S2 multispectral sensor using a random forest classifier. The Google satellite image of the area in grey scale is used as the basemap.
Figure 5. Scatter plots showing the measured and estimated leaf area index (LAI) values obtained from the MAIA S2 sensor with different machine learning regression algorithms: (a) Gaussian processes regression (GPR); (b) support vector regression (SVR); (c) partial least squares regression (PLSR); (d) neural network (NN); and (e) random forest (RF). The grey shaded areas indicate the confidence intervals (0.95) of the regression lines (solid lines) using reduced major axis (RMA) regression. The dotted line represents the 1:1 line.
Figure 6. Scatter plots showing the measured and estimated canopy chlorophyll content (CCC) values obtained from the MAIA S2 sensor with different machine learning regression algorithms: (a) Gaussian processes regression (GPR); (b) support vector regression (SVR); (c) partial least squares regression (PLSR); (d) neural network (NN); and (e) random forest (RF). The grey shaded areas indicate the confidence intervals (0.95) of the regression lines (solid lines) using reduced major axis (RMA) regression. The dotted line represents the 1:1 line.
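The regression lines in Figures 5 and 6 are fitted with reduced major axis (RMA) regression, which, unlike ordinary least squares, treats both the measured and the estimated values as error-prone. A sketch using the lmodel2 package follows (the choice of package is our assumption; note that lmodel2 labels the reduced major axis method "SMA").

```r
# Sketch of an RMA fit between paired measured and estimated trait values.
library(lmodel2)

fit <- lmodel2(estimated ~ measured, nperm = 99)          # model-II regression
rma <- subset(fit$regression.results, Method == "SMA")    # SMA = reduced major axis
ci  <- subset(fit$confidence.intervals, Method == "SMA")  # 95% confidence intervals

plot(measured, estimated, xlab = "Measured", ylab = "Estimated")
abline(a = rma$Intercept, b = rma$Slope)  # RMA regression line
abline(a = 0, b = 1, lty = 3)             # 1:1 line
```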
Figure 7. Drone-based maps obtained from the MAIA S2 sensor using machine learning regression algorithms: (a,b) maps of the leaf area index (LAI) and canopy chlorophyll content (CCC) obtained from drone images collected on 1 July 2022; (c,d) maps of the LAI and CCC obtained from drone images collected on 31 August 2022; and (e,f) maps of the delta LAI and CCC obtained as the difference between the LAI and CCC values retrieved from the drone images collected on 31 August 2022 and 1 July 2022.
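The change maps in Figure 7e,f are pixel-wise differences between the two acquisition dates. A minimal sketch with the terra package, using hypothetical file names:

```r
# Difference between the August and July LAI maps; file names are hypothetical.
library(terra)

lai_jul <- rast("LAI_2022-07-01.tif")
lai_aug <- rast("LAI_2022-08-31.tif")

delta_lai <- lai_aug - lai_jul  # negative values indicate LAI loss over the drought
writeRaster(delta_lai, "delta_LAI.tif", overwrite = TRUE)
```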
Figure 8. Boxplot of LAI against retrieval day (a) and forest microclimatic conditions (b). Different lowercase letters indicate statistically significant differences, while equal lowercase letters indicate no statistically significant difference.
Figure 9. Boxplot of the CCC against retrieval day (a) and forest microclimatic conditions (b). Different lowercase letters indicate statistically significant differences, while equal lowercase letters indicate no statistically significant difference.
Table 1. Summary of the sensors and technical details of the drone acquisitions performed in Ticino Park in 2022.
Drone Platform: DJI Matrice 300 RTK (all three sensors).

| | DJI Zenmuse L1 | DJI Zenmuse P1 | MAIA S2 |
|---|---|---|---|
| Dates of Acquisition | 28/04/22 | 28/04/22 | 01/07/22 and 31/08/22 |
| Flight Height | 80 m | 120 m | 110 m |
| Point Cloud Density | 459 points/m2 | – | – |
| Ground Sampling Distance (GSD) | – | 1.5 cm | 5.5 cm |
| Acquisition Speed | 5 m/s | 10 m/s | 6 m/s |
| Side Overlap | 50% | 70% | 70% |
| Forward Overlap | 80% | 80% | 90% |
Table 2. Individual tree detection (ITD) accuracy from LiDAR. TP = number of correctly detected treetops; FN = number of trees not detected; and FP = number of extra trees (commission error).
| Site ID | Ground-Truth/Photointerp. | ITD | TP | FN | FP | Recall Rate (r) | Precision Rate (p) | F-Score (F) |
|---|---|---|---|---|---|---|---|---|
| 1 | 7 | 8 | 7 | 0 | 1 | 1 | 0.875 | 0.933 |
| 2 | 21 | 20 | 15 | 6 | 5 | 0.714 | 0.75 | 0.731 |
| 3 | 9 | 9 | 9 | 0 | 0 | 1 | 1 | 1 |
| 4 | 16 | 17 | 9 | 7 | 8 | 0.562 | 0.529 | 0.545 |
| 5 | 19 | 16 | 14 | 5 | 2 | 0.737 | 0.875 | 0.8 |
| 6 | 5 | 7 | 4 | 1 | 3 | 0.8 | 0.571 | 0.667 |
| 7 | 16 | 17 | 13 | 3 | 4 | 0.812 | 0.765 | 0.788 |
| 8 | 16 | 16 | 9 | 7 | 7 | 0.562 | 0.562 | 0.562 |
| 9 | 15 | 14 | 8 | 7 | 6 | 0.533 | 0.571 | 0.552 |
| 10 | 19 | 18 | 15 | 4 | 3 | 0.789 | 0.833 | 0.811 |
| 11 | 20 | 23 | 16 | 4 | 7 | 0.8 | 0.696 | 0.744 |
| 12 | 17 | 13 | 10 | 7 | 3 | 0.588 | 0.769 | 0.667 |
| 13 | 16 | 18 | 13 | 3 | 5 | 0.812 | 0.722 | 0.765 |
| 14 | 26 | 21 | 20 | 6 | 1 | 0.769 | 0.952 | 0.851 |
| 15 | 14 | 17 | 11 | 3 | 6 | 0.786 | 0.647 | 0.709 |
| Total | 236 | 234 | 173 | 63 | 61 | 0.733 | 0.739 | 0.736 |
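The scores in Table 2 follow the standard definitions of recall, precision, and F-score [51,52]; the short function below reproduces the totals row from the TP, FN, and FP counts.

```r
# Individual tree detection scores from detection counts.
itd_scores <- function(tp, fn, fp) {
  r <- tp / (tp + fn)       # recall: share of reference trees that were detected
  p <- tp / (tp + fp)       # precision: share of detections matching a real tree
  f <- 2 * r * p / (r + p)  # F-score: harmonic mean of recall and precision
  c(recall = r, precision = p, f_score = f)
}

round(itd_scores(tp = 173, fn = 63, fp = 61), 3)  # totals row of Table 2
#>    recall precision   f_score
#>     0.733     0.739     0.736
```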
Table 3. Accuracy of random forest (RF) classification of three tree species based on 133 trees for training and 57 independent trees for validation. PA = Producer Accuracy; UA = User Accuracy; k = Kappa coefficient; and OA = Overall Accuracy.
Rows: classified labels; columns: reference labels.

| Class | White hornbeam | English oak | Black locust | Total | UA |
|---|---|---|---|---|---|
| White hornbeam | 18 | 3 | 0 | 21 | 0.86 |
| English oak | 6 | 21 | 0 | 27 | 0.78 |
| Black locust | 0 | 0 | 9 | 9 | 1 |
| Total | 24 | 24 | 9 | 57 | OA = 0.84 |
| PA | 0.75 | 0.88 | 1 | | k = 0.74 |
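The summary measures in Table 3 can be recomputed from the confusion matrix alone; the sketch below derives the overall accuracy, the per-class user and producer accuracies, and Cohen's kappa.

```r
# Confusion matrix of Table 3 (rows = classification, columns = reference).
cm <- matrix(c(18,  3, 0,
                6, 21, 0,
                0,  0, 9),
             nrow = 3, byrow = TRUE,
             dimnames = list(c("White hornbeam", "English oak", "Black locust"),
                             c("White hornbeam", "English oak", "Black locust")))

n  <- sum(cm)
oa <- sum(diag(cm)) / n                     # overall accuracy (OA = 0.84)
ua <- diag(cm) / rowSums(cm)                # user accuracy per class
pa <- diag(cm) / colSums(cm)                # producer accuracy per class
pe <- sum(rowSums(cm) * colSums(cm)) / n^2  # chance agreement
k  <- (oa - pe) / (1 - pe)                  # Cohen's kappa (k in Table 3)
```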
Table 4. Summary of the cross-validated statistics (k = 6) calculated on the coupled measured and estimated values retrieved from the PRISMA dataset resampled to the MAIA S2 spectral configuration (n = 50) for leaf area index (LAI) and canopy chlorophyll content (CCC): algorithm (GPR = Gaussian processes regression; SVR = support vector regression; PLSR = partial least squares regression; NN = neural network; and RF = random forest), the coefficient of determination (R2), root mean square error (RMSE), normalised RMSE (nRMSE), bias, and relative bias (rbias).
| Plant Traits | Algorithm | R2 | RMSE | nRMSE | bias | rbias |
|---|---|---|---|---|---|---|
| LAI | GPR | 0.90 | 0.26 m2 m−2 | 8.66% | 0.0108 m2 m−2 | 0.49% |
| | SVR | 0.90 | 0.27 m2 m−2 | 8.81% | 0.0059 m2 m−2 | 0.27% |
| | PLSR | 0.90 | 0.28 m2 m−2 | 9.04% | 0.0113 m2 m−2 | 0.51% |
| | NN | 0.85 | 0.33 m2 m−2 | 10.69% | 0.0125 m2 m−2 | 0.57% |
| | RF | 0.84 | 0.34 m2 m−2 | 11.01% | 0.0032 m2 m−2 | 0.15% |
| CCC | GPR | 0.68 | 0.23 g m−2 | 13.08% | 0.0370 g m−2 | 3.89% |
| | SVR | 0.83 | 0.16 g m−2 | 9.17% | −0.0051 g m−2 | −0.54% |
| | PLSR | 0.83 | 0.17 g m−2 | 9.33% | 0.0084 g m−2 | 0.88% |
| | NN | 0.79 | 0.19 g m−2 | 10.69% | −0.0488 g m−2 | −5.13% |
| | RF | 0.70 | 0.22 g m−2 | 12.22% | 0.0153 g m−2 | 1.61% |
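For reference, the goodness-of-fit statistics reported in Tables 4 and 5 can be computed from the paired measured (obs) and estimated (est) values as sketched below; normalising the RMSE by the observed range is our assumption, since the normalisation convention is not stated in the tables.

```r
# Validation statistics for trait retrievals; nRMSE normalisation is an assumption.
retrieval_stats <- function(obs, est) {
  rmse <- sqrt(mean((est - obs)^2))
  c(R2    = cor(obs, est)^2,                    # coefficient of determination
    RMSE  = rmse,                               # root mean square error
    nRMSE = 100 * rmse / diff(range(obs)),      # RMSE normalised by observed range (%)
    bias  = mean(est - obs),                    # mean signed error
    rbias = 100 * mean(est - obs) / mean(obs))  # bias relative to the mean (%)
}
```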
Table 5. Summary of the statistics calculated on the coupled measured and estimated values retrieved from the MAIA S2 sensor (n = 40) for leaf area index (LAI) and canopy chlorophyll content (CCC): algorithm (GPR = Gaussian processes regression; SVR = support vector regression; PLSR = partial least squares regression; NN = neural network; RF = random forest), the coefficient of determination (R2), root mean square error (RMSE), normalised RMSE (nRMSE), bias, and relative bias (rbias).
| Plant Trait | Algorithm | R2 | RMSE | nRMSE | bias | rbias |
|---|---|---|---|---|---|---|
| LAI | GPR | 0.83 | 0.44 m2 m−2 | 16.39% | 0.24 m2 m−2 | 14.35% |
| | SVR | 0.81 | 0.39 m2 m−2 | 14.36% | 0.08 m2 m−2 | 4.80% |
| | PLSR | 0.81 | 0.38 m2 m−2 | 14.18% | 0.09 m2 m−2 | 5.34% |
| | NN | 0.75 | 0.44 m2 m−2 | 16.53% | 0.20 m2 m−2 | 11.74% |
| | RF | 0.58 | 0.59 m2 m−2 | 22.00% | 0.29 m2 m−2 | 17.12% |
| CCC | GPR | 0.80 | 0.33 g m−2 | 24.02% | 0.26 g m−2 | 35.10% |
| | SVR | 0.79 | 0.31 g m−2 | 22.45% | 0.23 g m−2 | 31.03% |
| | PLSR | 0.79 | 0.38 g m−2 | 27.70% | 0.32 g m−2 | 43.23% |
| | NN | 0.71 | 0.27 g m−2 | 19.95% | 0.08 g m−2 | 11.57% |
| | RF | 0.58 | 0.27 g m−2 | 19.84% | 0.10 g m−2 | 13.41% |
Table 6. LAI results of the ANOVA test (n = 1700). Significance codes: ‘***’ 0.001, ‘**’ 0.01, ‘*’ 0.05, ‘-’ 0.1.
| LAI | Df | Sum Sq | Mean Sq | F Value | Pr(>F) | Significance |
|---|---|---|---|---|---|---|
| Time | 1 | 55.992 | 55.99 | 157.27 | <1 × 10−16 | *** |
| Forest microclimatic conditions | 2 | 99.38 | 49.69 | 139.56 | <1 × 10−16 | *** |
| Species | 2 | 244.65 | 122.32 | 343.58 | <1 × 10−16 | *** |
| Time × Forest microclimatic conditions | 2 | 2.002 | 1.001 | 2.81 | 0.060 | - |
| Time × Species | 2 | 5.42 | 2.71 | 7.61 | 0.0005 | *** |
| Forest microclimatic conditions × Species | 4 | 20.65 | 5.16 | 14.50 | 1.20 × 10−11 | *** |
| Time × Forest microclimatic conditions × Species | 4 | 2.74 | 0.68 | 1.92 | 0.104 | no |
| Residuals | 1682 | 598.83 | 0.36 | | | |
Table 7. CCC results of the ANOVA test (n = 1700). Significance codes: ‘***’ 0.001, ‘**’ 0.01, ‘*’ 0.05, ‘-’ 0.1.
| CCC | Df | Sum Sq | Mean Sq | F Value | Pr(>F) | Significance |
|---|---|---|---|---|---|---|
| Time | 1 | 38.67959 | 38.67959 | 259.817 | <1 × 10−16 | *** |
| Forest microclimatic conditions | 2 | 32.07746 | 16.03873 | 107.735 | <1 × 10−16 | *** |
| Species | 2 | 87.68159 | 43.8408 | 294.486 | <1 × 10−16 | *** |
| Time × Forest microclimatic conditions | 2 | 0.999662 | 0.499831 | 3.35745 | 0.0351 | * |
| Time × Species | 2 | 1.867992 | 0.933996 | 6.27381 | 0.0019 | ** |
| Forest microclimatic conditions × Species | 4 | 6.435499 | 1.608875 | 10.8071 | 1.19 × 10−8 | *** |
| Time × Forest microclimatic conditions × Species | 4 | 1.609187 | 0.402297 | 2.70229 | 0.0291 | * |
| Residuals | 1682 | 250.4032 | 0.148872 | | | |
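Tables 6 and 7 have the column layout of R's summary.aov output. A minimal sketch of the corresponding model call follows; the data frame and factor names are assumptions. The lowercase letters in Figures 8 and 9 can be derived from Tukey's HSD test with a compact letter display.

```r
# Three-way ANOVA of a retrieved trait; 'traits' and its columns are hypothetical.
m <- aov(LAI ~ Time * Microclimate * Species, data = traits)
summary(m)  # Df, Sum Sq, Mean Sq, F value, Pr(>F) as in Tables 6 and 7

# Post hoc pairwise comparisons and compact letter display (Figures 8 and 9)
tk <- TukeyHSD(m, which = "Microclimate")
# multcompView::multcompLetters4(m, tk)  # assigns letters a, b, c, ...
```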
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
