Heru Agus Santoso, Brylian Fandhi Safsalta, Nanang Febrianto, Galuh Wilujeng Saraswati and Su-Cheng Haw
Abstract
Purpose
Plant cultivation plays a pivotal role in agriculture, and maintaining plant health requires precise disease identification. This research conducts a comprehensive comparative analysis of two prominent deep learning approaches, a convolutional neural network (CNN) and DenseNet121, with the goal of enhancing disease identification in tomato plant leaves.
Design/methodology/approach
The dataset employed in this investigation is a fusion of primary data and publicly available data, covering 13 distinct disease labels and a total of 18,815 images for model training. The data pre-processing workflow comprised normalizing pixel dimensions, applying data augmentation and balancing the dataset, followed by the modeling and testing phases.
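The abstract does not report the exact pre-processing parameters; the following is a minimal Keras sketch of the described steps (pixel normalization, resizing and augmentation), in which the target size, augmentation ranges and directory path are illustrative assumptions rather than values from the paper.

```python
# Sketch of the described pre-processing: rescaling, resizing and augmentation.
# Target size, augmentation ranges and the data directory are assumptions,
# not values reported in the paper.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rescale=1.0 / 255,        # normalize pixel values to [0, 1]
    rotation_range=20,        # illustrative augmentation settings
    horizontal_flip=True,
    zoom_range=0.1,
    validation_split=0.2,     # hold out part of the 18,815 images for validation
)

train_gen = datagen.flow_from_directory(
    "tomato_leaf_dataset/",   # hypothetical path; 13 class sub-folders assumed
    target_size=(224, 224),   # assumed input size for DenseNet121
    batch_size=32,
    class_mode="categorical",
    subset="training",
)
```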
Findings
Experimental findings show the superior performance of the DenseNet121 model over the CNN model in disease classification on tomato leaves. The DenseNet121 model attained a training accuracy of 98.27%, a validation accuracy of 87.47% and average recall, precision and F1-score of 87%, 88% and 87%, respectively. Because the ultimate aim was to deploy the optimal classifier in a mobile application, Tanamin.id, DenseNet121 was the preferred choice.
Originality/value
The integration of private and public data contributes significantly to determining the optimal method. The CNN method achieves a training accuracy of 90.41% and a validation accuracy of 83.33%, whereas the DenseNet121 method excels with a training accuracy of 98.27% and a validation accuracy of 87.47%. The DenseNet121 architecture, a 121-layer backbone followed by a global average pooling (GAP) layer and a dropout layer, proves effective. Training uses categorical_crossentropy as the loss function and the stochastic gradient descent (SGD) optimizer with a learning rate of 0.001. The experimental results demonstrate the superior performance of DenseNet121 over CNN.
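As a concrete illustration of the configuration reported above (DenseNet121 backbone, GAP layer, dropout layer, categorical_crossentropy loss, SGD optimizer with learning rate 0.001, 13 disease classes), a minimal Keras sketch follows; the input shape, dropout rate and ImageNet weight initialization are assumptions not stated in the abstract.

```python
# Minimal sketch of the reported classifier configuration. Input shape,
# dropout rate and ImageNet initialization are assumptions.
from tensorflow.keras.applications import DenseNet121
from tensorflow.keras import layers, models, optimizers

base = DenseNet121(include_top=False, weights="imagenet",
                   input_shape=(224, 224, 3))

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),          # GAP layer from the reported design
    layers.Dropout(0.5),                      # dropout rate not stated; 0.5 assumed
    layers.Dense(13, activation="softmax"),   # 13 disease labels
])

model.compile(
    loss="categorical_crossentropy",
    optimizer=optimizers.SGD(learning_rate=0.001),  # SGD with lr 0.001 as reported
    metrics=["accuracy"],
)
```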