Abstract:
The potato is one of the most significant crops grown by farmers in Ethiopia, both for sale and for domestic consumption. Its quality and productivity, however, are heavily affected by factors such as blight infections that damage the plant during growth. At present, potato leaf blight diseases must be identified manually by an expert through visual inspection, yet visual screening of crop diseases is costly, time-consuming, and imprecise. A deep learning approach was therefore proposed to detect potato leaf disease, classify the disease type, and determine its severity level. To this end, 1200 images of potato leaves, both healthy and diseased (with either early or late blight), were captured at the Rare Research Farm on the main campus of Haramaya University
using a smartphone. Once the collected images were transferred to a personal computer (PC), preprocessing methods such as median filtering, data augmentation, color-based segmentation, and image normalization were applied to improve the performance of the convolutional neural network (CNN). Of the dataset, 70% was used for training; the remaining 30% was split evenly, with 15% used for testing and 15% for validation. For
disease identification, three classes were targeted: early blight, healthy, and late blight. For severity-level detection, potato leaf images with early and late blight were isolated, and six classes, EB-low, EB-moderate, EB-severe, LB-low, LB-moderate, and LB-severe, were fed to the deep learning model. To benchmark the proposed model, two CNN baselines were employed: AlexNet (trained from scratch) and VGG16 (trained both from scratch and with only the last layer retrained). The proposed model contains four convolutional layers and two fully connected layers, with max-pooling, ReLU activation, and batch-normalization layers placed between the main layers. A softmax activation in the output layer classifies potato leaf diseases and categorizes the severity level of diseased leaf images.
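The abstract specifies four convolutional blocks and two fully connected layers but not the kernel sizes, strides, or input resolution. Assuming a 224×224 input, 3×3 convolutions with padding 1, 2×2 max pooling, and channel widths 32→64→128→256 (all assumptions, not stated in the abstract), the feature-map sizes flowing into the fully connected layers can be traced as follows:

```python
def conv2d_out(n, kernel=3, stride=1, pad=1):
    """Spatial output size of a convolution on a square input."""
    return (n + 2 * pad - kernel) // stride + 1

def maxpool_out(n, kernel=2, stride=2):
    """Spatial output size of a max-pooling layer."""
    return (n - kernel) // stride + 1

# Assumed input: 224x224 RGB image; four conv/pool blocks.
size, channels = 224, 3
for out_ch in (32, 64, 128, 256):   # hypothetical channel widths
    size = conv2d_out(size)         # 3x3 conv, pad 1: size unchanged
    size = maxpool_out(size)        # 2x2 max pool: size halved
    channels = out_ch

flat = size * size * channels       # features entering the first FC layer
print(size, channels, flat)         # → 14 256 50176
```

Under these assumptions each pooling step halves the spatial resolution (224 → 112 → 56 → 28 → 14), so the flattened vector entering the first fully connected layer has 14·14·256 = 50176 features.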
Finally, the developed model was evaluated using the accuracy, precision, recall, and F1-score metrics. The results indicate that the proposed model outperforms the two baseline CNN architectures: it scored 99.12%, 98.94%, and 98.94% in training, validation, and testing accuracy, respectively, giving an average accuracy of 99%. In severity-level detection, it achieved 95.93%, 94.99%, and 94.98% training, validation, and testing accuracies, respectively, for an average accuracy of 96%.
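The four reported metrics are standard and can be computed directly from predicted versus true labels. A minimal pure-Python sketch (using hypothetical labels for illustration, not the paper's data) of per-class precision, recall, and F1 alongside overall accuracy:

```python
def precision_recall_f1(y_true, y_pred, positive):
    """One-vs-rest precision, recall, and F1 for a single class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical labels over the three disease-identification classes.
y_true = ["EB", "EB", "LB", "healthy", "LB", "EB"]
y_pred = ["EB", "LB", "LB", "healthy", "LB", "EB"]

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
p, r, f1 = precision_recall_f1(y_true, y_pred, positive="EB")
print(round(accuracy, 3), round(p, 3), round(r, 3), round(f1, 3))
# → 0.833 1.0 0.667 0.8
```

In practice a library routine such as scikit-learn's classification metrics would be used; the sketch only makes the definitions behind the reported scores explicit.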