
  • Journal: Raqamli iqtisodiyot
  • Issue: 7
  • Views: 45
  • Reads: 45
  • Publication date: 05-07-2024
  • Article language: English
  • Pages: 598-621
English

In modern agriculture, precision monitoring and efficient resource management are paramount for maximizing crop yields. This research presents a novel approach to greenhouse productivity estimation by leveraging the state-of-the-art YOLOv5 object detection model, tailored and optimized for a custom tomato dataset. The study focuses on detecting and classifying tomatoes into three categories (green, pink, and red), providing a comprehensive understanding of the ripening process in real time. The optimized YOLOv5 model demonstrated superior performance compared to the standard version, showing enhanced accuracy in tomato identification. The model was deployed in a real-world greenhouse equipped with a meticulously arranged seven-camera system, capturing a row of tomato plants per camera. By extrapolating the results from the single row to the entire greenhouse (comprising eight rows), an accurate estimate of overall productivity was achieved. A web application was developed to facilitate real-time monitoring of tomato plant states and key statistics. The application provides insights into the percentages of green, pink, and red tomatoes, allowing greenhouse operators to make informed decisions on resource allocation and management. The proposed methodology offers a scalable and practical solution for greenhouse productivity assessment, potentially revolutionizing the precision agriculture landscape. The findings contribute to the advancement of computer vision applications in agriculture, fostering sustainable and efficient practices in greenhouse cultivation.
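To make the counting and extrapolation step concrete, the sketch below is a minimal illustration, not the authors' code: it loads a custom-trained YOLOv5 model through the standard torch.hub interface, tallies detections per ripeness class over the seven camera frames, and scales the single-row counts to the eight-row greenhouse. The weight file name, class labels, and frame paths are assumptions for illustration only.

# Minimal sketch: count tomatoes by ripeness class with a custom YOLOv5
# model and extrapolate one monitored row to the whole greenhouse.
# Assumptions (not from the paper): weights "tomato_best.pt" trained on
# classes green/pink/red; the seven cameras jointly cover one row, with
# one frame per camera stored in frames/.
from collections import Counter
from pathlib import Path

import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="tomato_best.pt")

counts = Counter()
for frame in sorted(Path("frames").glob("cam*.jpg")):   # seven camera frames
    detections = model(str(frame)).pandas().xyxy[0]      # one row per detected box
    counts.update(detections["name"])                    # class name per box

ROWS_TOTAL = 8                        # the greenhouse comprises eight rows
row_total = sum(counts.values())
greenhouse_estimate = {cls: n * ROWS_TOTAL for cls, n in counts.items()}
shares = ({cls: 100.0 * n / row_total for cls, n in counts.items()}
          if row_total else {})

print("per-row counts:", dict(counts))
print("greenhouse estimate:", greenhouse_estimate)
print("ripeness shares (%):", {c: round(v, 1) for c, v in shares.items()})

The extrapolation is deliberately simple: because the cameras observe one of eight structurally similar rows, multiplying the per-row class counts by eight yields the whole-greenhouse estimate described in the abstract.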

Uzbek

In modern agriculture, precise monitoring and efficient resource management are essential for maximizing yields. This study presents a new approach to estimating greenhouse productivity using the state-of-the-art YOLOv5 object detection model, adapted and optimized for a tomato plant dataset. The study focuses on detecting and classifying tomatoes into three categories (green, pink, and red), providing a comprehensive understanding of the ripening process in real time. The optimized YOLOv5 model demonstrated higher performance than standard-version models, showing improved accuracy in tomato detection. In the experiment, the model was deployed in a real greenhouse equipped with a carefully arranged seven-camera system, each camera capturing one row of tomato plants under close observation. By extrapolating the results from a single row to the entire greenhouse (consisting of eight rows), an accurate estimate of overall productivity was achieved. A web application was developed to facilitate real-time monitoring of the state of the tomato plants and key statistics; it reports the percentages of green, pink, and red tomatoes, enabling greenhouse operators to make informed decisions on resource allocation and management. The proposed methodology offers a scalable and practical solution for assessing greenhouse productivity. The results contribute to the development of computer-based management applications in agriculture, to increasing greenhouse productivity, and to fostering sustainable and efficient practices in greenhouses.

Russian

In modern agriculture, precise monitoring and efficient resource management are of paramount importance for maximizing yields. This study presents a new approach to assessing greenhouse productivity using the state-of-the-art YOLOv5 object detection model, adapted and optimized for a dedicated tomato dataset. The study aims to detect and classify tomatoes into three categories (green, pink, and red), providing a complete understanding of the ripening process in real time. The optimized YOLOv5 model demonstrated superior performance compared to the standard version, showing improved accuracy in identifying tomatoes. The model was deployed in a real greenhouse equipped with a carefully designed system of seven cameras, each capturing a row of tomato plants. By extrapolating the results from one row to the entire greenhouse (consisting of eight rows), an accurate estimate of overall productivity was achieved. A web application was developed to facilitate real-time monitoring of the state of the tomato plants and key statistics. The application reports the percentage shares of green, pink, and red tomatoes, allowing greenhouse operators to make informed decisions on resource allocation and management. The proposed methodology offers a scalable and practical solution for assessing greenhouse productivity, with the potential to revolutionize the precision agriculture landscape. The results obtained contribute to the development of computer vision applications in agriculture, promoting sustainable and efficient greenhouse cultivation practices.
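The abstracts describe the web application only at the level of the statistics it reports. As a hedged sketch under stated assumptions, a monitoring endpoint of that kind could be served as follows; Flask, the route name, and the in-memory count store are illustrative choices, none confirmed by the paper.

# Minimal sketch of a monitoring endpoint; the paper does not name the
# web framework, so Flask and the hard-coded counts are assumptions.
from flask import Flask, jsonify

app = Flask(__name__)

# Placeholder per-row counts; in practice the detection pipeline would
# refresh these from the live camera feeds.
latest_counts = {"green": 120, "pink": 35, "red": 45}

@app.route("/stats")
def stats():
    total = sum(latest_counts.values()) or 1   # avoid division by zero
    return jsonify({
        "counts": latest_counts,
        "percentages": {c: round(100.0 * n / total, 1)
                        for c, n in latest_counts.items()},
        "greenhouse_estimate": {c: n * 8 for c, n in latest_counts.items()},
    })

if __name__ == "__main__":
    app.run(port=5000)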
