International Journal of Advanced Computer Science and Applications, volume 14, issue 3

Investigation of Combining Deep Learning Object Recognition with Drones for Forest Fire Detection and Monitoring

Mimoun YANDOUZI
Mounir GRARI
Mohammed BERRAHAL
Idriss IDRISSI
Omar MOUSSAOUI
Mostafa AZIZI
Kamal GHOUMID
Aissa KERKOUR ELMIAD
Publication type: Journal Article
Publication date: 2023-04-04
Scimago quartile: Q3
SJR: 0.278
CiteScore: 2.3
Impact factor: 0.7
ISSN: 2158-107X, 2156-5570
Alam G.M., Tasnia N., Biswas T., Hossen M.J., Tanim S.A., Miah M.S.
IEEE Access scimago Q1 wos Q2 Open Access
2025-03-17 citations by CoLab: 0
Kritikou G., Xofis P., Souflas K., Moulianitis V.
Fire scimago Q1 wos Q1 Open Access
2024-11-29 citations by CoLab: 0
The surveillance of the National Park Kotychi and Strofylia Wetlands in southwest Greece with Unmanned Aerial Vehicles (UAVs) is studied in this work. As comprehensive coverage of the region cannot be attained with just stationary ground cameras, multiple parallel moving UAVs are utilized. The region is divided into squares, which are further subdivided into regular grids with nodes whose weights are calculated based on the fire risk of the corresponding region. Heuristic methods are proposed for selecting the UAVs’ start and goal graph nodes. The graph with the start and goal nodes serves as input to the A* algorithm, which computes offline short paths that direct the UAVs to cross areas with the highest fire risk. The number of UAVs is progressively increased as the coverage of the previous detections proves insufficient. The authors determine the number of UAVs needed for each section of the divided area. They also demonstrate that the UAVs can scan simultaneously without collisions, as each UAV follows a unique path inaccessible to the others. Finally, the presented computations and results show that the proposed method can effectively contribute to fire scanning in the area.
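The path-planning idea above can be illustrated with a minimal sketch: a grid whose cells carry fire-risk weights, and a best-first search whose step cost is low for high-risk cells, so the cheapest path routes the UAV through the riskiest areas. This is an assumption-laden toy, not the authors' implementation; with the heuristic set to zero (risk-based step costs can approach zero, so a distance heuristic would not be admissible) the search degenerates to Dijkstra, a special case of A*.

```python
import heapq

def a_star(risk, start, goal):
    """Best-first search over a grid of fire-risk weights in [0, 1].
    Step cost is (1 - risk of the entered cell), so cheap paths cross
    high-risk cells. Illustrative only; h = 0 keeps it admissible."""
    rows, cols = len(risk), len(risk[0])
    frontier = [(0.0, start)]          # (path cost, node)
    cost = {start: 0.0}
    came = {start: None}
    while frontier:
        _, node = heapq.heappop(frontier)
        if node == goal:
            break
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols:
                new = cost[node] + (1.0 - risk[nxt[0]][nxt[1]])
                if nxt not in cost or new < cost[nxt]:
                    cost[nxt] = new
                    came[nxt] = node
                    heapq.heappush(frontier, (new, nxt))
    # Reconstruct the path by walking parents back from the goal.
    path, n = [], goal
    while n is not None:
        path.append(n)
        n = came[n]
    return path[::-1]

# A hypothetical 3x3 risk map with a high-risk middle column.
risk = [[0.1, 0.9, 0.1],
        [0.1, 0.8, 0.1],
        [0.1, 0.9, 0.1]]
print(a_star(risk, (0, 0), (2, 2)))  # path hugging the risky column
```

On this toy map the returned path threads the high-risk middle column before stepping to the goal, matching the paper's goal of directing UAVs across the areas of highest fire risk.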
Özel B., Alam M.S., Khan M.U.
Information (Switzerland) scimago Q2 wos Q3 Open Access
2024-09-03 citations by CoLab: 2
Fire detection and extinguishing systems are critical for safeguarding lives and minimizing property damage. These systems are especially vital in combating forest fires. In recent years, several forest fires have set records for their size, duration, and level of destruction. Traditional fire detection methods, such as smoke and heat sensors, have limitations, prompting the development of innovative approaches using advanced technologies. Utilizing image processing, computer vision, and deep learning algorithms, we can now detect fires with exceptional accuracy and respond promptly to mitigate their impact. In this article, we conduct a comprehensive review of articles from 2013 to 2023, exploring how these technologies are applied in fire detection and extinguishing. We delve into modern techniques enabling real-time analysis of the visual data captured by cameras or satellites, facilitating the detection of smoke, flames, and other fire-related cues. Furthermore, we explore the utilization of deep learning and machine learning in training intelligent algorithms to recognize fire patterns and features. Through a comprehensive examination of current research and development, this review aims to provide insights into the potential and future directions of fire detection and extinguishing using image processing, computer vision, and deep learning.
Sun W., Gao H., Li C.
Fire Safety Journal scimago Q1 wos Q2
2024-09-01 citations by CoLab: 1
Yandouzi M., Boukricha S., Grari M., Berrahal M., Moussaoui O., Azizi M., Ghoumid K., Elmiad A.K.
2024-08-31 citations by CoLab: 0
Forests are essential to our planet's well-being, playing a vital role in climate regulation, biodiversity preservation, and soil protection, thus serving as a cornerstone of our global ecosystem. The threat posed by forest fires highlights the critical need for early detection systems, which are indispensable tools in safeguarding ecosystems, livelihoods, and communities from devastating destruction. In combating forest fires, a range of techniques is employed for efficient early detection. Notably, the combination of drones with artificial intelligence, particularly deep learning, holds significant promise in this regard. Image segmentation emerges as a versatile method, involving the partitioning of images into multiple segments to simplify representation, and it leverages deep learning for fire detection, continuous monitoring of high-risk areas, and precise damage assessment. This study provides a comprehensive examination of recent advancements in semantic segmentation based on deep learning, with a specific focus on Mask R-CNN (Mask Region Convolutional Neural Network) and YOLO (You Only Look Once) v5, v7, and v8 variants. The emphasis is placed on their relevance in forest fire monitoring, utilizing drones equipped with high-resolution cameras.
Caballero-Martin D., Lopez-Guede J.M., Estevez J., Graña M.
Drones scimago Q1 wos Q1 Open Access
2024-07-03 citations by CoLab: 16
The integration of Artificial Intelligence (AI) tools and techniques has provided a significant advance in drone technology. Besides the military applications, drones are being increasingly used for logistics and cargo transportation, agriculture, construction, security and surveillance, exploration, and mobile wireless communication. The synergy between drones and AI has led to notable progress in the autonomy of drones, which have become capable of completing complex missions without direct human supervision. This study of the state of the art examines the impact of AI on improving drone autonomous behavior, covering from automation to complex real-time decision making. The paper provides detailed examples of the latest developments and applications. Ethical and regulatory challenges are also considered for the future evolution of this field of research, because drones with AI have the potential to greatly change our socioeconomic landscape.
Muduli D., Toppo A.U., Singh V., Singh M., Tiwari D.P.
2024-06-24 citations by CoLab: 0
Rolland E.G., Grøntved K.A., Christensen A.L., Watson M., Richardson T.
2024-06-04 citations by CoLab: 1
Cheknane M., Bendouma T., Boudouh S.S.
2024-05-16 citations by CoLab: 8
Fire incidents pose severe threats to life, property, and the environment, accounting for significant losses worldwide. Traditional sensing technologies exhibit limitations in effectively detecting fires, particularly in larger spaces. The application of deep learning techniques to fire detection systems has been widely explored. However, many challenges remain, especially in scenarios such as indoor fires and forest fires, and in determining whether a fire is accompanied by smoke; failures here result in substantial environmental losses and long-term recovery periods. Early sensing technologies lacked effectiveness in detecting fires in open spaces due to response delays and their failure to exploit static and dynamic features. In this paper, we address these challenges by proposing a two-stage fire detection approach based on deep learning. The approach introduces a new Faster R-CNN architecture built around a proposed hybrid feature extractor. Evaluation yields an mAP@0.5 of 90.1% with an accuracy of 96.5%. The outcomes indicate that the new hybrid feature extractor surpasses conventional single-backbone transfer learning methods and YOLO's one-stage detection approach in accurately identifying flames and smoke across various indoor and outdoor settings, providing an accurate fire detection system.
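The mAP@0.5 figure quoted above means a predicted box only counts as a true positive when its intersection-over-union (IoU) with a ground-truth box reaches 0.5. A minimal sketch of that IoU check, with made-up box coordinates (not from the paper):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Overlap rectangle: max of the top-left corners, min of the bottom-right.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

pred = (10, 10, 50, 50)   # hypothetical predicted fire box
gt   = (20, 20, 60, 60)   # hypothetical ground-truth box
print(iou(pred, gt))      # ~0.39: below 0.5, so a false positive at mAP@0.5
```

Averaging precision over recall levels for detections ranked by confidence, with this 0.5 threshold deciding matches, gives the mAP@0.5 metric the abstract reports.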
Pandey A., Ahmad N., Saha D.
2023-12-13 citations by CoLab: 1
Chen X., Liu C., Yen J.
Water (Switzerland) scimago Q1 wos Q2 Open Access
2023-11-08 citations by CoLab: 1
Fishery is vital for Taiwan’s economy, and over 40% of the fishery products come from aquaculture. Traditional aquaculture relies on visual observation of the water-wheel tail length to assess water quality. However, the aging population, lack of young labor, and difficulty in passing down experience pose challenges. There is currently no systematic method to determine the correlation between water quality and water-wheel tail length; adjustments are made by visual inspection, relying heavily on experience without substantial data to pass on. To address this challenge, a precise and efficient water quality control system is proposed. This study proposes a water-wheel tail length measurement system that corrects input images through image projective transformation to obtain the transformed coordinates. Using known conditions of the water-wheel, such as the length of the base, the actual water-wheel tail length is deduced from proportional relationships. When validated with two different calibration boards, specification A shows the better projective-transformation performance, with an average error percentage below 0.25%. Data augmentation techniques are employed to increase the quantity and diversity of the dataset, and the YOLO v8 deep learning model is trained to recognize water-wheel tail features. The model achieves a maximum mAP50 value of 0.99013 and a maximum mAP50-95 value of 0.885. The experimental results show that the proposed system can feasibly measure water-wheel tail length in fish farms.
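The measurement idea in this abstract, rectify image points with a projective transformation, then recover a physical length by proportion to the known base length, can be sketched as follows. The homography values, point coordinates, and the 0.8 m base length are all illustrative assumptions, not values from the paper:

```python
import numpy as np

def apply_h(H, pts):
    """Apply a 3x3 homography to Nx2 points with the homogeneous divide."""
    p = np.hstack([pts, np.ones((len(pts), 1))])
    q = p @ H.T
    return q[:, :2] / q[:, 2:3]

# Hypothetical homography rectifying the camera view to a top-down plane
# (in practice it would be estimated from a calibration board).
H = np.array([[1.0, 0.2,   0.0],
              [0.0, 1.5,   0.0],
              [0.0, 0.001, 1.0]])

# Hypothetical image coordinates of the water-wheel base and tail endpoints.
base_img = np.array([[100.0, 200.0], [180.0, 200.0]])
tail_img = np.array([[180.0, 200.0], [300.0, 220.0]])

base_rect = apply_h(H, base_img)
tail_rect = apply_h(H, tail_img)

# The known physical base length fixes the scale (metres per rectified unit);
# the tail length then follows by proportion.
BASE_M = 0.8  # assumed physical base length in metres
scale = BASE_M / np.linalg.norm(base_rect[1] - base_rect[0])
tail_m = scale * np.linalg.norm(tail_rect[1] - tail_rect[0])
print(f"estimated tail length: {tail_m:.3f} m")
```

Because both segments are measured in the same rectified plane, the unknown camera pose cancels out of the ratio, which is what makes the known base length sufficient for calibration.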
