Inspiration

Across the globe, wildfires have emerged as a pressing and relentless threat, intensified by climate change, shifting land use patterns, and increasingly unpredictable weather. These infernos ravage landscapes, jeopardize lives, and decimate ecosystems, leaving a trail of destruction in their wake. The urgency is compounded by the difficulty of accurately predicting their erratic behavior, which often leads to delayed responses and greater damage. Our solution steps in as a crucial game-changer by leveraging AI to rapidly identify wildfire origins and predict their trajectories. By combining diverse data sets, from environmental factors to historical patterns, we give authorities and communities timely, accurate information. This enables proactive measures, swift response, and informed decisions, ultimately saving lives, protecting property, and mitigating the catastrophic environmental aftermath of these natural disasters.

What it does

BlazeCast takes in user input highlighting the initial wildfire emergence points. It then pulls real-time weather data, including temperature, pressure, humidity, wind speed, and elevation, and feeds it into a custom machine-learning model. The model predicts the spread of the fire and displays the result, showing the user where disaster response resources should be focused.

BlazeCast stands at the forefront of sustainability innovation, providing a multifaceted solution to the pressing global challenge of wildfires. Our system seamlessly integrates user input and real-time weather data to empower communities and authorities with proactive tools for disaster response. Here's how BlazeCast fosters sustainability:

Early Detection for Environmental Conservation: Enables rapid identification of initial wildfire emergence points, minimizing ecological damage and preserving biodiversity.

Resource-Efficient Data Utilization: Pulls real-time weather data, including temperature, pressure, humidity, wind speed, and elevation, caching results to avoid redundant API requests.

Optimized Disaster Response: Feeds comprehensive real-time data into a custom machine-learning model, predicting the wildfire's spread with precision. This empowers decision-makers to focus disaster response resources efficiently, reducing unnecessary resource consumption.

Climate Change Mitigation: By predicting fire trajectories accurately, BlazeCast contributes to climate change mitigation efforts. Swift and targeted response strategies help minimize the environmental impact of wildfires intensified by changing climatic conditions.

Community Resilience and Well-being: Empowers communities with timely and accurate information, fostering resilience against wildfires. By reducing the risk of loss of life and property, BlazeCast contributes to the long-term well-being of affected communities.

Continuous Improvement for Long-Term Sustainability: Demonstrates a commitment to long-term sustainability by planning future enhancements. This includes the incorporation of additional environmental factors like vegetation and air pressure, aligning with a vision for continuous improvement and adaptation to changing environmental conditions.

In essence, BlazeCast isn't just a wildfire prediction system; it's a sustainable innovation that addresses the challenges posed by wildfires while promoting ecological preservation, resource efficiency, and community resilience.

How we built it

We split BlazeCast into submodules that all came together to create the final product and its features.

Live Weather Data: This submodule makes API calls to the VisualCrossing weather API and the Google Maps elevation API. Given a 32x32 array of latitude and longitude coordinates for the piece of land selected in the UI, it calls both APIs and returns a 3D array of dimensions 7x32x32, where each of the seven 32x32 slices represents one attribute such as elevation, wind speed, or temperature. The data is also super-sampled to work with the cached data, reducing API costs.
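
As a sketch of how this assembly might look, one grid per attribute stacked into the 7x32x32 tensor (the attribute names and the `fetch_attribute` placeholder below are illustrative, not BlazeCast's actual API wrappers):

```python
import numpy as np

# Hypothetical attribute list; BlazeCast's real fetchers wrap the
# VisualCrossing weather API and the Google Maps elevation API.
ATTRIBUTES = ["elevation", "temperature", "pressure",
              "humidity", "wind_speed", "wind_direction", "precipitation"]

def fetch_attribute(name, lats, lons):
    # Placeholder: return a 32x32 grid of values for one attribute.
    rng = np.random.default_rng(abs(hash(name)) % 2**32)
    return rng.random((len(lats), len(lons)))

def build_weather_tensor(lats, lons):
    """Stack one 32x32 grid per attribute into a 7x32x32 array."""
    grids = [fetch_attribute(a, lats, lons) for a in ATTRIBUTES]
    return np.stack(grids, axis=0)

lats = np.linspace(34.0, 34.29, 32)    # example coordinate range
lons = np.linspace(-118.29, -118.0, 32)
tensor = build_weather_tensor(lats, lons)
print(tensor.shape)  # (7, 32, 32)
```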

API Data Caching: The cache is a JSON structure that stores the results of previous API calls to limit repeated requests. When an API call is about to be made, the cache is checked first; if the data is available there, it is used instead of calling the API, and new responses are saved back to the JSON file. This reduces costs and improves the efficiency of the project as a whole.
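
A minimal sketch of such a read-through JSON cache, assuming a single cache file and string keys (the file name and function names are illustrative):

```python
import json
import os

CACHE_PATH = "api_cache.json"  # assumed cache file name

def load_cache():
    """Load the JSON cache from disk, or start empty if none exists."""
    if os.path.exists(CACHE_PATH):
        with open(CACHE_PATH) as f:
            return json.load(f)
    return {}

def cached_call(key, fetch_fn):
    """Return cached data for `key`; call the API (fetch_fn) only on a miss."""
    cache = load_cache()
    if key in cache:
        return cache[key]          # cache hit: no API call made
    value = fetch_fn()             # cache miss: call the API once
    cache[key] = value
    with open(CACHE_PATH, "w") as f:
        json.dump(cache, f)
    return value
```

Keying entries by coordinates plus timestamp (or a coarse time bucket) lets nearby, recent requests reuse the same response instead of spending another API token.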

Pre-Processing of Wildfire Dataset: In this submodule, the wildfire dataset undergoes a series of preprocessing steps to make it suitable for training the machine learning model. This includes cleaning and formatting the data, handling missing values, and normalizing the features.
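
These steps could be sketched as follows for a (channels, height, width) array; mean imputation and min-max scaling are assumptions here, since the writeup does not specify the exact cleaning and normalization methods:

```python
import numpy as np

def normalize_features(x):
    """Per-channel preprocessing of a (C, H, W) feature array:
    fill missing values with the channel mean, then min-max scale to [0, 1]."""
    x = x.astype(float).copy()
    for c in range(x.shape[0]):
        chan = x[c]
        mean = np.nanmean(chan)                       # handle missing values
        chan = np.where(np.isnan(chan), mean, chan)
        lo, hi = chan.min(), chan.max()
        x[c] = (chan - lo) / (hi - lo) if hi > lo else 0.0
    return x
```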

Model Structure: The model is a U-Net with residual connections, an architecture designed to capture the spatial dependencies in the input data while preserving spatial detail through its skip connections. The main challenge we faced while training was the low proportion of 'fire' values in the data, so we used a loss based on the dice coefficient, which is better suited to heavily imbalanced segmentation targets. Additionally, we used TensorFlow datasets to optimize tasks like prefetching, batching, shuffling, and preprocessing.
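
The dice loss can be sketched in NumPy as follows (the model itself uses a TensorFlow implementation; the `smooth` term is a common stabilizer we assume here, added to avoid division by zero on empty masks):

```python
import numpy as np

def dice_coefficient(y_true, y_pred, smooth=1.0):
    """Dice coefficient between binary masks: 2*|A∩B| / (|A| + |B|)."""
    y_true = y_true.ravel()
    y_pred = y_pred.ravel()
    intersection = np.sum(y_true * y_pred)
    return (2.0 * intersection + smooth) / (np.sum(y_true) + np.sum(y_pred) + smooth)

def dice_loss(y_true, y_pred):
    # Minimizing 1 - dice rewards overlap with the sparse 'fire' pixels,
    # unlike plain accuracy, which a model can inflate by predicting 'no fire'.
    return 1.0 - dice_coefficient(y_true, y_pred)
```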

Model Training: The model is trained on the pre-processed wildfire dataset. Training optimizes the model's parameters with the Adam optimizer: the loss function quantifies the disparity between predicted and actual wildfire occurrence, and backpropagation adjusts the model weights accordingly. The data is split into training and validation sets to monitor the model's performance and prevent overfitting. Because of the imbalanced nature of our dataset, we also monitored precision and recall during training.
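
Precision and recall over binary fire masks can be computed as below; this is a NumPy sketch of the two metrics, not the project's actual monitoring code:

```python
import numpy as np

def precision_recall(y_true, y_pred):
    """Precision and recall for binary fire/no-fire masks.
    Precision: of the cells predicted to burn, how many actually burned.
    Recall: of the cells that burned, how many were predicted."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```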

User Interface: The user interface uses Leaflet in JavaScript to implement an interactive map on which the user highlights grid boxes marking wildfire locations. The interface has four buttons: zooming into a 32-kilometer by 32-kilometer area, overlaying a grid on the map, selecting individual grid boxes as wildfire locations, and finalizing the map.

Model Input: This module takes the arrays from the live weather data module and converts them into individual grids for prediction. The input is a 3D array covering the latitude and longitude of the area selected as having a wildfire, taken directly from the user interface.

Model Output: The model output is the result of running the input array through our machine-learning model. It is rendered as a map highlighting areas where the fire is likely to spread in the next hour.
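
One plausible way to turn the model's per-cell probabilities into highlighted map cells is a simple threshold; the 0.5 cutoff below is an assumed value, not necessarily the one BlazeCast uses:

```python
import numpy as np

def at_risk_cells(probabilities, threshold=0.5):
    """Return (row, col) indices of grid cells whose predicted probability
    of burning within the next hour meets the threshold, for map highlighting."""
    mask = probabilities >= threshold
    return list(zip(*np.nonzero(mask)))
```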

Challenges we ran into

  • Efficiently allocating API tokens: Canvassing a 64x64-kilometer area of land and getting real-time data for each cell requires a large number of API calls, and much of the data returned was repeated. We solved this by implementing a caching system.

  • Training the model: Training a model with over 4.7 million parameters requires a lot of GPU power; training ours for 10 epochs was both resource-intensive and time-intensive. Training was done with the Adam optimizer on Google Colab with an A100 GPU. We also made mistakes in preparing the dataset input and in the overall training procedure that caused the model to predict poorly after some training runs.

  • Validating the accuracy of our model: We also had to implement validation to monitor overfitting. This ensured that our model was trained correctly and would work for the use case intended.

  • Integrating all of the submodules and their different data types: Much of this project was split up, with each team member given responsibilities. However, optimizing all of the code and refactoring it to fit together at the end was a challenge we had to overcome.

Accomplishments that we're proud of

  • Training an original model from the ground up

  • We aggregated large amounts of data from multiple different APIs and combined them into one cohesive data structure

  • We created an interactive UI that works on current wildfires

  • We created multiple sub-modules with specialized purposes and combined them into one product

What we learned

  • How to train an ML model using a variety of factors

  • Using natural language models to generate descriptions

  • Engineering specific prompts for desired results with Large Language Models

  • Working in a software team with specific roles using branching and git

  • Splitting up a big picture into multiple smaller workflows and assigning roles

What's next for BlazeCast

  • Longer training of the machine-learning model

  • A larger training dataset

  • Additional data for vegetation and air pressure

  • A rooftop reflectivity factor

  • Time-lapse video to better visualize fire progression

  • Automatic mobilization of disaster response resources based on calculated impact
