About Forest Sentinel

Inspiration

The inspiration for Forest Sentinel started with a simple question: what if we could hear the forest before it disappeared? Deforestation usually makes headlines only after the damage is done, buried in reports and statistics. We were struck by how long it takes for the world to notice what's been lost. That gap, the quiet between destruction and awareness, inspired us to act. We built Forest Sentinel to close it, turning satellite data into real-time insights that people can see and respond to. It's not just about tracking the loss; it's about helping people take steps toward protecting what's left.

What it does

Forest Sentinel is a comprehensive platform designed to "See it, Map it, and Save it." It combines real-time satellite data with community-driven action to make deforestation visible, verifiable, and stoppable. Users can visualize historical and current forest loss on an interactive Google Map, identify areas suitable for restoration, and view critical environmental data like biomass density, soil carbon, and rainfall. The platform empowers local communities to become sentinels by reporting deforestation through GPS-tagged photos and alerts, which appear as a distinct layer on the map. For NGOs and researchers, an analytics dashboard provides detailed statistics for any user-drawn region, complete with AI-generated insights and downloadable reports to support conservation and funding efforts.

How we built it

Forest Sentinel is a full-stack application built on a foundation of Google Maps Platform, Google Cloud, and modern web technologies.

  • Languages: JavaScript (React), Python
  • Frameworks: React.js
  • Platforms & Cloud Services: Google Maps Platform, Google Cloud, Google Cloud Functions
  • APIs: Google Maps JavaScript API, Google Earth Engine API
  • Key Datasets: Hansen Global Forest Change, OpenLandMap, CHIRPS, JRC Global Surface Water, RESOLVE Ecoregions, NASA Forest Biomass, and NASA FIRMS (fire alerts)

  • Frontend & Visualization (Google Maps Platform): The user interface is the heart of the project, built with React and powered by the Google Maps JavaScript API, which provides the interactive canvas for all of our data. We used the @react-google-maps/api library for seamless integration. Key features like custom map styles for a unique light mode, drawing tools for region selection (DrawingManager), and custom HTML markers (OverlayView) are all enabled by the flexibility of the Google Maps Platform.

  • Backend (Google Cloud Functions): The heavy lifting is done by a suite of Google Cloud Functions written in Python. These serverless functions act as secure endpoints that the frontend can call. Each function is responsible for a specific data layer or analysis:

    • Deforestation & Restoration: One function queries the Hansen Global Forest Change dataset to generate dynamic, year-by-year deforestation tile layers.
    • Environmental Layers: Separate functions serve tile layers for Soil Carbon, Climate (Rainfall), Surface Water, Ecosystems, Forest Biomass, and Carbon Emissions, each drawing from its respective GEE dataset.
    • Statistical Analysis: A dedicated function takes a user-drawn geometry and calculates detailed statistics for the NGO dashboard, performing complex reductions across multiple datasets in real time.
  • Data & Analysis (Google Earth Engine): The core analysis engine is Google Earth Engine. By writing Python scripts that run on Google's servers, we were able to process and visualize massive datasets in near real time, something that would be impossible on a traditional server.
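To make the layer-per-function pattern concrete, here is a minimal sketch of how each endpoint might resolve a frontend layer name to an Earth Engine asset before building its tile layer. The mapping is illustrative only: the asset IDs are taken from the public GEE catalog, and the exact versions Forest Sentinel deploys may differ.

```python
# Illustrative mapping from frontend layer names to Earth Engine asset IDs.
# Asset IDs are assumptions based on the public GEE catalog; exact dataset
# versions may differ from what the deployed Cloud Functions use.
LAYER_DATASETS = {
    "deforestation": "UMD/hansen/global_forest_change_2023_v1_11",
    "rainfall": "UCSB-CHG/CHIRPS/DAILY",
    "surface_water": "JRC/GSW1_4/GlobalSurfaceWater",
    "ecoregions": "RESOLVE/ECOREGIONS/2017",
}

def dataset_for(layer: str) -> str:
    """Resolve a frontend layer name to its GEE asset ID, failing loudly
    on unknown layers so a bad request never reaches Earth Engine."""
    try:
        return LAYER_DATASETS[layer]
    except KeyError:
        raise ValueError(f"unknown layer: {layer!r}")
```

Keeping this lookup at the edge of each function means an unsupported layer name is rejected before any Earth Engine computation is started.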

Challenges we ran into

The journey was not without its challenges. Our biggest hurdle was the Google Earth Engine authentication system. For days, our backend functions returned responses without a valid token, so no map tiles would load. This sent us down a deep path of debugging IAM roles, service accounts, and project permissions. The breakthrough came when we discovered the newer v1alpha tile endpoint, which uses a different authorization method and instantly solved the problem.
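Part of why the endpoint switch helped is that the v1alpha route serves tiles directly from a map name, so the backend only has to hand the frontend a URL template. A minimal sketch of that handoff (the map name below is a placeholder; in practice it comes from `ee.Image.getMapId()` in the earthengine-api client):

```python
# Sketch of handing the frontend a slippy-map tile URL template.
# The map name is a placeholder; a real one is returned by
# ee.Image.getMapId()["tile_fetcher"] in the earthengine-api client.
V1ALPHA_TILES = (
    "https://earthengine.googleapis.com/v1alpha/{name}/tiles/{{z}}/{{x}}/{{y}}"
)

def tile_url_template(map_name: str) -> str:
    """Build the {z}/{x}/{y} URL template the Maps JavaScript API consumes."""
    return V1ALPHA_TILES.format(name=map_name)
```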

Another significant challenge was performance, especially with the regional statistics function. Our initial scripts were timing out or hitting memory limits on GEE's servers. The reduceRegion() and reduceToVectors() operations are incredibly demanding. We learned to overcome this by optimizing our queries, increasing the scale parameter (coarsening the output resolution) for large areas, and leveraging pre-computed datasets like Hansen's wherever possible.
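The scale trick can be as simple as picking a coarser reduction scale once the region grows. A sketch with illustrative thresholds (not the exact numbers we tuned):

```python
def pick_scale(area_km2: float, base_scale: int = 30) -> int:
    """Coarsen the reduceRegion scale for large regions so GEE calls stay
    within memory and time limits. Thresholds here are illustrative."""
    if area_km2 < 1_000:
        return base_scale        # native Hansen resolution (~30 m)
    if area_km2 < 50_000:
        return base_scale * 10   # 300 m for regional queries
    return base_scale * 100      # 3 km for country-scale regions
```

Trading resolution for responsiveness like this keeps the dashboard interactive even when a user draws a very large region.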

Accomplishments that we're proud of

We are incredibly proud of building a truly comprehensive, full-stack environmental monitoring platform in a short amount of time. Integrating over seven distinct Google Earth Engine datasets into a single, cohesive user experience was a major accomplishment. We successfully architected a scalable, serverless backend that can perform massive computations on demand. Overcoming the complex authentication and performance challenges with Google Earth Engine was a significant technical victory that taught us a great deal about cloud-native geospatial analysis. Most importantly, we're proud of creating a tool that feels both powerful for experts and accessible for communities, truly embodying our mission to turn data into action.

What we learned

This project was a deep dive into the world of geospatial data and cloud architecture. Our biggest learning curve was with Google Earth Engine (GEE). We went from a basic understanding to writing complex scripts that process terabytes of satellite imagery on the fly. We learned how to:

  • Leverage pre-computed, large-scale datasets like the Hansen Global Forest Change for incredible speed and efficiency.

  • Integrate diverse environmental datasets, including OpenLandMap for soil carbon, CHIRPS for rainfall, and NASA's Forest Biomass to build a holistic view of an ecosystem's health. We also used NASA's FIRMS to get historical and real-time fire alerts.

  • Create powerful statistical summaries by performing complex reduceRegion operations on user-defined geometries.

On the architectural side, we learned how to build a scalable, serverless application. The pattern of using a React frontend powered by the Google Maps Platform to call a suite of Google Cloud Functions is incredibly powerful and efficient.
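The endpoint contract in that pattern stays small: validate the user-drawn geometry, run the analysis, and return JSON with CORS headers so the React app can call it. A minimal Python sketch of the shape of such a function (the GEE reductions are elided where the comment sits; the helper name and error message are our own):

```python
import json

def stats_response(geojson_str: str) -> tuple:
    """Validate a user-drawn geometry and shape a Cloud Functions style
    (body, status, headers) response. The actual reduceRegion work over
    the GEE datasets is elided; this shows only the endpoint contract."""
    headers = {
        "Access-Control-Allow-Origin": "*",  # let the React frontend call us
        "Content-Type": "application/json",
    }
    try:
        geom = json.loads(geojson_str)
        assert geom.get("type") == "Polygon"
    except (ValueError, AssertionError, AttributeError):
        return json.dumps({"error": "expected a GeoJSON Polygon"}), 400, headers
    # ... run reduceRegion over each dataset for `geom` here ...
    return json.dumps({"ok": True}), 200, headers
```

Rejecting malformed geometries before any Earth Engine call keeps failures cheap and the error visible to the frontend.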

What's next for Forest Sentinel

The story of our forests is not yet finished, and neither is the development of Forest Sentinel. Our next steps are focused on deepening the "Act" part of our mission. We plan to expand the Restoration Hub with tools for tracking specific native species and monitoring growth over time using NDVI analysis. We also aim to enhance the Community Hub with more collaborative features, allowing users to form groups and launch local monitoring campaigns. Finally, we want to refine our AI Insights model, training it to not only suggest restoration areas but also to predict future deforestation hotspots based on historical patterns, turning Forest Sentinel into a proactive, predictive tool for conservation.
