Inspiration
Lately there’s been a lot of talk about the environmental cost of digital services, especially with AI on the rise. It’s become a controversial and urgent conversation, but what struck me is how often people don’t realize that all computational activity has a measurable impact on the environment, not just AI. It’s hard to imagine something intangible like a website having a footprint, and even when people do know, they usually don’t realize the scale. I remember looking into NFTs a few years ago when they were blowing up and finding out that the crypto systems tied to them were consuming as much energy as entire countries. With the idea of a “green web” starting to gain traction, I wanted to explore this topic in a way that feels visible and engaging. While there are already tools that let you check the carbon footprint of a single website, I thought it would be far more impactful to apply that at scale. Instead of just one site, why not look at the world’s most visited sites and see how they compare? Numbers like the percentage that are green-hosted, or how much CO₂ they emit per visit, put the conversation into perspective in a new way. That’s how Green Web Current came to life: a project to make the hidden cost of the web visible and easier to understand.
What it does
Green Web Current visualizes the carbon footprint of the world’s most visited websites. Each site is represented as an interactive particle in a 3D system, where color shows its carbon impact, size reflects popularity, and hosting status (green vs. fossil-fuel powered) is clearly marked. Clicking a particle opens detailed stats for that site: CO₂ per visit, estimated page size, energy consumption, hosting provider, and a carbon grade. An AI-generated translation also puts the data into real-world terms, like comparing 10,000 visits to minutes of car idling or phone calls. The result is an at-a-glance map of how the internet’s biggest players stack up in terms of sustainability.
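The color and size encodings described above could be sketched roughly like this. The grade cutoffs and rank scaling below are made-up illustrative values, not the ones Green Web Current actually uses:

```javascript
// Illustrative mapping from a site's stats to its particle appearance.
// The CO₂ thresholds (grams per visit) and the size formula are
// placeholder assumptions for demonstration purposes only.
function particleStyle(gramsCO2PerVisit, trafficRank) {
  // Lower CO₂ per visit → greener color
  let color;
  if (gramsCO2PerVisit < 0.5) color = 0x2ecc71;      // green: low impact
  else if (gramsCO2PerVisit < 1.5) color = 0xf1c40f; // yellow: moderate
  else color = 0xe74c3c;                              // red: high impact

  // More popular sites get larger particles; rank 1 is the most
  // visited of the top 200, so it maps to the largest size.
  const size = 1 + (200 - trafficRank) / 200;

  return { color, size };
}
```

In Three.js, values like these would feed directly into a particle's material color and scale.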
How I built it
The project is a full-stack web app built with Node.js + Express on the backend and Three.js on the frontend. The backend aggregates data from multiple sources: the Tranco list for the top 200 websites, the Green Web Foundation API for green hosting checks, and the Website Carbon API for CO₂ and energy metrics. When page size data isn’t available, Google Gemini is used to estimate the average size of a site, which is then passed into the APIs to calculate emissions more accurately. Results are cached server-side to handle rate limits and improve performance. The frontend uses Three.js to render an interactive 3D particle system, with each particle representing a website. Particle size corresponds to popularity, and color encodes carbon impact. A responsive UI overlays site-specific stats, while Gemini also generates natural-language comparisons to put the carbon numbers into context. The app is deployed on Vercel, with environment variables managing API keys securely and custom routing ensuring smooth integration between backend and frontend.
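As a rough sketch of the emissions math: the Sustainable Web Design model behind Website Carbon converts transferred bytes into energy, then applies a carbon intensity that depends on how the site is hosted. The 0.81 kWh/GB figure comes from that model; the two intensity constants below are simplified placeholders, not the API's exact numbers:

```javascript
// Simplified per-visit emissions estimate, loosely following the
// Sustainable Web Design model. Intensity values are placeholders.
const KWH_PER_GB = 0.81;          // energy per GB transferred (SWD model)
const GRID_G_CO2_PER_KWH = 442;   // approximate global grid average
const GREEN_G_CO2_PER_KWH = 50;   // renewable hosting (placeholder)

function estimateVisit(pageBytes, isGreenHosted) {
  const gb = pageBytes / 1e9;
  const kwh = gb * KWH_PER_GB;
  const intensity = isGreenHosted ? GREEN_G_CO2_PER_KWH : GRID_G_CO2_PER_KWH;
  const gramsCO2 = kwh * intensity;
  return { kwh, gramsCO2 };
}
```

This is also where the Gemini-estimated page size plugs in: when the real transfer size is unknown, the estimate becomes `pageBytes` here.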
Challenges I ran into
The biggest technical hurdle was implementing an effective caching strategy across multiple external APIs with different rate limits and data-freshness requirements. I had to balance performance against accuracy while managing costs and API reliability. Without caching, each page load triggered 200+ API calls: Tranco for rankings, the Green Web Foundation for hosting checks, the Website Carbon API for CO₂ metrics, and Gemini for page size estimates and impact translations. This led to 10–15 second load times, frequent rate-limit errors, and high costs from repeated AI calls. I built a multi-layer caching system with tailored lifespans for each data type (24 hours for Tranco, 7 days for green hosting, 1 hour for carbon data, 7 days for AI responses). Finding the right balance was tricky: too short meant wasted calls, too long meant stale data.

Deployment also brought its own issues. Vercel was serving HTML instead of JavaScript, which caused syntax errors across the app. The fix was a custom vercel.json configuration to properly route API requests and serve static assets.

Gemini introduced another layer of complexity: its responses varied slightly for similar inputs, which made caching unpredictable. I solved this with tighter prompts, normalization of outputs, and caching AI responses for 7 days to cut costs.
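The per-type lifespans described above can be sketched as a small in-memory TTL cache. This is a minimal illustration with assumed names, not the project's actual implementation:

```javascript
// Minimal per-type TTL cache sketch. The lifespans mirror the ones
// described above; function and key names are illustrative.
const TTL_MS = {
  tranco: 24 * 60 * 60 * 1000,            // rankings: 24 hours
  greenHosting: 7 * 24 * 60 * 60 * 1000,  // hosting checks: 7 days
  carbon: 60 * 60 * 1000,                 // CO₂ metrics: 1 hour
  ai: 7 * 24 * 60 * 60 * 1000,            // Gemini responses: 7 days
};

const cache = new Map();

function cacheKey(type, id) {
  return `${type}:${id}`;
}

function getCached(type, id) {
  const entry = cache.get(cacheKey(type, id));
  if (!entry) return undefined;
  if (Date.now() - entry.storedAt > TTL_MS[type]) {
    cache.delete(cacheKey(type, id)); // expired: evict and report a miss
    return undefined;
  }
  return entry.value;
}

function setCached(type, id, value) {
  cache.set(cacheKey(type, id), { value, storedAt: Date.now() });
}
```

On a cache miss the server would fall through to the external API, then store the fresh result with `setCached` so subsequent page loads skip the call entirely.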
Accomplishments that I'm proud of
This was my first time implementing my own caching system, and while it was a headache to figure out, I’m glad I got it working. I also don’t have much backend experience, and I’d never worked with APIs before, yet this project relied heavily on both. Trying so many new things that all relied on each other was definitely a challenge, but I got it to work! I’m also proud that this was my first solo hack. I’ve always hacked with a team before, so this time I was careful to rethink my objectives, set a more realistic scope, and manage my time differently. All in all, I think I did a good job of knowing how much I was capable of and managing this myself.
What I learned
I learned a lot about backend development, especially how caching works and why it’s so important when dealing with multiple external APIs. I also got hands-on experience with API integration, from figuring out request/response formats to managing rate limits and failures gracefully. On the frontend side, I deepened my understanding of Three.js and how to use visualization not just for aesthetics but to make complex data more intuitive. Most importantly, I learned how to scope and execute a project as a solo hacker: balancing ambition with feasibility, making trade-offs, and still ending up with something that feels more or less complete and meaningful.
What's next for Green Web Current
A new direction I'd like to explore is personalizing the project. Instead of focusing only on the top global websites, it could visualize a user’s own most visited sites. This would likely require building a Chrome extension, as accessing a user's top sites is close to impossible otherwise (or so I believe), but it would make the project far more meaningful and impactful. I’d also like to move away from relying on Gemini for page size estimates and instead integrate a more accurate method for collecting this data. That would improve both the reliability and the precision of the visualizations.
Built With
- css3
- express.js
- gemini
- github
- green-web-foundation-api
- html5
- javascript
- node.js
- three.js
- tranco-api
- vercel
- website-carbon-api