Inspiration:
The inspiration for this product came from how hard it was to quickly find email addresses for different companies. It started with an earlier project some of us made to get free merch from companies by emailing them. Gathering those emails took a lot of time, so we wished there was an affordable tool for it. Some of us have also had jobs hunting for leads for a company, which involves a lot of searching online for emails.
What it does:
Our website makes it easy to find emails for different companies and organizations. Simply input the company's domain name (e.g. google -> google.com) and our website finds all the email addresses associated with that domain from across the web.
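The google -> google.com step above is a small normalization of whatever the user types. A minimal sketch of that idea (the helper name and defaulting rule are illustrative, not necessarily what our site does):

```python
import re

def normalize_domain(query: str) -> str:
    """Turn loose user input (e.g. "google" or "https://google.com/about")
    into a bare domain to look up. Hypothetical helper for illustration."""
    query = query.strip().lower()
    query = re.sub(r"^https?://", "", query)  # drop any scheme
    query = query.split("/")[0]               # drop any path
    if "." not in query:
        query += ".com"                       # bare names default to .com
    return query
```
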
How we built it:
This project breaks down into three components: a web scraper, a database with a serverless backend, and a statically hosted website.
The foundation for the web scraper was the Scrapy Python library, which let us crawl with high concurrency and gave us some other nice features for free. The scraper had ~20 target domains; we scraped most pages on each site and parsed out the email addresses. It ran on our team's local server for the duration of the hackathon.
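The parsing step boils down to running an email pattern over each page's text. A simplified sketch of that extraction logic (the regex is a loose approximation; real pages also have obfuscated addresses and mailto: links):

```python
import re

# Simple pattern for email addresses appearing in page text.
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def extract_emails(page_text: str) -> set[str]:
    """Return the unique, lowercased email addresses found in a page."""
    return {m.group(0).lower() for m in EMAIL_RE.finditer(page_text)}

# Inside a scrapy.Spider subclass, the parse callback could then do:
#     for email in extract_emails(response.text):
#         yield {"email": email, "source": response.url}
```
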
These emails (along with some other information) were then stored in a MongoDB Atlas database. We chose MongoDB because it's schemaless, so we didn't have to define or migrate schemas under time pressure. Another reason was Atlas's simple cloud hosting, which meant we didn't need to configure a cloud provider ourselves.
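Each scraped email becomes one document in the collection. A minimal sketch of how a record could be shaped and upserted with pymongo (field names and the connection string are placeholders, not our exact schema):

```python
from datetime import datetime, timezone

def make_email_record(email: str, domain: str, source_url: str) -> dict:
    """Shape one scraped email as a MongoDB document (illustrative fields)."""
    return {
        "email": email,
        "domain": domain,
        "source_url": source_url,
        "scraped_at": datetime.now(timezone.utc),
    }

# With pymongo against Atlas, upserting on the email field keeps the
# collection free of duplicates:
#
#   from pymongo import MongoClient
#   coll = MongoClient("mongodb+srv://...").gatherer.emails
#   record = make_email_record("jobs@example.com", "example.com",
#                              "https://example.com/careers")
#   coll.update_one({"email": record["email"]}, {"$set": record}, upsert=True)
```
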
We first planned out the front end in Figma (which we learned at the hackathon workshop), then implemented our designs with Bootstrap. The static pages are hosted on Netlify, which avoids unnecessary complexity.
The front end interfaces with the database via Netlify Functions, an easier way to deploy AWS Lambda serverless endpoints. We wrote a few Node.js endpoints for querying the database; these functions also validate some of the emails.
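Our endpoints are Node.js, but the validation step amounts to a lightweight syntactic check like the one below (shown in Python for consistency with the scraper, and simplified; a more thorough validator might also check MX records):

```python
import re

# Loose syntactic check: exactly one local part, one "@", and a
# plausible domain with a TLD. Intentionally not RFC-complete.
VALID_EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[A-Za-z]{2,}$")

def looks_valid(email: str) -> bool:
    """Reject obviously malformed addresses before returning results."""
    return VALID_EMAIL_RE.match(email) is not None
```
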
Challenges we ran into:
We got stuck in a few places on the website. We thought Figma was a tool for building websites and that we would be able to export the project when we were done, so we went to the Figma workshop and spent a lot of time in Figma that probably could have been better spent actually coding the site. CSS is also tricky to wrap your head around when you're not used to it.
Accomplishments that we're proud of:
We're proud that we were able to complete this project on time, and have an end product that is semi-polished. We're also proud that we created a plan that stripped away unnecessary complications, and we stuck with it. Also that our product works!
Not only are we happy with the outcome of our technical project, but with our marketing content as well. Our video is pretty eye-catching, which is something we should be proud of!
What we learned:
Our team had two members who didn't have much programming experience but wanted to learn more. They picked up some of the basics of designing web pages, along with some general knowledge of other languages.
The more technically experienced team members got new experience working with serverless functions and NoSQL databases. We also learned more about how to market an idea.
What's next for Gatherer:
Gatherer is a basic product now with huge potential to grow. We could target business users with features that help generate effective leads, or end users who just need a tool to find a few emails for a project. Helping users access all sorts of data in one place is something our team can envision: besides emails, we could scrape phone numbers, social media profiles, and much more.
We think this product has lots of potential to become monetizable in some way.