Amphibian Engine
A virtual pet that lives inside your browser while indexing the sites you visit.
Our project is a Chrome browser extension that sends the sites you browse to a server, along with each page's title, keywords, and links. Your frog companion grows as you index more pages over time. We intend this as a fun way to encourage people to contribute to an index that could power a search engine or another tool. As an example, our project includes a search page that looks up indexed links by keyword.
Instructions to run
Be aware that, by design, this project sends your browsing data to a server when set up in the default way. That server is probably not running right now, but it is something to be aware of: do not keep the extension installed if you are no longer testing!
This project comes in three parts
Chrome Extension Virtual Pet
The "extension" folder contains a Chrome extension featuring an animal companion. To run it on your computer, enable developer mode on the extensions page, click "Load unpacked", and select the folder. To run the extension against a local server, change the HTTP address in web-scraping.js to point at it.
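As a rough sketch of what the extension does on each visit, it might POST the page's title, keywords, and links to the server. The field names, the `/index` route, and `SERVER_URL` below are illustrative assumptions, not the real values in web-scraping.js:

```javascript
// Hypothetical sketch of the data the extension could send per page visit.
// SERVER_URL and the /index endpoint are assumptions; the real address lives
// in web-scraping.js and should point at your local server.
const SERVER_URL = "http://localhost:3000";

function buildPagePayload(title, url, keywords, links) {
  // Bundle the page data the server will index, with a visit timestamp.
  return { title, url, keywords, links, visitedAt: new Date().toISOString() };
}

async function reportPage(payload) {
  // Send the payload as JSON; the server stores it in the database.
  await fetch(`${SERVER_URL}/index`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
}

const payload = buildPagePayload("Example", "https://example.com", ["frog"], []);
console.log(payload.title);
```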
Pet Server
The server is built with Node.js and acts as an interface between a MySQL server and the Chrome extension, as well as the search page. You can run it yourself by installing the required Node dependencies: express, cors, and mysql.
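To illustrate the server's role, here is a minimal sketch of the query a search endpoint might run. The table and column names (`pages`, `keywords`, `url`, `title`) are assumptions, not the real schema; the commented express wiring shows how it could plug into the stack the project actually uses:

```javascript
// Hypothetical sketch: a parameterized query the /search endpoint might run
// against the database. Table and column names are assumptions.
function searchQuery(keyword) {
  return {
    sql: "SELECT url, title FROM pages WHERE keywords LIKE ?",
    values: [`%${keyword}%`], // mysql's `?` placeholder guards against SQL injection
  };
}

// Wiring it into the real stack (requires `npm install express cors mysql`):
//   const app = require("express")();
//   app.use(require("cors")());
//   const pool = require("mysql").createPool({ host: "localhost", database: "pets" });
//   app.get("/search", (req, res) => {
//     const { sql, values } = searchQuery(req.query.q);
//     pool.query(sql, values, (err, rows) => res.json(err ? [] : rows));
//   });
//   app.listen(3000);

console.log(searchQuery("frog").sql);
```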
Search Page
A demonstration of what you could do with a dataset collected this way is shown in our search page. When you enter a keyword query, the page asks the server to check the database for links that match the keyword. To run it locally, also point it at your local IP address in index.html. The search page does not send any personal data from your computer.
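The round trip from the search page could look like the sketch below. The `/search?q=` route and `SERVER_URL` are assumptions for illustration; the real address is the one you set in index.html:

```javascript
// Hypothetical sketch: how the search page might query the pet server.
// SERVER_URL and the /search?q= route are assumptions, not confirmed endpoints.
const SERVER_URL = "http://localhost:3000";

function buildSearchUrl(keyword) {
  // Encode the keyword so spaces and special characters survive the query string.
  return `${SERVER_URL}/search?q=${encodeURIComponent(keyword)}`;
}

async function searchLinks(keyword) {
  // Ask the server for database links matching the keyword.
  const res = await fetch(buildSearchUrl(keyword));
  return res.json();
}

console.log(buildSearchUrl("climate change"));
```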
What does this do that's different or interesting
- Avoids the need to run web crawlers on dedicated hardware.
- Communicates a deep and meaningful message about the passage of time, the future, and the horrors of climate change.
- Mostly, it's a fun hackathon project.
Challenges we ran into
We split our project into three key stages: research, development, and desirable extensions.
In research, we faced the questions of which idea to tackle, how the database should be structured, and how extensions actually work. Many problems at this stage were solved by delegating research topics to different team members, who then reported their findings back to the team. This led us to make effective and well-informed decisions about both the project and the technology used for it.
In development, the main challenges were:
- how to store the number of webpages our user had accessed (Chrome Local Storage);
- using JS packages to parse the HTML text for keywords;
- how to search the webpage via the extension;
- switching from MongoDB to MariaDB as new realisations surfaced, despite earlier discussions;
- integration issues between the server and client side, and between Chrome and JSON files.
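One of the trickier steps above, pulling keywords out of page text, can be sketched as a simple frequency count with a stop-word filter. This is an illustrative stand-in, not the parsing package the extension actually uses:

```javascript
// Hypothetical sketch of keyword extraction from a page's text: count word
// frequencies, drop very short words and common stop words, and keep the
// most frequent terms. The real extension uses a JS parsing package instead.
const STOP_WORDS = new Set(["the", "a", "an", "and", "of", "to", "in", "is", "are"]);

function extractKeywords(text, limit = 10) {
  const counts = new Map();
  for (const word of text.toLowerCase().match(/[a-z]+/g) ?? []) {
    if (word.length < 3 || STOP_WORDS.has(word)) continue;
    counts.set(word, (counts.get(word) ?? 0) + 1);
  }
  // Most frequent words first, capped at `limit`.
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, limit)
    .map(([word]) => word);
}

console.log(extractKeywords("Frogs eat flies. Frogs are amphibians."));
```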
We cleared these problems by collaborating as a team. By communicating our concerns openly with other members, we re-allocated resources appropriately to solve them. At times, pair programming proved a useful tactic for achieving breakthroughs at milestones, such as the integration of server- and client-side operations.
We worked with systems with which not all members were familiar. This, again, was overcome through proper communication of concerns and research of the topics involved.
Accomplishments that we're proud of
We are proud that, within the limited time available, we created a working web crawler with a maturing avatar. We also built a functioning search feature as an extension of the core project, which uses the data we gathered.
We are proud that we collaborated as a team, making informed and democratic decisions about the project. We delegated the workload so that all members could contribute, and communication within the team was well maintained.
What we learned
We learnt how to work with unfamiliar languages on short notice, and how to find and select the material required to work on unfamiliar tasks. In particular, working with MongoDB and then moving to MariaDB was a valuable experience for learning about backend development.
We also learnt much about the compatibility issues between different file types and explored different methods to solve such problems.
We experienced the strength of working in a team. It allows for the division of labour and grants us a diversity of skills. It also reminded us of the importance of communicating effectively.
What's next for Amphibian Engine
Beyond the search engine, several extendable tasks remain for the Amphibian Engine: storing data over time so we can view web pages across different periods, using AI agents to summarise the URLs we have visited and recorded, and adding more visual effects to our virtual pet.
We would also like to explore the themes of climate change and environmental impact further, for example by working with environmental groups to communicate climate concerns more effectively, as our extension tries to do.
We hope to bring this experience to the next stages of our careers.