Inspiration

Being a freshman remote student is not an easy task. Of course, the social aspect is difficult, but the academic aspect was even more challenging. Oftentimes, I found myself surfing the web for hours trying to find high-quality study resources, whether they were YouTube videos, online websites, or old textbooks being sold online. Because of this, I decided to create a web app that quickly assembles high-quality study resources from just one simple textbook scan.

What it Does

  1. Consolidates high-quality online study resources with a simple textbook scan.
  2. Matches you with students who are selling relevant textbooks and searching for study partners.

Web App Integration

TextBook Perfect was built with Django and a SQLite backend database. The frontend was written in HTML, CSS, and JavaScript.
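As a minimal sketch of the stack described above, Django points its default database at a local SQLite file (the file name here is Django's default; the project's actual settings aren't shown in the write-up):

```python
# Hypothetical excerpt from a Django settings module: the default
# database engine is SQLite, backed by a single local file.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.sqlite3",
        "NAME": "db.sqlite3",
    }
}
```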

Feature No.1: Resource Finder

Process:

  1. First, the user opens their mobile camera and snaps a quick picture of their textbook (on desktop, they can upload an image file instead).
  2. After the image is uploaded, the page renders ten websites (with hyperlinks) related to the textbook: study solutions for the textbook, related web content, online sources for purchasing the textbook, etc.
  3. Text extracted from the image provides the search query for the YouTube lookup.
  4. On the same page, the user is also shown four relevant YouTube videos (pulled via that keyword) along with their comment threads. Each comment thread comes with a sentiment analysis, which provides a helpful quick summary when the thread is very long.
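Step 3 hinges on turning raw OCR text into a clean search query. A minimal sketch of that matching step, assuming a hand-picked subject list and a simple case-insensitive substring rule (both are my assumptions, not details from the write-up):

```python
def match_subject(ocr_text, subjects):
    """Return the first known subject keyword found in the OCR text
    (case-insensitive), or None if nothing matches."""
    lowered = ocr_text.lower()
    for subject in subjects:
        if subject.lower() in lowered:
            return subject
    return None

# Hypothetical keyword list; the real app's list isn't shown.
SUBJECTS = ["Multivariable Calculus", "Linear Algebra", "Organic Chemistry"]

match_subject("Stewart MULTIVARIABLE CALCULUS 8th Edition", SUBJECTS)
# -> "Multivariable Calculus"
```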

Resource Finder Significance

Instead of searching far and wide across the web, all of those resources are right there in front of your eyes :)

How I Built It: Resource Finder

The Resource Finder was built by integrating the Google Vision API, the Google Natural Language API, and the YouTube Data API.

After the mobile camera takes in a scan, I used the Vision API's OCR text detection to extract the text in the uploaded image. After comparing that text against a list of school-subject keywords, I performed a YouTube search with the matched keyword (e.g., Multivariable Calculus). I first used the search list endpoint, then the comment-threads list endpoint (unpacking the JSON responses). I stored the comment threads for each video in a Python dictionary, and then extracted the sentiment score and magnitude for each comment thread.
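The JSON unpacking described above can be sketched with two small helpers. The nesting mirrors the YouTube Data API v3 response shapes for `search.list` (video IDs under `id.videoId`) and `commentThreads.list` (comment text under `snippet.topLevelComment.snippet.textDisplay`); the function names are mine, not the project's:

```python
def extract_video_ids(search_response):
    # search().list(part="snippet", q=keyword) returns items; videos
    # carry their ID in the nested id.videoId field.
    return [item["id"]["videoId"]
            for item in search_response.get("items", [])
            if item["id"].get("videoId")]

def extract_comments(thread_response):
    # commentThreads().list(part="snippet", videoId=...) nests each
    # top-level comment's text several levels deep in the JSON.
    return [item["snippet"]["topLevelComment"]["snippet"]["textDisplay"]
            for item in thread_response.get("items", [])]
```

With the comments collected per video, each thread can then be joined into one string and passed to the Natural Language API for a sentiment score and magnitude.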

At the same time, I used the Vision API's web detection functionality to find pages with matching or partially matching images, which surfaces good study websites to render out.
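Web detection returns a list of matching pages, each carrying a URL and a page title, which the app trims to the ten links it renders. A sketch of that filtering step, using plain dicts to stand in for the API's page objects (the helper and the exact filtering rule are my assumptions):

```python
def top_matching_pages(pages, limit=10):
    """Keep up to `limit` (title, url) pairs from web-detection results,
    skipping pages missing either field. `pages` stands in for Vision's
    pages_with_matching_images list, modeled here as plain dicts."""
    results = []
    for page in pages:
        if page.get("url") and page.get("page_title"):
            results.append((page["page_title"], page["url"]))
        if len(results) == limit:
            break
    return results
```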

Feature No.2: Textbook Exchange

Textbook Exchange takes user input and matches you with students who:

  A. Are likely to buy your books
  B. Have books that you would like to buy
  C. Would like to find a study partner in a similar subject
  D. Live in close vicinity to you, so book shipping costs will be reduced

How I Built It: Textbook Exchange

Points Algorithm: I built the Textbook Exchange form as a Django form connected to the SQLite backend. In Python, I wrote a points-based algorithm whose score increases every time there is a buyer/seller match and/or a study-preference match.
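A minimal reconstruction of that points-based scoring, assuming each user record lists the books they're buying and selling plus their study subjects (the field names and one-point-per-match weighting are my assumptions):

```python
def match_score(me, other):
    """Score how well two students match: one point per buyer/seller
    overlap in each direction, plus one per shared study subject."""
    score = 0
    # The other student sells a book I want to buy.
    score += len(set(me["buying"]) & set(other["selling"]))
    # I sell a book the other student wants to buy.
    score += len(set(me["selling"]) & set(other["buying"]))
    # We share study-partner subjects.
    score += len(set(me["study_subjects"]) & set(other["study_subjects"]))
    return score

me = {"buying": ["Calc"], "selling": ["Bio"], "study_subjects": ["Calc"]}
other = {"buying": ["Bio"], "selling": ["Calc"], "study_subjects": ["Calc", "Chem"]}
match_score(me, other)  # -> 3
```

Sorting all students in the database by this score (descending) would produce the ranked match list.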

Shipping Costs (Google Distance Matrix API): One of the user inputs is the city of residence. Using the Distance Matrix API, I found the distance in miles between the current user's city of residence and that of every other student in the database. From this, shipping costs are calculated from the UPS shipping-cost estimate for a textbook (which varies with distance); the equation was created by fitting a linear regression to data points taken from the UPS website.

TextBook Exchange Significance

Provides user-to-user purchase and catalog views, making it easy to see exactly who you should buy from, sell to, or study with.

Challenges I ran into

YouTube Data API Quota + JSON: I had several hiccups when working with the YouTube Data API because I was extremely unfamiliar with processing JSON responses. In addition, I had to watch my daily quota maximum (the limit was 10,000 quota units per day), so I couldn't run as many tests as I would have liked.

Google Natural Language: Another challenge I ran into was unpacking the sentiment analysis data. It took several hours, but I finally got the comment-thread analysis working.

Accomplishments that I'm proud of

I’m really proud of integrating all of the Google Cloud APIs. This is my first time actually implementing these APIs in a hackathon project, and I’m proud of myself for getting over all the obstacles.

I’m also proud of following the Retro theme for this hackathon! I’m slowly learning about web development, and it’s been fun integrating playful styles through CSS.

What I learned

  1. Improved my HTML and CSS, and began working with JSON responses.
  2. Worked with the Google Cloud APIs!
  3. Got better with the Django web framework.

What's next for TextBook Perfect

  1. Extending the e-commerce aspect of Textbook Exchange further.
  2. Allowing users to upload images of the textbooks they are selling via storage buckets.
  3. Integrating a video-calling function so study partners can interact.

Team Leader Profile:

https://swamphacks-vii.slack.com/team/U01KT9V3X61
