Inspiration

Throughout our educational journeys we have met many teachers, and one problem has remained prevalent: as high school students, we have seen far too many teachers struggle to grade exams quickly. That is why the Scanify team is here to introduce a new way of grading through our mobile scantron grader and test maker. No longer will teachers have to spend days feeding scantrons through an aging machine; this technology can change how testing happens across schools. Schools in lower-socioeconomic areas will no longer need to rely on, or purchase, a single expensive scantron grading machine, because Scanify lets every teacher grade rapidly from the comfort of their own home, even immediately after students finish a test. We want to push forward a simpler grading system and create a better experience for the next generation of students and teachers.

What it does

First, our application allows teachers to create new tests, input questions, and use artificial intelligence to generate similar questions based on the ones the teacher writes, making it easier to build a varied question bank.

After adding as many questions as they like through our polished user interface, teachers proceed to the next step, where they select how many versions of the test to generate in order to protect academic integrity. In each version, the order of the questions is randomized to deter cheating.
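
As a rough sketch of how per-version randomization can work, each version number can seed a deterministic shuffle, so the same version always prints with the same question order. The helper names and the mulberry32 PRNG here are our illustrative choices, not necessarily Scanify's internals:

```javascript
// Small seeded PRNG (mulberry32) so each test version shuffles reproducibly.
function mulberry32(seed) {
  return function () {
    seed |= 0; seed = (seed + 0x6D2B79F5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Fisher–Yates shuffle driven by the version-seeded PRNG.
function shuffledQuestions(questions, versionNumber) {
  const rand = mulberry32(versionNumber);
  const copy = questions.slice();
  for (let i = copy.length - 1; i > 0; i--) {
    const j = Math.floor(rand() * (i + 1));
    [copy[i], copy[j]] = [copy[j], copy[i]];
  }
  return copy;
}

// Each version gets its own stable ordering of the same question set.
const questions = ["Q1", "Q2", "Q3", "Q4", "Q5"];
const versionA = shuffledQuestions(questions, 1);
const versionB = shuffledQuestions(questions, 2);
```

Because the ordering is a pure function of the version number, reprinting a version (or grading it later) never needs the shuffle to be stored anywhere.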

Then, they can directly print the required number of copies of each test version, along with its corresponding scantron, identified by a printed version number and an associated QR code.

Once the test is complete, teachers choose the ‘Scan Test’ option for the matching version and take a picture of a completed scantron. They receive an instant score, along with a clear breakdown of correctly and incorrectly answered questions, including the answer the student selected and the correct answer.
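
The scoring step itself reduces to comparing the recognized answers against the version's answer key. A minimal sketch, assuming both are arrays of letters indexed by question number (our illustrative data shape, not necessarily Scanify's):

```javascript
// Compare a student's recognized answers against the answer key and
// produce a score plus a per-question breakdown.
function gradeScantron(answerKey, studentAnswers) {
  const breakdown = answerKey.map((correct, i) => ({
    question: i + 1,
    selected: studentAnswers[i] ?? null, // null if the bubble wasn't read
    correct,
    isCorrect: studentAnswers[i] === correct,
  }));
  const score = breakdown.filter((q) => q.isCorrect).length;
  return { score, total: answerKey.length, breakdown };
}

const result = gradeScantron(["A", "C", "B", "D"], ["A", "B", "B", "D"]);
// Question 2 is flagged incorrect, showing the selected "B" and correct "C".
```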

This system sidesteps the need to buy an expensive machine that underfunded schools may not have access to: students and teachers in any district should get the best grading and testing experience, regardless of the school’s budget.

How we built it

  • We used Java Enterprise Edition (Java EE) for the web application logic and for connecting to our APIs and databases, choosing it for its reliability and scalability.
  • The Tesseract.JS library was used to provide the optical character recognition model for scantron recognition. Tesseract.JS was chosen for its accuracy and portability, especially since it runs a small neural network for text recognition locally on the user’s computer. This is advantageous since it allows us to reduce server costs to a minimum, permitting our product to run at little to no cost.
  • We used the OpenAI GPT-3.5 API to generate similar questions in tests. We carefully engineered the prompts and parameters to ensure response accuracy.
  • Bootstrap was used to make our user interface responsive on mobile devices. This is especially important so teachers can grade their tests quickly and conveniently with nothing more than a cell phone.
  • SQLite was used to store the tests efficiently in a relational database.
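
To illustrate the recognition pipeline: the raw text Tesseract.JS returns still has to be mapped back to per-question answers before grading. A minimal sketch of that post-processing step, assuming a "questionNumber answerLetter" line format; that layout is our illustrative convention, not necessarily what Scanify's scantrons print:

```javascript
// Parse OCR output of the form "1 A\n2 C\n..." into an answers array.
// The "<number> <letter>" line format is an assumed layout for this sketch.
function parseRecognizedText(ocrText) {
  const answers = [];
  for (const line of ocrText.split("\n")) {
    const match = line.trim().match(/^(\d+)\s+([A-E])$/);
    if (match) answers[Number(match[1]) - 1] = match[2];
  }
  return answers;
}

const answers = parseRecognizedText("1 A\n2 C\n3 B\n");
```

Restricting the pattern to the letters A–E mirrors the character-whitelisting idea discussed below: the fewer symbols the pipeline accepts, the fewer misreads survive to the grading step.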

Challenges we ran into

  • The GPT-3.5 model sometimes produced inaccurate output that followed the format of the prompt but not its specifics. We initially tried switching to GPT-4, but it produced similar results. In the end, we found the issue was the temperature setting in the API, which controls how deterministic the model’s answers are. After experimenting with different temperature values, the model produced the expected results. We were then able to switch back to GPT-3.5 with almost identical quality, which massively reduced our operating costs.
  • The Tesseract.JS model initially misrecognized many parts of the input images. After an hour of testing a multitude of different parameters, we fixed this by researching the model’s options and setting the tessedit_char_whitelist parameter to restrict recognition to the characters that actually appear on our scantrons.
  • We ran into some issues with the `<video>` element when testing the scanning feature on iOS devices. We fixed this by using an `<input type="file">` image upload instead, which also allowed the user to optionally select an image from the camera roll.
  • We had driver issues from a version mismatch between SQLite and JDBC, leading to pesky linking exceptions. We found that the issue also involved permissions on temporary directories.
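
To make the temperature fix concrete, temperature is simply a numeric field on the API request. A sketch of the kind of Chat Completions request body we mean, with the prompt text being illustrative and the exact temperature value found by experimentation:

```json
{
  "model": "gpt-3.5-turbo",
  "messages": [
    { "role": "system", "content": "You generate multiple-choice questions similar to the examples given." },
    { "role": "user", "content": "Write a question similar to: ..." }
  ],
  "temperature": 0.2
}
```

Lower values make sampling more deterministic, which is why tuning this one parameter mattered more than switching models.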

Accomplishments that we're proud of

  • We are proud that we were able to write the answer sheet scanning feature in time, especially after serious difficulties with whitelisting characters and noise removal.
  • We are proud of the interest we received when pitching our product to our cohort and mentors through an interactive test: they were given different versions of a fun CS exam we generated, and we then used our system to scan and grade the exams while they watched it work. This way of pitching, and the enthusiasm it drew, made us enjoy our finished project even more.

What we learned

  • We experimented with adjusting machine learning model parameters to achieve desired results, and gained experience with prompt engineering for large language models.
  • We discovered the difficulties of optical character recognition using machine learning technology, and its importance for the future.
  • We learned how to connect JavaScript front ends with SQLite databases to store persistent information.

What's next for Scanify

We aim to incorporate AI into Scanify to recognize and grade handwritten exam responses. To accomplish this, we plan to pair a machine learning model with OpenCV and train it on samples of student handwriting. This feature aims to streamline evaluation, giving educators a more efficient and accurate tool for grading free-response questions and significantly reducing grading time. With Scanify, we are taking a step toward revolutionizing exam assessment through the power of artificial intelligence.
