Inspiration

The idea for this program came from our realization that students in underserved communities could benefit from AI feedback systems that reinforce their learning. To have a standardized measure of potential weaknesses and knowledge gaps, we chose the ACT.

What it does

StackUp begins by prompting users to input an estimate of their region’s ACT test scores, typically accessible on the ACT test-feedback website, along with their own ACT scores. Based on this input, StackUp identifies the subject areas that offer the greatest potential for score improvement and generates a tailored set of four questions. After completing the initial quiz, users can regenerate additional problems that build on the concepts covered, adapting dynamically to their performance. If a user struggles with a particular subject, StackUp adjusts to provide more questions at an appropriate difficulty level in that area. When users finish their session, they can explore the Analytics Tab to track their progress and see how their performance compares to the regional benchmarks they entered earlier.
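The core of that first step is a simple comparison: measure the gap between the user's section scores and the regional averages they entered, then target the section with the largest shortfall. A minimal sketch of that logic (the function and field names here are illustrative, not StackUp's actual code):

```python
# Hypothetical sketch: pick the ACT section where the user trails
# the regional benchmark the most. Names are illustrative.

ACT_SECTIONS = ("english", "math", "reading", "science")

def weakest_subject(user_scores: dict, regional_scores: dict) -> str:
    """Return the ACT section with the largest deficit vs. the region."""
    gaps = {
        section: regional_scores[section] - user_scores[section]
        for section in ACT_SECTIONS
    }
    # max() over the gap dict picks the biggest deficit;
    # ties resolve to the first section encountered.
    return max(gaps, key=gaps.get)

# Example: this user lags furthest behind the region in math.
print(weakest_subject(
    {"english": 24, "math": 19, "reading": 26, "science": 22},
    {"english": 23, "math": 24, "reading": 25, "science": 23},
))  # → math
```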

How we built it

StackUp is implemented entirely in Python: Flask manages the API layer, Pinata provides decentralized data storage, and Streamlit powers rapid front-end development and prototyping. We chose Flask for its lightweight, flexible design, which makes it easy to scale and to integrate more robust frameworks as the product evolves. This modular stack keeps each component focused on its role and prioritizes simplicity without sacrificing room to grow.

Challenges we ran into

We initially aimed to gather data from underrepresented communities, but we quickly encountered a significant challenge: these communities lack sufficient data to effectively train or contextualize a large language model (LLM). This highlights a broader issue in AI ethics, one that manifests globally and is only partially remedied by standardized metrics such as the ACT. It underscores the systemic disparities in data representation and the ethical implications of building AI systems that risk perpetuating these inequities. Another challenge was deciding between Pinata’s Files API (keeping our data private) and its IPFS (Web3) API. Since our data consists of LLM-generated responses rather than sensitive user records, we decided a decentralized network was the more appropriate place to store it.
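For a sense of what the decentralized path looks like in practice, here is a hedged sketch of pinning an LLM-generated quiz record through Pinata's `pinJSONToIPFS` HTTP endpoint. The endpoint URL and Bearer-JWT auth follow Pinata's public API; the record fields are illustrative, and you should confirm details against current Pinata docs:

```python
# Sketch: pin a JSON record to IPFS via Pinata's pinJSONToIPFS
# endpoint. Record fields are illustrative assumptions.
import json
import urllib.request

PIN_URL = "https://api.pinata.cloud/pinning/pinJSONToIPFS"

def build_pin_request(record: dict, jwt: str) -> urllib.request.Request:
    """Construct (but do not send) the POST request to pin a record."""
    body = json.dumps({"pinataContent": record}).encode()
    return urllib.request.Request(
        PIN_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {jwt}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

def pin_record(record: dict, jwt: str) -> str:
    """Send the request and return the resulting IPFS content hash."""
    with urllib.request.urlopen(build_pin_request(record, jwt)) as resp:
        return json.load(resp)["IpfsHash"]
```

Anything pinned this way is content-addressed: the returned hash is all a client needs to retrieve the record from any IPFS gateway.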

Accomplishments that we're proud of

We’re incredibly proud to have spent the past 24 hours tackling a pressing issue in education. Collaborating as a cohesive team, we built, refined, and deployed StackUp, streamlining the development process and elevating our hackathon experience. We’re also proud of delivering an interactive UI and an overall smooth user experience.

What we learned

We learned a new file-storage API, Pinata, and got a general introduction to peer-to-peer file sharing in production code. We also learned how to use SambaNova for fast LLM responses using stored user context.
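A rough sketch of how stored user context can be folded into a chat-completion call. SambaNova Cloud exposes an OpenAI-compatible API; the base URL, model name, and message layout below are our assumptions and should be checked against current SambaNova documentation:

```python
# Hedged sketch: build a chat-completion request that passes the
# user's stored performance context to the model as a system turn.
# URL and model id are assumptions, not verified SambaNova values.
import json
import urllib.request

API_URL = "https://api.sambanova.ai/v1/chat/completions"

def build_quiz_prompt(context: dict, subject: str) -> dict:
    """Assemble the request body; user context goes in the system turn."""
    return {
        "model": "Meta-Llama-3.1-8B-Instruct",  # assumed model id
        "messages": [
            {"role": "system",
             "content": f"Student's past performance: {json.dumps(context)}"},
            {"role": "user",
             "content": f"Generate four ACT-style {subject} questions."},
        ],
    }

def generate_quiz(context: dict, subject: str, api_key: str) -> str:
    """POST the prompt and return the model's reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_quiz_prompt(context, subject)).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Keeping the context in the system message means each regenerated quiz sees the user's running history without any fine-tuning.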

What's next for StackUp

The next step for StackUp is to expand beyond the ACT, broadening its scope to assist with other standardized tests such as the SAT or state assessments. We aim to handle regional data independently, eliminating reliance on external sources to enhance reliability and user experience. To ensure scalability and performance, we plan to migrate the front-end to a more robust framework like React. Additionally, improving stability for low-network environments is a top priority, as making StackUp accessible to underrepresented communities is central to our mission.
