Inspiration

In the past decade, software development has shifted heavily toward open source, with React at the forefront of web development. Today, there are over 1.6 million React-specific libraries developed and maintained by the community. Picking between countless UI libraries and styling options is a challenge, so we built Stacklet. Stacklet harnesses machine learning and natural language processing to provide smart library recommendations based on a user's needs and their current development environment.

What it does

Stacklet recommends React libraries based on a user's currently installed packages and their stated intent, acting as an assistant for deciding which libraries to install. This helps users maintain compatibility between libraries and minimize additional dependencies.

How we built it

We structured Stacklet in four parts: the frontend, the backend, and two model layers. Model layer one processes the user's query to match it with relevant React libraries, using Cohere's embedding model to measure the similarity between the query and the README sections of React packages. Model layer two analyzes and ranks the candidate packages against the user's current dependencies: we built a custom ML model that computes the covariance between over 7,000 popular React packages we indexed from open-source repositories on GitHub, then applies it to the user's existing dependencies to determine the optimal library that fulfills the query. The frontend handles user input, including parsing the package.json file and the search query, and displays an intuitive 3-dimensional graph of the project's current dependency state to emphasize compatibility between libraries. The backend supports a multi-language runtime so it can both parse NPM-specific package data and run our scikit-learn-based models in Python.
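Model layer one can be sketched as a nearest-neighbor search over embedding vectors. The sketch below assumes the query and README embeddings have already been produced (in our case by Cohere's embed model) and only shows the cosine-similarity ranking step; the function name and package names are illustrative, not the actual implementation.

```python
import numpy as np

def rank_by_similarity(query_vec, readme_vecs, names):
    """Rank packages by cosine similarity between a query embedding
    and each package's README embedding (all vectors pre-computed)."""
    q = query_vec / np.linalg.norm(query_vec)
    M = readme_vecs / np.linalg.norm(readme_vecs, axis=1, keepdims=True)
    sims = M @ q                      # cosine similarity per package
    order = np.argsort(sims)[::-1]    # highest similarity first
    return [(names[i], float(sims[i])) for i in order]

# toy example with 2-dimensional "embeddings"
query = np.array([1.0, 0.1])
readmes = np.array([[0.9, 0.2],    # close to the query
                    [0.0, 1.0]])   # far from the query
ranked = rank_by_similarity(query, readmes, ["ui-kit", "state-manager"])
```

In production the embeddings would be cached ahead of time, so a query only costs one embed call plus a matrix-vector product.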

Challenges we ran into

This project had a huge number of moving parts, from four different versions of our web scraper, to the multiple layers of our predictive model, to the multi-language runtime the backend required. Carefully developing the scraping tool was especially difficult and required substantial planning and revision, since generating a high-quality dataset took more than 12 hours per run.

Accomplishments that we're proud of

We put our heart and soul into our covariance ML model. Investing the time to generate a custom dataset paid off: although the implementation demanded a high level of technical understanding and precision, we ended up with a model that consistently provides high-quality recommendations.
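The core idea of the covariance model can be illustrated on a toy co-occurrence matrix: packages that tend to appear together in the same repositories get a positive covariance, and a candidate is scored by its total covariance with the user's existing dependencies. This is a minimal sketch under that assumption, not the production model; the data and function names are made up for illustration.

```python
import numpy as np

def recommend(usage, deps_idx, candidate_idx):
    """Pick the candidate package with the highest summed covariance
    against the user's existing dependencies.

    usage: (n_repos, n_packages) 0/1 matrix of which repo uses which package.
    """
    C = np.cov(usage, rowvar=False)          # package-by-package covariance
    scores = {c: C[c, deps_idx].sum() for c in candidate_idx}
    return max(scores, key=scores.get)

# toy dataset: 4 repos, 3 packages; packages 0 and 1 usually co-occur
usage = np.array([[1, 1, 0],
                  [1, 1, 0],
                  [1, 0, 1],
                  [0, 0, 1]], dtype=float)
best = recommend(usage, deps_idx=[0], candidate_idx=[1, 2])  # -> 1
```

Because package 1 co-occurs with the user's existing dependency (package 0) far more often than package 2 does, it wins the ranking.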

What we learned

We learned a huge amount, including how to build multi-layer ML models, generate datasets via web scraping with tools like BeautifulSoup4, work with Cohere's NLP embedding models, and create a backend that supports a multi-language runtime.
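The README-scraping step can be sketched with BeautifulSoup4 as below. GitHub renders READMEs inside an `<article class="markdown-body">` element; the sample HTML and function name here are illustrative, and a real scraper would also handle fetching, rate limits, and missing READMEs.

```python
from bs4 import BeautifulSoup

def extract_readme_text(page_html):
    """Pull the plain text of a README out of a rendered GitHub page."""
    soup = BeautifulSoup(page_html, "html.parser")
    article = soup.find("article", class_="markdown-body")
    return article.get_text(" ", strip=True) if article else ""

# stand-in for a fetched GitHub page (hypothetical package)
sample = """
<article class="markdown-body">
  <h1>react-toolkit</h1>
  <p>A hypothetical UI library for illustration.</p>
</article>
"""
text = extract_readme_text(sample)
```

The extracted text is what model layer one embeds and compares against user queries.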

What's next for Stacklet

We plan to continue working on Stacklet, improving the model's accuracy and expanding the project's scope beyond React libraries. Companies spend weeks choosing between similar open-source libraries, and a centralized ML recommendation system like Stacklet would provide immense value to the developer community.
