Inspiration

pov: you just watched a 3blue1brown video...

you decide to code up a neural network, and suddenly you have to learn how to import data, read PyTorch documentation, install packages... and wait, what even is a Jupyter notebook? it's like a T-rex screaming "RAWRRR" at you when you just wanted to play around with a neural network (ok, and what if u wanted to make ChatGPT?)

What it does

we turn that screaming "RAWRR 🦖" into just "rawr 🐱". suddenly, all you have to do is drag a cute, colored block to add a layer to your neural network!

  • Layer Mode (for neural networks): drag and drop layers (linear, convolutional, RNN, LSTM, etc.) onto the canvas and tweak each one's specific parameters!
  • Then train the model through the website itself, without installing anything! You can tweak all of the training parameters: epochs, batch size, loss, optimizer, etc.
  • Need help with the dataset of your choice? Click the shiny AI-lookin' button! It generates a suggested layer architecture (powered by none other than OpenAI) and puts it right onto the canvas!
  • Wondering what the code would look like? Download a Python notebook of everything you built so easily with rawr!
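The downloaded notebook might contain something along these lines (a hedged sketch only; the actual exported code, layer sizes, and training parameters will depend on what you drag onto the canvas):

```python
import torch
import torch.nn as nn

# illustrative model: two linear layers dragged onto the canvas
model = nn.Sequential(
    nn.Linear(784, 128),   # linear layer (in=784, out=128)
    nn.ReLU(),
    nn.Linear(128, 10),    # output layer for 10 classes
)

# training parameters set on the website: loss, optimizer, epochs, batch size
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# one dummy training step, just to show the shape of the loop
x = torch.randn(32, 784)          # batch of 32 flattened 28x28 images
y = torch.randint(0, 10, (32,))   # random labels
loss = criterion(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```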

  • Transformer Mode (for LLMs): with a few clicks you can build out your own transformer!

  • Tweak all of the parameters to your heart's content: embedding dimensions, attention heads, hidden dimension, etc.

  • After training your custom transformer with a single click, you can prompt it right through the website!

  • Have fun prompting your transformer! you can even make your own generative model with these transformer blocks.
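Under the hood, those three knobs map onto a standard transformer block. Here is a minimal sketch using PyTorch's built-in encoder layer (the parameter values are made up, and a real generative model would also add a causal attention mask and positional information):

```python
import torch
import torch.nn as nn

# hypothetical values for the knobs exposed in Transformer Mode
vocab_size = 256   # byte-level vocabulary
embed_dim  = 64    # "Embedding dimensions"
num_heads  = 4     # "Attention heads"
hidden_dim = 256   # "Hidden dimension" (feed-forward size)

embedding = nn.Embedding(vocab_size, embed_dim)
block = nn.TransformerEncoderLayer(
    d_model=embed_dim,
    nhead=num_heads,
    dim_feedforward=hidden_dim,
    batch_first=True,
)
lm_head = nn.Linear(embed_dim, vocab_size)  # next-token logits

tokens = torch.randint(0, vocab_size, (1, 16))  # a 16-token prompt
logits = lm_head(block(embedding(tokens)))      # shape: (1, 16, 256)
```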

rawr is simple to use because it targets beginners, but it is complete: it does not shy away from representing how real neural networks and transformers work.

How we built it

  • The Dynamic Model API: PyTorch, Flask
  • The Web App: Next.js, React, Tailwind CSS
  • Training runs on AWS SageMaker 🙏, triggered by API Gateway and Lambda functions
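The core of a "dynamic model API" like this is turning the JSON layer spec from the canvas into an actual PyTorch model. A minimal sketch of that idea (the field names and supported layers here are illustrative, not the real schema):

```python
import torch
import torch.nn as nn

# hypothetical builder: turn a canvas layer spec into an nn.Sequential
def build_model(layers):
    modules = []
    for spec in layers:
        if spec["type"] == "linear":
            modules.append(nn.Linear(spec["in_features"], spec["out_features"]))
        elif spec["type"] == "relu":
            modules.append(nn.ReLU())
        # ...conv / RNN / LSTM branches would go here
    return nn.Sequential(*modules)

# the kind of JSON the web app might POST through API Gateway
spec = [
    {"type": "linear", "in_features": 784, "out_features": 128},
    {"type": "relu"},
    {"type": "linear", "in_features": 128, "out_features": 10},
]
model = build_model(spec)
out = model(torch.randn(2, 784))  # sanity-check forward pass
```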

Challenges we ran into

  1. At the last minute, while testing the final endpoint (transformer inference), we hit a chain of errors caused by the model architecture not matching between training and inference. Status: pending.
  2. We also struggled a lot with getting AWS SageMaker to train our custom models in the cloud.
  3. In the beginning, the drag-and-drop blocks took us a long time to make smooth and to have each layer show its own editable parameters.
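One common fix for challenge 1, sketched below, is to save the hyperparameters alongside the weights so inference rebuilds exactly the architecture that was trained (an assumption about the fix, not what the team actually shipped; names and values are illustrative):

```python
import io
import torch
import torch.nn as nn

# config saved at training time; inference must use the same values
config = {"embed_dim": 64, "num_heads": 4, "hidden_dim": 256}

def make_block(cfg):
    return nn.TransformerEncoderLayer(
        d_model=cfg["embed_dim"], nhead=cfg["num_heads"],
        dim_feedforward=cfg["hidden_dim"], batch_first=True)

model = make_block(config)

# bundle config + weights into one checkpoint (in-memory buffer here)
buf = io.BytesIO()
torch.save({"config": config, "state_dict": model.state_dict()}, buf)

# at inference time: rebuild from the saved config, then load weights --
# the state_dict keys are guaranteed to line up
buf.seek(0)
ckpt = torch.load(buf)
restored = make_block(ckpt["config"])
restored.load_state_dict(ckpt["state_dict"])
```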

Accomplishments that we're proud of

  1. We're proud of how we gave it our all 💪
  2. we reached the deployment stage 🕺

What we learned

  1. We need to learn more about AWS technologies: we wasted a lot of time debating which AWS service best fit our purpose.

What's next for rawr

  1. Run models in the browser itself using tf.js

Built With

python, pytorch, flask, next.js, react, tailwindcss, amazon-web-services (SageMaker, API Gateway, Lambda)