Inspiration

Scribblenauts and DoodleBob? We asked ourselves what inspired or excited us. Thinking back to what felt new and magical as kids, we remembered "Scribblenauts," a 2D puzzle game where you could bring (most) objects to life simply by typing their names. That inspiration veered into VR and morphed into something best described as DoodleBob-inspired; we mean the character from the hit show SpongeBob SquarePants who comes to life after being drawn with a magic pencil. "Could we do something like that, where drawings come to life? Could we do it in VR?" we thought. Ultimately, we wanted to create something with a small sense of magic, similar to what we felt as kids playing Scribblenauts. We also thought it would be very cool.

What it does

Doodle Life is a program where you can use your imagination to draw creatures in VR and have them come to life with movement, a unique personality, and a name. Your creature is then treated as the subject of a nature documentary: a narrator begins to describe your creature, its habits, and perhaps a fun fact about it!

How we built it

The program is an interface between a Unity VR application and a Python program. The VR application simulates the environment, allowing the player to move around and create their creature. It also captures the creature's data, which is then processed by the Python program. The Python program constantly checks for new creature data; when it finds some, it calls Gemini to perform two tasks. First, it asks Gemini to perform a zero-shot classification on the creature image data, labeling its body parts by type (allowing for individualized animation). Second, it passes Gemini the creature image data alongside a transcript of Planet Earth (a nature documentary) and asks it to create a short narration in a similar style. The classifications are passed back to the VR application via a text file, and the narration is fed into a high-quality TTS that produces an audio file, which is also handed to the application. The animation and audio playback can then be handled, and voila.
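The file-based handshake between Unity and Python could be sketched as a simple polling loop. Everything below is illustrative: the exchange directory, the file names, and the two stub functions stand in for the project's actual Gemini classification and narration calls.

```python
import os

# Hypothetical exchange directory that both the Unity app and this
# script agree on; the real paths and names may differ.
EXCHANGE_DIR = "exchange"
CREATURE_IMAGE = os.path.join(EXCHANGE_DIR, "creature.png")
PARTS_FILE = os.path.join(EXCHANGE_DIR, "parts.txt")
NARRATION_FILE = os.path.join(EXCHANGE_DIR, "narration.txt")

def classify_parts(image_bytes):
    # Stub for the real Gemini zero-shot classification call,
    # which labels the drawing's body parts by type.
    return ["head", "body", "leg", "leg"]

def write_narration(image_bytes):
    # Stub for the Gemini call that, given the image plus a
    # Planet Earth transcript, writes a documentary-style blurb.
    return "Here we observe a rare doodle in its natural habitat."

def poll_once():
    """One iteration of the watcher: if Unity has dropped a new
    creature image, process it and write results back for Unity."""
    if not os.path.exists(CREATURE_IMAGE):
        return False  # nothing new this tick
    with open(CREATURE_IMAGE, "rb") as f:
        image_bytes = f.read()
    with open(PARTS_FILE, "w") as f:
        f.write("\n".join(classify_parts(image_bytes)))
    with open(NARRATION_FILE, "w") as f:
        f.write(write_narration(image_bytes))
    os.remove(CREATURE_IMAGE)  # mark this creature as processed
    return True

# Demo: pretend Unity just saved a creature drawing, then poll.
os.makedirs(EXCHANGE_DIR, exist_ok=True)
with open(CREATURE_IMAGE, "wb") as f:
    f.write(b"fake-png-bytes")
handled = poll_once()
```

In the real project the loop would run continuously (e.g. with a short sleep between polls), and the narration text would go through the TTS step to produce the audio file Unity plays back.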

Challenges we ran into

A surprising challenge in the initial stages was migrating code from a Colab notebook to a standard Python file: many setup tutorials, including the one for Gemini, assumed Google Colab. As for Gemini, we had never used an API before, and the business of secret keys proved a little tricky. In addition to routine dependency errors and missing packages, creating a real-time interface between the Python and Unity code definitely had its kinks. On the VR side, very little worked on the first try, from faulty grab boxes to the simple fact that animations are hard to program. The VR process was slow and iterative but extremely rewarding.
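For anyone hitting the same secret-key snag when leaving Colab: the usual pattern is to read the key from an environment variable instead of Colab's secrets helper. A minimal sketch (the variable name `GEMINI_API_KEY` and the helper function are just our choices):

```python
import os

# In Colab you'd typically use the secrets panel:
#   from google.colab import userdata; userdata.get("GEMINI_API_KEY")
# In a standard Python file, pull the key from the environment instead,
# so it never gets committed alongside the code.
def load_api_key(var_name="GEMINI_API_KEY"):
    key = os.environ.get(var_name)
    if key is None:
        raise RuntimeError(
            f"Set {var_name} before running, e.g. export {var_name}=..."
        )
    return key

# Demo with a dummy value so this sketch is self-contained.
os.environ["GEMINI_API_KEY"] = "dummy-key-for-demo"
key = load_api_key()
```

The loaded key would then be passed to the Gemini client at startup.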

Accomplishments that we're proud of

We set a goal, worked our hardest to accomplish it, and created something we thought was genuinely cool; we're not more proud of anything else. We worked with tools completely new to us and, as beginners to hackathons, came to understand the concerning amounts of caffeine distributed. Speaking for just one of us: this is the first personal project I've ever undertaken with full effort toward a goal. After years of telling myself how cool it would be to try projects like this, I finally attempted one. Shoutout to LA Hacks.

What we learned

When doing a project in such a short, intense period of time, motivation becomes important; contrast this with goals that can only be realized long term, where motivation may wane. In our case, we set out wanting to make something fun and cool, something funny or awesome or surprising. We faced many roadblocks in this short time, but our enjoyment at seeing the small successes, and the knowledge that at the end we would have something we found really cool, kept us going. This was a major takeaway for us.

What's next for Doodle Life

We feel that Doodle Life has a lot of potential, in both quality and use case. In terms of quality, we look at animation, environments, and narration. For animation, there is a whole breadth of movements that could be designed and implemented to make the creatures more lifelike; these could be linked to their personalities or their overall look, and could then evolve into a larger class of behaviors. Environments could be more detailed and catered to the creature, with water biomes for fish-like creatures or clouds for the high fliers. Narration could be updated, perhaps giving the user descriptions in real time. The two use cases we've identified are entertainment/gaming and education. We imagine a world that is saved between uses, where you can come back to your creatures and see what they've been up to. Perhaps they've grown, perhaps they've left for a while, or maybe they've grown attached to a fellow creature: a form of entertainment similar to what Tamagotchi toys used to provide. We also believe the program could be adapted for education. The medium itself is inherently engaging, and we could ask Gemini to give the drawn animal a taxonomic classification, then go on to describe its likely environment. Something along the lines of: "Based on the large ears of this creature, it most likely lives in a warm environment, with the large surface area of its ears helping it keep cool!"

Updates

What would I do to life, to doodle life? Well, it's been a long weekend and an absolute journey as a creator of Doodle Life. I learned how to develop in VR and the challenges of demoing hardware you need to wear on your head. I'm looking forward to what we can do to life in Doodle Life 2(D) for mobile devices!
