GITHUB LINK:
https://github.com/evan-gattis/mod
Inspiration
BlockVision Team: We are deeply interested in vision-guided intelligence, and we wanted to spark that same interest in others through the widely loved game of Minecraft. Minecraft is a sandbox that invites all kinds of creativity, and we wanted to enhance that.
What it does
The BlockVision mod uses a depth-sensing camera to virtualize the real world inside Minecraft. With a single button press, you can recreate any 3D asset (.obj file) in the game, transforming real-world environments into in-game representations. We also built a human-virtualization mode -- a feature that enables real-time human interaction within the game. It overlays the user's movements onto a block structure in the game for a fun and immersive experience.
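The writeup doesn't include the conversion code itself, but the core idea of turning an .obj file into blocks can be sketched as quantizing mesh vertices onto a block grid. This is a simplified, hypothetical sketch (function names and the grid size are our own; the real mod would also need to fill faces and choose block types):

```python
import math

def parse_obj_vertices(lines):
    """Yield (x, y, z) vertex positions from .obj text lines."""
    for line in lines:
        parts = line.split()
        if parts and parts[0] == "v":
            yield tuple(float(p) for p in parts[1:4])

def voxelize(vertices, block_size=0.1):
    """Snap each vertex to the block cell of edge `block_size` containing it."""
    return {tuple(math.floor(c / block_size) for c in v) for v in vertices}

obj_text = """
v 0.00 0.00 0.00
v 0.12 0.05 0.00
v 0.09 0.05 0.02
""".strip().splitlines()

blocks = voxelize(parse_obj_vertices(obj_text))
# Two nearby vertices collapse into one block; the third lands in its neighbor.
```

Each resulting integer triple then maps to one block placement in the Minecraft world.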
How we built it
The BlockVision team wanted to incorporate technologies we had never used before, including the Minecraft modding library NeoForge and the ZED depth-tracking libraries. The entire tech stack is listed here, from the bottom up:
- Depth camera: ZED Mini
- Body-Tracking ML module (ZED library)
- Spatial Mapping for 3D meshes (ZED library)
- Python (data processing)
- NeoForge mod library (in-game virtualization and user interface)
We also established a separation of roles to increase efficiency. Here are our team members and their respective roles:
- Artemis: Front-end in-game GUI
- Ben: 3D coordinate transformation within Minecraft
- Zak: ZED Mini data developer, data transferring
- Evan: Data developer, image color transformations
Challenges we ran into
Our team faced many new challenges. Most of our group members had never participated in a hackathon before, and much of our tech stack was new to us too. One challenge was learning the complex Minecraft client/server architecture in a short time frame, often with limited documentation. We pushed through and managed to build a custom user interface within Minecraft as well.
Another challenge was transferring 3D model data into the Minecraft environment at a high enough rate for live human tracking. We solved this with a TCP stream that carries the data from our Python process into the Java environment.
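The Python-to-Java link can be sketched with standard-library sockets: newline-delimited JSON frames of 3D points sent over TCP. The framing and names here are illustrative, not the mod's actual protocol; in our setup the Java side would read the same stream line by line (e.g. with a `BufferedReader`).

```python
import json
import socket
import threading

def serve_one_frame(host="127.0.0.1", port=0):
    """Accept one connection, read one JSON line, and store the decoded frame.

    Stands in for the Java receiver; port=0 lets the OS pick a free port.
    """
    srv = socket.create_server((host, port))
    result = {}

    def _accept():
        conn, _ = srv.accept()
        with conn, conn.makefile("r") as f:
            result["frame"] = json.loads(f.readline())

    t = threading.Thread(target=_accept)
    t.start()
    return srv, srv.getsockname()[1], t, result

def send_frame(port, points, host="127.0.0.1"):
    """Send one frame of (x, y, z) points as a single JSON line."""
    with socket.create_connection((host, port)) as s:
        s.sendall((json.dumps({"points": points}) + "\n").encode())

# Demo: loop one frame of two tracked points through localhost.
srv, port, t, result = serve_one_frame()
send_frame(port, [[0.0, 1.7, 0.2], [0.1, 1.5, 0.2]])
t.join()
srv.close()
received = result["frame"]["points"]
```

Newline-delimited JSON keeps the stream easy to parse on both sides, at the cost of some encoding overhead per frame.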
Accomplishments that we're proud of
The BlockVision team accomplished our goal of real-world virtualization within Minecraft, along with in-game human tracking. Users can import or create any .obj file and quickly get an exact block replica of it in Minecraft. When the depth camera is stationary, users can virtualize themselves in the game and watch as the world follows their movements.
What we learned
The team learned a lot about the Minecraft modding environment, and we also learned how to transfer data between processes quickly and effectively. The ZED Mini depth camera was also a new tool for all of us; we were learning throughout the entire hackathon.
What's next for BlockVision
Along the way, our BlockVision team worked closely to create a modular and scalable system for future developments. As such, our stretch goals are to:
- Flesh out the live human-virtualization to include a more robust in-game human model
- Dynamically scale the builds with in-game user input, such that the user can choose the overall size of the build in game
- Expand the color palette of block options for object builds
- Double-encode user inputs for a more accessible UI/UX
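The color-palette stretch goal above comes down to nearest-color matching: picking, for each sampled RGB value, the block whose color is closest. A minimal sketch using squared Euclidean distance in RGB space (the palette entries and names here are made up for illustration, not actual Minecraft block colors):

```python
# Hypothetical block palette: name -> representative (R, G, B) color.
PALETTE = {
    "white_wool": (234, 236, 237),
    "red_wool": (161, 39, 35),
    "green_wool": (85, 110, 27),
    "blue_wool": (53, 57, 157),
}

def nearest_block(rgb):
    """Return the palette block whose color is closest to `rgb` (RGB distance)."""
    return min(
        PALETTE,
        key=lambda name: sum((a - b) ** 2 for a, b in zip(rgb, PALETTE[name])),
    )
```

Growing the palette is then just adding entries to the table; a perceptual color space (e.g. CIELAB) would give better matches than raw RGB but the structure stays the same.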