Inspiration

The inspiration for this project came from looking for ways to apply machine learning to something we built. One of us had a bit of prior experience with machine learning and AR, and we eventually landed on the idea of combining the two. But how would they work together? Our answer was "beans": seeing the world through the eyes of other human beings, or human "beans." A "chem bean" would let you see through the eyes of a chemist and view the chemical composition of objects in real life, not just in a lab, while a "house bean" would let you see through the eyes of someone who knows a lot of handy tricks around the home. This brought the two technologies together with a purpose: expanding your worldview and learning from the different perspectives the app offers.

What it does

Our app combines different perspectives so that they're all accessible in one place. There are four types of "beans," and each view lets you look through the eyes of a different kind of "bean." For example, "chem bean" lets you look through the eyes of a chemist, while "art bean" lets you look through the eyes of an artist. We also have an "item historian bean," which surfaces the histories of certain objects, and a "house bean," which shows the common tips and tricks that someone experienced around the home would know.

All four views use machine learning models to identify objects in the AR view, then display information about the identified object for that field, pulled either from Firebase or from public APIs. Art bean lets you use the AR view to find color schemes in an environment, identify famous paintings, and recognize forms and styles of art. Chem bean shows the prominent compounds or elements in the objects in front of you, along with information about those compounds or elements fetched from APIs. Item historian surfaces the history of items the machine learning model identifies, and house bean shows common household tips and tricks keyed to the objects it recognizes in your AR view.

You switch between views by swiping. We also include a bookmarks page, where you can save an object's information to review later, and a recommendations page, where you can suggest information we should add to our databases.
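
For the curious, here's roughly what the identification step can look like in Swift: a minimal sketch, assuming a Core ML image classifier (called BeansClassifier here, a hypothetical name) and one Firestore collection per bean view. It leaves out the API lookups and is not our exact code.

```swift
import ARKit
import CoreML
import Vision
import FirebaseFirestore

// Sketch of the core pipeline: classify the current AR frame with a Core ML
// model, then fetch the matching entry for the active "bean" view from
// Firestore. `BeansClassifier` and the collection names are hypothetical.
final class BeanLookup {
    private let db = Firestore.firestore()

    func identify(frame: ARFrame, beanCollection: String,
                  completion: @escaping (String, [String: Any]?) -> Void) {
        guard let coreML = try? BeansClassifier(configuration: MLModelConfiguration()).model,
              let model = try? VNCoreMLModel(for: coreML) else { return }

        let request = VNCoreMLRequest(model: model) { [weak self] request, _ in
            // Take the highest-confidence label the classifier produced.
            guard let best = (request.results as? [VNClassificationObservation])?.first,
                  best.confidence > 0.6 else { return }

            // Look up this view's info (compounds, history, tips...) keyed
            // by the predicted label.
            self?.db.collection(beanCollection).document(best.identifier)
                .getDocument { snapshot, _ in
                    completion(best.identifier, snapshot?.data())
                }
        }

        // ARFrame.capturedImage is a CVPixelBuffer Vision can consume directly.
        let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                            orientation: .right)
        try? handler.perform([request])
    }
}
```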

How we built it

We built the app interface in Xcode, while most of the data came from elsewhere: public APIs and Firebase. The machine learning models were trained on Google Colab, and the AR functionality comes from Apple's ARKit.
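
To give a flavor of the ARKit piece, here's a minimal sketch (again not our exact code; `sceneView` and `showLabel` are hypothetical names) of how identified information can be pinned into the AR scene: raycast from the screen center to a surface and attach a 3D text node there.

```swift
import ARKit
import SceneKit
import UIKit

// Pin a floating text label into the AR scene at whatever surface the
// user is currently pointing the camera at.
func showLabel(_ text: String, in sceneView: ARSCNView) {
    let center = CGPoint(x: sceneView.bounds.midX, y: sceneView.bounds.midY)
    guard let query = sceneView.raycastQuery(from: center,
                                             allowing: .estimatedPlane,
                                             alignment: .any),
          let hit = sceneView.session.raycast(query).first else { return }

    let geometry = SCNText(string: text, extrusionDepth: 0.5)
    geometry.font = UIFont.systemFont(ofSize: 8)

    let node = SCNNode(geometry: geometry)
    node.scale = SCNVector3(0.002, 0.002, 0.002) // SCNText units are large; shrink to world scale
    node.position = SCNVector3(hit.worldTransform.columns.3.x,
                               hit.worldTransform.columns.3.y,
                               hit.worldTransform.columns.3.z)
    sceneView.scene.rootNode.addChildNode(node)
}
```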

Challenges we ran into

We ran into challenges integrating the AR view so that it could display information, working with differently formatted APIs, trying interactive AR and machine learning models for the first time, and building such an in-depth app for the first time (at our first hackathon, too!).
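
As one example of the API formatting problem, different sources would return the same kind of info under different JSON keys. A sketch of the normalization pattern that helps (with hypothetical keys and type names):

```swift
import Foundation

// Two hypothetical sources return compound info under different keys, so a
// custom Decodable initializer normalizes them into one model.
struct CompoundInfo: Decodable {
    let name: String
    let summary: String

    private enum Keys: String, CodingKey { case name, summary, title, description }

    init(from decoder: Decoder) throws {
        let c = try decoder.container(keyedBy: Keys.self)
        // Accept {"name": ..., "summary": ...} or {"title": ..., "description": ...}.
        name = try c.decodeIfPresent(String.self, forKey: .name)
            ?? c.decode(String.self, forKey: .title)
        summary = try c.decodeIfPresent(String.self, forKey: .summary)
            ?? c.decode(String.self, forKey: .description)
    }
}
```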

Accomplishments that we're proud of

We're proud of training our own machine learning model, integrating it with an interactive AR view, and pulling information from complicated APIs.

What we learned

We learned how to build machine learning models for the first time, create interactive AR experiences for the first time, and integrate the two for the first time, and Hellie learned how to use APIs and build an app for the first time.

What's next for beans

Beans is an opportunity to learn. We plan on expanding it by adding more features to each view so that they are more in-depth and comprehensive, as well as adding more views to experience many more of the different occupations in life.
