Inspiration
For this project, we wanted to use Palantir Foundry effectively to target a real-world issue. Our inspiration came from the Austin Climate Equity Plan and the Tree Equity Project. The Austin Climate Equity Plan outlines key objectives the city wants to reach across different sectors; more information can be found on the city's website.
What it does
Austin Analytics creates visualizations to help plan Austin equity projects. We combined data sets from various sources and loaded them into Palantir Foundry, which made the data easier to visualize and made correlations easier to spot. In our research, we noticed that public data was rarely presented as graphs with toggleable layers, and identifying correlations between factors was difficult. Austin Analytics combines these visualizations with machine learning to create a versatile, forward-thinking product.
How we built it
We built the map and data-analytics portion of the application in Palantir Foundry, using its workspaces, code repositories, and Ontology apps. To find correlations between crime rate and other variables, we adapted a San Francisco crime predictor to our data set, using Python, scikit-learn, pandas, and other Python libraries. We found most of our machine-learning resources on Kaggle.
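Adapting the San Francisco predictor mostly came down to reshaping the Austin export so its pandas/scikit-learn pipeline would accept it. Here is a minimal sketch of that preprocessing step, with hypothetical column names (`zip_code`, `category`, `severity`) standing in for the real dataset's fields:

```python
import pandas as pd

# Tiny stand-in for the Austin crime export; the real column
# names and values differ.
df = pd.DataFrame({
    "zip_code": [78701, 78702, 78701, 78745],
    "category": ["theft", "assault", "burglary", "theft"],
    "severity": [1, 3, 2, 1],
})

# Encode the categorical offense type as integer codes so that
# scikit-learn estimators can consume the frame directly.
df["category_code"] = df["category"].astype("category").cat.codes

# Feature matrix and target in the shape the predictor expects.
X = df[["zip_code", "category_code"]]
y = df["severity"]
```

The real transform ran over the full Austin data set inside a Foundry code repository, but the shape of the step is the same.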
Challenges we ran into
At first, we had difficulty picking out relevant data that fit our criteria and contained data points meaningful to our project. Then, when trying to display the data in Palantir, we ran into various bugs. Because we were new to the software, the learning curve was steep, and we had to learn its features quickly. When adapting the machine-learning model, the Austin crime data sets were large, so training took a long time. We also had trouble setting up the Python environment for the machine-learning algorithms.
Accomplishments that we're proud of
We are proud of our use of Palantir Foundry, especially because it was new to all of us. We built a dashboard where users can see the data at a glance. Additionally, we fit a linear regression model to our data set that predicts the severity of a crime from the area's zip code with 89% accuracy. We are also proud of learning more about machine learning with Python and extracting insights from our data.
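Because accuracy is the metric, severity behaves like a class label here, so a fit-and-score step of this kind can be sketched with a logistic-regression classifier (a deliberate substitution for illustration; the writeup names linear regression). Everything below is synthetic stand-in data, not the real Austin set or our real numbers:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in: zip code as the single feature, with a
# severity class loosely tied to it plus ~10% label noise.
zips = rng.choice([78701, 78702, 78723, 78745], size=200)
noise = (rng.random(200) < 0.1).astype(int)
severity = (zips > 78710).astype(int) ^ noise

# Center the feature so the solver converges comfortably.
X = (zips - 78700).reshape(-1, 1).astype(float)
y = severity

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)
model = LogisticRegression().fit(X_train, y_train)
acc = accuracy_score(y_test, model.predict(X_test))
```

With roughly 10% label noise in the toy data, the held-out accuracy lands near 0.9, which is the same kind of number we reported for the real model.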
What we learned
We learned the basics of Palantir Foundry and how to work with large data sets in machine learning. We learned to quickly pick up real-world software and apply it to our project. Because of the sheer amount of data, we also learned to reshape data points to fit our needs. For example, the data set we first imported for the maps contained only latitude and longitude values, not a geohash; we used Palantir code repositories and libraries to generate the geohash and even to fix broken postal codes.
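The geohash step can be reproduced outside Foundry as well. Below is a minimal pure-Python sketch of standard geohash encoding (the function name and default precision are our own; a Foundry transform would typically call a library for this):

```python
# Standard geohash base32 alphabet (omits a, i, l, o).
_BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash_encode(lat: float, lon: float, precision: int = 9) -> str:
    """Encode a latitude/longitude pair as a geohash string."""
    lat_lo, lat_hi = -90.0, 90.0
    lon_lo, lon_hi = -180.0, 180.0
    bits = []
    even = True  # geohash interleaves bits, starting with longitude
    while len(bits) < precision * 5:
        if even:
            mid = (lon_lo + lon_hi) / 2
            if lon >= mid:
                bits.append(1)
                lon_lo = mid
            else:
                bits.append(0)
                lon_hi = mid
        else:
            mid = (lat_lo + lat_hi) / 2
            if lat >= mid:
                bits.append(1)
                lat_lo = mid
            else:
                bits.append(0)
                lat_hi = mid
        even = not even
    # Pack each group of 5 bits into one base32 character.
    return "".join(
        _BASE32[int("".join(map(str, bits[i:i + 5])), 2)]
        for i in range(0, precision * 5, 5)
    )

# Known geohash test vector: (42.605, -5.603) encodes to "ezs42".
```

In our pipeline, this logic ran as a transform over the latitude and longitude columns; the same function could be applied row-wise with pandas' `apply`.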
What's next for Austin Analytics
In the future, we would expand Austin Analytics to more cities, take in more data, and add more toggleable layers to the map. We would also like to use machine learning to make predictions, possibly alerting users about likely future crime. With Austin Analytics, we hope cities can better understand crime patterns and make informed decisions about preventative measures and public safety.