Inspiration

You’ve carefully plotted the "best" route to your destination, trusting that it will get you there efficiently. However, midway through your journey, traffic comes to a sudden halt. While your navigation app, whether Google Maps or Waze, still indicates that you’re on the fastest route, it becomes apparent something is wrong. As you approach the next turn, the sight of an accident confirms the delay, with emergency vehicles audible in the distance. It's clear this obstruction could cause a significant setback in your travel time, leaving you feeling stranded in an unexpected jam.

This scenario is not uncommon, and the inspiration for our project came from the alarming rise in traffic incidents and the associated increase in distracted driving. Studies show that distracted drivers are responsible for a significant percentage of road accidents, often due to confusion or frustration when delays occur, especially when navigation apps don't account for sudden changes like accidents or roadblocks. Our solution aims to alleviate that frustration by proactively detecting roadblocks and providing alternate routes, keeping drivers safer and less distracted. This is especially crucial as reliance on navigation technology grows, with many drivers depending entirely on these tools to guide them through real-life traffic situations. Our mission is to keep drivers safe while keeping them off their phones.

What it does

SpeechMaps is an innovative navigation solution that optimizes your travel routes by dynamically reconfiguring them around real-time traffic incidents. Unlike conventional GPS applications that may not promptly reflect changing road conditions, SpeechMaps lets users report accidents directly through voice input. Each report triggers an immediate, AI-based route update to avoid the congested or blocked area, and the incident is shared with every user connected to the SpeechMaps network.
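The report-and-broadcast flow can be sketched as a small in-memory store: a voice-reported incident is saved and pushed to every connected client so their routes can be re-evaluated. This is an illustrative sketch only; the names (`Incident`, `IncidentNetwork`) are ours and not the project's actual API.

```typescript
// Hypothetical sketch of the shared incident network: a voice-reported
// incident is stored and broadcast to every connected SpeechMaps client.
type Incident = {
  id: number;
  lat: number;
  lng: number;
  description: string; // e.g. the transcribed voice report
};

class IncidentNetwork {
  private incidents: Incident[] = [];
  private listeners: Array<(i: Incident) => void> = [];
  private nextId = 1;

  // Called after speech-to-text resolves the driver's report.
  report(lat: number, lng: number, description: string): Incident {
    const incident = { id: this.nextId++, lat, lng, description };
    this.incidents.push(incident);
    // Push the new incident to every connected user so their
    // routes can be re-evaluated immediately.
    this.listeners.forEach((notify) => notify(incident));
    return incident;
  }

  // Clients subscribe here to hear about new incidents.
  onIncident(listener: (i: Incident) => void): void {
    this.listeners.push(listener);
  }

  active(): Incident[] {
    return [...this.incidents];
  }
}
```

In the real app this store would live behind a server, with clients subscribing over the network rather than via an in-process callback.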

How we built it

The project is built using Next.js 14 with the App Router, supporting both server and client-side rendering for efficient page loads. It employs strict TypeScript and ESLint configuration to ensure code quality and maintainability. The design is fully responsive, styled with modern techniques using Tailwind CSS and enhanced with accessible UI components from Aceternity UI. For animations, Framer Motion is integrated to bring smooth, interactive visuals. The application also features advanced functionality like speech recognition powered by Groq AI and wake-word detection using the Web Speech API and Picovoice AI. Additionally, smart map integration is enabled with the Google Maps API, providing dynamic location-based services. The entire project is deployed seamlessly on Vercel for optimized performance and scalability.
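The wake-word idea mentioned above can be illustrated with a pure function that scans an interim speech transcript for a wake phrase before treating the remainder as a command. The real app relies on Picovoice and the Web Speech API for this; the phrase and function name below are illustrative assumptions.

```typescript
// Minimal illustration of wake-word matching on a speech transcript.
// The actual detection uses Picovoice and the Web Speech API; this
// pure function only sketches the matching step.
const WAKE_PHRASE = "hey speechmaps";

// Returns the command following the wake phrase, or null if the
// phrase was not heard. Normalizes case and whitespace, since
// speech-to-text output is rarely tidy.
function extractCommand(transcript: string): string | null {
  const normalized = transcript.toLowerCase().replace(/\s+/g, " ").trim();
  const idx = normalized.indexOf(WAKE_PHRASE);
  if (idx === -1) return null;
  return normalized.slice(idx + WAKE_PHRASE.length).trim();
}
```

For example, `extractCommand("Hey SpeechMaps   report an accident")` yields `"report an accident"`, while a transcript without the phrase yields `null`.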

Challenges we ran into

The project incorporates a range of complex features aimed at improving functionality and user experience, and making these technologies work together presented several challenges. First, implementing real-time rerouting around blocked roads required live traffic data on a shared network, which was difficult given some limitations of the Google Maps API. The voice recognition component, while it did enable hands-free interaction, had to cope with unpredictable speech patterns and background noise, making reliable wake-word activation and speech recognition a significant hurdle. Integrating dynamic UI animations and ensuring correct positioning added frontend complexity as well, since achieving smooth transitions without compromising user experience required meticulous effort. While these technologies can enhance the user experience, each introduced its own set of challenges, which we were fortunately able to address so that the system remains responsive, reliable, and safe to use.
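The rerouting trigger described above can be sketched as a geometric check: does a reported incident lie close enough to the current route's path to warrant recomputing it? The threshold and function names here are our illustrative assumptions; the replacement route itself would still come from the Google Maps API.

```typescript
// Sketch of the rerouting trigger: flag a route for recomputation when
// a reported incident falls near its path.
type Point = { lat: number; lng: number };

// Great-circle distance in meters between two coordinates (haversine).
function distanceMeters(a: Point, b: Point): number {
  const R = 6371000; // mean Earth radius in meters
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(b.lat - a.lat);
  const dLng = toRad(b.lng - a.lng);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.sin(dLng / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

// A route (as a polyline of points) counts as blocked if any vertex
// falls within `radiusMeters` of the incident.
function routeBlocked(route: Point[], incident: Point, radiusMeters = 150): boolean {
  return route.some((p) => distanceMeters(p, incident) <= radiusMeters);
}
```

A production version would measure distance to the polyline's segments rather than only its vertices, but the vertex check conveys the idea.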

Accomplishments that we're proud of

Committing ourselves to building a solution to a real-world problem.

What's next for SpeechMaps

First, we hope to develop these features into a dedicated mobile app (or perhaps a custom plugin for existing ones). The next major feature we envision is Augmented Reality (AR) route visualization. Though it poses a significant challenge, since it requires overlaying real-time navigation data onto the physical world with precision (all while maintaining device performance and an intuitive user interface), having the route laid out in front of the driver would increase engagement with the road and give a better sense of where to go.

Maintaining a cleaner code base is another ongoing difficulty. As features grow in complexity, keeping the code modular, maintainable, and scalable becomes harder, especially with frequent updates and integrations. We hope to practice good coding paradigms and constant refactoring to ensure the app remains stable and easy to maintain in the long term.

Last but not least, another feature we see coming in the future is automatic vehicle tracking. It may introduce additional technical hurdles, such as managing GPS accuracy, data latency, and potential privacy concerns. Moreover, the challenge extends to maintaining reliable connectivity, especially in areas with poor network coverage, and ensuring the app can operate under various environmental conditions. Addressing these complexities while providing users with accurate, real-time updates is essential to making vehicle tracking both effective and secure, but it will require careful attention to both technical and ethical considerations moving forward.

Built With

next.js, typescript, tailwind-css, framer-motion, groq, picovoice, web-speech-api, google-maps-api, vercel
