Inspiration

Navigating public spaces can be daunting when essential accessibility features remain hidden in generic images. We recognized that people with disabilities often face uncertainty due to the lack of clear, real-time accessibility information. Inspired by the need for transparency and empowerment, we set out to create a solution that reveals the hidden details of our surroundings—making every outing safer and more inclusive.

What it does

Access360 leverages Google Maps imagery and advanced visual models to analyze both the interiors and exteriors of venues. Our platform generates a comprehensive accessibility evaluation so users can confidently choose restaurants and public spaces that meet their unique needs, along with actionable recommendations for local and small business owners who may lack the resources to assess the needs of people with disabilities themselves.

How we built it

To bring Access360 to life, we developed a full-stack web application using:

  1. Frontend: React/Next.js/Tailwind for a responsive, user-friendly experience.
  2. Backend: Python/Flask (later expanded with FastAPI and Uvicorn on an EC2 instance) to handle robust image processing.
  3. AI & APIs:
     3.1. Integrated the Google Maps Street View API to fetch real-time imagery.
     3.2. Trained a custom YOLO model on our tailored dataset with Ultralytics and OpenCV to detect key accessibility features.
     3.3. Employed OpenAI’s API and LangChain to agentically generate detailed, user-friendly reports.
     3.4. Used Whisper STT and the OpenAI API to agentically navigate the website by voice.
     3.5. Reconstructed full 3D scenes with trained Gaussian splats.
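The imagery-fetching step above can be sketched with the Street View Static API, which takes a location, a compass heading, and a field of view per request. This is a minimal illustration, not our production code; the coordinates and `API_KEY` placeholder are assumptions.

```python
from urllib.parse import urlencode

STREETVIEW_URL = "https://maps.googleapis.com/maps/api/streetview"

def streetview_request_url(lat, lng, heading, api_key, fov=90, size="640x640"):
    """Build one Street View Static API request URL for a given camera heading."""
    params = {
        "size": size,                # 640x640 is the standard-plan maximum
        "location": f"{lat},{lng}",
        "heading": heading,          # compass degrees, 0 = north
        "fov": fov,                  # horizontal field of view, max 120
        "key": api_key,
    }
    return f"{STREETVIEW_URL}?{urlencode(params)}"

# Four 90-degree views give full exterior coverage of a venue
# (example coordinates; fetch each URL with any HTTP client):
urls = [streetview_request_url(40.7128, -74.0060, h, "API_KEY")
        for h in (0, 90, 180, 270)]
```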
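The detection step can be sketched as follows. The class ids, feature names, weights filename, and `summarize_detections` helper are all hypothetical, assumed only for illustration; the commented-out inference lines show the shape of an Ultralytics YOLO call.

```python
# Hypothetical mapping from our model's class ids to accessibility features:
ACCESS_CLASSES = {0: "ramp", 1: "automatic_door", 2: "stairs", 3: "handrail"}

def summarize_detections(class_ids, min_count=1):
    """Collapse raw class ids from the detector into a feature-count summary."""
    summary = {}
    for cid in class_ids:
        name = ACCESS_CLASSES.get(cid, "unknown")
        summary[name] = summary.get(name, 0) + 1
    return {k: v for k, v in summary.items() if v >= min_count}

# With ultralytics installed, inference on one image would look like:
#   from ultralytics import YOLO
#   model = YOLO("access360.pt")  # custom-trained weights (hypothetical name)
#   ids = [int(b.cls) for r in model("storefront.jpg") for b in r.boxes]
#   summarize_detections(ids)
```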

Challenges we ran into

Deployment issues with Vercel and managing large volumes of image data posed significant obstacles. We had to optimize performance across multiple image-processing tasks and handle base64 encoding before switching to a static-file approach. Dependency conflicts, frontend state-management issues, and ensuring consistent functionality across environments further complicated the process. Configuring the Google Street View API, managing API costs, and debugging our AI model’s image-recognition capabilities added to our hurdles. Setting up our EC2 instance required installing mesa-utils, increasing the volume size, and fine-tuning inference settings. Accurately calculating image angles to produce comprehensive 360° views presented additional technical challenges. Finally, tight time constraints forced our team to work late nights as we balanced UI/UX refinements with delivering a reliable demo before the hackathon deadline.
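The angle-calculation problem above reduces to: given a camera field of view, how many shots are needed to cover a full 360° circle, and at which headings? A minimal sketch, assuming consecutive shots should overlap slightly so features near frame edges are not lost:

```python
import math

def coverage_headings(fov_deg, overlap_deg=10):
    """Compass headings (degrees) needed to cover a full 360° panorama.

    Each shot covers fov_deg; consecutive shots overlap by overlap_deg,
    so each new shot contributes fov_deg - overlap_deg of fresh coverage.
    """
    step = fov_deg - overlap_deg           # effective new coverage per shot
    n = math.ceil(360 / step)              # shots required for the full circle
    return [round(i * 360 / n, 1) for i in range(n)]

# A 90° FOV with 10° overlap needs 5 evenly spaced shots:
coverage_headings(90)
```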

Accomplishments that we're proud of

We’re incredibly proud of our custom-trained machine learning model, which delivers highly accurate accessibility feedback. We meticulously designed innovative prompts and rubrics to evaluate venue accessibility comprehensively. In addition, we built voice-controlled website-navigation agents, an extremely difficult feature to integrate. Lastly, combining multiple technologies into a cohesive, user-focused platform under tight deadlines was a real testament to our teamwork.

What we learned

Access360 taught us the critical importance of a smooth backend-frontend integration and the benefits of developing a custom dataset for enhanced accuracy. More than that, it reinforced the power of teamwork and iterative development—reminding us that collaboration and persistence are key to overcoming even the most daunting challenges.

What's next for Access360

Future features of Access360 include AI-assisted calling functions, such as calling ahead during off-peak hours when a venue is poorly rated, and tailored improvement advice for businesses, along with an agentic system with map-reduce, advanced search, and Gaussian Splatting exploration for enhanced 3D analysis. We plan to scale the platform to other countries and to building types beyond restaurants (already in progress), refine our machine learning models, and integrate a user feedback system for reporting inaccuracies.
