Inspiration

Our journey began with a shared love for TikTok, an app that let us share laughs and learn new things. However, as we reveled in the joy it brought, we noticed a heartbreaking reality: some of our friends struggled to enjoy the app due to accessibility issues. The inability to share this experience equally with them inspired us to create a solution. We envisioned an app that would bridge this gap and ensure that everyone, regardless of their abilities, could enjoy TikTok to its fullest. Thus an idea was born, named to reflect our mission of transforming idle moments into meaningful and inclusive learning experiences.

By integrating features like voice control, high contrast detection, and seizure prevention, TikAccess not only brings the joy of TikTok to everyone but also encourages productive use of the app. Users can now learn, create, and share content more effectively, knowing that their experience is optimized for their needs. TikAccess is not just about accessibility; it's about making every moment count and turning entertainment into a tool for growth and productivity.

What it does

TikAccess is an innovative app designed to make TikTok accessible for everyone. It incorporates three main features to enhance user experience:

  • Epileptic Sensitivity Detection: This feature scans videos for potential epileptic triggers and reports the number of violations, ensuring a safer viewing experience for users prone to seizures.
  • High Contrast Errors Detection: To assist users with visual impairments, our app detects videos with contrast errors between the video text and its background and flags them, making it easier for content creators to adjust and enhance visual accessibility.
  • Voice Control: Our app includes built-in voice control functionality, allowing users to perform functions like swipe up, swipe down, and add posts without touching their screens. This feature provides seamless accessibility without the need to share data between the app and the operating system of the device.
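As a rough sketch, the voice-control feature boils down to mapping each recognized intent to an in-app gesture. The intent names, action codes, and `handle_intent` helper below are illustrative assumptions, not TikAccess's actual identifiers:

```python
# Minimal sketch: dispatch recognized voice intents to in-app gestures.
# Intent names and action codes are hypothetical, not the app's real API.

INTENT_ACTIONS = {
    "swipe_up": "SCROLL_NEXT",    # advance to the next video
    "swipe_down": "SCROLL_PREV",  # return to the previous video
    "add_post": "OPEN_COMPOSER",  # open the post-creation screen
}

def handle_intent(intent_name: str) -> str:
    """Map a recognized intent name to an app action, ignoring unknown intents."""
    return INTENT_ACTIONS.get(intent_name, "NO_OP")
```

Because the mapping lives entirely inside the app, no audio or gesture data needs to leave the app's own process.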

How we built it

  • Backend: Developed using Firebase for authentication, user database management, and storing images and videos.
  • Android App: Built with Flutter, which posts media files for processing. Flutter's cross-platform capabilities ensure seamless functionality on both Android and iOS devices.
  • Job Processing: Managed by a Firebase collection that queues tasks, with Fly.io running the main driver to execute the text detection and contrast analysis scripts simultaneously.
  • Real-time Updates: The Flutter app listens for updates and displays any detected contrast violations to the content creator once processing is complete and the status changes to 'C'.
  • Voice Control Integration: Implemented using the free tier DialogFlow API for actions like swipe up, swipe down, and adding posts based on intent.
  • Epileptic Seizure Detection: Created a FastAPI service, integrated with the app, to analyze and flag videos for risk, detecting potentially epilepsy-inducing visual patterns such as flashes and saturated red transitions.
  • Text Detection and Contrast Analysis: Utilized EasyOCR to detect text in images and videos, analyzing the contrast between text and background to identify potential readability issues. The code processes both images and videos by resizing frames, extracting text regions, and comparing text and background colors to ensure accessibility compliance.
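A simplified version of the flash check in the seizure-detection step can be expressed as counting abrupt brightness swings between consecutive frames. The thresholds and window size below are illustrative stand-ins, not the values the app actually uses:

```python
def count_flash_violations(frame_brightness, delta_threshold=60,
                           flashes_per_window=3, window=30):
    """Count frame windows that contain too many abrupt brightness swings.

    frame_brightness: per-frame mean luminance values (0-255).
    A 'flash' is a frame-to-frame change larger than delta_threshold; any
    window with more than flashes_per_window flashes counts as a violation.
    """
    flashes = [abs(b - a) > delta_threshold
               for a, b in zip(frame_brightness, frame_brightness[1:])]
    violations = 0
    for start in range(0, len(flashes), window):
        if sum(flashes[start:start + window]) > flashes_per_window:
            violations += 1
    return violations
```

A real implementation would also track the saturated-red transitions mentioned above, but the windowed-counting structure is the same.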
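The contrast comparison on each detected text region can follow the WCAG relative-luminance formula; this is a sketch of the kind of check such a pipeline might run, not the exact production code:

```python
def relative_luminance(rgb):
    """WCAG 2.1 relative luminance of an sRGB color given as 0-255 channels."""
    def channel(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio between two colors, ranging from 1 to 21."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

def is_contrast_violation(text_rgb, background_rgb, minimum=4.5):
    """Flag a text region whose contrast falls below the WCAG AA threshold."""
    return contrast_ratio(text_rgb, background_rgb) < minimum
```

For example, black on white scores the maximum ratio of 21:1, while mid-gray text on white falls just under the 4.5:1 AA threshold and would be flagged.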


Challenges we ran into

  • Implementing Voice-Controlled Gestures: Ensuring accurate and responsive voice recognition required extensive fine-tuning and testing.
  • Compute Resources for Video Processing: Handling the computational demands of processing large volumes of video data, especially in real-time scenarios.
  • Contrast Error Detection: Developing efficient algorithms to analyze contrast errors without significantly impacting performance.
  • Maintaining Real-Time Processing: Balancing the need for real-time processing with the high computational requirements of our accessibility features.

Accomplishments that we're proud of

We implemented voice-controlled gestures for improved accessibility, allowing users to navigate the app seamlessly using voice commands. We also integrated a risk warning system for videos that could potentially induce epileptic seizures, enhancing user safety. Additionally, our app analyzes uploaded videos and images for contrast errors between text and background, ensuring better readability and overall accessibility. These features highlight our commitment to making our demo app not only feature-rich but also inclusive and user-friendly for all audiences.

What we learned

During this hackathon, we learned the importance of cross-platform development and how to leverage Flutter’s capabilities to create seamless experiences on both Android and iOS. Implementing accessibility features like voice-controlled gestures and contrast error analysis deepened our understanding of inclusive design. We also explored using the DialogFlow API for voice control, enhancing our skills in integrating third-party APIs to enrich app functionality. Overall, this project significantly broadened our technical knowledge and problem-solving skills.

What's next for Time Pass

We plan to implement additional voice-controlled actions to further improve accessibility and user interaction. Our focus will also be on creating a more streamlined and intuitive UI, making navigation effortless for all users. Integrating an improved OCR model will enhance the detection of contrast errors between text and background, ensuring better readability. Additionally, we aim to develop a system that assigns an accessibility score to content creators based on their uploaded videos, encouraging the production of more accessible content.
