Inspiration

The main motivation behind this project is that one of our team members is red-green color blind, and he faces a lot of annoyances in his daily life because of it. We noticed that good accessible design is hard to come by, and our aim is to make it easier for everyone to create designs with accessibility and inclusivity in mind.

Existing Solutions

During our research into existing Adobe Express Add-ons that aim to perform similar tasks, we found that the current solutions either:

  1. Have a difficult-to-use UI, or do not provide information that is easy for any user to act on.
  2. Do not use AI, or use a primitive language model that only provides basic inferences.
  3. Are expensive to use.

Our Solution

We designed an Adobe Express Add-on that automates the whole design-analysis process at the click of a single button. When the user clicks Analyze, our Add-on inspects the currently open page and returns a list of flaws, each with easy-to-understand reasoning and potential fixes that can make the design more accessible. The system is powered by an expert team of AI models that work together to produce the list.

Our solution provides accurate feedback on the following topics:

  • Font Clarity and Visibility
  • Image Visibility
  • Color Contrast
  • Text Readability
  • Font Style Suggestions
  • Content Modifications to Increase Accessibility

We believe the design guidance this Add-on provides would benefit people with various forms of color blindness, low vision, or other impairments, but it can also make life easier for people without any impairments, because good design benefits everyone!

How we built it

Our goal was to give the AI-generated analysis a scientific basis. To achieve this, we read through several research papers and built a condensed knowledge base for the AI to work with. To implement the analysis process, we chose open-source multimodal LLMs with vision capabilities in an agentic workflow built with LangChain. We harness Llama 3.2 90B Vision Instruct, Llama 3.2 11B Vision, and Qwen 2.5 72B Instruct, with each model lending its strengths to its own task within the design-review team.
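To illustrate the division of labor described above, the sketch below routes analysis sub-tasks to the model best suited for each. The task names and the task-to-model mapping are our own illustrative assumptions; in the real workflow each call goes through LangChain rather than the plain dictionary lookup shown here.

```python
# Hypothetical routing of design-analysis sub-tasks to models.
# Task names and the mapping are illustrative assumptions, not the
# actual configuration; real calls would go through LangChain.
TASK_TO_MODEL = {
    "color_contrast": "meta-llama/Llama-3.2-90B-Vision-Instruct",  # heavy vision reasoning
    "image_visibility": "meta-llama/Llama-3.2-11B-Vision",         # lighter vision checks
    "text_readability": "Qwen/Qwen2.5-72B-Instruct",               # text-only analysis
}

def route_tasks(tasks):
    """Group requested analysis tasks by the model assigned to each."""
    plan = {}
    for task in tasks:
        model = TASK_TO_MODEL.get(task)
        if model is None:
            raise ValueError(f"no model assigned for task: {task}")
        plan.setdefault(model, []).append(task)
    return plan
```

Keeping the assignment in one table like this makes it cheap to swap a model for a single sub-task without touching the rest of the pipeline.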

Adobe Express was the perfect platform for building such a tool: it already has a tried and tested design editor, and it lets us plug right into the user's data without the user having to go through a complex setup process. The Adobe Express Add-on acts as the front end of the system; its main task is to fetch the page's content when the user clicks the Analyze button and display the suggestions from the AI models.

Challenges we ran into

The major challenges we ran into were:

  • Since we were absolute beginners at Adobe Express add-on development, we felt that the steps to follow and the JS functions could have been documented better.
  • We planned to use TogetherAI as our Large Language Model provider, but TogetherAI has a rather primitive Python SDK and only basic LangChain integration.
  • Sometimes a "lower-powered" LLM can be better at a particular task than a model with more parameters. We had to discover this through a lot of time-consuming experiments.
  • LLMs don't always follow instructions, and ensuring that a run succeeds every single time is really difficult.
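One common way to handle the last point, making sure a run succeeds even when the model ignores formatting instructions, is to validate the output and retry on failure. A minimal sketch of that pattern follows; the helper name and the expected JSON shape (a list of suggestion objects with an `"issue"` key) are our own assumptions, not from any library.

```python
import json

def parse_with_retry(call_model, max_attempts=3):
    """Call an LLM and retry until its reply parses as the expected JSON.

    `call_model` is any zero-argument callable returning raw model text.
    The expected shape (a list of dicts each containing an "issue" key)
    is illustrative, not the Add-on's actual schema.
    """
    last_error = None
    for _ in range(max_attempts):
        raw = call_model()
        try:
            data = json.loads(raw)
        except json.JSONDecodeError as exc:
            last_error = exc
            continue  # malformed JSON: ask the model again
        if isinstance(data, list) and all("issue" in item for item in data):
            return data
        last_error = ValueError("reply did not match expected schema")
    raise RuntimeError(f"model never produced valid output: {last_error}")
```

Bounding the retries keeps a misbehaving model from looping forever while still absorbing the occasional malformed reply.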

Accomplishments that we're proud of

  • We built a custom LLM class that implements the LangChain LLM base class and works with the TogetherAI APIs to allow messages containing image URLs, which lets LangChain work with multimodal LLMs. This class could be really helpful for the open-source community, and we plan on contributing it to LangChain in the near future.

  • We built a really nice (at least in our eyes) Adobe Express Add-on with every feature we initially planned. We feel this is a great accomplishment considering that we weren't even aware of Adobe Express 25 hours ago.
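The heart of a wrapper like the one described in the first accomplishment is translating prompt text plus image URLs into the content-block message format that vision chat APIs accept. A simplified sketch of that translation is below; the function name is ours, and the `{"type": "image_url"}` block schema follows the OpenAI-style format that TogetherAI's vision endpoints accept, assumed here rather than taken from the project's actual code.

```python
def build_multimodal_message(text, image_urls):
    """Build one user message mixing text and image URLs, using the
    OpenAI-style content-block format accepted by vision chat APIs.

    This is a sketch of the payload translation a custom LangChain
    wrapper performs, not the project's actual implementation.
    """
    content = [{"type": "text", "text": text}]
    for url in image_urls:
        content.append({"type": "image_url", "image_url": {"url": url}})
    return {"role": "user", "content": content}
```

A custom chat-model class would apply this conversion to each incoming LangChain message before posting the request to the provider's chat-completions endpoint.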

What we learned

  • LLMs are really powerful when used properly, in ways that limit their hallucinations.
  • Accessibility in design is a well-researched topic but difficult to implement.
  • Working in pairs is really fun when we play to each other's strengths to overcome each other's weaknesses.

What's next for AccessiBot

  • Get Feedback from Users

    • We can ask users for feedback on the AI-generated suggestions to understand how useful they are. The Add-on could learn from this feedback and adjust future suggestions accordingly.
  • Extended Knowledge Base with RAG Functionality

    • We can provide users with sliders to set their own parameter values, adapting the model to their personal use case, and also allow users to define the target audience for their design.
  • Allow Users to Fine-Tune Their Use Case and Target Audience

    • The user-set values and feedback will grow the AI's knowledge base and help it understand its users' needs, including which groups of people the user most wants their design to be accessible to.
  • Experiment with Smaller, Low Powered AI Models

    • Smaller LLMs can produce impressive outputs, especially when working on smaller sub-tasks as part of a team. We aim to experiment further with this concept to make the Add-on cheaper for everyone.
