Problem

While prototyping circuits on a breadboard, engineers often need to reference pin diagrams to ensure the inputs and outputs of ICs (the black rectangular chips) are wired correctly. These specifications live in manufacturer datasheets, which pack in a massive amount of information.

Getting to these datasheets is often tedious: reading the part number off the chip, typing the long alphanumeric sequence into a web browser, opening the datasheet, and scrolling through 50+ pages to find the proper pin layout.

Then, when building another circuit a week or two later, the engineer often has to repeat the whole process.

Solution

Pin Preview uses Snap Spectacles and a custom OCR-capable backend to automatically read the part number off a chip, identify the chip, and display the corresponding pin diagram in AR space above the user's left hand.

Technology

To accomplish this functionality, Pin Preview is designed with the following pipeline:

1) Track 3D body and hand input through Lens Studio's tracking mechanisms.
2) Capture a portion of the screen, compress and serialize it, and send it to a separate Flask-powered server for external processing.
3) Preprocess the image to emphasize any lettering, then use Optical Character Recognition (OCR) to extract chip numbers found in the image.
4) Match the part to its associated pin mapping in the chips database, serialize the diagram image, and return it to the Spectacles.
5) Deserialize the image and attach it to Lens Studio's built-in HandVisuals for visualization.

Biggest Challenges

  • Snap Spectacles are currently in Beta, where a significant number of features or changes are not yet documented. We are incredibly grateful for the Snap developers at HackGT who helped us troubleshoot some of the more peculiar issues. Thanks again!
SnapML in theory supports running custom ML models on-device, but for our Optical Character Recognition (OCR) we ended up building a custom server-side solution instead.
We initially faced significant latency in client-server communication due to the size of the transferred images. We lowered these latencies substantially by converting frames to grayscale (1 byte per pixel as opposed to 4 with RGBA) before serializing them with Base64.
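To make the payload savings concrete, here is a minimal stdlib-only sketch of the conversion, assuming a 640x480 RGBA frame (the actual capture resolution on the Spectacles may differ):

```python
import base64

# Illustrative frame size; an assumption, not the Spectacles' actual capture resolution.
WIDTH, HEIGHT = 640, 480
rgba = bytes(WIDTH * HEIGHT * 4)  # 4 bytes per pixel (R, G, B, A)

# Convert to grayscale: keep a single luminance byte per pixel
# (integer approximation of the standard 0.299/0.587/0.114 weights).
gray = bytes(
    (rgba[i] * 299 + rgba[i + 1] * 587 + rgba[i + 2] * 114) // 1000
    for i in range(0, len(rgba), 4)
)

payload = base64.b64encode(gray)
print(len(rgba), len(gray), len(payload))  # 1228800 307200 409600
```

Note that Base64 itself inflates the data by roughly a third (every 3 bytes become 4 characters); the big win comes from dropping four channels to one, which still leaves the encoded payload about 3x smaller than the raw RGBA frame.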
