Inspiration

Not more than a handful of years ago, the jump from 3G to 4G made a lot of internet services more accessible to users around the world. Video streaming got easier, live streams became more fluid, and the age-old tradition of downloading and storing movies on phones disappeared. I was awestruck by the potential of 5G and what it enables. There are many use cases where we experience latency and slowness in day-to-day phone usage, but the one I've noticed (and been most frustrated by) is trying to figure out where my food delivery is. With the pandemic and the work-from-home trend, deliveries are at an all-time high. People's lives would directly benefit from better location tracking and a more fluid process, especially when many of them have meetings to get to right after picking up the lunch they ordered.

What it does

This is a proof of concept of a two-tiered architecture that applies to a variety of use cases. For most companies, high-frequency data is understandably a waste of storage and bandwidth: why would a delivery company need to know every step the pizza delivery guy takes? They track just enough data, whether to optimise storage or to protect the security and privacy of their personnel. Enabling fine-grained last-mile tracking could make deliveries a painless experience. This architecture stores high-frequency data at the edge while sending only batched, most-recent data to the backend for further processing. Users can then be selectively given access to these location streams when the delivery person is in close proximity to the customer. This separation of "low-latency, expensive, low-compute" from "high-latency, cheap, high-compute" adds a new system-design paradigm to various industries, achieving a better UX while keeping costs low.

How we built it

The edge server runs a Redis DB with PubSub channels. Drivers authenticate themselves to the edge server, connect via WebSockets, and publish their location onto a PubSub channel. Users who are given access to a certain driver can connect to the edge server (also via WebSockets) and retrieve real-time updates of the driver's location with almost no latency. Similarly, a subscriber on the edge server listens to all publishes and maintains an in-memory map of the latest known location of each driver. This set of bucketed driver locations is then sent to the backend at a relaxed pace, based on the backend's use-case requirements. This way the edge server can receive many updates per second from every driver, but send only one update every 5 seconds per driver to the backend. The backend in our case uses another Redis DB with geolocation storage, which lets it perform proximity searches to find cabs in an area, determine surge prices, and so on.
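The batching behavior on the edge can be sketched in a few lines. This is a minimal, hypothetical Python model of the "many updates in, one snapshot out" logic only; the real system wires it to Redis PubSub for input and pushes each flushed snapshot to the backend (e.g. with GEOADD into a Redis geo set), which is omitted here. The class and method names are illustrative, not taken from the project's code.

```python
import time

class LatestLocationBuffer:
    """Keeps only the most recent location per driver between flushes.

    High-frequency updates overwrite each other in memory; a periodic
    flush produces the sparse batch that actually goes to the backend.
    """

    def __init__(self, flush_interval=5.0):
        self.flush_interval = flush_interval
        self._latest = {}  # driver_id -> (lat, lon, received_at)
        self._last_flush = time.monotonic()

    def update(self, driver_id, lat, lon):
        # Many writes per second per driver; each simply replaces the last.
        self._latest[driver_id] = (lat, lon, time.time())

    def flush_due(self):
        return time.monotonic() - self._last_flush >= self.flush_interval

    def flush(self):
        # Snapshot the bucket and reset. The snapshot is what gets sent
        # to the backend at the relaxed, once-per-interval pace.
        snapshot = dict(self._latest)
        self._latest.clear()
        self._last_flush = time.monotonic()
        return snapshot

# Three rapid updates from one driver collapse into a single batched entry.
buf = LatestLocationBuffer(flush_interval=5.0)
buf.update("driver-42", 37.7749, -122.4194)
buf.update("driver-42", 37.7750, -122.4195)
buf.update("driver-42", 37.7751, -122.4196)
batch = buf.flush()
```

Because only the newest entry per driver survives a flush, backend traffic stays bounded by the number of drivers rather than by the update rate.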

Challenges we ran into

Getting the AWS network up and running, and figuring out how subnets, VPCs, security groups, and gateways all connect into a cohesive product, was the hardest part. Testing locally, while simulating a large amount of traffic and running the processing and databases on the same computer, was a hassle. But once I deployed the server onto the AWS instance and offloaded the processing, everything ran smoothly. Simulating the drivers was also a very interesting problem: drawing on my experience with computer-generated art, I ended up using Perlin noise to move the drivers around in a believable fashion. I also did not have Verizon 5G, so I had to make do with 4G LTE and work around carrier endpoints.
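The noise-driven driver simulation can be illustrated with a small sketch. True Perlin noise is gradient-based; as a self-contained stand-in, this uses 1-D value noise (random lattice values blended with smoothstep interpolation), which gives the same smoothly wandering quality. The starting coordinates, step size, and function names are all hypothetical.

```python
import math
import random

def make_value_noise(seed=0):
    """1-D smooth value noise: a simplified stand-in for Perlin noise."""
    rng = random.Random(seed)
    lattice = [rng.uniform(-1.0, 1.0) for _ in range(256)]

    def noise(x):
        i = math.floor(x)
        t = x - i
        t = t * t * (3.0 - 2.0 * t)  # smoothstep easing between lattice points
        a = lattice[i % 256]
        b = lattice[(i + 1) % 256]
        return a + t * (b - a)

    return noise

def simulate_driver(steps=50, speed=0.0005, seed=7):
    """Steers a driver's heading with smooth noise, so the path wanders
    believably instead of jittering like uniform random motion would."""
    noise = make_value_noise(seed)
    lat, lon = 40.7128, -74.0060  # hypothetical starting point
    path = [(lat, lon)]
    for step in range(steps):
        heading = noise(step * 0.05) * math.pi  # slowly varying angle
        lat += speed * math.cos(heading)
        lon += speed * math.sin(heading)
        path.append((lat, lon))
    return path

path = simulate_driver()
```

Because the heading changes smoothly from one tick to the next, consecutive positions are never more than one step apart, which is what makes the motion look like a real vehicle rather than teleporting dots.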

Accomplishments that I’m proud of

Creating an architecture that works and demonstrates its value for multiple industry use cases. Successfully simulating a fleet of drivers, updating their locations at a very high frequency. Using mechanics I've learned from game engines and noise functions from generative art, I modeled sensible random behaviors for the drivers. I'm proud of having created not just a product for myself but a template that others can take over and build better products for everyone to use.

What we learned

I successfully figured out how to wire up an entire architecture, including its networking. Learning how bastion hosts are deployed on a public subnet to gain access to a Wavelength Zone instance was really cool. I also got the opportunity to explore Redis and some of its inner workings to make sure it was the right candidate for the job.

What's next for EdgeTrackr

EdgeTrackr performed so well on 4G LTE that I can only imagine what it is capable of on a 5G network. I really want to continue building a framework that lets users create "edge rooms" using Redis: shared storage areas for a limited set of users that drive a multi-user application. It would be interesting to abstract that logic out of the current design and make it reusable across a large number of use cases. A simple example would be a roulette game, with randomness fed into the edge servers for each roll, or a real-time multiplayer car-racing game where locations are tracked by the edge servers while the backend performs matchmaking.
