Inspiration

To find the best candidates for a position, HR recruiters have to scroll through hundreds of resumes and gather information about many candidates to confirm their qualifications. This is tedious work, since it is not easy to verify that a candidate actually has the skills a position requires. To do so, recruiters have to search the Internet and visit sites such as GitHub, Devpost, and Stack Overflow to gather information about potential candidates.

This shows the need for a site that helps HR recruiters gather information about potential candidates quickly, with just one click. Such a site not only eliminates the manual work of gathering information but also cleans up and analyzes the data, letting recruiters focus on making decisions and interviewing.

Because of these requirements, we designed our site, IntelliPick, to help HR recruiters speed up their hiring work and eliminate a large amount of the manual work they otherwise do. IntelliPick lets a recruiter upload resumes in bulk and gather candidate information in one click. The data is then cleaned up, analyzed, and ready for HR representatives to make decisions. This may sound simple, but it cuts out a huge chunk of the work HR representatives would otherwise do by hand (gathering data, analyzing data, and so on).

What it does

The site allows HR recruiters to upload resumes in bulk; with a single click, data is gathered from sites such as Stack Overflow, GitHub, and Devpost. The data is also cleaned up and analyzed, ready for HR representatives to dive into and make decisions. A chatbot explains how the site works and offers tips and tricks for interviewing and recruiting candidates.

How we built it

Lots of brainstorming, analyzing utility, trying out different API approaches such as HTTP and WebSockets, fixing timeout issues and CORS problems, handling different exception scenarios: it was a lot of fun building IntelliPick.

We built the user-facing frontend with React and the backend as multiple microservices on top of FastAPI, Cloud Functions, and Google Apps Script.

To connect our services to UiPath, we used the Orchestrator API. Feeding in the scraped data, performing different DataTable operations, making HTTP requests: there were so many cool things we did while building our project.
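As a rough illustration of the Orchestrator API piece, a backend service typically triggers a process by POSTing a `StartJobs` request with an OAuth bearer token. The sketch below builds such a request in Python; the endpoint path and field names follow UiPath's public Orchestrator documentation, but the URLs, keys, and folder ID are placeholder assumptions, not our actual configuration.

```python
# Hypothetical sketch of triggering a UiPath job via the Orchestrator API.
# Endpoint path and payload shape follow UiPath's public docs; the concrete
# values (base URL, release key, token, folder ID) are placeholders.

def build_start_job_request(base_url, release_key, token, folder_id, jobs_count=1):
    """Assemble URL, headers, and JSON body for an Orchestrator StartJobs call."""
    return {
        "url": f"{base_url}/odata/Jobs/UiPath.Server.Configuration.OData.StartJobs",
        "headers": {
            "Authorization": f"Bearer {token}",             # OAuth access token
            "X-UIPATH-OrganizationUnitId": str(folder_id),  # target Orchestrator folder
            "Content-Type": "application/json",
        },
        "json": {
            "startInfo": {
                "ReleaseKey": release_key,  # identifies the process to run
                "Strategy": "JobsCount",    # run a fixed number of jobs
                "JobsCount": jobs_count,
            }
        },
    }

req = build_start_job_request(
    "https://cloud.uipath.com/acme/DefaultTenant/orchestrator_",
    "release-key-guid", "access-token", folder_id=123)
# A real service would then do: requests.post(req["url"], headers=req["headers"], json=req["json"])
```

The actual POST is left commented out so the sketch stays self-contained; in production the response would be polled for job state before scraped results are fetched.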

Challenges we ran into

  • Figuring out how the Orchestrator API works, the OAuth scopes, and the right endpoints to call.
  • Making parallel data scraping reliable and accurate (the same data could be fetched for multiple selectors if they are not unique) using advanced selector concepts such as wildcards and building selector strings with string operations.
  • Conveying timeout issues and wait periods to the frontend (the scraping process runs in the foreground and generally takes a minute or more).
  • Running WebSockets and HTTPS simultaneously.
  • Exception-handling scenarios in UiPath (handling null and empty values, and ContinueOnError).
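The selector challenge above comes down to string manipulation. In UiPath this would be done in VB.NET/C# expressions inside the workflow; the Python sketch below just illustrates the same idea, with a trailing wildcard on the accessible name and an `idx` attribute to disambiguate non-unique matches (the attribute values are illustrative, not our actual selectors).

```python
# Illustrative sketch of building a UiPath-style selector string with
# string operations, as described in the challenges above. In a real
# workflow this logic lives in VB.NET/C# expressions, not Python.

def build_selector(tag, aaname_prefix, idx=None):
    """Build a UI selector string.

    A trailing '*' wildcard on aaname lets minor text differences still
    match; an optional idx attribute disambiguates elements that would
    otherwise match the same selector (the duplicate-data problem).
    """
    attrs = [f"tag='{tag.upper()}'", f"aaname='{aaname_prefix}*'"]
    if idx is not None:
        attrs.append(f"idx='{idx}'")  # pick the idx-th matching element
    return f"<webctrl {' '.join(attrs)} />"

# e.g. target the 2nd link whose accessible name starts with "Repo"
selector = build_selector("a", "Repo", idx=2)
```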

Accomplishments that we're proud of

  • Working as a team spread across different time zones.
  • Keeping a demo product ready and operational end to end.
  • Using a complete tech stack (frontend, backend, and RPA) to solve a real-world problem.
  • Optimizing our application to be error- and bug-free.
  • Learning UiPath in depth.

What we learned

  • UiPath Orchestrator and the Orchestrator API services
  • UiPath DataTable manipulation, advanced debugging features, selectors, and much more!
  • Handling raw files and transferring them over HTTPS as bytes.
  • Understanding different PDF encoding and decoding schemes.
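One common way to move raw file bytes (such as a resume PDF) between services over an HTTPS/JSON API is to base64-encode them, since JSON cannot carry raw bytes. The stdlib sketch below shows the idea; the payload field names are hypothetical, not IntelliPick's actual wire format.

```python
import base64

# Minimal sketch of shipping a raw file (e.g. a resume PDF) through a
# JSON API as bytes. Field names here are hypothetical placeholders.

def encode_upload(filename, raw):
    """Wrap raw file bytes in a JSON-safe payload via base64."""
    return {
        "filename": filename,
        "content_b64": base64.b64encode(raw).decode("ascii"),
    }

def decode_upload(payload):
    """Recover the original bytes on the receiving service."""
    return base64.b64decode(payload["content_b64"])

pdf_bytes = b"%PDF-1.7 fake resume bytes"
payload = encode_upload("resume.pdf", pdf_bytes)
restored = decode_upload(payload)  # lossless round trip
```

Base64 inflates the payload by roughly a third, so for large files a multipart upload or a direct binary body is usually preferred; the JSON wrapping is convenient when the bytes travel alongside other metadata.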

What's next for IntelliPick

  • Scraping support on your local system

Since foreground processes can be time-consuming and resource-intensive, we plan to build executable support that asks users for permission to load package files in an encrypted format, along with unattended robot allocation. This will not only save resources but also give users a better understanding of how the process works under the hood.

  • More data points and smarter algorithms to judge a candidate profile

Since our first version is still in development, we aim to improve the tool further by collecting more relevant data points and making the scraping smarter.

  • More customization and algorithm-template support

Users of our application should not need to configure everything from scratch. We intend to collect configuration settings, learn the most common customizations from them, and let users save their customizations as algorithm templates that can be reused again and again. By collecting this configuration metadata, we also plan to make our internal algorithm smarter and deploy the improved version in future releases.

  • More social profile support

Our current version supports only three social sites (Devpost, GitHub, and Stack Overflow), and with limited data points. We're looking to support more open social profiles, and also to detect URLs on one social site that link to another profile relevant to the job position.
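Detecting supported profile URLs in extracted resume text can start as simple pattern matching. The sketch below shows one possible approach; the regex patterns and function names are illustrative assumptions, not IntelliPick's actual detection code.

```python
import re

# Illustrative sketch (not actual IntelliPick code) of detecting supported
# social-profile URLs in text extracted from a resume.

PROFILE_PATTERNS = {
    "github": re.compile(r"https?://(?:www\.)?github\.com/[\w.-]+"),
    "devpost": re.compile(r"https?://(?:www\.)?devpost\.com/[\w.-]+"),
    "stackoverflow": re.compile(r"https?://(?:www\.)?stackoverflow\.com/users/\d+[\w/-]*"),
}

def find_profiles(text):
    """Return the first matching profile URL for each supported site."""
    found = {}
    for site, pattern in PROFILE_PATTERNS.items():
        match = pattern.search(text)
        if match:
            found[site] = match.group(0)
    return found

resume_text = "See https://github.com/octocat and https://stackoverflow.com/users/1234/jane"
profiles = find_profiles(resume_text)
```

Adding a new site then only means adding a pattern; redirect-style detection (following a link on one profile to find another) would sit on top of this extraction step.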
