“Karan is fantastic to work with and has exceptional expertise in JavaScript. He is self-motivated, forward-thinking, and intelligent. He has a positive attitude to work and is very independent.”
San Francisco, California, United States
1K followers
500+ connections
About
Experience & Education
-
Apple
******** ********
-
******** ****
****** ******** ********
-
******** *** ****
****** ********
-
*** ********* ** ******* *** ********** ********
******** ** ********** * ****** ******** *******
-
Publications
-
Learn Go
ISBN 979-8-9883975-4-0
Master the fundamentals and advanced features of the Go programming language. -
System Design
ISBN 979-8-9883975-8-8
Learn how to design systems at scale and prepare for system design interviews.
Patents
Languages
-
English
Full professional proficiency
-
Hindi
Native or bilingual proficiency
Recommendations received
2 people have recommended Karan Pratap
Explore more posts
-
Avish Mishra
Uber • 30K followers
One thing your manager will never tell you: your promotion depends less on your performance and more on whether your team is actually growing.
When I joined Amazon as an SDE-1, my team had 25 people. By the time I became Senior, it had 120+. Looking back, my “fast promotion” wasn’t magic. The org was expanding like crazy, and growth creates opportunity.
Here is the part people don’t like hearing:
❌ If your team hasn’t grown in years, your career won’t either.
❌ It doesn’t matter how good you are. There is nowhere to go.
❌ The only real path up becomes waiting for someone above you to leave.
So if you want to grow faster: choose teams that are growing, not teams that are comfortable.
Skills matter. Environment matters more.
1,039
20 Comments -
Sameer Bhardwaj
Layrs • 50K followers
You are in a system design interview at Amazon for an SDE-3 role and the interviewer has given you the question: Design Netflix. He then asks a follow-up: how does Netflix know when to show “Are you still watching?”
Is it just time-based? If it were time-based, you would see it every time. So what is actually going on under the hood? Both look like a simple pop-up. Underneath, it is a mix of product thinking, client logic, and backend events.
Btw, if you’re preparing for system design/coding interviews, check out our mock interview tool. You can use it for free here: https://lnkd.in/gpCn7t2T
[1] Start from the product goals
The feature is not only about nagging people.
- Save bandwidth and CDN cost if the viewer has fallen asleep or walked away
- Avoid autoplaying potentially sensitive content in an empty room
- Protect kids if parents start a show and leave the TV on
- Do all this without annoying active binge watchers
So the design must answer one question: “When is the user probably not here any more?”
[2] Naive design - pure timer
Simplest idea:
- If playback has been running for 2 hours, show “Are you still watching?”
- If the user clicks “Yes”, reset the timer
Why this is weak:
- Someone can binge 5 episodes in a row and get interrupted in the middle of an intense scene
- Someone can start a show, walk away after 5 minutes, and the platform will keep streaming for the next 2 hours
- It ignores how many episodes were autoplayed, device type, time of day, user habits
[3] Realistic design - session and engagement signals
Think in terms of a “watch session” and “engagement events”.
Signals the client can track:
- Play, pause, seek, volume change
- Episode finished and next episode auto-started
- Remote or keyboard input, UI navigation
- Screen on or off events from the device
- For mobiles: app backgrounded or locked state
A common heuristic:
- Count how many episodes have autoplayed without any user interaction
- Track how long it has been since the last button press or navigation
- Only trigger the prompt when both are high enough
Example rule:
- If 3 episodes in a row have autoplayed
- And there has been no interaction for 45 minutes
- Then, before starting episode 4, show “Are you still watching?”
This feels much smarter:
- Active viewers who pause, skip intros, or change volume keep resetting the “engagement clock”
- Sleeping viewers do not touch anything, so the next episode is blocked by the prompt
[4] Client-heavy vs server-heavy design
You can talk about two design choices.
Client-first:
- All logic runs inside the app on TV, mobile, or web
- The client keeps an in-memory session model and decides when to show the popup
- It still sends telemetry events to the backend for analytics and future tuning
Pros:
- Works even with a flaky network
- Highly responsive, no extra server round trip
Cons:
- Logic must be implemented and updated across many platforms
- Harder to quickly roll out rule changes
Continued ↓
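The example rule above boils down to a small piece of client-side state. A minimal Python sketch (the class name, fields, and thresholds are illustrative, not Netflix's actual client code):

```python
class WatchSession:
    """Tracks engagement for one playback session on the client."""

    def __init__(self):
        self.autoplayed_episodes = 0  # episodes started with no user input
        self.idle_seconds = 0.0      # time since the last interaction

    def on_user_interaction(self):
        # Any play/pause/seek/volume/remote press resets the engagement clock.
        self.autoplayed_episodes = 0
        self.idle_seconds = 0.0

    def on_episode_autoplay(self, episode_seconds):
        # Called when the next episode starts without user input.
        self.autoplayed_episodes += 1
        self.idle_seconds += episode_seconds

    def should_prompt(self, max_autoplays=3, max_idle_seconds=45 * 60):
        # Checked before auto-starting the next episode: prompt only when
        # BOTH signals say the viewer is probably gone.
        return (self.autoplayed_episodes >= max_autoplays
                and self.idle_seconds >= max_idle_seconds)
```

An active viewer who skips an intro or changes volume calls on_user_interaction() and keeps resetting both counters; a sleeping viewer trips both thresholds and the next episode is blocked by the prompt.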
184
12 Comments -
Arpit Bhayani
278K followers
When you join a new org or switch teams within the same one, it is quite an unsettling, borderline anxious experience. Here are a few things that I did to ramp up faster.
1. I consciously made an effort to remain unblocked
2. I read a ton of code and its history, sometimes even unrelated code
3. I asked questions about the past, the present, and the future
4. I took up relatively mundane tasks - cross-team work, tests, and docs
5. I extended a helping hand in whatever capacity I could
6. I proactively met skip-level leaders and managers to understand the vision
Some companies do have a culture of pairing new joiners with an existing team member, but even if you do not get a mentor, it is important that you still navigate the situation and ramp up as quickly as you can. The above list is not exhaustive by any means, so you can always add things that you find helpful in your context. But the lowest common denominator is to show extreme intent and interest in getting output and driving outcomes. Hope this helps.
995
25 Comments -
Shivansh Raheja
Luneblaze • 1K followers
The H-1B Shake-Up: A Turning Point for India's Tech Talent? The recent news of a potential US crackdown on the H-1B visa program, including a proposed $100,000 fee per visa, has sent ripples through the global tech community. For decades, the H-1B has been a key pathway for Indian professionals to work in the US. Now, that could all change. This raises a critical question: What does this mean for Indian talent and India's job market? Is this the beginning of a massive "brain gain" for India, or will our top talent simply choose other international destinations? The Dilemma: Return Home or Relocate? 🤔 Indian professionals facing these new hurdles have a tough choice to make. Arguments for returning to India: - Booming Tech Scene: India is no longer just a service provider. It's a hotbed of innovation with soaring opportunities. - Cultural & Family Ties: The pull of being closer to home is stronger than ever. Arguments for choosing other countries: - Global Competition: As the US tightens its policies, nations like Germany, Canada, and the UK are rolling out the red carpet, offering welcoming immigration policies and a high quality of life. Impact on India's Job Market: Challenge or Opportunity? 📈 This potential influx of highly skilled professionals could be a game-changer for India. Potential Positives: - Innovation Boost: A "reverse brain drain" could inject immense expertise into our economy, fueling startups and R&D. - GCC Expansion: US companies may expand their Global Capability Centers in India, creating more high-value jobs. Potential Challenges: - Increased Competition: A surge of talent could intensify competition for top-tier positions. - Infrastructure Strain: Our cities would need to adapt to support a large-scale return. This is a pivotal moment. The decisions made by thousands of Indian professionals in the coming months could reshape our nation's economic future. What are your thoughts? Will this be a watershed moment for India's tech landscape? 
👇 #H1B #H1BVisa #India #Tech #FutureOfWork #BrainGain #IndianEconomy #USImmigration #Jobseekers #SoftwareEngineer #Google #Amazon #MAANG
8
1 Comment -
Henry M.
Epic • 2K followers
Shantanu Narayen’s 18 year tenure as CEO at Adobe is a reminder of how rare long-term stewardship has become in Silicon Valley. During his tenure, the company’s stock increased more than 6x and its business model fundamentally changed. The most important decision of his time at the helm was moving Adobe from selling boxed software like Creative Suite to a subscription platform through Creative Cloud. At the time it was controversial. Today, it is considered one of the most successful SaaS transitions in the history of enterprise software, turning Photoshop, Illustrator, and Acrobat into one of the most profitable recurring software ecosystems ever built. The respect he commands across the industry says a lot about how difficult that transformation actually was. Microsoft CEO Satya Nadella even posted publicly congratulating Narayen on a “legendary run,” praising both his leadership and the way he expanded what creators and businesses could do with software. In an industry that is often intensely competitive, that kind of recognition from peers is not given lightly. There are only a few CEOs in the world who truly get to define one era of software and then help set the stage for the next. Narayen did it by turning Adobe into a SaaS powerhouse. However, the next leader will have to answer a harder question: what does a creative software company look like when AI becomes part of the creative process itself? #Adobe #Leadership #SaaS #AI #Technology
10
-
Patrick Tammer
Google • 7K followers
Micron, SK Hynix, and Samsung stocks are soaring but few understand why.
1. The structural reason:
- Memory, in particular High-Bandwidth Memory (HBM), has become crucial to run LLMs for billions of users
- Running LLMs is mostly memory-bandwidth-bound, not compute-bound
- During decode, GPUs spend more time fetching weights and KV cache than doing math, making HBM the primary bottleneck
2. The supply chain reason:
- As demand soared, the major players shifted production capacity to high-margin HBM
- That led to undersupply of other memory types (SRAM, DRAM) which are still needed for AI
What this means for…
1. Business leaders
Memory cost will drive up GPU pricing. Even if you don't buy chips, AI infra costs will likely rise as supply chain players pass on costs.
2. Entrepreneurs
After decades of silence, there is massive opportunity in innovating on memory. It's still overlooked by many, but we will soon see more high-valuation memory startups that will become attractive acquisition targets for the 3 big incumbents.
📷: FT
Did you find this helpful?
♻️ Repost this to inform your network
🔔 Follow me for more AI insights
🔖 Subscribe to my newsletter
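The decode claim can be sanity-checked with back-of-envelope arithmetic (all numbers below are rough illustrations, not vendor-exact specs): at batch size 1, every model weight must be streamed from memory once per generated token, so HBM bandwidth, not FLOPs, caps token throughput.

```python
# Hypothetical 70B-parameter model served in fp16 on an H100-class GPU.
params = 70e9                 # model parameters (assumed)
bytes_per_param = 2           # fp16/bf16 weights
hbm_bandwidth = 3.35e12       # bytes/s, approximate HBM3 figure

# Batch size 1, ignoring KV-cache traffic: one full weight read per token.
bytes_per_token = params * bytes_per_param
ceiling_tokens_per_sec = hbm_bandwidth / bytes_per_token

print(f"~{ceiling_tokens_per_sec:.0f} tokens/s memory-bandwidth ceiling")
```

Batching raises arithmetic intensity by reusing each weight read across many requests, which is exactly why serving stacks fight for larger effective batch sizes.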
15
4 Comments -
Parikh Jain
ProPeers • 177K followers
A candidate appearing for SDE-II interviews at Amazon was asked the LFU Cache hard LC problem during his coding interview. Another candidate appearing for Senior SWE at Uber was given this problem to solve during his coding interview.
1 failed the interview. 1 landed the offer. The difference? Pattern-based thinking.
Let me break it down for this exact question and how it works when you are under pressure, stressed, and the interviewer is watching.
Btw, this is the exact skill that I teach with my DSA-pattern Ecosystem; you can check it out here: https://lnkd.in/gqXTkSev
5100+ students are already using it. It has 50+ videos covering each pattern in detail and how to spot it, 250+ handpicked problems mapped to patterns, and you get AI-assisted feedback.
[1] Step 1: Decode what the problem is -really- asking
- You see: get, put, eviction, and the words "must run in O(1) average time".
- That should ring a bell: this is a cache design problem.
- Pattern in your head:
- O(1) lookup → HashMap for key to node.
- O(1) eviction according to some policy → Linked list or ordered buckets.
If your first thought is "I will scan all entries and find the minimum frequency", you have already lost. That is O(n).
[2] Step 2: Name the eviction policy pattern
- Here the policy is:
- Remove the Least Frequently Used.
- If tied, remove the Least Recently Used among those.
- That means we need to track two things at once: frequency and recency.
- Pattern:
- HashMap: key → node (value, freq, pointers).
- Another map: freq → doubly linked list of nodes with that frequency, ordered by recency.
- A variable minFreq to know which list to evict from.
Once you say this out loud, the interviewer already knows you are on the right track.
[3] Step 3: Walk through operations in your head
- get(key)
- If key not found, return -1.
- If found:
- Take the node out of the list for freq = f.
- Increase its freq to f + 1.
- Move it to the head of list f + 1.
- If the old list becomes empty and f was minFreq, bump minFreq.
- put(key, value)
- If capacity is 0, do nothing.
- If key already exists: update the value, then treat it as get(key) to increase freq.
- Else, new key:
- If the cache is full:
- Go to the list with minFreq.
- Evict the node at the -tail- of that list (LRU among LFU).
- Remove it from both maps.
- Insert the new node with freq 1 at the head of list 1.
- Set minFreq = 1.
Notice how every step is O(1) because you never loop over all keys.
[4] Step 4: How this looks in an interview under pressure
A candidate who fails usually:
- Jumps straight to code.
- Uses a single map plus a linear scan to find the LFU on every eviction.
- Realises near the end it is not O(1), gets stuck, starts patching.
A candidate who passes does this:
- Spends 3–4 minutes talking through the pattern.
- Draws: keyMap, freqMap, and a small example of minFreq changing.
- Only then writes code that is almost a direct translation of the dry run.
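The keyMap/freqMap/minFreq layout described above can be sketched in Python. Here an OrderedDict per frequency bucket stands in for the hand-rolled doubly linked list (one valid layout among several, not the only correct answer); every operation stays O(1):

```python
from collections import defaultdict, OrderedDict

class LFUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.key_map = {}                          # key -> (value, freq)
        self.freq_map = defaultdict(OrderedDict)   # freq -> keys, oldest first
        self.min_freq = 0

    def _bump(self, key):
        # Move key from its current frequency bucket to the next one.
        value, freq = self.key_map[key]
        del self.freq_map[freq][key]
        if not self.freq_map[freq]:
            del self.freq_map[freq]
            if self.min_freq == freq:
                self.min_freq = freq + 1
        self.freq_map[freq + 1][key] = None
        self.key_map[key] = (value, freq + 1)

    def get(self, key):
        if key not in self.key_map:
            return -1
        self._bump(key)
        return self.key_map[key][0]

    def put(self, key, value):
        if self.capacity == 0:
            return
        if key in self.key_map:
            self._bump(key)                      # treat as an access
            freq = self.key_map[key][1]
            self.key_map[key] = (value, freq)    # then update the value
            return
        if len(self.key_map) >= self.capacity:
            # Evict LRU among LFU: oldest key in the min_freq bucket.
            evict_key, _ = self.freq_map[self.min_freq].popitem(last=False)
            if not self.freq_map[self.min_freq]:
                del self.freq_map[self.min_freq]
            del self.key_map[evict_key]
        self.key_map[key] = (value, 1)
        self.freq_map[1][key] = None
        self.min_freq = 1
```

No step ever scans all keys: lookups are dict hits, and eviction pops the oldest entry of the min_freq bucket directly.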
139
9 Comments -
Michel Tu
Databricks • 21K followers
🚨 The 60-day grace period for H-1B workers who lost their job is no more.
Most software engineers moving to the US for a job do so on an H-1B visa – you can get an L1 (but then your stay is tied to the company) or an O1 (but that's fairly restrictive). In the past, if you lost your job, you could stay 60 days in the country to find another one. After that, you had to leave the country but could still come back if you got a new job and had your H-1B transferred (i.e., you don't need to go through the lottery again).
The issue now seems to be that the 60-day grace period is gone[1], so essentially you have to leave the country immediately once you're laid off.
Being an immigrant always comes with some uncertainty about your situation, but this wasn't too much of a problem in the past because:
- Tech companies were not doing layoffs
- The job market was in favor of engineers – it was easy to just find another job
- While 60 days is kind of short, it was still enough in many cases (or at least enough to sort out part of your personal situation)
This is a pretty big blow for the industry – while some (especially younger folks?) will still move to the US, fewer and fewer will be willing to do so (especially with no short path to a green card).
The situation is even worse for people who may now be overstaying because their 60-day grace period suddenly disappeared, and who may not be able to transfer their H-1B visa afterwards 🙁
73
20 Comments -
Ashish Kumar Singh
Glance • 2K followers
When Claude Code Joined Our Team in the Middle of a Tech Scavenger Hunt
Yesterday, our randomly assembled team of four engineers joined an intense tech scavenger hunt at InMobi Encore2025 - packed with coding challenges and technical riddles.
The team: me (Staff Engineer – Frontend), an SDE2 – ML, an SDE3 – Backend, and another Staff Engineer – Backend.
One puzzle stopped us cold: two giant parquet datasets (~750K rows each) hiding a riddle about bids, floors, installs, and a “final pattern” buried deep inside. My teammates? Instantly spinning up Databricks, notebooks, pandas. Me? No relevant setup. No DS tools. Just sitting there, watching the clock tick, feeling like dead weight.
💡 The Lightbulb Moment
“Why not try Claude Code?” I dropped the datasets and riddle in with a few more lines of instructions. And while we were still stuck on the first step, Claude tore through:
- Filtering down to iOS/Android
- Checking bid floors
- Finding the winning ads (render + click + install)
- Merging on impression_id
- Revealing a Fibonacci sequence in time_to_install
Answer: 34. “Guys, just enter 34.” Suspicious stares. A pause. Then… Correct!
That single AI-powered leap saved us 10-15 minutes and flipped the game. From that point on, Claude became our fifth teammate.
Takeaways:
- In tech, the boldest move is often trying what looks unconventional
- AI can be a genuine partner in problem-solving
- Resourcefulness > perfect setup
- Sometimes, success in uncharted territory comes from being willing to try what others dismiss
I may not have written a line of pandas, but being open to AI meant I could unlock value my team didn’t expect.
PS: We didn’t win the contest, but we came very close, only to be tripped up in the last round, which required us to actually run around the office for the answer. Turns out, AI has its limits 😉 #AI #Engineering #ProblemSolving #Teamwork
30
2 Comments -
Parag K. Goyal
Oracle • 3K followers
Stop treating Load Balancers and Reverse Proxies as the same thing. In System Design interviews (and production incidents), the distinction matters more than you think. As we move from SDE1 to SDE2, we stop asking "How do I make this code work?" and start asking "How does this system scale and survive failure?" Here is the deep dive on two critical components that often get conflated. 1. The Load Balancer (The Scaler) Primary Goal: Availability & Horizontal Scaling. The Job: Spreading traffic across multiple compute resources to eliminate Single Points of Failure (SPOF). The SDE2 Nuance: You need to know the difference between Layer 4 (Transport) and Layer 7 (Application) balancing. L4 is fast; it forwards packets based on IP/Port without looking inside. L7 is smart; it terminates the connection, reads the HTTP headers/path, and routes /api differently than /images. 2. The Reverse Proxy (The Shield) Primary Goal: Security, Unification & Offloading. The Job: Sitting in front of backend servers to hide their topology and IP addresses. The SDE2 Nuance: SSL Termination. Decrypting HTTPS handshakes is CPU-intensive. A Reverse Proxy handles this heavy lifting at the edge, allowing your backend application servers to focus purely on business logic (and communicate via HTTP inside the private VPC). The Reality Check 💡 In modern architecture (and cloud environments like AWS ALB or tools like NGINX), the lines blur. We often use a single component to perform both roles simultaneously—terminating SSL (Reverse Proxy) and then distributing the request to a pool of instances (Load Balancer). But knowing which function you are tuning—and why—is what separates a mid-level engineer from a senior one. Questions for the network: Do you prefer handling SSL termination at the Load Balancer level or strictly on the application server for end-to-end encryption? #SystemDesign #SDE2 #SoftwareEngineering #Scalability #DistributedSystems #DevOps
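The "lines blur" point is easy to see in a single NGINX config: one server block terminates SSL (the reverse-proxy role) and distributes requests across a pool with L7 path routing (the load-balancer role). A hedged sketch with placeholder hostnames, IPs, and certificate paths:

```nginx
# Upstream pools the balancer spreads traffic over (placeholder hosts).
upstream api_pool    { server 10.0.1.10:8080; server 10.0.1.11:8080; }
upstream static_pool { server 10.0.2.10:8080; }

server {
    listen 443 ssl;
    server_name example.com;

    # Reverse-proxy role: SSL is terminated here, so backends inside the
    # private network speak plain HTTP and skip the CPU-heavy handshakes.
    ssl_certificate     /etc/nginx/certs/example.com.pem;
    ssl_certificate_key /etc/nginx/certs/example.com.key;

    # L7 (application-layer) routing: the request path picks the pool.
    location /api/    { proxy_pass http://api_pool; }
    location /images/ { proxy_pass http://static_pool; }
}
```

A pure L4 balancer, by contrast, would forward TCP segments by IP and port without ever decrypting the stream or reading the path.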
66
1 Comment -
Sai Krishna Saurabh Nadupuri
Amazon Web Services (AWS) • 2K followers
Ever wonder why we never went back to waving at street cabs after using Uber, Lyft or Ola - even with surge pricing, delays, or app glitches? That’s Delta 4 Theory, and discovering it through Kunal Shah completely rewired how I think about product value. While diving deep on ride-hailing vs traditional cabs, I realized something simple but powerful: great products don’t just improve experiences - they make the old way feel impossible. Delta 4 gave me the clarity to cut through noisy hypotheses and focus on what truly drives product-market fit: irreversibility, deep user benefit, and even loyalty in the face of flaws. If your users can’t imagine life before your product, you’re onto something. #PMF #productmanagement #innovation #Leadership #SaaS
8
-
Sanchit Narula
Nielsen • 38K followers
I’ve interviewed 500+ candidates in the last 7 years of my journey at Amazon, Cars24, and now Nielsen… (150+ in the last year alone).
This is the best guide I can give you on how to pass technical interviews and what 90% of candidates forget while interviewing. (Btw, I also used this to switch jobs 3 times in my career.)
[1] Don’t rush to code, talk through the problem first
Many candidates fail because they panic and try to impress with fast coding, but the right move is to slow down and talk out loud. Break the problem into conditions; use a whiteboard or pseudocode if needed. Even a simple question becomes easy once you write the logic in plain English.
[2] Think before you type
Take your time to understand the requirements before you touch the keyboard. Rushing leads to bugs. Thoughtful implementation shows maturity. Even if you know the answer, show structured thinking and clear decision-making.
[3] Always explain your choices, even if they’re wrong
Interviews are less about the correct answer and more about your thought process. Prefer saying something logical to staying silent. For example: “I’m using a regular for loop here because it gives me better control.”
[4] Practice talking while solving
Most people freeze not because they lack skill but because they don’t rehearse speaking + thinking. Get used to explaining your logic out loud while coding. Follow the rule: ABC – Always Be Chatting (not just coding).
[5] Don’t sit in silence when you're stuck
Silence kills momentum. If you get stuck, ask questions: “Should I print this line-by-line or return a result array?” Showing that you can stay calm under pressure matters more than getting it perfect.
[6] Prefer readable, scalable solutions over clever tricks
Avoid language-specific shortcuts (e.g., JS type coercion). Use modulo instead of overcomplicating divisibility logic. Interviewers prefer clean, maintainable code that works across languages.
[7] Optimize code for simplicity and future changes
If new conditions are added (like printing something for multiples of 7), your code should adapt easily. Use a variable-based solution instead of nested `if-else` trees. Think like an engineer, not just a coder.
[8] Know your basics (like Big-O)
Be ready to talk about time complexity. Even if the problem is trivial, say something like: “We’re iterating 100 times, so technically O(n), but since it's a fixed 100, it simplifies to O(1).”
Technical interviews are not just a test of memory or speed. They’re also a test of clarity under pressure, structured thinking, and communication.
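The "variable-based solution" in point [7] can be sketched for the classic divisibility question (the rule words here are made up for illustration): adding multiples of 7 becomes a one-line data change, not another if-else branch.

```python
# Divisor -> word, in the order the words should be concatenated.
RULES = [(3, "Fizz"), (5, "Buzz")]

def label(n, rules=RULES):
    """Concatenate the word for every divisor of n; fall back to the number."""
    words = "".join(word for divisor, word in rules if n % divisor == 0)
    return words or str(n)

# Extending for multiples of 7 touches only the data, not the logic:
EXTENDED = RULES + [(7, "Bazz")]
```

With this shape, `label(15)` gives "FizzBuzz" and `label(21, EXTENDED)` gives "FizzBazz" without any change to the loop itself.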
138
11 Comments -
Upasana Singh
Flipkart • 111K followers
Just heard Amazon Prime laid off so many employees because they shipped faster and automated things with AI. This scares me, but at the same time I feel there's a huge gap between how the managerial side thinks about AI and the reality of AI:
1. AI writes the code but fails at quality; I need to spend time to get a good final version
2. Doing LLDs with AI is too hectic; it skips what it doesn't understand or comes up with a solution that isn't optimised
3. HLDs aren't even AI's cup of tea yet
And this is just the coding part; there's a lot more that we do as engineers.
So, is it the managers' illusion that is taking away jobs? Whatever it is, someone needs to wake these companies up to reality and show how important engineers are for the company. I'm done with hearing that AI can replace engineers.
192
9 Comments -
Navneet Anand
30K followers
I do not really post a lot about tech stuff, but maybe I can start. Here is a paper I read recently: ROSE: Robust Caches for Amazon Product Search [https://lnkd.in/gPhbgHHQ]. Thanks to Arpit Bhayani's paper shelf recommendations.
The insights from this are really fascinating for the search world and for how Amazon always knows how to find what you are looking for, even if you have clumsy fingers like mine. For example, if you are looking for shoes, you might type something like: “nike shoe,” “nike shoes,” “nikes shooes.” A normal cache treats these as different queries, so it gets big and slow, and you miss the cache a lot. Amazon’s ROSE fixes this by caching the intent, not the exact spelling.
ROSE is essentially a typo/variant-tolerant cache for product search that groups similar queries into the same “bucket,” so most lookups hit fast without growing memory. It’s deployed in Amazon Search and improved both latency (single-digit ms for most traffic) and business metrics.
How this works is even more fascinating:
1. Locality-Sensitive Hashing: similar queries collide in the same bucket.
• Lexical-preserving hashing (character n-grams + minhash) for typos/variants.
• Product-type-preserving hashing (weighted minhash) to keep the category intent (e.g., “dishwasher” vs. “dishwasher parts”).
2. Reservoir sampling: caps each bucket so memory stays constant even as queries grow. (Super convenient, and one of the many engineering gotchas people might miss.)
3. Count-based k-selection: avoids pairwise similarity inside buckets; just count collisions across hash tables → near constant-time retrieval.
Where we can apply some of these concepts with Gen-AI (which is, in many flavors, essentially a search problem with extra steps):
Semantic prompt cache: group paraphrased prompts to reuse an answer/logits/KV cache even when the text isn’t identical. Can be helpful in saving your tokens and $.
RAG request cache: map similar user questions to one retrieval plan to cut vector searches + reranking cost. [I remember Azure AI Search having a RAG cache, but I am not sure how that worked]
Tool/agent call cache: dedupe near-duplicate function/tool invocations (e.g., the same API call phrased differently).
Eval & feedback loops: bucket similar generations or errors to reuse critiques/patches.
Support/search front-ends to LLMs: typo-tolerant, intent-stable pre-rewrites before hitting the model, ensuring we only spend tokens when absolutely needed.
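The lexical-preserving hashing idea (character n-grams + MinHash across several tables, then count collisions) can be sketched in Python. This is a toy reconstruction from the post's description, not ROSE's production code; the hash function, seeds, n-gram size, and thresholds are all arbitrary choices:

```python
import hashlib
from collections import Counter

def ngrams(query, n=3):
    padded = f"^{query.lower()}$"
    return {padded[i:i + n] for i in range(len(padded) - n + 1)}

def minhash(grams, seed):
    # One table = one seeded hash function; two queries agree on the
    # minimum with probability equal to their n-gram Jaccard similarity.
    return min(hashlib.md5(f"{seed}:{g}".encode()).hexdigest() for g in grams)

class TypoTolerantCache:
    def __init__(self, num_tables=8, min_matches=2):
        self.tables = [{} for _ in range(num_tables)]
        self.min_matches = min_matches  # count-based k-selection threshold

    def put(self, query, results):
        for seed, table in enumerate(self.tables):
            table[minhash(ngrams(query), seed)] = results

    def get(self, query):
        hits = Counter()
        for seed, table in enumerate(self.tables):
            entry = table.get(minhash(ngrams(query), seed))
            if entry is not None:
                hits[entry] += 1
        if hits:
            results, count = hits.most_common(1)[0]
            if count >= self.min_matches:
                return results
        return None
```

A query like "nike shooes" shares most of its trigrams with "nike shoes", so it usually collides in enough tables to hit the cached entry, while "coffee maker" shares none and misses cleanly.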
174
6 Comments -
Hameer Singh
HomeLane • 368 followers
🚀 Friday Night + Tech Chat = System Design Deep Dive Yesterday was Friday — and you know what that means for most of us tech folks 😅 Weekend mode kinda kicks in right after dinner. So, after finishing dinner, I called up a friend. We started chatting about his new role (he joined a company around two months ago), and somehow… our conversation drifted into system design — specifically how people actually approach it during interviews. He mentioned one of his interview questions: “Design a URL shortener — like Bitly or TinyURL.” Sounds simple, right? But the more you dig into it, the deeper it gets. ⚙️ 🧩 Step 1: Define the Problem Clearly Before diving into design, we outlined what we’re actually building — Scale: how many URLs per second? Traffic: expected read vs write ratio? Data size: how many URLs stored per year? Reliability: what happens if a node fails? 🔐 Step 2: Designing the Key Generation Logic We debated a few approaches: Hashing (MD5, SHA-256, SHA-512) — good for uniqueness, but MD5 collisions and long hashes make it tricky. Timestamp-based keys — simple and fast, but not collision-proof under high concurrency. Incremental counters — compact and sequential, but need distributed coordination. ⚙️ Step 3: High-Level Implementation (Best Practical Approach) To make it production-ready and scalable, here’s a solid architecture outline: API Layer — handles shorten/expand requests. Key Generator Service — uses a Base62-encoded global counter stored in a distributed key-value store (like Redis or Zookeeper) to ensure uniqueness and shortness. Storage Layer — Use NoSQL (e.g., Cassandra, DynamoDB) for horizontal scalability. Maintain an index on the original URL to avoid duplicates. Caching Layer — Use Redis or Memcached for ultra-fast lookups of popular links. Sharding Strategy — Hash-based sharding on the short URL key for even data distribution. Redundancy & Fault Tolerance — Replicate data across multiple nodes or regions for high availability. 
Analytics & Expiry — Track usage stats and periodically remove expired or inactive URLs. This approach balances simplicity, speed, and horizontal scalability while keeping collisions practically impossible. 💪 🧠 Step 4: Scaling & Optimization Thoughts We also discussed how to: Handle lookups efficiently (O(1) via key-value access). Detect duplicate URLs without a full DB scan. Manage concurrent key generation in distributed systems. By the end of the night, it didn’t feel like a casual chat anymore — it turned into a mini architecture design session 🔍 Crazy how something that looks tiny (like shortening URLs) can open up so many architectural rabbit holes 🕳️ Might actually build a small version next weekend — just to experiment with caching, hashing, and sharding properly 😏 #SystemDesign #BackendEngineering #DistributedSystems #Scalability #URLShortener #LearningByBuilding #TechTalk
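The Key Generator Service step above can be sketched: take the next value from a global counter (a Redis INCR, say; the counter source is assumed here and replaced by a plain integer) and Base62-encode it into a short code.

```python
ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

def base62(n):
    """Encode a non-negative counter value as a short Base62 string."""
    if n == 0:
        return ALPHABET[0]
    digits = []
    while n:
        n, rem = divmod(n, 62)
        digits.append(ALPHABET[rem])
    return "".join(reversed(digits))

# A 7-character code already covers 62**7 ≈ 3.5 trillion distinct URLs.
```

Because each counter value is unique, codes never collide; if sequential codes feel too guessable, a random offset or a bijective scramble of the counter fixes that without losing uniqueness.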
6
-
Keshav Kolur
Meta • 5K followers
A Reddit thread made me invest in 108 apartment units—and start my own company. I have been a software engineer at Meta for four years now; I started my career in 2019. From day one, I wanted to get on the right track and build a career that could create generational wealth for my family. I’m grateful for a path that has taught me discipline, given me mentors, and helped me save. In 2019, I read Rich Dad Poor Dad and thought I’d build a small single-family rental portfolio. I even earned my realtor’s license on Leap Day 2020 to learn the fundamentals and stack a little extra cash. Then a friend sent me a real-estate Reddit thread about syndications. I didn’t know anyone doing it and it sounded too good to be true, so I went deep. For a year I underwrote deals after work, called operators and property managers, asked existing LPs what went wrong, and tried to poke holes in the model. In August 2021, I wired into my first deal: 108 units in Texas. When the first distributions landed and depreciation showed up at tax time, it clicked—scale, professional management, and diversification made more sense for my goals than buying one door at a time. That experience became the blueprint for Clive Capital. I started Clive to help family, friends, and other busy tech professionals co-invest the way I wished I’d known from day one—partnering with strong operators, diversifying across markets and asset types, and treating diligence like a first principle. Since then, we’ve expanded beyond multifamily into other assets, with investors spread across the country. If you’re where I was, curious but cautious, this episode walks through my exact process and what I’d repeat today. 🎧 Full story + playbook: https://lnkd.in/e24ZHhHA
Kartik Aggarwal
Microsoft • 17K followers
🔥 “𝗬𝗼𝘂 𝗱𝗼𝗻’𝘁 𝗼𝘄𝗻 𝗿𝗲𝘃𝗲𝗻𝘂𝗲.” That one line almost broke me early in my PM career.

I ran a '𝗰𝘂𝘀𝘁𝗼𝗺 𝗔𝗜/𝗠𝗟 𝘀𝘆𝘀𝘁𝗲𝗺 𝗱𝗲𝘃𝗲𝗹𝗼𝗽𝗺𝗲𝗻𝘁 𝗽𝗿𝗼𝗴𝗿𝗮𝗺' that required collaboration across vendors, the engineering team, and cross-functional owners. On paper, I was unblocking bottlenecks, driving execution, and pushing delivery forward. But when it was time to demonstrate impact, leadership’s message was blunt: “𝗢𝗽𝘀 𝘄𝗼𝗿𝗸 𝗱𝗼𝗲𝘀𝗻’𝘁 𝗺𝗼𝘃𝗲 𝘁𝗵𝗲 𝗻𝗲𝗲𝗱𝗹𝗲.”

But one project turned that thinking on its head. 💡 Instead of stopping at technical fixes, I partnered with 𝙛𝙞𝙣𝙖𝙣𝙘𝙚. We dug into the ripple effects:
1. Vendor delays had cost the company its holiday launch window.
2. Suboptimal model optimization was inflating BOM costs.
3. Rework cycles were quietly draining budget allocations.

The numbers were eye-opening:
• 18% decrease in rework expenses through process improvements at the vendor level
• 3 weeks shaved off time-to-market, securing a seasonal launch
• More than $20M in revenue tied directly to linking operational activities to business results

That project didn’t just deliver a technical win — it reframed how leadership viewed product execution. Suddenly, “ops” wasn’t background noise; it was a 𝗽𝗿𝗼𝗳𝗶𝘁 𝗹𝗲𝘃𝗲𝗿. The recognition (and promotion) I received came from showing how 𝗰𝘂𝘀𝘁𝗼𝗺 𝗔𝗜/𝗠𝗟 𝘀𝘆𝘀𝘁𝗲𝗺 𝗲𝘅𝗲𝗰𝘂𝘁𝗶𝗼𝗻 = 𝗯𝘂𝘀𝗶𝗻𝗲𝘀𝘀 𝘃𝗮𝗹𝘂𝗲.

The real lesson? 👉 PMs and TPMs don’t just own delivery; we engineer ROI through trust, partnerships, and execution that protects revenue and margin.

I will always be thankful to my coach Shilpa Kulshrestha-Murdering Mediocrity, who made me look at delivery metrics from a business perspective.

If you’re a PM, TPM, or tech leader who has ever felt invisible in terms of business impact: connect process → efficiency → revenue. That’s where the hidden ROI lives.
I have created slots on 𝗧𝗼𝗽𝗺𝗮𝘁𝗲 to assist others in creating their own impact story by converting “behind-the-scenes work” into career-defining ROI. Let’s connect, link: https://lnkd.in/gQN3KvYn
Arshad Siddieque
Wolters Kluwer • 5K followers
Silicon Valley was built on Indian talent 🇮🇳, and now it’s our turn to create that ecosystem here. Talent alone isn’t enough — India needs stronger R&D funding, better AI infrastructure, and the courage to innovate beyond just copying models. If we get the intent right, “Silicon Valley Bharat” won’t just be a dream — it can be our reality. 🚀
Sanyam Sareen
Sareen Career Coaching • 25K followers
Here’s exactly how I would crack a $150K+ SWE job at Microsoft in 6 months. (A real strategy my client used to land interviews at Microsoft, Meta, and Stripe)

Too many engineers prepare hard to crack MAANG. But they lack a solid strategy. Here’s the exact roadmap I’d follow if I wanted to land a SWE job at Microsoft.

Step 1: Resume + Role Clarity
→ Reverse-engineer the JD. Study at least 10+ Microsoft SWE job listings. Highlight recurring keywords, must-haves, and preferred tools.
→ Rewrite your resume like a product pitch. Show measurable impact:
Good: “Improved load time by 43%”
Bad: “Worked on performance optimization.”
→ Make it keyword-optimized for ATS without sounding robotic.

Step 2: Master DSA the Microsoft Way
→ Focus on patterns, not problems. Microsoft LOVES:
• Sliding window
• Trees (esp. DFS/BFS)
• Graphs
• Dynamic programming
→ Suggested platforms: LeetCode (Microsoft tag), Neetcode(dot)io roadmap, Grokking series on Educative
→ 2 problems/day + 1 mock interview/week = compounding prep

Step 3: System Design (yes, even for junior roles)
→ Start with High-Level Design (HLD): learn how to design APIs, caching, rate limiting, and DB scaling.
→ Build a project where you actually implement what you learn.

Step 4: Microsoft-Specific Behavioral Prep
→ Microsoft uses structured behavioral interviews. Focus on the “3As” framework: Action → Approach → Aftermath
→ Use real projects to show collaboration, adaptability, customer focus, and engineering rigor.
→ Prep 8–10 STAR stories mapped to their values.

Step 5: Mock + Real-World Practice
→ Do 4–5 peer or mentor-led mock interviews (especially for behavioral + design).
→ Record yourself. Watch for filler words, unstructured answers, and missing metrics.
→ Apply for 5 roles/week (Microsoft plus similar-sized companies) to get into interview flow.

Step 6: Apply Strategically + Use Referrals
→ Connect with 3–5 engineers/recruiters/week on LinkedIn. Don’t ask for a referral right away - engage first.
→ Reach out using tailored messages like: “Hi [Name], I’ve been preparing for an SWE role at Microsoft. Your journey from [X] to [Microsoft] really stood out. If you’re open, I’d love to learn more about your experience.”
→ Submit applications using referrals whenever possible + 1-click apply where relevant.

The client who followed this playbook now works at Microsoft Azure and had 3 competing offers before accepting. It’s not about doing everything. It’s about doing the right things in the right order. Give yourself 6 months and follow this roadmap to make it a reality.

Share this with someone who dreams of working at Microsoft.

P.S. Follow me if you are a tech job seeker in the U.S. I share practical advice that gets you hired.

Additional resources: https://lnkd.in/eKxQmYtP. https://lnkd.in/gwRWpXR9. https://lnkd.in/g94_Cziv
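To make the “patterns, not problems” advice in Step 2 concrete, here is the sliding-window pattern applied to a classic interview exercise (longest substring without repeating characters). This is just an illustrative sketch of the pattern, not a question attributed to Microsoft:

```python
def longest_unique_substring(s: str) -> int:
    """Length of the longest substring of s with no repeated characters.

    Sliding window: expand the right edge one character at a time and
    jump the left edge forward past any duplicate, so each character is
    visited at most twice -> O(n) time, O(min(n, alphabet)) space.
    """
    last_seen = {}  # char -> most recent index where it appeared
    start = 0       # left edge of the current window
    best = 0
    for i, ch in enumerate(s):
        if ch in last_seen and last_seen[ch] >= start:
            start = last_seen[ch] + 1  # shrink window past the duplicate
        last_seen[ch] = i
        best = max(best, i - start + 1)
    return best
```

The same expand/shrink skeleton (two pointers plus a small amount of window state) covers most sliding-window problems; only the window-validity condition changes between questions.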
Deepak Pal Singh
LTI - Larsen & Toubro Infotech • 478 followers
Is the "American Dream" on a permanent waitlist? 🇺🇸

For decades, the H-1B program has been the engine room of US tech innovation. But today, that engine is stalling. With visa interview dates in India now stretching into 2027, we aren't just looking at a "paperwork delay"—we’re looking at a massive talent drain.

The Reality for US Firms:
• Project Paralysis: Skilled engineers are stuck in "administrative processing," leaving critical AI and infrastructure projects leaderless.
• The $100k Hurdle: New fee structures are making it harder for mid-sized firms to compete for global talent.
• Brain Drain: While we wait for slots to open, other tech hubs (Canada, UAE, Europe) are rolling out the red carpet.

Innovation doesn't wait for an interview slot. When we strand 70% of our H-1B workforce behind a backlog, we don't just lose employees—we lose our competitive edge.

It’s time for a conversation: How do we balance national security vetting with the urgent need for the talent that keeps American tech at the top?

👇 I’d love to hear from other leaders: How is your 2026 hiring roadmap shifting due to these delays?

#H1B #USVisa #TechInnovation #TalentAcquisition #ImmigrationReform #FutureOfWork