<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[CafeAffe]]></title><description><![CDATA[Ramblings on programming, computer science, life, maths and culture.]]></description><link>https://cafeaffe.substack.com</link><image><url>https://substackcdn.com/image/fetch/$s_!KgjP!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb27ef4f-a07d-4376-bdbd-235011a9b215_144x144.png</url><title>CafeAffe</title><link>https://cafeaffe.substack.com</link></image><generator>Substack</generator><lastBuildDate>Fri, 10 Apr 2026 08:33:04 GMT</lastBuildDate><atom:link href="https://cafeaffe.substack.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Gagan]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[cafeaffe@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[cafeaffe@substack.com]]></itunes:email><itunes:name><![CDATA[Gagan]]></itunes:name></itunes:owner><itunes:author><![CDATA[Gagan]]></itunes:author><googleplay:owner><![CDATA[cafeaffe@substack.com]]></googleplay:owner><googleplay:email><![CDATA[cafeaffe@substack.com]]></googleplay:email><googleplay:author><![CDATA[Gagan]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Hibernate, Flush Mode, and Broken Invoice Numbers]]></title><description><![CDATA[A real-world debugging story from production, and a lesson in why database sequences and ORM behavior don&#8217;t always mix.]]></description><link>https://cafeaffe.substack.com/p/hibernate-flush-mode-and-broken-invoice</link><guid 
isPermaLink="false">https://cafeaffe.substack.com/p/hibernate-flush-mode-and-broken-invoice</guid><dc:creator><![CDATA[Gagan]]></dc:creator><pubDate>Wed, 31 Dec 2025 04:02:17 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!KgjP!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb27ef4f-a07d-4376-bdbd-235011a9b215_144x144.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>(This was something I debugged a decade ago, in Seller Platform Services at <a href="http://flipkart.com">Flipkart</a>. I don&#8217;t exactly remember the Hibernate version or detailed configurations, like isolation levels. The below is the best I could do with my memory.)</p><h2>The Problem</h2><p>We discovered that <strong>invoice numbers were being generated out of sequence</strong> for shipments containing multiple orders.</p><p>Instead of clean sequences like:</p><pre><code>inv-1, inv-2, inv-3</code></pre><p>we occasionally saw results such as:</p><pre><code>inv-1, inv-2, inv-125574</code></pre><p>This only happened for <strong>shipments with multiple orders</strong>, and the pattern was inconsistent across environments, sometimes all invoices were wrong, sometimes only one, and sometimes none at all.</p><h2>How Invoice Numbers Were Generated</h2><p>At the time, invoice numbers were generated using a custom sequence table and MySQL&#8217;s <code>LAST_INSERT_ID()</code> function.</p><p>The flow looked like this:</p><ol><li><p>Increment the sequence:</p><pre><code>update invoice_sequence
set next_value = LAST_INSERT_ID(next_value) + 1
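-- LAST_INSERT_ID(expr) returns expr and also remembers it as this
-- connection's "last insert id", so a follow-up SELECT LAST_INSERT_ID()
-- can read it back, provided no other insert runs in between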
where id = :id;</code></pre></li><li><p>Fetch the generated value:</p><pre><code>select LAST_INSERT_ID();</code></pre></li><li><p>Insert the invoice number into <code>InvoiceNumber</code></p></li><li><p>Insert an outbound message into another table (to notify other consumers of the invoice event)</p></li></ol><h3>The Key Assumption</h3><p>This logic <strong>only works if steps 1 and 2 execute atomically</strong>, i.e., with no inserts happening in between. Otherwise, <code>LAST_INSERT_ID()</code> can change.</p><h2>What Went Wrong</h2><h3>Hibernate Flush Behavior</h3><p>For shipments with multiple orders, this logic ran <strong>inside a loop</strong>.</p><p>Hibernate (JPA), for performance reasons, <strong>does not immediately execute insert statements</strong>. Instead, it queues them and flushes later. Either:</p><ul><li><p>at transaction commit, or</p></li><li><p>automatically before executing certain queries.</p></li></ul><p>This led to a subtle but fatal interaction.</p><h3>Why <code>LAST_INSERT_ID()</code> Broke</h3><p>In MySQL:</p><ul><li><p>If an insert happens on a table with an <code>AUTO_INCREMENT</code> column, it <strong>overwrites</strong> <code>LAST_INSERT_ID()</code>.</p></li><li><p>If the table has no auto-increment, <code>LAST_INSERT_ID()</code> becomes <code>0</code>.</p></li></ul><p>Hibernate sometimes flushed pending inserts <strong>between</strong>:</p><ul><li><p>the sequence update, and</p></li><li><p>the <code>select LAST_INSERT_ID()</code> call.</p></li></ul><p>When that happened, the invoice number suddenly came from an <strong>entirely different table</strong>.</p><h2>Observed Behavior in Different Environments</h2><p>Because Hibernate&#8217;s default flush mode is <code>AUTO</code>, the timing was unpredictable.</p><p>We observed several patterns:</p><h3>a) Local Environment</h3><p>For a shipment with 3 orders:</p><ul><li><p>The first invoice was correct</p></li><li><p>The next two were wrong</p></li></ul><p>Hibernate flushed inserts right 
before the next iteration&#8217;s <code>select LAST_INSERT_ID()</code>, corrupting the value.</p><h3>b) Production &#8211; Flush Only at Commit</h3><p>Some shipments flushed only when the transaction closed.</p><ul><li><p>No errors occurred</p></li></ul><h3>c) Flush at the Last Order</h3><p>Hibernate flushed inserts only during the final iteration.</p><ul><li><p>Only the last invoice was corrupted; e.g., with 10 orders, only the 10th invoice was wrong</p></li></ul><h3>d) Flush Mid-Loop</h3><p>Hibernate flushed somewhere in the middle.</p><ul><li><p>One random order was corrupted</p></li><li><p>Example: 4 orders, 2nd one corrupted</p></li></ul><p>There was <strong>no fixed pattern</strong>.</p><h2>Why This Happened</h2><p>Hibernate always flushes before executing a <strong>native query</strong>.</p><p>Since <code>select LAST_INSERT_ID()</code> is a native query, Hibernate triggered a flush&#8212;but <strong>only if there were pending inserts</strong>.</p><p>This behavior is documented and consistent with Hibernate&#8217;s implementation:</p><ul><li><p>Default flush mode: <code>AUTO</code></p></li><li><p>Meaning: Hibernate may flush &#8220;sometimes&#8221;</p></li></ul><h2>Impact</h2><ul><li><p>Shipments with multiple orders generated <strong>invalid invoice numbers</strong></p></li><li><p>Invoice sequences jumped unexpectedly</p></li><li><p>Downstream systems were affected due to incorrect invoice references</p></li></ul><h2>The Fix</h2><p>We made two key changes:</p><h3>1. Removed <code>LAST_INSERT_ID()</code> Entirely</h3><ul><li><p>Replaced the update/select logic with a <code>SELECT &#8230; FOR UPDATE</code> on the sequence row</p></li><li><p>Ensured correctness without relying on session-level side effects</p></li></ul><h3>2. 
Generate Invoice Numbers in One Shot</h3><ul><li><p>Instead of generating numbers inside a loop</p></li><li><p>We queried and reserved all required invoice numbers <strong>once per shipment</strong></p></li></ul><p>This eliminated flush-related timing issues completely.</p><h2>Why Tests Didn&#8217;t Catch It</h2><ul><li><p>Unit tests used <strong>mocked databases</strong></p></li><li><p>Mock DBs don&#8217;t replicate:</p><ul><li><p><code>LAST_INSERT_ID()</code> semantics</p></li><li><p>Hibernate flush timing</p></li></ul></li><li><p>The issue was <strong>non-deterministic</strong>, making it nearly impossible to catch with small test runs</p></li><li><p>Integration tests never covered shipments with multiple orders</p></li></ul><h2>Lessons Learned</h2><ol><li><p><strong>Never rely on </strong><code>LAST_INSERT_ID()</code><strong> across multiple statements unless they are truly atomic</strong></p></li><li><p><strong>ORM flush behavior matters</strong>, especially with native queries</p></li><li><p><strong>Mocked DB tests are insufficient</strong> for sequence and concurrency bugs</p></li><li><p>Prefer <strong>explicit locking (</strong><code>FOR UPDATE</code><strong>) or database-native sequences</strong></p></li><li><p>Generate identifiers <strong>outside loops whenever possible</strong></p></li><li><p>Run integration tests that cover all realistic scenarios, including multi-order shipments</p></li></ol>]]></content:encoded></item><item><title><![CDATA[Things Just Changed Again, Thanks to AI]]></title><description><![CDATA[Yet another turning point]]></description><link>https://cafeaffe.substack.com/p/things-just-changed-again-thanks</link><guid isPermaLink="false">https://cafeaffe.substack.com/p/things-just-changed-again-thanks</guid><dc:creator><![CDATA[Gagan]]></dc:creator><pubDate>Thu, 27 Nov 2025 02:10:40 GMT</pubDate><enclosure 
url="https://substackcdn.com/image/fetch/$s_!KgjP!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb27ef4f-a07d-4376-bdbd-235011a9b215_144x144.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We are at a turning point. Maybe we should be proud of it? I don&#8217;t know. To be honest, there are only a few times in a lifetime when one experiences such turning points.</p><p>Back in the day, say the 1990s, the internet and the free flow of information were the turning point. Within a decade, every office and bureaucracy was digitized. Cyber-cafes popped up everywhere. Where they didn&#8217;t, dial-up modems came in handy. Everyone got their first embarrassing emails. It fundamentally changed how people lived and did their work.</p><p>Exams were held on computers, results came in over the internet. Banks were digitized. Govt. offices moved to Microsoft Office. Windows was in everyone&#8217;s vocabulary. Bill Gates became the richest person in the world.</p><p>That was indeed a turning point.</p><p>What happened after that was no longer a &#8216;turning point&#8217; but a natural progression. Dial-up modems were discarded in favor of broadband connections. 40GB HDDs were gone. Bulky desktops were gone, and everyone moved to a laptop far more powerful than any of those desktops. CDs and DVDs were not required anymore. <em>Ahead Software, are you still there with your Nero?</em></p><p>None of it changed the way people lived or worked. Sure, it influenced their ways of working in nooks and corners, but it didn&#8217;t alter the fundamental approach to how life was being lived.</p><p>The next big thing that came up was probably cloud computing. But it didn&#8217;t affect the end consumers as much as the software companies. The end users probably didn&#8217;t even notice that something had changed in the backend of the products they were using. At least in the beginning. 
What end users certainly would have noticed was the mushrooming of delivery apps and eCommerce sites, which wouldn&#8217;t have been possible without the ease of access to quick infrastructure through cloud computing.</p><p>Fast forward to 2023, or maybe 2020+.</p><p>AI is suddenly the new thing that everyone is talking about. Phrases like &#8216;agentic, LLM, AI powered, AI integrated, vibe coded&#8217; etc., are everywhere. LinkedIn is full of AI flex. And so is the &#8216;AI will take my job&#8217; paranoia. Just like the delivery apps back then, this time there is an avalanche of AI startups: AI-powered customer support, AI-powered IDEs, AI-integrated document systems, AI in email, and so on.</p><p>Why is it a turning point? Well, maybe it isn&#8217;t; maybe it is a huge bubble waiting to burst. But it doesn&#8217;t appear like that to me. To me it appears more like a turning point, with a lot of exaggeration on the sides, of course. This is a turning point because it changes how we live and work.</p><p>AI is now at a point where everyone has an assistant by their side. Things have become &#8216;easy&#8217;, or probably too easy sometimes. Writing some boilerplate code, restructuring some code portions, or writing docs - they are all too easy now.</p><p>And talk about learning. Ask AI to generate a learning plan for some topic you&#8217;d want to learn, maybe the economics of the Aztec era, and AI will generate a plan for you with references!</p><p>Every industry and every age group is affected by this. Students don&#8217;t write their theses anymore! Movie scripts are refined, or maybe just written, through AI. News articles are edited through AI, sometimes with footers from ChatGPT still in there! Doctors take notes through AI. Software engineers are now becoming prompt engineers; yes, one abstraction level up!</p><p>With all that said, it is clearly changing the way we think and work. I am not sure I believe it is a great change, though. 
Like anything else, as long as we keep it in balance, we should be fine. The more we rely on AI, and the more we offload our &#8220;power of thought&#8221; to it, the more it will create a state of disturbance.</p><p>A few things, out of many, that I find particularly concerning are the environmental impact of such widespread AI use and the energy-guzzling, GPU-powered data centers. End users, while generating funny videos or memes or just having fun with a prompt&#8212;do they know how much energy is spent behind the scenes?</p><p>Second, the impact on creativity and overall brain health. Humans are what we are because of our cognitive skills. What happens when we offload every bit of that to AI? Take the example of students writing their theses with AI. Getting help with refinements or corrections is one thing, but writing the whole thing through AI is something that scares me.</p><p>Third, the cookie-cutter style prose. I can clearly identify what is written through AI and what&#8217;s not (or what is cleverly hidden). The dashes &#8212;, bullet points, and the same repeated emojis give it away. The style of being super neutral, without any hint of bias, is another sign. But think of a world where everything you read sounds similar. If that&#8217;s not boring, what is?</p><p>Fourth, I see it as a threat to both the people whose careers depend on writing and to those who simply consume whatever the AI throws at them&#8212;because the AI said so.</p><p>In conclusion, I believe this is indeed a turning point because it is changing our behavior, lifestyle, and working style. 
Now it is on us to make judicious calls on when and how to use it, and how much we should offload onto it.</p>]]></content:encoded></item><item><title><![CDATA[Goodbye, Amazon!]]></title><description><![CDATA[8.75 years, and not counting anymore.]]></description><link>https://cafeaffe.substack.com/p/goodbye-amazon</link><guid isPermaLink="false">https://cafeaffe.substack.com/p/goodbye-amazon</guid><pubDate>Sat, 01 Nov 2025 23:25:22 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!KgjP!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb27ef4f-a07d-4376-bdbd-235011a9b215_144x144.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>After almost 9 years, I have moved on from Amazon. And this is a very frank and straight-from-the-heart account of my time there.</p><h2>The Berlin Days</h2><p>I had joined Amazon in Berlin. A super sleepy team with extremely boring and meaningless work. I didn&#8217;t even get to commit my first code until 3 or 4 months into the job.</p><p>I was in a team that was tasked with re-architecting and implementing a &#8216;review solicitation engine&#8217;. In simple terms, it was something that would find out who bought stuff from Amazon in the last X weeks, and then send them emails to write a review of the product on the Amazon site. The legacy workflow was implemented as a few Python scripts that used to run on cron triggers. The new service was to be implemented in &#8216;modern&#8217;, ahem, &#8216;Java 8&#8217;, and as a workflow, instead of a bunch of scripts tied together.</p><p>It seemed like a normal software engineering project in the beginning. But it turned sour quite soon. A toxic team was to blame. The team had two additional SDEs who would never stop arguing. It was painful to work with the team. Even the SDM was helpless. 
Every single pull request ended in heated discussions, even over simple stuff like &#8216;why tag code comments as [nit]&#8217; or &#8216;how much detail would you want to put in the acceptance criteria of a ticket&#8217;. Getting them to do something as a team was next to impossible. Many times, PEs were pulled into the discussions to just tell them &#8216;shut the fuck up, and do this&#8217;.</p><p>And that turned the project from super frustrating to extremely boring. The PE came in, and made a unilateral decision to work with a workflow already developed by some other team! And we were just expected to write YAML configurations describing how the workflow should behave. So, essentially, months of &#8216;heated discussions&#8217; on the potential Java service, reduced to maintaining a number of YAML configuration files.</p><p>We still figured out some good engineering things to do, after all we liked coding! So we came up with templatizing the YAML files and generating those. Basically, use config files to generate more config files.</p><p>Later on, we started experimenting with reducing the spam sent to Amazon&#8217;s customers by using ML to identify the customers most likely to write reviews. And we did some additional enhancements and expansions among other things.</p><p>But overall, it was pretty boring. Meaningless too, as I despised the whole &#8216;send spam notifications to customers&#8217; business.</p><h2>The SCOT Years</h2><p>I moved to Seattle to join a team in the gigantic supply chain organization. It was a team that identified vendors or suppliers from whom Amazon should replenish its inventories.</p><p>The work was better. Regular Java microservices. People weren&#8217;t fighting much, which was good. But boy, was I in for a surprise! The abysmal state of the code, testing, and general engineering practices was embarrassing. Code written sometime in 2008 or so had no clear owners and kept getting passed around to multiple teams. 
Folks just came along, dumped their shit, and moved on. Design patterns? What&#8217;s that?</p><p>I soon learnt that Seattle had a completely different work culture. SDEs were more focused on &#8216;doing the most visible thing&#8217; as opposed to &#8216;doing engineering things&#8217;. Code optimizations, design patterns, testing strategies etc., always took a backseat. Folks were doing things that solved business problems ASAP, no matter how it was done. Just add some patchwork to the already existing spaghetti mess and call it a day.</p><p>No wonder things were super messy. A deprecation of a service went on and on for years with no result. Changing an API signature would take 10 months of work. Tickets would remain open for months, and then some &#8216;senior engineer&#8217; would close them with an embarrassing comment like &#8216;if the issue still remains, please create a new one&#8217;. Of course, the ticket requester would have forgotten what the issue was in the first place.</p><p>I wouldn&#8217;t say it was the fault of the SDEs. It was the &#8216;culture&#8217;. SDEs who worked on business-heavy projects were getting more visibility. Better chances of promotion if you learnt supply chain instead of engineering skills. Praise was showered on folks who wore multiple hats, i.e., SDEs who also did the work of a supply chain manager! SDEs would get promoted for writing a tiny lambda that created some business tickets, rather than the engineers who spent real effort re-architecting things in an optimized way on AWS.</p><p>I realized I had to leave the org if I wanted to stay an &#8216;engineer&#8217;. If I had to estimate roughly, 70% of my time went into learning and making decisions related to supply chain, and probably 30% went towards real engineering work. I hated working with the decade-old spaghetti code that nobody wanted to clean up. 
Had I taken the initiative to clean up that mess, it would have been a thankless job.</p><p>I tried taking some initiatives, like introducing Kotlin to the stack, or developing a generic test framework. I was shut down. &#8216;We are not in the business of making platforms&#8217; or &#8216;Kotlin will be harder to move from Java&#8217; (seriously?) were some of the remarks.</p><p>On top of that, the constant ass-licking of the upper levels was nauseating. A senior manager literally said in an org meeting that &#8216;PEs/Sr. SDEs are great gods&#8217;. PEs or Senior SDEs were supply chain engineers more than &#8216;software engineers&#8217;. I was disgusted by their incompetence. They would ask questions like &#8216;what&#8217;s the benefit of Java 17 over 8 apart from being cool&#8217;, or &#8216;why do we need CDK when we can just have the resources created through the console&#8217;.</p><p>So I left. I wanted to do engineering work. And I realized, the hard way, that I should stay at a place where &#8216;technology&#8217; is the business.</p><h2>Experiments at AWS</h2><p>I first joined Route53. My team worked on the ZonalShift product that customers use to move traffic away from an availability zone to some other zone. Usually they use it during AZ impairment events. Like, IAD1 is down, then get out of that zone and move traffic to IAD6 or something.</p><p>It was what I wanted. Pure technology business. No bullshit supply chain decisions or sending spam emails to folks. I loved it. The focus on technology was much stronger than in any other team before. We talked tech, instead of random business shit. People knew more than me. So it was an opportunity to learn from them, and I learned a good deal.</p><p>Also, the team was working heavily with Kotlin. CDK was a given. I was doing a lot of both: custom CDK constructs in TypeScript, developing features in Kotlin, talking to IAM teams, ASG teams etc., to onboard additional AWS services to ZonalShift. 
It was good work, and I liked it.</p><p>But I still wanted something better. My learning in that team soon plateaued. There weren&#8217;t many fun things left to do. And I wanted to get into Rust, databases, systems programming. Everything seemed very &#8216;high level&#8217; now. It was much better than SCOT, but it wasn&#8217;t that fulfilling from my learning point of view.</p><p>So I moved again. This time to AuroraDSQL. The fancy, brand-new database that was under development at that point in time.</p><p>I wasn&#8217;t in the core DB internals team. I joined the peripheries. I was in a team that worked closely with the control plane of the database. We owned the billing and metering of the database. When customers created a cluster and wrote some data, our service made sure that the DB metrics (e.g., storage size, number of DPUs, etc.) were visible in their CloudWatch.</p><p>Everything was in Rust. It was a great learning experience. I also took part in some work that involved getting a bit inside the storage engine. The team had very experienced developers who knew a great deal about DB internals, query engines, Rust programming and so on. The team worked closely with the Rust team that actually develops some well-known open-source libraries and frameworks like turmoil, duchess, tokio etc.</p><p>This was everything that I wanted: DB, Rust, systems programming, distributed systems. I was already shining within my first few months. I looked forward to so much more! Formal methods, query engine optimizations, simulations, what not!</p><p>Until I wasn&#8217;t anymore.</p><h2>The 5-day RTO</h2><p>January 2025. Amazon declared war on any employee who had a life. Remote or 3-day RTO worked great for me. But 5-day RTO came as a kick in the balls.</p><p>Every day, I lost about 2 hours just driving. 
Add the time to get dressed, prepare things, stretch and breathe, and it was easily 2.5-3 hours of meaningless wastage of time and energy.</p><p>And it wasn&#8217;t just that. I had to make sure to reach the office by 9:30 or so. Which meant leaving home by 8:15, or at least by 8:30. But the kids&#8217; school timings were 9:00-3:15, which meant I had to arrange extra hours for my eldest. Similarly, my youngest, whose daycare timings were 9:00-5:00, had to be extended to 8:00-6:00.</p><p>This came as a big lifestyle and financial change. My kids, who eat their food slower than a snail crossing a mile, almost stayed fasting the whole time. The eldest, who used to come home at 3:30 and have a late lunch, stayed hungry till 5:30. And both the kids had to wake up early, which affected their sleep quality every single day. They are not early risers, and every day, I had to see them wake up with bags under their eyes. It was distressing. Why did they have to suffer because of a stupid rule from my employer?</p><p>Financially, extra hours at daycare and school meant shedding extra money. Every month, I paid around 700 extra for that. And on top of that, I had to pay for the fuel and parking. I was losing money on things that I never wanted in the first place.</p><p>And all of it started to reflect on my work front. I was tired and exhausted by the time I&#8217;d reach home. I hardly worked for 4-5 hours in the office and then, even if I wanted to, I just couldn&#8217;t do much productive work at home. Meanwhile, fresh grads, or below-30 folks staying near the office, easily spent their whole day in the office churning code.</p><p>There was no match.</p><p>I also thought of moving to Seattle. Maybe that would work? I spent a few weekends hunting rentals in that region. And I was of course disappointed. The lack of good amenities, super high rents, not-so-good schools, drug addicts and homelessness, shooting incidents etc., just didn&#8217;t feel right. 
The kids were having a good time where they were, with a nice school and a friendly atmosphere. Moving to a drug-infested zone clearly would have been the wrong move for them.</p><p>Also, there was no assurance that Amazon wouldn&#8217;t lay me off or put me in focus or something after I moved there. Amazon wouldn&#8217;t change anyway.</p><h3>The Grind</h3><p>Meanwhile, the grind at AuroraDSQL wasn&#8217;t getting any better. Deadline after deadline. No additional resources. Many folks, including seniors, left the org. Some left Amazon, some left for other teams. No-lifers could still sustain it, working 14-16 hours a day. People worked over weekends like it was nothing. Do more, and more, and more - for some artificial deadline. I never understood who was waiting for those features, holding their breath. But it was pushed. Get it done somehow!</p><p>I checked randomly. There wasn&#8217;t a single hour when I didn&#8217;t get an email from the code review system, which sends one whenever someone publishes a code review or comments on one. I checked for a longer time window, like 2-3 weeks, and every single hour had at least a few mails: XYZ added some comments, ABC published a CR. 2 AM, 3 AM, 4 AM - every single fucking hour.</p><p>I wondered, what the fuck are these folks doing in their lives?</p><p>I obviously couldn&#8217;t do it. I liked databases, Rust, distributed systems. But I liked my life more. I would rather read something and sleep, or maybe work on my personal projects, than work for Amazon.</p><p>I still could have done much, much more with a remote or 3-day RTO setting. But with 5-day RTO, there was no way in hell I could have kept up with the no-lifers.</p><p>So, there went my ambition of getting into the storage engine or the query planner, down the Nitro North&#8217;s toilet. I stayed in the peripheries and didn&#8217;t even bother to try to get into the core. 
It was just not worth it.</p><p>I finally found something that I thought would work better for me. So I left. Time will tell if that was a good decision. But I had to do it anyway. I moved to Aerospike.</p><div><hr></div><h2>The Goods and Bads of Amazon</h2><p>There is no doubt Amazon has some great minds working for them. It&#8217;s a great place to learn engineering skills and explore diverse areas and domains of tech. The attention to detail, particularly in AWS, is something to be proud of. The deep dives, COEs, the microsecond-level optimizations - all are great examples of high-quality engineering.</p><p>But the bads are just too many! It&#8217;s a deeply hierarchical company. And it&#8217;s very frustrating to see incompetent managers or PEs or Sr. SDEs doing random and stupid shit. More so in Amazon Retail, compared to AWS. I have seen a Sr. SDE asking questions like &#8216;what is an integ test&#8217;!</p><p>And with hierarchies comes ass-licking. Amazon doesn&#8217;t lack asslickers at any level. SDEs try to keep a &#8216;friendly&#8217; PE or Sr. SDE on their side for promotions. Which is not a bad thing in general, but becomes toxic when they just do whatever thy lord says without questioning.</p><p>Same thing with quality. PEs get away with writing stupid, unorganized docs, with the excuse of &#8216;I put this together quickly to give some pointers to the team&#8217;. The same doc, if written by an SDE, would require 5 rounds of refinement. There was a clear pattern of &#8216;rules apply differently to different levels&#8217;.</p><p>The operational issues, they speak volumes. Amazon, particularly AWS, takes ops very seriously. SDEs, oncalls, work like hell when a sev-2 knocks. But then, sometimes, come the holier-than-thou comments from the upper echelons. A PE would ask, &#8220;Why wasn&#8217;t it deployed in so-and-so region? Bake time? Why do we need bake time?&#8221; 
The same PEs would then ask, &#8220;Why was there a deployment in so-and-so region without a bake time?&#8221;</p><p>There was no winning here. Hierarchy wins. I am not saying that PEs are dumbasses. But the lack of empathy, the arrogance, and the distance from actual ground reality were very demotivating. Amazon PEs need to learn that there are great engineers outside of Amazon too, or that they can lead from the front while being a team player and not necessarily being &#8216;unavailable&#8217;.</p><p>I am sure there&#8217;s a good number of PEs who just stay unapproachable and fake busy, just to show that they are important! Power play, anyone?</p><p>I wouldn&#8217;t get into the crappiness of SDMs. There are just too many of them in Amazon. It&#8217;s enough to say that I would try to stay away from SDMs who ever worked at Amazon.</p><p>Anyway, all that said, I am done with that. I wish it had worked out better. But it didn&#8217;t.</p><p>If you are an Amazonian, and disagree with whatever is written above, shove it up your arse, and don&#8217;t reach out.</p>]]></content:encoded></item><item><title><![CDATA[Breaking JUnit: Tests dependent on method order]]></title><description><![CDATA[There are a thousand ways one can make JUnit test cases fail sporadically and inexplicably.]]></description><link>https://cafeaffe.substack.com/p/breaking-junit-tests-dependent-on</link><guid isPermaLink="false">https://cafeaffe.substack.com/p/breaking-junit-tests-dependent-on</guid><dc:creator><![CDATA[Gagan]]></dc:creator><pubDate>Fri, 27 Jun 2025 05:30:58 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!KgjP!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb27ef4f-a07d-4376-bdbd-235011a9b215_144x144.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>There are a thousand ways one can make JUnit test cases fail sporadically and inexplicably. 
Here is one such encounter that happened a few months ago.</p><p>A test case was measuring &#8216;timestamp proximity&#8217;.</p><p>Sample code under test:</p><pre><code>long timestamp = System.currentTimeMillis();
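// the timestamp is captured here, before UUID.randomUUID() below runs;
// if the UUID class is not loaded yet, class-loading time creeps in
// between this capture and the test's own System.currentTimeMillis()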

return SomePojo.&lt;O&gt;customBuilder()
       .requestId(UUID.randomUUID() + "-" + timestamp)
       .timestamp(timestamp);</code></pre><p>As seen, it generates a UUID concatenated with a timestamp. The unit test was validating that the timestamp was &#8216;freshly generated&#8217; and was within an acceptable range (20ms):</p><pre><code>String[] tokens = requestId.split("-");
assertThat(Double.valueOf(tokens[tokens.length - 1]), IsCloseTo.closeTo(System.currentTimeMillis(), 20));</code></pre><p>So far so good. It just worked. And 20ms is a wide enough duration.</p><p>But it started failing after migrating to JDK 15. </p><p>Suddenly, the duration of 20ms was not enough anymore.</p><p>Turns out, the order of execution of the tests was no longer the same as before. </p><p>As for the order of execution: by default, JUnit relies on the order of methods returned by the JDK&#8217;s reflection API. This can be tested out with the below sample code:</p><pre><code>import java.lang.reflect.Method;

public class ReflectDemo {
    public static void main(String[] args) {
        Class&lt;?&gt; clazz = SampleClass.class;

        Method[] methods = clazz.getDeclaredMethods();

        System.out.println("Methods of class: " + clazz.getName());
        for (Method method : methods) {
            System.out.println(method.getName() + " - Return type: " + method.getReturnType().getSimpleName());
        }
    }
}

// A sample class with some methods
class SampleClass {
    public void sayHello() {
        System.out.println("Hello");
    }

    private int add(int a, int b) {
        return a + b;
    }

    protected void doSomething() {
        // do something
    }
}
</code></pre><p>When run with JDK 17:</p><pre><code>Methods of class: SampleClass
add - Return type: int
doSomething - Return type: void
sayHello - Return type: void</code></pre><p>When run with JDK 8:</p><pre><code>Methods of class: SampleClass
add - Return type: int
sayHello - Return type: void
doSomething - Return type: void</code></pre><p>Moreover, this ordering is not guaranteed to stay stable, and it may depend on the JVM implementation.</p><p>But we weren&#8217;t using any &#8216;state&#8217; that carries over from one test to another, so how could it affect anything?</p><p>Well, maybe not so obvious, but we still do have a &#8216;state&#8217; that carries over. And that&#8217;s the classloader.</p><p>So, what was happening was: when this particular test was <em>not</em> the first test to be executed, the UUID class was already loaded. However, when this became the first test to be executed, it took some extra time to load the class, and hence it sometimes failed that 20ms bound check.</p><h3>The fix</h3><p>I could have fixed it by enforcing the test execution order. But that wasn&#8217;t the cleaner way to do it, as it can sweep subtle bugs like these under the carpet.</p><p>The other way to fix it was obviously to record the timestamp after the class is loaded.</p><p>So, the changed code looked like this:</p><pre><code>UUID uuid = UUID.randomUUID();
long timestamp = System.currentTimeMillis();

return SomePojo.&lt;O&gt;customBuilder()
       .requestId(uuid + "-" + timestamp)
       .timestamp(timestamp);</code></pre><p>There! Fixed it.</p>]]></content:encoded></item><item><title><![CDATA[Breaking Junit: Timezones with Daylight Savings Time]]></title><description><![CDATA[It is quite well known to make your unit tests resilient to environmental variables to avoid weird or hard to reproduce failures.]]></description><link>https://cafeaffe.substack.com/p/breaking-junit-timezones-with-daylight</link><guid isPermaLink="false">https://cafeaffe.substack.com/p/breaking-junit-timezones-with-daylight</guid><dc:creator><![CDATA[Gagan]]></dc:creator><pubDate>Fri, 03 Jan 2025 19:07:17 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!KgjP!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb27ef4f-a07d-4376-bdbd-235011a9b215_144x144.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>It is well known that unit tests should be made resilient to environmental factors, to avoid weird or hard-to-reproduce failures: factors such as system variables, timezones, data from sensors, and so on.</p><p>Here is a similar case which I found and fixed recently.</p><h2>The Symptoms</h2><ol><li><p>Unit tests started to fail suddenly - no code changes done in recent times.</p></li><li><p>Tests didn&#8217;t fail on build servers - they failed only in local runs.</p></li><li><p>Tests failed on time assertions. Meaning, tests were able to compile, initialize and run.</p></li></ol><h2>Diagnosis</h2><p>At first glance, the difference between the local runs and the build servers was the Operating System. 
Build servers were Linux-based, while the developers used Macs.</p><p>A difference in the OS shouldn&#8217;t cause errors on time assertions.</p><p>On a more detailed look, I realized the build servers were running in UTC, while local setups were running with the local timezone (PST).</p><p>To validate the theory, I changed the timezone of my Mac to UTC and re-ran the tests. Voila. Everything went smoothly.</p><p>So, we have a case where a unit test is dependent on the system timezone.</p><h3>What did the test do</h3><ol><li><p>Take currentTime</p></li><li><p>Randomly take a duration range: e.g., 170 days - mocked as external input</p></li><li><p>Add it to currentTime to get an endTime</p></li><li><p>Assert that the endTime matches the duration range that we got from the external source.</p></li></ol><p>Pretty simple. So, why did this &#8216;addition&#8217; in step 3 result in a wrong value in PST?</p><p>I found that the test cases used <code>DateUtils</code> from the <a href="https://commons.apache.org/proper/commons-lang/apidocs/org/apache/commons/lang3/time/DateUtils.html">Apache Commons Lang</a> package. The <code>add</code> method of <code>DateUtils</code> uses <code>Calendar</code> fields to add days, without taking the timezone into consideration. Normally this works, but it fails when a Daylight Saving Time transition is involved.</p><p>This is also mentioned in the JavaDocs, but was missed when that code was added:</p><blockquote><p>It is important to note these methods use a <code>Calendar</code> internally (with default time zone and locale) and may be affected by changes to daylight saving time (DST).</p></blockquote><p>For example, assuming daylight saving time changes on <strong>01/01/2023 (mm/dd/yyyy)</strong>, advancing the clock by 1 hour:</p><pre><code><code>We would expect this behavior:

startDate = 01/01/2023 00:00:00
endDate = startDate + 1 Day = 01/02/2023 01:00:00

Days added: 1 + delta (delta = 1 hour)
Actual hours added: 24</code></code></pre><p>That&#8217;s because, normally, the expectation is that we add 86400 seconds to the starting timestamp. So, with Daylight Saving Time, even though we add 24 real hours, the wall clock advances by 25 nominal hours.</p><p>However, the Commons Lang library&#8217;s behavior is different. Instead of adding seconds to the timestamp, it adds <code>1 Day</code> to a <code>Calendar</code> field. That means, for the same example above, it does this:</p><pre><code><code>startDate = 01/01/2023 00:00:00
endDate = startDate + 1 Day = 01/02/2023 00:00:00

Days added : 1
Actual hours added: 23
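</code></code></pre><p>The difference is easy to reproduce with plain <code>java.time</code>, without any external library. The class name and the zone/date below are assumptions for illustration, using the 2023 US spring-forward date:</p><pre><code>import java.time.Duration;
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class DstAddDemo {
    public static void main(String[] args) {
        // US DST starts on 2023-03-12 at 02:00 in this zone; clocks jump to 03:00
        ZoneId zone = ZoneId.of("America/Los_Angeles");
        ZonedDateTime start = ZonedDateTime.of(2023, 3, 12, 0, 0, 0, 0, zone);

        // Calendar-style "plus 1 day" keeps the wall time: only 23 real hours elapse
        ZonedDateTime calendarAdd = start.plusDays(1);
        // Duration-style addition always advances exactly 24 real hours
        ZonedDateTime durationAdd = start.plus(Duration.ofHours(24));

        System.out.println(Duration.between(start, calendarAdd).toHours()); // 23
        System.out.println(Duration.between(start, durationAdd).toHours()); // 24
    }
}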
</code></code></pre><p>With that in the test case, it became timezone dependent and failed in PST for a certain date range. Even if no fix is done, it will start working again on its own once the date range moves past the DST switch date.</p><h3>The fix</h3><p>The fix was simple. Just don&#8217;t depend on the DateUtils library for this trivial task!</p>]]></content:encoded></item><item><title><![CDATA[Kotlin Flows as blackhole]]></title><description><![CDATA[This was yet another case of a service bug that stayed dormant for years and then suddenly, when the data pattern changed, surfaced and created a week long on-call sprint.]]></description><link>https://cafeaffe.substack.com/p/kotlin-flows-as-blackhole</link><guid isPermaLink="false">https://cafeaffe.substack.com/p/kotlin-flows-as-blackhole</guid><dc:creator><![CDATA[Gagan]]></dc:creator><pubDate>Fri, 13 Dec 2024 04:01:12 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!SW7w!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e7edda2-99d0-4cce-83c1-9e877b9bbc94_1590x448.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This was yet another case of a service bug that stayed dormant for years and then suddenly, when the data pattern changed, surfaced and created a week-long on-call sprint. I wrote about one such dormant bug in the previous <a href="https://cafeaffe.substack.com/p/immutability-matters">post</a>.</p><p>The service in question constantly reads data from a source, applies certain transformations, and writes to a sink (S3). Quite simple, actually.</p><p>Normally, the data read is of a manageable volume that stays in sync with the S3 write operations. At least, until now.</p><p>And then, suddenly, the service encountered a large chunk of data, and the S3 writes couldn&#8217;t keep up with the pace at which it was reading from the source. 
The service showed frequent crashes, high heap memory usage, and heavy GC activity.</p><p>After much digging around, we got some heap dumps. <em>(Yeah, it&#8217;s not that easy to get heap dumps from within the docker containers.)</em></p><p>Similar to a <a href="https://cafeaffe.substack.com/p/a-value-de-duplicated-cache-1">previous encounter</a>, I took the heapdump to analyze with Eclipse MAT.</p><p>It looked something like this:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!SW7w!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e7edda2-99d0-4cce-83c1-9e877b9bbc94_1590x448.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!SW7w!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e7edda2-99d0-4cce-83c1-9e877b9bbc94_1590x448.png 424w, https://substackcdn.com/image/fetch/$s_!SW7w!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e7edda2-99d0-4cce-83c1-9e877b9bbc94_1590x448.png 848w, https://substackcdn.com/image/fetch/$s_!SW7w!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e7edda2-99d0-4cce-83c1-9e877b9bbc94_1590x448.png 1272w, https://substackcdn.com/image/fetch/$s_!SW7w!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e7edda2-99d0-4cce-83c1-9e877b9bbc94_1590x448.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!SW7w!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e7edda2-99d0-4cce-83c1-9e877b9bbc94_1590x448.png" width="1456" height="410" class="sizing-normal" alt="" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>So, a single thread, a single Kotlin channel, is consuming 95% of the heap? But why?</p><p>Turns out, the &#8216;buffer&#8217; that was plugged between the producer (the component that reads the data) and the consumer (which writes to S3) was not applying backpressure. It was thought that it did.</p><p>Here is sample code:</p><pre><code>fun &lt;T&gt; Flow&lt;T&gt;.buffer(
    maxCapacity: Int,
    sizeFn: (T) -&gt; Int
): Flow&lt;T&gt; = flow {
    coroutineScope {
        var capacity = maxCapacity
        val consumeTrigger = Channel&lt;Unit&gt;(capacity = Channel.CONFLATED)
        val channel = produce(capacity = Channel.UNLIMITED) {
            collect {
                val size = sizeFn(it)
                while (size &gt; capacity) { consumeTrigger.receive() }
                capacity -= size
                send(it)
            }
        }
        channel.consumeEach {
            val size = sizeFn(it)
            capacity += size
            consumeTrigger.send(Unit)
            emit(it)
        }
    }
}

fun &lt;T&gt; Flow&lt;T&gt;.backPressuredBuffer(
    scope: CoroutineScope,
    maxItemRunningSize: Int,
    sizeFn: (T) -&gt; Int
): ReceiveChannel&lt;T&gt; =
    buffer(maxItemRunningSize, sizeFn)
        .buffer(capacity = Channel.UNLIMITED)
    .produceIn(scope)</code></pre><p>So, while this applied a steady flow out of the first buffer, it kept consuming from upstream even when the first buffer&#8217;s size was limited to, say, 1.</p><p>As learnt the hard way, <code>Channel.UNLIMITED</code> essentially creates an unbounded linked list that takes whatever you offer it. And of course, heap memory is the limit.</p><p>Fixing it was quite simple. Just don&#8217;t make it unlimited; give it a reasonable size, based on service behavior.</p><p>Here is a small sample unit test that validates whether it works:</p><pre><code>@Test
    fun `test buffer doesn't overflow the max size`() = runBlocking {
        val totalItems = 1000
        val maxCapacity = 1000 // bytes

        // inner buffer is limited at 1k bytes
        // outer buffer is limited at 10 entries = 1k/100

        val randomByteArrays = List(totalItems) {
            ByteArray(Random.nextInt(100, 201)).apply {
                Random.nextBytes(this)
            }
        }

        val cursor = AtomicInteger(0)
        val receiveChannel = randomByteArrays.asFlow()
            .onEach { cursor.incrementAndGet() } // counts items pulled in by the buffers
            .backPressuredBuffer(
                scope = this,
                maxItemRunningSize = maxCapacity,
                sizeFn = { it.size }
            )
        delay(100) // let the producer fill the buffers and suspend
        // the producer stops once both buffers are full; with 100-200 byte
        // items that is roughly 11-22 items, including one in-flight item
        assertTrue(cursor.get() &lt;= 22)
        assertTrue(cursor.get() &gt;= 11)
        receiveChannel.cancel()
    }</code></pre>]]></content:encoded></item><item><title><![CDATA[Immutability Matters]]></title><description><![CDATA[What need not change, must not change]]></description><link>https://cafeaffe.substack.com/p/immutability-matters</link><guid isPermaLink="false">https://cafeaffe.substack.com/p/immutability-matters</guid><dc:creator><![CDATA[Gagan]]></dc:creator><pubDate>Wed, 12 Jun 2024 06:11:54 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!KgjP!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb27ef4f-a07d-4376-bdbd-235011a9b215_144x144.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Handling mutable data in multi-threaded, high-traffic environments is always error-prone. Shared mutable data across various threads can change the service behavior in weird ways, leaving undetectable and irreproducible bugs.</p><p>One such case was discovered a couple of days ago at my workplace, and interestingly, the bug had been there for almost 4 years without anybody noticing. Which also means that the bug was not impacting the business much, thankfully. Or else, it would have made a lot of noise right when it got introduced.</p><h3>The Symptoms</h3><ul><li><p>Bad data is returned randomly from the service fleet</p></li><li><p>Some hosts return correct data, some not</p></li><li><p>Bad hosts, when restarted, start behaving correctly</p></li><li><p>Once a host returns bad data, it always returns bad data</p></li></ul><h3>Issue diagnosis</h3><p>From the first 3 symptoms, we can think of issues due to multi-threading with mutable data, like a cache. But the 4th one suggests that it could be something related to static initialization, or a singleton with mutable data. Because, with multi-threading concurrency issues, the occurrence of the issue is still expected to be random. 
And with a cache, generally once the TTL expires, it should auto-correct itself.</p><p>Since a bad host stays permanently damaged, it points towards a singleton or a static object which is supposed to stay untouched, but somehow got changed.</p><p>Consider this:</p><pre><code>import java.util.Set;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public enum DaysGroup {
  WeekDays(
           Stream.of("Mon", "Tue", "Wed", "Thu", "Fri")
             .collect(Collectors.toSet())
           ),
  Weekends(Stream.of("Sat", "Sun")
             .collect(Collectors.toSet())
           );

  private final Set&lt;String&gt; days;

  DaysGroup(Set&lt;String&gt; days) {
        this.days = days;
  }

  public Set&lt;String&gt; getDays() {
       return this.days;
  }

}</code></pre><p>The above defines an <code>enum </code>which has a set of strings that gives meaning to the enums. The set is stored as an enum instance variable.</p><p>It is worth noting that enums are singleton by design, and hence there will always be a single copy of the defined enums : <code>WeekDays</code> and <code>Weekends</code></p><p>With this defined, consider a service code portion:</p><pre><code>...
Set&lt;String&gt; days = WeekDays.getDays();
if (condition) {
   days.remove("Fri");
}
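...</code></pre><p>A quick, self-contained demonstration of the permanence of the damage (the class and the stand-in set here are illustrative assumptions, not the actual service code):</p><pre><code>import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class MutationDemo {
    // stand-in for the singleton set held inside the enum
    static final Set&lt;String&gt; DAYS =
            new HashSet&lt;&gt;(Arrays.asList("Mon", "Tue", "Wed", "Thu", "Fri"));

    public static void main(String[] args) {
        Set&lt;String&gt; days = DAYS; // like WeekDays.getDays(), this leaks the internal reference
        days.remove("Fri");      // the conditional mutation

        // every later reader of the singleton now sees the damaged set
        System.out.println(DAYS.contains("Fri")); // false
    }
}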
...</code></pre><p>At first look, this seems okay. But on a closer look, we notice that if the condition is true, the actual set from the enum is altered.</p><p>So, what&#8217;s the issue?</p><p>The problem is that the enum is changed permanently, and the next time the code is run, or any other thread accesses it in parallel, it will never find <code>Fri</code> in the set anymore.</p><p>The problem is obviously temporarily fixed if the service is restarted, and it will stay fixed till that condition is satisfied again.</p><p>The problem also doesn&#8217;t occur often if that condition is infrequent. Like, condition == is this year a leap-year.</p><p>The problem will occur only on those machines where this condition was ever met. So, if you have a 100-machine fleet for your service, and only 2 of them ever got this code path executed, then those 2 are broken, not the rest. So, the impact stays pretty unpredictable.</p><h3>The Fix(es)</h3><h4>Return a copy</h4><p>One way to fix this is to return a copy of the set, instead of the set itself.</p><pre><code>public Set&lt;String&gt; getDays() {
       return new HashSet&lt;&gt;(this.days);
}</code></pre><p>This works. But if this method is called in many places, and mostly the callers just read the items and don&#8217;t change anything, then we are unnecessarily creating quite a lot of garbage.</p><h4>Keep an immutable set and return that</h4><p>The other way is not to keep a mutable set, but rather an immutable set, as part of the enum construction.</p><pre><code>DaysGroup(Set&lt;String&gt; days) {
   this.days = ImmutableSet.copyOf(days);
}</code></pre><p>This ensures that clients of this code cannot corrupt the shared set; a client that needs a mutable version has to make its own copy.</p><p>The above uses Guava&#8217;s ImmutableSet. In modern JDKs, this can simply be replaced with <code>Set.copyOf(&#8230;)</code>. </p><h4>Preventions</h4><ol><li><p>The above was preventable with proper static analysis in place. For example, <a href="https://spotbugs.readthedocs.io/en/latest/bugDescriptions.html#ei-may-expose-internal-representation-by-returning-reference-to-mutable-object-ei-expose-rep">SpotBugs</a> provides out-of-the-box support to check such cases.</p></li><li><p>Best practices in code reviews can avoid such issues leaking to prod.</p></li><li><p>Some elaborate JUnit tests can capture such errors.</p></li><li><p>Use a modern JDK/Kotlin to get immutability as a first-class construct rather than needing external libraries or doing something special to get the immutability niceties.</p></li></ol>]]></content:encoded></item><item><title><![CDATA[A value de-duplicated cache -2]]></title><description><![CDATA[Optimizing memory through clever data structures]]></description><link>https://cafeaffe.substack.com/p/a-value-de-duplicated-cache-2</link><guid isPermaLink="false">https://cafeaffe.substack.com/p/a-value-de-duplicated-cache-2</guid><dc:creator><![CDATA[Gagan]]></dc:creator><pubDate>Wed, 08 May 2024 08:00:44 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/27c116be-d92d-4dcc-83f7-d29734d6f995_931x920.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><a href="https://cafeaffe.substack.com/p/a-value-de-duplicated-cache-1">Previously</a>, we figured that duplicate data in our LRU cache was eating up more than 50% of our whole heap space. 
Let&#8217;s see where they are coming from and what we can do about it.</p><h1>A simplistic view of service cache</h1><p>Caching frequently used data to increase application throughput and reduce latency is a very common pattern.</p><p>Consider the case of services talking to each other. Service A fetches certain data from service B. This obviously incurs network latency, serialization-deserialization latency and CPU work, cost due to payload transmitted over the network, and so on. If we know that the data from service B is not super real-time and we can assume some level of stability or immutability of the data for, say, 1 hour, then we can essentially cache service B&#8217;s response for an hour and reduce our cost, reduce latency, and improve throughput.</p><p>Of course, there are many things to consider in a cache: invalidation strategies, what to cache, how much to cache, how long to cache, architectural requirements, and so on. Many of these parameters depend on the business logic (for example, we can&#8217;t and shouldn&#8217;t cache an account balance, maybe?) and some are service-infrastructure dependent (how much memory can we assign to the cache?). Then there is the completely separate discussion of distributed cache vs. local cache (an architectural question).</p><p>For now, we are simply talking about an in-memory cache here. A <a href="https://docs.oracle.com/javase/8/docs/api/java/util/HashMap.html">HashMap</a>!</p><h2>Implementation of an InMemory Cache</h2><p>There are off-the-shelf libraries which provide ready-to-use in-memory caches. I am talking about JVM caches, but it&#8217;s the same in any other language or runtime. 
Some of the widely used caches include <a href="https://github.com/google/guava/wiki/CachesExplained">Guava</a> or <a href="https://spring.io/guides/gs/caching">Spring</a>.</p><p>Then there is always the option of doing it in-house for better control and with the philosophy of implementing only the required features instead of bloating the service up with some external library like Guava.</p><p>A sample AOP styled cache looks like:</p><p><em>(Taken shamelessly from spring <a href="https://docs.spring.io/spring-framework/docs/3.2.x/spring-framework-reference/html/cache.html">docs</a>.)</em></p><pre><code><code>@Cacheable("books")
public Book findBook(ISBN isbn) {...}
</code></code></pre><p>None of the default implementations provide any de-duping strategies. And that makes sense; deduplication comes with its own set of problems.</p><h2>What is a de-duped cache and why might we need it?</h2><p>A cache is just a <code>key-value</code> store, and naturally a cache can have only unique keys. But consider a case where values are duplicated across many, many keys.</p><p>A hypothetical example (probably not a great one): say we have a bunch of vendors, and the currencies they can transact in.</p><p>Now, of course, if the query pattern is something like &#8220;get all the vendors for a given currency&#8221;, we would need the currency as the key.</p><p>But if we have a query pattern that says something like &#8220;get all the transact-able currencies for a given vendor&#8221;, the vendorId becomes the key.</p><p>And imagine a globalized economy (where we already are, kind of): most of the vendors can transact in most currencies. So, the value part gets duplicated over here:</p><pre><code><code>&#9;&#9;&#9;KEY           |            VALUE
         &#9;------------------------------------------------
&#9;&#9;&#9;vendor1       |         CAD,USD,EUR,GBP....
&#9;&#9;&#9;vendor2       |         CAD,USD,EUR,GBP....
</code></code></pre><p>The problem is quite apparent here: duplicate data being repeated over and over again. The situation becomes worse with a large number of keys sharing the same duplicated data.</p><p>If you are aware of <a href="https://openjdk.org/jeps/192">JVM String Deduplication</a>, this is exactly the same case, but above the JVM layer and inside the application.</p><h1>Implementation of a Value Deduped Cache</h1><p>A value DeDuped LRU cache is essentially a cache over another cache. Simply put, we make the below change:</p><p><strong>BEFORE</strong>:</p><pre><code>key -&gt; value</code></pre><p><strong>AFTER</strong>:</p><pre><code>key -&gt; hash(value)
hash(value) -&gt; value</code></pre><p>This replaces the many bulky <code>value</code> entries with their hashes, which do not eat up nearly as much space.</p><p>A pseudocode for such an implementation might look like:</p><p>(The below code is also available here at <a href="https://gist.github.com/gagan405/716d9aecd7d72f612e75d985ff3f1f3c">github gist</a>.)</p><pre><code><code>class LeastRecentlyUsedDeDupedExpirationCache&lt;K, V&gt; {
    // To keep null values and distinguish null against cache miss
    private static final Object NULL = new Object();
 
    /**
     * Ordered map so that we can implement LRU.
     * This is the original LRU built on top of something like a LinkedHashMap
     * This keeps the "key -&gt; hash(value)" entries
     */
    private final LeastRecentlyUsedMap&lt;K, ExpirationValue&lt;Integer&gt;&gt; map;
 
    /**
     * Map to hold dedup values. This need not be ordered as this is a map to backup the ordered
     * keys stored above.
     */
    private final Map&lt;Integer, V&gt; valueMap;
    
    /**
    * Other attributes such as capacity, ttl etc., aren't shown for brevity.
    **/

    public Value&lt;V&gt; get(final K key) {
        ExpirationValue&lt;Integer&gt; hash;
        Value&lt;V&gt; res = null;
 
        synchronized (this.map) {
            hash = this.map.get(key);
        }
 
        if (hash != null) {
            V value = valueMap.get(hash.getValue());
            if (value == null) {
            /**
             * This can happen when we have references like:
             * key1 -&gt; hash -&gt; value
             * key2 -&gt; hash -&gt; value
             *
             * Now, if key1 is expired, it will remove the entries from both the maps.
             * That results in having : key2 -&gt; hash -&gt; null
             *
             * To avoid this, we can do the following options
             *
             * 1. Keep a ref-count of the hash :  key1 -&gt; hash -&gt; value, ref-count
             * We would need to update the ref-count whenever the key is added/expired
             *
             * 2. Keep another map which is revert of 1st map: hash -&gt; key
             * That doesn't look so efficient
             *
             * 3. Just return null and make it look like the value didn't exist, and so the MethodProxyHandler will
             * fill up the cache
             */
                return null;
            } else if (value.equals(NULL)) {
                res = new DeDupedValue&lt;&gt;(null);
            } else {
                res = new DeDupedValue&lt;&gt;(value);
            }
        }
 
        return res;
    }
    
    public void put(final K key, final V value) {
        synchronized (this.map) {
            // Add the value to the map.
            int hash = Objects.hashCode(value);
            this.map.put(key, new ExpirationValue&lt;&gt;(hash, System.currentTimeMillis() + this.timeToLiveInMillis));
 
            if (value == null) {
                this.valueMap.put(hash, (V) NULL);
            } else {
                this.valueMap.put(hash, value);
            }
        }
    }
}
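</code></code></pre><p>The <code>reference count</code> idea, option 1 in the comment inside <code>get()</code> above, can be sketched as follows. The class and method names here are illustrative assumptions, not part of the original implementation:</p><pre><code><code>import java.util.HashMap;
import java.util.Map;

// Sketch: a value is dropped only when no key references its hash anymore
class RefCountedValueMap&lt;V&gt; {
    private final Map&lt;Integer, V&gt; values = new HashMap&lt;&gt;();
    private final Map&lt;Integer, Integer&gt; refCounts = new HashMap&lt;&gt;();

    // called when a key starts pointing at this hash
    synchronized void retain(int hash, V value) {
        values.putIfAbsent(hash, value);
        refCounts.merge(hash, 1, Integer::sum);
    }

    // called when a key expires or is evicted
    synchronized void release(int hash) {
        Integer count = refCounts.get(hash);
        if (count == null) {
            return;
        }
        if (count == 1) {
            refCounts.remove(hash);
            values.remove(hash);
        } else {
            refCounts.put(hash, count - 1);
        }
    }

    synchronized V get(int hash) {
        return values.get(hash);
    }
}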
</code></code></pre><h2>Invalidation</h2><p>Since the same values are pointed to by different keys, it is possible to end up in a situation where a certain key expires, and hence the value is removed, and we end up with cache misses for other keys. For example:</p><pre><code>k1 (expires at 10:00) -&gt; hash(v1) -&gt; V1

k2 (expires at 10:05) -&gt; hash(v1) -&gt; V1</code></pre><p>If the expiration of <code>k1</code> removes <code>v1</code> from the value map, fetching <code>k2</code> (or any other key with the same value) will result in a cache miss.</p><p>This can be avoided if we keep a <code>reference count</code> for each of the values. That makes the implementation slightly more complicated. In the above code, we simply take the cache miss and rely on the fact that the cache will get re-filled.</p><h2>Immutability Caution</h2><p>While it all sounds great, one thing to consider here is immutability. If the cached values are mutated outside of the cache, it will affect all the keys that hold references to the value. And that will create hard-to-debug issues and possibly corrupt data on production.</p><p>To avoid that, we can do one or more of the following:</p><ol><li><p>Ensure data immutability - do not modify the cached items</p></li><li><p>Clone the object and then return - will consume more space - also, why de-dup?</p></li><li><p>Ensure that the hash function computes a different hash for any modification to the object, and compare the recomputed hash to the stored hash value before returning.</p></li></ol><p>If the requirement is something of the sort of &#8216;every key can alter the value as it requires&#8217;, we can go ahead with the clone approach, although then it probably doesn&#8217;t make much sense to do the deduplication in the first place.</p><p>To ensure the immutability of the cached values, with the 3rd approach, the <code>get</code> method would look like:</p><pre><code><code>public Value&lt;V&gt; get(final K key) {
    ExpirationValue&lt;Integer&gt; hash;
    Value&lt;V&gt; res = null;

    synchronized (this.map) {
        hash = this.map.get(key);
    }

    if (hash != null) {
        V value = valueMap.get(hash.getValue());
        ...
        // recompute the hash and compare it with the stored one
        if (hash.getValue() != hashFunction(value)) {
            // IMMUTABILITY VIOLATED
            // either force a cache-miss or throw an exception
            throw new ConcurrentModificationException();
        }
        res = new DeDupedValue&lt;&gt;(value);
        ...
    }
    return res;
}
</code></code></pre><p>Here, even if we find the value in the cache, we verify it by recomputing the hash and comparing it against the stored hash. A mismatch means the value was modified through some other key.</p><p>So we can either force a cache miss here, or throw an exception to let the caller know what they are doing.</p><h1>Improvements after deduped cache</h1><p>After deploying the deduped cache, we saw a significant improvement in heap space usage. As a side effect, the OOM errors disappeared, and we could move to instance types one level cheaper, saving quite a lot of $$s.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!5U8I!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2012aae-a688-45dd-a6a8-cf94b4770082_2000x334.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!5U8I!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2012aae-a688-45dd-a6a8-cf94b4770082_2000x334.png 424w, https://substackcdn.com/image/fetch/$s_!5U8I!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2012aae-a688-45dd-a6a8-cf94b4770082_2000x334.png 848w, https://substackcdn.com/image/fetch/$s_!5U8I!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2012aae-a688-45dd-a6a8-cf94b4770082_2000x334.png 1272w, https://substackcdn.com/image/fetch/$s_!5U8I!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2012aae-a688-45dd-a6a8-cf94b4770082_2000x334.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!5U8I!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2012aae-a688-45dd-a6a8-cf94b4770082_2000x334.png" width="1456" height="243" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a2012aae-a688-45dd-a6a8-cf94b4770082_2000x334.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:243,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:284335,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!5U8I!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2012aae-a688-45dd-a6a8-cf94b4770082_2000x334.png 424w, https://substackcdn.com/image/fetch/$s_!5U8I!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2012aae-a688-45dd-a6a8-cf94b4770082_2000x334.png 848w, https://substackcdn.com/image/fetch/$s_!5U8I!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2012aae-a688-45dd-a6a8-cf94b4770082_2000x334.png 1272w, https://substackcdn.com/image/fetch/$s_!5U8I!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2012aae-a688-45dd-a6a8-cf94b4770082_2000x334.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h1>Conclusion</h1><p>JVM eco-system has some wonderful tools to help with deep dives into the internals of 
runtime behavior. Eclipse MAT is one such tool. Before jumping to &#8216;increase the heap space&#8217; or &#8216;get a larger machine&#8217;, it is worthwhile to investigate which components are actually causing the issue.</p><p>As we have seen, some clever data-structure optimizations can save a lot, both in performance and in money spent on infrastructure.</p>]]></content:encoded></item><item><title><![CDATA[A value de-duplicated cache - 1]]></title><description><![CDATA[Analyzing heap memory with Eclipse MAT]]></description><link>https://cafeaffe.substack.com/p/a-value-de-duplicated-cache-1</link><guid isPermaLink="false">https://cafeaffe.substack.com/p/a-value-de-duplicated-cache-1</guid><dc:creator><![CDATA[Gagan]]></dc:creator><pubDate>Wed, 08 May 2024 07:43:42 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!MwOD!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff89ba0e8-8250-4bde-af86-7b8d6bdbb70f_931x920.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Before we get into the details of the cache, or anything else for that matter, let&#8217;s see why this article exists in the first place - what problem we are trying to solve.</p><h2>OOM errors and Heap dumps</h2><p>The situation: a standard Java <a href="https://tomcat.apache.org/">Tomcat</a> service that handles sync and async API requests, does some processing, and sends back a response - with occasional heap dumps due to OOM errors. <em>(You can enable heap dumps on out-of-memory with </em><code>-XX:+HeapDumpOnOutOfMemoryError</code> as a JVM arg.)</p><p><code>OutOfMemory</code> errors occur when the JVM cannot allocate the required space because there is none left in the heap.</p><p>The obvious way to fix it is to assign a larger heap. 
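</p><p>For reference, the heap size and heap-dump flags are typically combined in the launch command; a sketch (the jar name, heap size and dump path are illustrative, not from the original setup):</p>

```shell
# 4 GB heap; write an .hprof dump to /var/dumps on OutOfMemoryError
java -Xmx4g \
     -XX:+HeapDumpOnOutOfMemoryError \
     -XX:HeapDumpPath=/var/dumps \
     -jar app.jar
```

<p>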
But that might cost a lot depending on how many servers you have - and what if the heap is still not enough?</p><p>The other way to fix it is to ignore it. <strong>Please don&#8217;t</strong>. And if you do, make sure to remove the JVM arg <code>-XX:+HeapDumpOnOutOfMemoryError</code>, as it will clutter the server host with <code>.hprof</code> files until the server runs out of disk space. (Which reminds us to have alarms on disk usage.)</p><p>So, if we are doing neither of the above - using a larger heap or ignoring the problem - then we are left with &#8216;fixing it&#8217; like real engineers. And in order to do that, we need to analyze the heap dump files.</p><h3>Analyzing heap dump files</h3><p>We can use the wonderful <a href="https://eclipse.dev/mat/">Eclipse MAT</a> to analyze our <code>.hprof</code> files. Open them up, and let&#8217;s see what is eating up the most memory.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!MwOD!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff89ba0e8-8250-4bde-af86-7b8d6bdbb70f_931x920.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!MwOD!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff89ba0e8-8250-4bde-af86-7b8d6bdbb70f_931x920.png 424w, https://substackcdn.com/image/fetch/$s_!MwOD!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff89ba0e8-8250-4bde-af86-7b8d6bdbb70f_931x920.png 848w, 
https://substackcdn.com/image/fetch/$s_!MwOD!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff89ba0e8-8250-4bde-af86-7b8d6bdbb70f_931x920.png 1272w, https://substackcdn.com/image/fetch/$s_!MwOD!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff89ba0e8-8250-4bde-af86-7b8d6bdbb70f_931x920.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!MwOD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff89ba0e8-8250-4bde-af86-7b8d6bdbb70f_931x920.png" width="931" height="920" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f89ba0e8-8250-4bde-af86-7b8d6bdbb70f_931x920.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:920,&quot;width&quot;:931,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:418380,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!MwOD!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff89ba0e8-8250-4bde-af86-7b8d6bdbb70f_931x920.png 424w, https://substackcdn.com/image/fetch/$s_!MwOD!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff89ba0e8-8250-4bde-af86-7b8d6bdbb70f_931x920.png 848w, 
https://substackcdn.com/image/fetch/$s_!MwOD!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff89ba0e8-8250-4bde-af86-7b8d6bdbb70f_931x920.png 1272w, https://substackcdn.com/image/fetch/$s_!MwOD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff89ba0e8-8250-4bde-af86-7b8d6bdbb70f_931x920.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Interesting, about 3 GB of the memory is taken up by a single <code>ProxyClass</code> instance ! 
And, what exactly is that proxy class?</p><p>(If you don&#8217;t know what is a proxy class, read it <a href="https://docs.oracle.com/javase%2F7%2Fdocs%2Fapi%2F%2F/java/lang/reflect/Proxy.html">here</a>, and <a href="https://en.wikipedia.org/wiki/Proxy_pattern">here</a>.)</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!anN1!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2bcd6d98-0f07-4ca1-bd35-d099902e57d2_2058x662.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!anN1!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2bcd6d98-0f07-4ca1-bd35-d099902e57d2_2058x662.png 424w, https://substackcdn.com/image/fetch/$s_!anN1!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2bcd6d98-0f07-4ca1-bd35-d099902e57d2_2058x662.png 848w, https://substackcdn.com/image/fetch/$s_!anN1!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2bcd6d98-0f07-4ca1-bd35-d099902e57d2_2058x662.png 1272w, https://substackcdn.com/image/fetch/$s_!anN1!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2bcd6d98-0f07-4ca1-bd35-d099902e57d2_2058x662.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!anN1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2bcd6d98-0f07-4ca1-bd35-d099902e57d2_2058x662.png" width="1456" height="468" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2bcd6d98-0f07-4ca1-bd35-d099902e57d2_2058x662.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:468,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:391187,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!anN1!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2bcd6d98-0f07-4ca1-bd35-d099902e57d2_2058x662.png 424w, https://substackcdn.com/image/fetch/$s_!anN1!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2bcd6d98-0f07-4ca1-bd35-d099902e57d2_2058x662.png 848w, https://substackcdn.com/image/fetch/$s_!anN1!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2bcd6d98-0f07-4ca1-bd35-d099902e57d2_2058x662.png 1272w, https://substackcdn.com/image/fetch/$s_!anN1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2bcd6d98-0f07-4ca1-bd35-d099902e57d2_2058x662.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>So, essentially the proxy class is generated through AOP which implements a Cache over a method. 
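</p><p>The LRU idiom behind such a cache is worth spelling out: a <code>LinkedHashMap</code> constructed with access ordering plus an overridden <code>removeEldestEntry</code>. A minimal sketch (illustrative, not the service&#8217;s actual code; raw types keep it short):</p>

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache: iteration follows access order, and the eldest
// (least recently used) entry is evicted once capacity is exceeded.
class LruCache extends LinkedHashMap {
    private final int maxEntries;

    LruCache(int maxEntries) {
        super(16, 0.75f, true);      // accessOrder = true
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry eldest) {
        return size() > maxEntries;  // drop the LRU entry beyond capacity
    }
}
```

<p>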
And the cache is a <code>LinkedHashMap</code> used as an LRU.</p><p>Looking at the contents of the cache:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!tnSh!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9648ac91-603e-42f6-b824-2fe23b5587fb_2794x1414.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!tnSh!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9648ac91-603e-42f6-b824-2fe23b5587fb_2794x1414.png 424w, https://substackcdn.com/image/fetch/$s_!tnSh!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9648ac91-603e-42f6-b824-2fe23b5587fb_2794x1414.png 848w, https://substackcdn.com/image/fetch/$s_!tnSh!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9648ac91-603e-42f6-b824-2fe23b5587fb_2794x1414.png 1272w, https://substackcdn.com/image/fetch/$s_!tnSh!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9648ac91-603e-42f6-b824-2fe23b5587fb_2794x1414.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!tnSh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9648ac91-603e-42f6-b824-2fe23b5587fb_2794x1414.png" width="1456" height="737" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9648ac91-603e-42f6-b824-2fe23b5587fb_2794x1414.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:737,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1088213,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!tnSh!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9648ac91-603e-42f6-b824-2fe23b5587fb_2794x1414.png 424w, https://substackcdn.com/image/fetch/$s_!tnSh!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9648ac91-603e-42f6-b824-2fe23b5587fb_2794x1414.png 848w, https://substackcdn.com/image/fetch/$s_!tnSh!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9648ac91-603e-42f6-b824-2fe23b5587fb_2794x1414.png 1272w, https://substackcdn.com/image/fetch/$s_!tnSh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9648ac91-603e-42f6-b824-2fe23b5587fb_2794x1414.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>This is even more interesting. Most of the values in the map show up as the same size. That might be normal if the data were well-defined, fixed-size packets, but this service deals purely in business data, which is almost never standardized to a fixed size.</p><p>Looking deeper at a few values (not shown here), we can conclude that they all hold the same data. 
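</p><p>This observation points at the remedy explored in the next part: if many keys share one identical value, the value only needs to be stored once, keyed by a content hash. A bare-bones sketch of the idea (class and method names here are illustrative, not the actual fix):</p>

```java
import java.util.HashMap;
import java.util.Objects;

// Value de-duplication sketch: each key maps to a content hash, and the
// value object itself is stored once per distinct hash.
class DeDupStore {
    private final HashMap keyToHash = new HashMap();
    private final HashMap hashToValue = new HashMap();

    void put(Object key, Object value) {
        Integer h = Objects.hashCode(value);
        keyToHash.put(key, h);
        hashToValue.put(h, value);   // duplicate values collapse to one entry
    }

    Object get(Object key) {
        Object h = keyToHash.get(key);
        return h == null ? null : hashToValue.get(h);
    }

    int distinctValues() {
        return hashToValue.size();   // memory now scales with distinct values
    }
}
```

<p>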
Of course, we could manually check all of them, but a reasonable sample gives enough confidence.</p><p>Essentially, we are looking at a <code>HashMap</code> with many different keys holding the same value, which consumes 3 GB - about 55% of the heap - and causes the frequent OOM errors.</p><p>In the <a href="https://cafeaffe.substack.com/p/a-value-de-duplicated-cache-2">next article</a>, we will see where the duplicate data is coming from and what we can do about it.</p>]]></content:encoded></item><item><title><![CDATA[Whats in my toolbox?]]></title><description><![CDATA[No, I am not talking about the iTerm or oh-my-zsh setup, or IntelliJ.]]></description><link>https://cafeaffe.substack.com/p/whats-in-my-toolbox</link><guid isPermaLink="false">https://cafeaffe.substack.com/p/whats-in-my-toolbox</guid><dc:creator><![CDATA[Gagan]]></dc:creator><pubDate>Thu, 18 Apr 2024 06:28:27 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb27ef4f-a07d-4376-bdbd-235011a9b215_144x144.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>No, I am not talking about the iTerm or oh-my-zsh setup, or IntelliJ. Although I do use them.</p><p>I am noting down the items I have found very useful for organizing myself (and my work), planning better and killing entropy. </p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://cafeaffe.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading cafeaffe! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h2>Emails</h2><p>I am one of those who created a <strong><a href="https://gmail.com/">gmail</a></strong> account when it was freshly brewed. I still use it every day. But the spam, promotions etc. are, well, nothing less than crazy. And it runs at almost 99% capacity.</p><p>Also, every person that shares my name probably assumes they share my email too. This has led me to go for a paid <strong><a href="https://workspace.google.com/">Google Workspace</a></strong>, with my own domain. </p><p>Yeah, it costs. But it is much better! And I love gmail.</p><p>Other emails that I use: <strong><a href="https://proton.me/mail">protonmail</a></strong> and <strong><a href="https://tuta.com/">tutamail</a></strong>, among others. I absolutely love both of them for their privacy features. Also the email alias feature from Proton!</p><p>The key is to split up external comms across various emails at random, and never share the critical ones for random subscriptions or RSS feeds.</p><h2>Notes</h2><p>I write a lot of notes. And I like notes with markdown features. I started with <strong><a href="https://evernote.com/">EverNote</a></strong>, which I loved, but they got too greedy and restricted users to just 50 notes - without notice, too.</p><p>I found <strong><a href="https://notion.so/">Notion</a></strong> quite nice. There are limits on how much free content it can hold, but I guess when I hit that limit, I will switch to something else like <strong><a href="https://coda.io/">Coda</a></strong>. 
None of the apps have unlimited notes as far as I can tell.</p><h2>Planning</h2><p>While Notion is pretty good for planning, calendars and such, sometimes I need to share plans with others. For that, I found <a href="https://to-do.office.com/tasks/">Microsoft To Do</a> and <a href="https://keep.google.com/">Google Keep</a> pretty handy.</p><p>Other items that I use from time to time: <a href="https://www.jetbrains.com/youtrack">YouTrack</a> and <a href="https://trello.com/">Trello</a>. </p><h2>Password Managers</h2><p>With hundreds of sites and apps to log in to, it&#8217;s risky to reuse the same password in many places. I used <a href="https://www.lastpass.com/">LastPass</a> for a while, until they suffered a security breach. These days I am using providers with better privacy and encryption: <a href="https://proton.me/pass">ProtonPass</a> and <a href="https://bitwarden.com/">BitWarden</a>. Super useful!</p><h2>Storage</h2><p>I need somewhere to store my random useless pictures and PDF files. I am paying a minimal amount for a 200 GB plan from Apple <a href="https://www.icloud.com/">iCloud</a>. I use it only because it is well integrated with my phone.</p><p>I also use Google Drive extensively, and the paid plan (Google Workspace) comes with 30 GB per license.</p><p>Of late, I have started using some private encrypted storage: <a href="https://mega.io/">mega</a>, <a href="https://koofr.eu/">koofr</a>, <a href="https://filen.io/">Filen</a> and <a href="https://nordlocker.com/">NordLocker</a>.</p><h2>VPN</h2><p>I am not a regular user of VPNs, but when I do use one, it is <a href="https://protonvpn.com/">ProtonVPN</a>. There are some good European providers such as <a href="https://nordvpn.com/">NordVPN</a>, but I haven&#8217;t found the need for them yet.</p><h2>File Shares / Pastes</h2><p>There are some very nice tools around these. 
I found <a href="https://wormhole.app/">wormhole</a> and <a href="https://paste.ec/">paste.ec</a> particularly interesting.</p><h2>Messaging</h2><p>I don&#8217;t like <a href="https://www.whatsapp.com/">WhatsApp</a> much, but unfortunately that&#8217;s what most people use, so I am kind of stuck with it.</p><p>Given a choice, I prefer <a href="https://signal.org/">Signal</a> over WhatsApp, as it doesn&#8217;t require you to add contacts by phone numbers.</p><p>I recently discovered <a href="https://simplex.chat/">Simplex</a>. That&#8217;s another level of anonymity, but I doubt I will use it anytime soon.</p>]]></content:encoded></item><item><title><![CDATA[cafeaffe]]></title><description><![CDATA[This is cafeaffe.]]></description><link>https://cafeaffe.substack.com/p/cafeaffe</link><guid isPermaLink="false">https://cafeaffe.substack.com/p/cafeaffe</guid><dc:creator><![CDATA[Gagan]]></dc:creator><pubDate>Thu, 18 Apr 2024 03:13:08 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!KgjP!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb27ef4f-a07d-4376-bdbd-235011a9b215_144x144.png" length="0" 
type="image/jpeg"/><content:encoded><![CDATA[<p>This is cafeaffe. Read as Cafe-Affe.</p><p>It doesn&#8217;t mean much - it is one of those 32-bit numbers, written in hex, frequently used in dev or tests (similar to <a href="https://en.wikipedia.org/wiki/Foobar">foo-bar</a> or <a href="https://en.wikipedia.org/wiki/Alice_and_Bob">Alice-Bob</a>), particularly in low-level programming.</p><p>It is a nice, memorable word. Other such words include DeadBeef, CafeBabe etc.</p><p>Trivia: <a href="https://www.artima.com/insidejvm/whyCAFEBABE.html">CafeBabe</a> is the magic number in the official Java class file format.</p>]]></content:encoded></item></channel></rss>