<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom">
    <channel>
        <title>Gopal Verma</title>
        <link>https://gopx.dev</link>
        <description>Gopal Verma's Portfolio</description>
        <lastBuildDate>Thu, 25 Sep 2025 18:44:40 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>https://github.com/jpmonette/feed</generator>
        <image>
            <title>Gopal Verma</title>
            <url>https://gopx.dev/og.jpg</url>
            <link>https://gopx.dev</link>
        </image>
        <copyright>CC BY-NC-SA 4.0 2025 © Gopal Verma</copyright>
        <atom:link href="https://gopx.dev/feed.xml" rel="self" type="application/rss+xml"/>
        <item>
            <title><![CDATA[Edge Computing & TinyML]]></title>
            <link>https://gopx.dev/diary/blogs/edge-computing-and-tinyml</link>
            <guid>https://gopx.dev/diary/blogs/edge-computing-and-tinyml</guid>
            <pubDate>Thu, 25 Sep 2025 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>The past decade has witnessed an explosion in the number of IoT endpoints—sensors, wearables, actuators, smart appliances. While cloud-based analytics have served us well, emerging use cases demand <strong>real-time</strong>, <strong>low-latency</strong>, <strong>private</strong>, and <strong>ultra-efficient</strong> processing. That’s where <strong>edge computing</strong> and <strong>TinyML</strong> come together, enabling machine learning models to run <em>directly on devices</em> or on edge nodes close to devices.</p>
<p>In this blog, I’ll walk you through:</p>
<ul>
<li>What edge computing and TinyML are, and why they matter</li>
<li>Architectural patterns and tradeoffs</li>
<li>How to build and deploy lightweight models</li>
<li>Real-world case studies (manufacturing, healthcare, conservation)</li>
<li>Performance comparisons and practical challenges</li>
<li>Future directions</li>
</ul>
<p>My goal is to keep this human and conversational, not overly formal—let’s explore this together.</p>
<h2>What is Edge Computing, and Why Now?</h2>
<p>Edge computing refers to moving computation, storage, or analytics closer to where the data is generated (the “edge” of the network) instead of sending everything to a centralized cloud.</p>
<h3>Why is edge computing suddenly so relevant?</h3>
<ul>
<li><strong>Latency Sensitivity</strong>: Some applications (e.g. defect detection, autonomous control, medical alerts) can’t wait for a round-trip to the cloud.</li>
<li><strong>Bandwidth / Cost Constraints</strong>: Transmitting raw sensor data (e.g. video, high-sample signals) is expensive or impractical.</li>
<li><strong>Connectivity Gaps</strong>: Many devices operate in remote or intermittently connected environments.</li>
<li><strong>Privacy &amp; Compliance</strong>: Keeping sensitive data locally (on device) avoids sending raw personal or health data over networks.</li>
<li><strong>Resiliency</strong>: Edge nodes can continue working even when connectivity to the cloud is degraded.</li>
</ul>
<p>IBM lists a host of use cases—from autonomous vehicles to industrial control and healthcare—where edge computing delivers tangible value. (<a href="https://www.ibm.com/think/topics/edge-computing-use-cases" title="Edge computing: Top use cases">IBM</a>)</p>
<h3>Edge tiers and hybrid architectures</h3>
<p>You don’t always need <em>all</em> the intelligence on the microcontroller. A typical architecture is hierarchical:</p>
<ol>
<li><strong>Device / Sensor level</strong> (microcontroller, sensor node)</li>
<li><strong>Edge gateway / local aggregator</strong> (Raspberry Pi, industrial PC, edge server)</li>
<li><strong>Cloud backend / central AI / model updates</strong></li>
</ol>
<p>At each layer, some processing, filtering, or inference may happen. The trick is knowing <strong>which tasks</strong> belong where.</p>
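<p>As a toy illustration of that split, here is a hedged Python sketch of a tiered routing policy (all thresholds and function names are made up): the device does cheap screening, the gateway runs a heavier check, and only confirmed anomalies travel to the cloud.</p>
<pre><code class="language-python">def device_screen(window, energy_threshold=0.5):
    # Tier 1: cheap on-device check, e.g. mean signal energy above a threshold.
    energy = sum(x * x for x in window) / len(window)
    return energy &gt; energy_threshold  # escalate to the gateway if suspicious

def gateway_infer(window):
    # Tier 2: stand-in for a heavier model running on the edge gateway.
    return max(abs(x) for x in window) &gt; 1.5  # placeholder decision rule

def route(window):
    # Returns (tier that made the final call, anomaly decision).
    if not device_screen(window):
        return (&quot;device&quot;, False)   # filtered locally, nothing transmitted
    if not gateway_infer(window):
        return (&quot;gateway&quot;, False)  # escalated once, rejected at the edge
    return (&quot;cloud&quot;, True)         # only confirmed anomalies go upstream

print(route([0.1, -0.2, 0.1]))  # quiet signal, handled on-device
print(route([2.0, -2.1, 1.9]))  # loud anomaly, forwarded upstream
</code></pre>
<p>The point is the shape of the decision, not the rules themselves: each tier discards as much as it can before anything crosses a network link.</p>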
<h2>Enter TinyML: Machine Learning for Microcontrollers</h2>
<p><strong>TinyML</strong> is the discipline of building and deploying ML models on heavily resource-constrained devices (microcontrollers, low-power SoCs). The idea: run inference (not full training) on ultra-low-power hardware.</p>
<h3>Key characteristics of TinyML</h3>
<ul>
<li>Works on devices with <strong>tens to hundreds of kilobytes</strong> of RAM and limited compute</li>
<li>Emphasis on <strong>very low-power consumption</strong></li>
<li>Uses optimized models (quantized, pruned, compact architectures)</li>
<li>Usually supports a <strong>fixed inference pipeline</strong>, not full retraining</li>
</ul>
<p>See the Embedded article “Deploying Neural Networks on Microcontrollers with TinyML” for a good introduction to the workflow. (<a href="https://www.embedded.com/deploying-neural-networks-on-microcontrollers-with-tinyml/" title="Deploying Neural Networks on Microcontrollers with TinyML">Embedded</a>)</p>
<p>Recent survey work highlights how edge computing + TinyML synergize: embedding intelligence at the endpoints to reduce communication, latency, and energy consumption. (<a href="https://www.mdpi.com/2079-9292/13/17/3562" title="Advancements in TinyML: Applications, Limitations, and ...">MDPI</a>)</p>
<h2>Building and Deploying TinyML Models — A Walkthrough</h2>
<p>Let me outline a practical pipeline for developing a TinyML solution. I’ll also show some code snippets to make this concrete.</p>
<h3>1. Data collection and preprocessing</h3>
<p>You gather data from sensors (accelerometer, vibration, audio, etc.). Preprocessing might include filtering, normalization, windowing, feature extraction (e.g. FFT, spectrograms).</p>
<p>Example (Python) for a sliding window:</p>
<pre><code class="language-python">import numpy as np

def sliding_windows(signal, window_size, step_size):
    windows = []
    for start in range(0, len(signal) - window_size + 1, step_size):
        windows.append(signal[start : start + window_size])
    return np.stack(windows)

# Example: 1D accelerometer series
sig = np.load(&quot;accel_data.npy&quot;)
windows = sliding_windows(sig, window_size=128, step_size=32)
</code></pre>
<p>You might compute features like mean, variance, FFT bins, etc. Or directly feed the raw window into a small neural network.</p>
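<p>As a concrete (and deliberately minimal) example of such a feature vector, assuming NumPy on the training machine:</p>
<pre><code class="language-python">import numpy as np

def window_features(window, n_fft_bins=4):
    # Mean, variance, and the first few FFT magnitudes of one window.
    # Illustrative feature set; real deployments pick features that fit
    # the MCU budget and survive quantization.
    spectrum = np.abs(np.fft.rfft(window))
    return np.concatenate([[window.mean(), window.var()], spectrum[:n_fft_bins]])

win = np.sin(np.linspace(0, 4 * np.pi, 128))  # two sine cycles as a stand-in signal
feats = window_features(win)
print(feats.shape)  # (6,): 2 statistics plus 4 FFT bins
</code></pre>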
<h3>2. Model selection, training &amp; optimization</h3>
<p>Pick a compact neural architecture (e.g. small CNN, 1D conv, MLP). Train it on a desktop or server. Then:</p>
<ul>
<li><strong>Quantize</strong> (e.g. 8-bit integer operations)</li>
<li><strong>Prune</strong> (remove redundant weights)</li>
<li><strong>Knowledge distillation</strong> (train small model to mimic a larger one)</li>
</ul>
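<p>Of these three, knowledge distillation is the least self-explanatory, so here is a hedged NumPy sketch of its core objective (softened cross-entropy between teacher and student logits; the temperature and the logit values are illustrative):</p>
<pre><code class="language-python">import numpy as np

def softmax(z, temperature=1.0):
    z = z / temperature
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    # Cross-entropy between softened teacher and student distributions.
    # In practice this term is mixed with the usual hard-label loss and
    # scaled by temperature**2.
    p_teacher = softmax(teacher_logits, temperature)
    log_p_student = np.log(softmax(student_logits, temperature) + 1e-12)
    return -(p_teacher * log_p_student).sum(axis=-1).mean()

teacher = np.array([[5.0, 1.0, 0.5]])
close_student = np.array([[4.8, 1.1, 0.4]])  # mimics the teacher well
far_student = np.array([[0.5, 1.0, 5.0]])    # disagrees with the teacher
print(distillation_loss(close_student, teacher) &lt; distillation_loss(far_student, teacher))
</code></pre>
<p>A student that tracks the teacher's softened distribution gets a lower loss, which is exactly the training signal distillation exploits.</p>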
<p>Frameworks like <strong>TensorFlow Lite for Microcontrollers (TFLite Micro)</strong> help convert models to C arrays. There’s also <strong>Edge Impulse</strong> (a no-code/low-code platform) that targets TinyML workflows.</p>
<p>Example: converting a Keras model to a TFLite quantized model:</p>
<pre><code class="language-python">import tensorflow as tf

model = ...  # your trained Keras model
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
# Optionally specify representative dataset for quantization
def representative_dataset():
    for x in tf.data.Dataset.from_tensor_slices(training_data).batch(1).take(100):
        yield [x]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open(&quot;model_quant.tflite&quot;, &quot;wb&quot;) as f:
    f.write(tflite_model)
</code></pre>
<h3>3. Porting and deploying to microcontroller</h3>
<p>Once you have a <code>.tflite</code> file, you embed it in the firmware. If using TFLite Micro, you include the model as a C array.</p>
<p>Example skeleton (C++):</p>
<pre><code class="language-cpp">#include &quot;tensorflow/lite/micro/micro_mutable_op_resolver.h&quot;
#include &quot;tensorflow/lite/micro/micro_interpreter.h&quot;
#include &quot;tensorflow/lite/schema/schema_generated.h&quot;
#include &quot;model_data.h&quot;  // generated header containing the model bytes

constexpr int tensor_arena_size = 10 * 1024;  // adjust as needed
uint8_t tensor_arena[tensor_arena_size];

// Kept at file scope so both setup() and loop() can reach them.
tflite::MicroInterpreter* interpreter = nullptr;
TfLiteTensor* input = nullptr;
TfLiteTensor* output = nullptr;

void setup() {
  const tflite::Model* model = tflite::GetModel(g_model_data);  // symbol from model_data.h

  static tflite::MicroMutableOpResolver&lt;10&gt; resolver;
  resolver.AddFullyConnected();
  resolver.AddConv2D();
  // add the other ops your model needs

  static tflite::MicroInterpreter static_interpreter(
      model, resolver, tensor_arena, tensor_arena_size);
  interpreter = &amp;static_interpreter;

  interpreter-&gt;AllocateTensors();

  input = interpreter-&gt;input(0);
  output = interpreter-&gt;output(0);
}

void loop() {
  // fill input data, e.g. copy a sensor window into input-&gt;data.int8 (if quantized)
  interpreter-&gt;Invoke();
  int8_t* result = output-&gt;data.int8;
  // interpret the output (e.g. compare class scores, raise alerts)
}
</code></pre>
<p>You should also handle sensor reading, buffering, interrupts, power management, etc.</p>
<h3>4. Edge &amp; cloud coordination</h3>
<p>Your devices may periodically transmit <em>only the inference results</em>, or if anomaly detected, the raw data. The edge gateway or cloud can perform heavier aggregation, model updates, retraining, and fleet management.</p>
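<p>That reporting policy fits in a few lines. The message format and anomaly threshold below are illustrative, not a real protocol:</p>
<pre><code class="language-python">def build_report(window, score, anomaly_threshold=0.8):
    # Transmit only the inference result; attach raw data only on anomalies.
    report = {&quot;score&quot;: round(score, 3), &quot;anomaly&quot;: score &gt; anomaly_threshold}
    if report[&quot;anomaly&quot;]:
        report[&quot;raw_window&quot;] = list(window)  # expensive payload, sent rarely
    return report

print(build_report([0.1, 0.2], score=0.30))  # small message: result only
print(build_report([0.1, 9.7], score=0.95))  # anomaly: raw window attached
</code></pre>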
<p>You can also adopt <strong>hierarchical TinyML</strong>, where each device runs a simple model, and an edge node does ensemble decisions across multiple endpoints. For example, Sanchez-Iborra et al. propose a two-layer TinyML scheme in agriculture that reduces communication and energy costs. (<a href="https://informatica.vu.lt/journal/INFORMATICA/article/1281/text" title="Intelligent and Efficient IoT Through the Cooperation of ...">Informatica</a>)</p>
<h2>Real-World Case Studies &amp; Applications</h2>
<p>To ground this discussion, here are interesting real-world uses:</p>
<h3>Manufacturing &amp; Predictive Maintenance</h3>
<ul>
<li>TinyML sensors continuously listen to motor or bearing vibration; when patterns deviate, the device raises an alert locally instead of streaming raw time-series.</li>
<li>An ultra-low-power visual TinyML system performed real-time object detection at 30 FPS while drawing only ~160 mW, using a co-processor + MCU setup. (<a href="https://arxiv.org/abs/2207.04663" title="An Ultra-low Power TinyML System for Real-time Visual Processing at Edge">arXiv</a>)</li>
</ul>
<h3>Healthcare &amp; Remote Monitoring</h3>
<ul>
<li>Wearable sensors performing ECG or breathing analysis locally to detect arrhythmias or apnea events in real time.</li>
<li>In environmental monitoring / conservation, one project deployed TinyML on an Arduino Nano 33 BLE to classify hornbill bird calls (for wildlife monitoring) directly at the edge. (<a href="https://arxiv.org/abs/2504.12272" title="Edge Intelligence for Wildlife Conservation: Real-Time Hornbill Call Classification Using TinyML">arXiv</a>)</li>
</ul>
<h3>Smart Agriculture / Environmental IoT</h3>
<ul>
<li>A TinyML + LoRa setup where devices predict optimal LoRa channel hopping locally to reduce packet collisions and improve link performance. This approach increased RSSI by up to 63% and SNR by 44% compared to random hopping. (<a href="https://arxiv.org/abs/2412.01609" title="Optimizing LoRa for Edge Computing with TinyML Pipeline for Channel Hopping">arXiv</a>)</li>
<li>In a hierarchical scheme, as earlier mentioned, IoT nodes use TinyML to make local decisions, and these are aggregated by an edge node to form a global decision (e.g. soil moisture control). (<a href="https://informatica.vu.lt/journal/INFORMATICA/article/1281/text" title="Intelligent and Efficient IoT Through the Cooperation of ...">Informatica</a>)</li>
</ul>
<h3>Urban Mobility &amp; Smart Cities</h3>
<ul>
<li>A FIWARE-based architecture extended to support TinyML and ML Ops for urban systems (e.g. traffic), managing full model lifecycles across edge devices. (<a href="https://arxiv.org/abs/2411.13583" title="Enhanced FIWARE-Based Architecture for Cyberphysical Systems With Tiny Machine Learning and Machine Learning Operations: A Case Study on Urban Mobility Systems">arXiv</a>)</li>
</ul>
<h3>Consumer &amp; Ambient Devices</h3>
<ul>
<li>“No More Coffee Spills” (NMCS): a microcontroller detects brewing sound via microphone and uses a vision module (TinyML) to check whether a cup is present; alerts the user if the cup is missing. (<a href="https://files.seeedstudio.com/wiki/K1100-quick-start/TinyML-Case-Studies.pdf" title="TinyML Case Studies">Seeed Studio Files</a>)</li>
<li>Liquid classification and “electronic nose” devices built using tiny sensors + TinyML to distinguish substances like water quality or beverage types. (<a href="https://files.seeedstudio.com/wiki/K1100-quick-start/TinyML-Case-Studies.pdf" title="TinyML Case Studies">Seeed Studio Files</a>)</li>
</ul>
<h2>Performance Comparison: Edge / TinyML vs Cloud ML</h2>
<p>Here’s a rough sketch of tradeoffs (actual numbers depend heavily on use case, hardware, model):</p>
<table>
<thead>
<tr>
<th>Metric</th>
<th>TinyML / Edge</th>
<th>Cloud ML</th>
</tr>
</thead>
<tbody>
<tr>
<td>Latency</td>
<td>Very low (ms)</td>
<td>Higher (network + processing delay)</td>
</tr>
<tr>
<td>Bandwidth usage</td>
<td>Minimal (only inference results or anomalies)</td>
<td>High (raw data upload)</td>
</tr>
<tr>
<td>Energy / Power</td>
<td>Ultra constrained, optimized</td>
<td>Ample compute and power, but data-transfer costs matter</td>
</tr>
<tr>
<td>Model complexity</td>
<td>Smaller, simpler models; limited capacity</td>
<td>Large, deep models</td>
</tr>
<tr>
<td>Model updates / retraining</td>
<td>Challenging to update many endpoints</td>
<td>Easier in centralized cloud</td>
</tr>
<tr>
<td>Scalability in devices</td>
<td>Better for many distributed endpoints</td>
<td>Network and cost can limit scale</td>
</tr>
<tr>
<td>Security / privacy</td>
<td>Sensitive data stays local</td>
<td>Higher exposure risk transmitting data</td>
</tr>
<tr>
<td>Reliability in connectivity loss</td>
<td>Can run offline</td>
<td>Dependent on network availability</td>
</tr>
</tbody>
</table>
<p>Benchmarks in literature show that for many tasks, quantized TinyML models give acceptable accuracy (within a few percentage points) while drastically reducing memory and compute footprint. For example, the Embedded article mentions that converting models to TFLite and pruning can make them feasible on microcontrollers. (<a href="https://www.embedded.com/deploying-neural-networks-on-microcontrollers-with-tinyml/" title="Deploying Neural Networks on Microcontrollers with TinyML">Embedded</a>)</p>
<p>Also, in some TinyML systems, all weights/features are stored on-chip, avoiding power-hungry off-chip memory accesses—this results in both latency and energy reduction. (<a href="https://arxiv.org/abs/2207.04663" title="An Ultra-low Power TinyML System for Real-time Visual Processing at Edge">arXiv</a>)</p>
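<p>To build intuition for why int8 models lose so little accuracy, here is a hedged NumPy sketch of affine quantization (the scale/zero-point scheme that TFLite-style converters use, simplified to a single tensor):</p>
<pre><code class="language-python">import numpy as np

def quantize_int8(x):
    # Affine int8 quantization: x is approximated by scale * (q - zero_point).
    scale = (x.max() - x.min()) / 255.0
    zero_point = np.round(-128 - x.min() / scale)
    q = np.clip(np.round(x / scale + zero_point), -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return scale * (q.astype(np.float32) - zero_point)

rng = np.random.default_rng(0)
weights = rng.normal(size=1000).astype(np.float32)
q, scale, zp = quantize_int8(weights)
max_err = np.abs(dequantize(q, scale, zp) - weights).max()
print(float(max_err) &lt;= float(scale))  # error stays within one quantization step
</code></pre>
<p>Each weight shrinks from 4 bytes to 1, and the reconstruction error is bounded by the quantization step, which is why accuracy typically drops only a few points.</p>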
<p>But it's not a silver bullet: when the model needs to adapt, learn continuously, or handle large context, cloud or hybrid architectures still play a key role.</p>
<h2>Challenges, Best Practices &amp; Tips</h2>
<h3>Challenges</h3>
<ol>
<li><strong>Memory constraints &amp; fragmentation</strong>: tiny RAM, limited flash.</li>
<li><strong>Model accuracy vs size tradeoff</strong></li>
<li><strong>Energy / battery management</strong></li>
<li><strong>Sensor noise, environment drift</strong></li>
<li><strong>OTA (Over-the-Air) updates / versioning</strong></li>
<li><strong>Security of model and data on endpoint</strong></li>
<li><strong>Toolchain fragmentation</strong></li>
</ol>
<h3>Tips &amp; Best Practices</h3>
<ul>
<li>Start with a small baseline model; profile memory and latency early.</li>
<li>Use <strong>representative datasets</strong> (on-device-like inputs) for quantization calibration.</li>
<li>Apply <strong>pruning</strong>, <strong>quantization-aware training</strong>, and <strong>knowledge distillation</strong>.</li>
<li>Consider <strong>sparse architectures</strong> or efficient building blocks (like separable convolutions).</li>
<li>Use hardware accelerators (e.g., NPU, DSP) when available.</li>
<li>Design your system for <strong>fail-safe fallback</strong> (if edge fails, send minimal data to cloud).</li>
<li>Build robust update and rollback mechanisms.</li>
<li>Monitor drift, plan for periodic re-calibration or remote retraining.</li>
<li>Secure the device (encrypt model, prevent adversarial inputs, secure boot).</li>
</ul>
<h2>Future Directions &amp; Trends</h2>
<ul>
<li><strong>Federated learning / split learning</strong> across tiny devices may distribute training more securely.</li>
<li><strong>Adaptive / self-updating models</strong> that evolve based on sensed context.</li>
<li><strong>Multimodal TinyML</strong>: combining audio, vibration, vision at the edge.</li>
<li><strong>Energy harvesting + perpetual devices</strong>, where the device lives indefinitely powered by solar / ambient.</li>
<li><strong>Better tools and abstractions</strong> (AutoML for TinyML, unified toolchains)</li>
<li><strong>Edge-to-cloud continuum</strong>, where intelligence flows across device, edge, and cloud layers.</li>
</ul>
<p>The trend is that TinyML will increasingly blur the line: instead of pushing all intelligence to cloud or all to device, systems will dynamically allocate workloads across tiers.</p>
<h2>Conclusion</h2>
<p>Edge computing and TinyML together unlock a compelling paradigm: intelligence at the source. We can build systems that respond instantly, conserve bandwidth, protect privacy, and scale across massive fleets of devices. But to get there, you need careful architecture, disciplined optimization, and robust update strategies.</p>
]]></content:encoded>
            <author>hi@gopx.dev (Gopal Verma)</author>
        </item>
        <item>
            <title><![CDATA[Generative AI and Synthetic Data]]></title>
            <link>https://gopx.dev/diary/blogs/generative-ai-and-synthetic-data</link>
            <guid>https://gopx.dev/diary/blogs/generative-ai-and-synthetic-data</guid>
            <pubDate>Wed, 24 Sep 2025 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>In 2025, generative AI is rapidly reshaping how data scientists think about data augmentation, anonymization, and model training—especially in domains with strict privacy constraints. In this blog, we’ll cover:</p>
<ul>
<li>What synthetic data is, and why it matters</li>
<li>How generative models (GANs, diffusion models, LLMs) are applied to create synthetic data</li>
<li>Practical use-cases in finance, healthcare, analytics</li>
<li>Challenges, pitfalls, and best practices</li>
<li>Outlook: where synthetic data is heading</li>
</ul>
<h2>What Is Synthetic Data, and Why It Matters</h2>
<p><strong>Synthetic data</strong> refers to data that is artificially generated (by algorithms) to mimic the statistical properties, structure, and relationships of real-world datasets, without directly revealing identifiable information.</p>
<p>Key motivations:</p>
<ul>
<li><strong>Privacy &amp; compliance</strong>: In sectors like healthcare, finance, and insurance, strict regulations (e.g. HIPAA, GDPR) restrict use or sharing of real sensitive records. Synthetic data offers a privacy-respecting alternative. (<a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC11958975/" title="Synthetic data generation: a privacy-preserving approach ...">PMC</a>)</li>
<li><strong>Data scarcity &amp; class imbalance</strong>: Some events (fraud, rare disease cases) are infrequent. Synthetic data can artificially upsample rare classes or simulate edge-case scenarios. (<a href="https://humansintheloop.org/why-synthetic-data-is-taking-over-in-2025-solving-ais-data-crisis/" title="Why Synthetic Data Is Taking Over in 2025: Solving AI's Data ...">humansintheloop.org</a>)</li>
<li><strong>Safe sharing &amp; collaboration</strong>: Organizations can share synthetic datasets with partners or external researchers without risking exposure of sensitive data. (<a href="https://research.aimultiple.com/synthetic-data-use-cases/" title="Top 20+ Synthetic Data Use Cases">AIMultiple</a>)</li>
<li><strong>Faster prototyping / validation</strong>: You can generate custom scenarios on demand, test model robustness, validate under stress conditions. (<a href="https://www.tredence.com/blog/synthetic-data-generation" title="Synthetic Data Generation with Generative AI for Tabular ...">tredence.com</a>)</li>
</ul>
<p>However, synthetic data is not a silver bullet. The generated data must maintain <strong>utility</strong> (i.e., support model performance) and <strong>fidelity</strong> (i.e., follow real distributional patterns) without leaking sensitive information.</p>
<h2>How Synthetic Data is Created Using Generative Models</h2>
<p>Broadly, synthetic data generation can leverage different classes of generative models. Below are some of the commonly used approaches as of 2025.</p>
<h3>GANs (Generative Adversarial Networks)</h3>
<ul>
<li>A <strong>generator</strong> network produces synthetic samples; a <strong>discriminator</strong> network tries to distinguish real vs synthetic. Over adversarial training, the generator learns to produce realistic samples.</li>
<li>Variants exist for tabular, image, time-series, and mixed-modal data.</li>
<li>In healthcare, recent work proposes <strong>bias-transforming GANs (Bt-GAN)</strong> to generate fair synthetic EHR data by constraining spurious correlations and preserving subgroup densities. (<a href="https://arxiv.org/abs/2404.13634" title="Bt-GAN: Generating Fair Synthetic Healthdata via Bias-transforming Generative Adversarial Networks">arXiv</a>)</li>
<li>In finance, GANs have long been considered for simulating transaction data, though more recently diffusion-based models are gaining attention. (<a href="https://arxiv.org/html/2508.19570v1" title="Generative Models for Synthetic Data">arXiv</a>)</li>
</ul>
<h3>Diffusion Models &amp; Score-Based Generative Models</h3>
<ul>
<li>Diffusion models gradually corrupt data by adding noise, and then learn to reverse the process to sample from the clean distribution.</li>
<li>They are gaining prominence for tabular data generation because they often avoid some of the mode-collapse issues of GANs.</li>
<li>For instance, <strong>FinDiff</strong> is a diffusion-based model tailored for financial tabular data that generates mixed-type attributes (numeric, categorical) with high utility. (<a href="https://arxiv.org/abs/2309.01472" title="FinDiff: Diffusion Models for Financial Tabular Data Generation">arXiv</a>)</li>
</ul>
<h3>Variational Autoencoders (VAEs) and Their Hybrids</h3>
<ul>
<li>VAEs encode data to a latent space distribution, then decode from latent samples back to synthetic instances.</li>
<li>They can be easier to train and more stable than GANs, but may produce more “blurry” or average-like samples, especially for edge cases.</li>
<li>Some hybrid approaches (e.g., VAE + GAN, VAE with conditional priors) attempt to combine stability with realism.</li>
</ul>
<h3>LLMs / Transformer-Based Models for Tabular or Textual Synthetic Data</h3>
<ul>
<li>Large models (e.g., transformer-style architectures) originally built for text are being adapted to generate structured or semi-structured synthetic data.</li>
<li>One common pattern: convert rows into a serialized token sequence and prompt the model to generate new “rows” consistent with learned patterns.</li>
<li>In domains like healthcare or finance, LLMs can also generate synthetic narratives (clinical notes, transaction descriptions) or synthetic attribute augmentation. (<a href="https://www.dataversity.net/how-generative-ai-is-revolutionizing-training-data-with-synthetic-datasets/" title="How Generative AI Is Revolutionizing Training Data with ...">DATAVERSITY</a>)</li>
</ul>
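<p>The serialization pattern in the second bullet is simple enough to sketch directly (the column names and textual format below are made up for illustration):</p>
<pre><code class="language-python">COLUMNS = [&quot;age&quot;, &quot;income&quot;, &quot;region&quot;]

def serialize_row(row):
    # One record becomes one line of text the model can learn to imitate.
    return &quot;, &quot;.join(f&quot;{col} is {row[col]}&quot; for col in COLUMNS)

def parse_row(text):
    # Generated text is parsed back into a (string-valued) record.
    row = {}
    for part in text.split(&quot;, &quot;):
        col, value = part.split(&quot; is &quot;, 1)
        row[col] = value
    return row

encoded = serialize_row({&quot;age&quot;: 42, &quot;income&quot;: 58000, &quot;region&quot;: &quot;EU&quot;})
print(encoded)             # age is 42, income is 58000, region is EU
print(parse_row(encoded))  # round-trips back into a record
</code></pre>
<p>A model fine-tuned or prompted on many such lines can then emit new lines, which the parser turns back into candidate synthetic rows for validation.</p>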
<h3>Auxiliary Models &amp; Nested Synthetic Pipelines</h3>
<ul>
<li>In advanced pipelines, one generative model (auxiliary model) produces synthetic data which is then used to train or test another model. This nested architecture is becoming more common in large-scale AI workflows. (<a href="https://arxiv.org/abs/2501.18493" title="Examining the Expanding Role of Synthetic Data Throughout the AI Development Pipeline">arXiv</a>)</li>
<li>But this layering increases risk: poor synthetic data can cascade errors downstream.</li>
</ul>
<h3>Domain-Specific Enhancements</h3>
<ul>
<li><strong>Knowledge distillation &amp; domain constraints</strong>: For example, CK4Gen integrates Cox proportional hazards model knowledge into synthetic survival datasets to preserve hazard ratios and realistic survival curves. (<a href="https://arxiv.org/abs/2410.16872" title="CK4Gen: A Knowledge Distillation Framework for Generating High-Utility Synthetic Survival Datasets in Healthcare">arXiv</a>)</li>
<li><strong>Fairness-aware generation</strong>: Enforcing constraints so that protected subgroups are fairly represented and bias amplification is avoided (like Bt-GAN above). (<a href="https://arxiv.org/abs/2404.13634" title="Bt-GAN: Generating Fair Synthetic Healthdata via Bias-transforming Generative Adversarial Networks">arXiv</a>)</li>
</ul>
<h2>Practical Use-Cases &amp; Real-World Impact</h2>
<p>Here are concrete domains and use-cases where synthetic data + generative AI have made notable progress by 2025.</p>
<h3>Healthcare &amp; Clinical Research</h3>
<ul>
<li>Synthetic EHR / patient records: Training diagnostic models, decision-support systems, and analytics models without exposing real patient records. (<a href="https://www.dataversity.net/how-generative-ai-is-revolutionizing-training-data-with-synthetic-datasets/" title="How Generative AI Is Revolutionizing Training Data with ...">DATAVERSITY</a>)</li>
<li>Clinical trial simulation: Synthetic trial arms, patient recruitment, and outcome simulation when sample sizes are low. (<a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC11958975/" title="Synthetic data generation: a privacy-preserving approach ...">PMC</a>)</li>
<li>Rare disease modeling: For very low-incidence conditions, synthetic augmentation helps build predictive models. (<a href="https://www.dataversity.net/how-generative-ai-is-revolutionizing-training-data-with-synthetic-datasets/" title="How Generative AI Is Revolutionizing Training Data with ...">DATAVERSITY</a>)</li>
<li>Survival analysis: CK4Gen (cited above) specifically targets survival datasets while preserving survival curve and hazard distributions. (<a href="https://arxiv.org/abs/2410.16872" title="CK4Gen: A Knowledge Distillation Framework for Generating High-Utility Synthetic Survival Datasets in Healthcare">arXiv</a>)</li>
</ul>
<p>However, challenges in healthcare are acute. For example:</p>
<ul>
<li>Synthetic models may miss <strong>temporal dynamics</strong> or subtle signals, reducing fidelity in critical scenarios (e.g. detecting deterioration). (<a href="https://www.rgnmed.com/post/beyond-the-hype-why-synthetic-data-falls-short-in-healthcare-and-how-regenmed-circles-closes-the-gap" title="Beyond the Hype: Why Synthetic Data Falls Short ...">rgnmed.com</a>)</li>
<li>Over-reliance on synthetic data can lead models to underperform in real-world settings if distribution shift is not accounted for.</li>
</ul>
<h3>Finance, Risk &amp; Fraud Detection</h3>
<ul>
<li><strong>Fraud simulation</strong>: Synthetic transaction histories mimic fraudulent behavior to help models detect anomalies, especially when real fraud examples are rare. (<a href="https://www.netguru.com/blog/synthetic-data" title="Synthetic Data: Revolutionizing Modern AI Development in ...">Netguru</a>)</li>
<li><strong>Stress testing / scenario generation</strong>: Generating hypothetical market or economic scenarios to evaluate models (e.g., shocks, crises) without waiting for real events. (<a href="https://arxiv.org/abs/2309.01472" title="FinDiff: Diffusion Models for Financial Tabular Data Generation">arXiv</a>)</li>
<li><strong>Investment analytics &amp; synthetic factor modeling</strong>: Some asset managers use synthetic data to train models where real data is sparse or proprietary. (<a href="https://blogs.cfainstitute.org/investor/2025/07/31/how-genai-powered-synthetic-data-is-reshaping-investment-workflows/" title="How GenAI-Powered Synthetic Data Is Reshaping ...">CFA Institute Daily Browse</a>)</li>
<li><strong>Data sharing among institutions</strong>: Banks or financial bodies generate synthetic datasets to share (e.g. in consortiums) to evaluate common risk models while preserving confidentiality. (<a href="https://arxiv.org/html/2508.19570v1" title="Generative Models for Synthetic Data">arXiv</a>)</li>
</ul>
<h3>Other Domains</h3>
<ul>
<li><strong>Market research &amp; consumer analytics</strong>: Synthetic consumer responses, simulated survey data to explore “what-if” scenarios while maintaining respondent privacy. (<a href="https://www.greenbook.org/insights/data-science/the-secret-life-of-synthetic-data-why-its-taking-over-research" title="The Secret Life of Synthetic Data: Why It's Taking Over ...">greenbook.org</a>)</li>
<li><strong>Autonomous driving / robotics / simulation</strong>: Synthetic sensor and scenario data (e.g., generating rare edge-case scenes) augment real-world collected data. (<a href="https://www.nvidia.com/en-us/use-cases/synthetic-data/" title="Synthetic Data for AI &amp; 3D Simulation Workflows | Use Case">NVIDIA</a>)</li>
<li><strong>3D, vision, and multimodal pipelines</strong>: Synthetic image, video, 3D model generation for training computer vision systems. (<a href="https://www.nvidia.com/en-us/use-cases/synthetic-data/" title="Synthetic Data for AI &amp; 3D Simulation Workflows | Use Case">NVIDIA</a>)</li>
</ul>
<h3>Business Impact Metrics &amp; Estimates</h3>
<ul>
<li>Gartner estimates that by 2028, <strong>80% of data used for AI</strong> will be synthetic in nature. (<a href="https://www.ibm.com/think/insights/streamline-accelerate-ai-initiatives-synthetic-data" title="5 best practices for synthetic data use">IBM</a>)</li>
<li>Many organizations report cost savings, faster iteration cycles, and better model robustness by using synthetic data in augmentation pipelines. (<a href="https://www.netguru.com/blog/synthetic-data" title="Synthetic Data: Revolutionizing Modern AI Development in ...">Netguru</a>)</li>
<li>However, adoption is still emerging: many organizations are in pilot phases or proof-of-concept stages. (<a href="https://www.ibm.com/think/insights/streamline-accelerate-ai-initiatives-synthetic-data" title="5 best practices for synthetic data use">IBM</a>)</li>
</ul>
<h2>Challenges, Risks &amp; Best Practices</h2>
<h3>Key Risks &amp; Limitations</h3>
<ol>
<li>
<p><strong>Fidelity vs. realism trade-off</strong><br>
Synthetic datasets often approximate the “central manifold” of distributions and may underrepresent tails or extreme events, limiting generalizability. (<a href="https://www.rgnmed.com/post/beyond-the-hype-why-synthetic-data-falls-short-in-healthcare-and-how-regenmed-circles-closes-the-gap" title="Beyond the Hype: Why Synthetic Data Falls Short ...">rgnmed.com</a>)</p>
</li>
<li>
<p><strong>Model collapse / recursive degradation</strong><br>
If models are trained repeatedly on synthetic data generated by prior models (a form of recursion), performance can degrade over successive generations. This phenomenon is known as <strong>model collapse</strong>. (<a href="https://en.wikipedia.org/wiki/Model_collapse" title="Model collapse">Wikipedia</a>)</p>
</li>
<li>
<p><strong>Privacy leakage / inference attacks</strong><br>
Poor synthesis may allow adversaries to reconstruct or memorize real individuals, especially if overfitting occurs. Strong differential privacy or noise constraints are often required.</p>
</li>
<li>
<p><strong>Bias amplification &amp; fairness issues</strong><br>
Synthetic generators may propagate or even amplify existing biases (e.g. against underrepresented groups), leading to downstream discrimination. Fairness-aware generation techniques (e.g. Bt-GAN) can mitigate this. (<a href="https://arxiv.org/abs/2404.13634" title="Bt-GAN: Generating Fair Synthetic Healthdata via Bias-transforming Generative Adversarial Networks">arXiv</a>)</p>
</li>
<li>
<p><strong>Validation and trustworthiness</strong><br>
Verifying that synthetic data is “safe” and “useful” is nontrivial. Standard metrics may not reveal subtle errors that hurt downstream performance.</p>
</li>
<li>
<p><strong>Distribution shift &amp; domain mismatch</strong><br>
Synthetic data might not reflect changes in real-world distributions (temporal drift, feature interactions), making models brittle in production.</p>
</li>
</ol>
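<p>One way to make the privacy-leakage risk concrete is a distance-to-closest-record (DCR) check: if synthetic rows sit implausibly close to (or exactly on top of) real rows, the generator has likely memorized training records. Below is a minimal pure-Python sketch on hypothetical two-feature data; a real pipeline would use normalized features and a held-out set of real records.</p>

```python
import math
import random

def dcr(synthetic, real):
    """Distance to the closest real record for each synthetic row (Euclidean)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return [min(dist(s, r) for r in real) for s in synthetic]

# Hypothetical data: 'copied' simulates a generator that memorized real rows
random.seed(0)
real = [[random.random(), random.random()] for _ in range(200)]
fresh = [[random.random(), random.random()] for _ in range(50)]
copied = real[:50]

assert min(dcr(copied, real)) == 0.0   # memorization shows up as zero distances
assert min(dcr(fresh, real)) > 0.0     # genuinely novel rows keep some distance
```

<p>In practice, the DCR distribution of synthetic rows is compared against a real-vs-real baseline rather than checked for exact zeros.</p>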
<h3>Best Practices &amp; Guidelines</h3>
<p>Here are recommended practices to increase the success rate of synthetic data in production:</p>
<ul>
<li><strong>Hybrid training</strong>: Mix synthetic and real data rather than relying purely on synthetic; this anchors models to real-world distributions while still expanding coverage.</li>
<li><strong>Differential privacy / noise injection</strong>: Add controlled noise to ensure that synthetic outputs do not leak sensitive records.</li>
<li><strong>Diversity &amp; oversampling rare regions</strong>: Force generative models to pay attention to underrepresented regions (e.g. enforce sampling weights).</li>
<li><strong>Domain constraints / rules-based anchoring</strong>: Use domain knowledge to enforce invariants (e.g. monotonic relationships, constraints) in generation.</li>
<li><strong>Robust validation suite</strong>: Evaluate synthetic data with multiple metrics: distributional similarity (e.g. KL divergence, feature-wise distances), downstream model performance, outlier detection, and sanity checks for edge-case fidelity.</li>
<li><strong>Progressive scaling &amp; sanity checks</strong>: Start with small, controlled synthetic sets, validate, then expand.</li>
<li><strong>Governance, audit &amp; transparency</strong>: Maintain audit logs, metadata, and clear documentation of where and how synthetic data was used.</li>
<li><strong>Avoid deep recursion loops</strong>: To prevent model collapse, never train new generators solely on synthetic data produced by earlier models; keep injecting fresh real data.</li>
<li><strong>Stress testing on boundary cases</strong>: Purposefully generate adversarial or corner-case synthetic samples to test model robustness.</li>
<li><strong>Continuous monitoring after deployment</strong>: Monitor for drift, anomalies, or degradation in model performance in real production data.</li>
</ul>
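<p>To illustrate the robust validation suite recommended above, here is a small pure-Python sketch of one such metric: a feature-wise KL divergence between binned real and synthetic values. The Gaussian toy columns are hypothetical; in practice you would run this per feature over your actual data (libraries such as SDMetrics package similar checks).</p>

```python
import math
import random

def histogram(values, bins, lo, hi):
    """Bin values into equal-width buckets over [lo, hi], with Laplace smoothing."""
    counts = [0] * bins
    width = (hi - lo) / bins
    for v in values:
        i = min(int((v - lo) / width), bins - 1)
        counts[i] += 1
    # Smoothing keeps every bucket nonzero so the KL term below stays finite
    return [(c + 1) / (len(values) + bins) for c in counts]

def kl_divergence(p, q):
    """KL(P || Q) for two discrete distributions given as probability lists."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def feature_kl(real, synthetic, bins=20):
    """Divergence of a synthetic feature column from the real one."""
    lo = min(min(real), min(synthetic))
    hi = max(max(real), max(synthetic))
    return kl_divergence(histogram(real, bins, lo, hi),
                         histogram(synthetic, bins, lo, hi))

# Hypothetical columns: 'good' matches the real distribution, 'bad' is shifted
random.seed(0)
real = [random.gauss(0.0, 1.0) for _ in range(5000)]
good = [random.gauss(0.0, 1.0) for _ in range(5000)]
bad  = [random.gauss(2.0, 1.0) for _ in range(5000)]

assert feature_kl(real, good) < feature_kl(real, bad)
```

<p>Distributional similarity alone is not sufficient, which is why the list above pairs it with downstream model performance, outlier checks, and privacy tests.</p>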
<p>IBM’s recommendations for synthetic data adoption emphasize many of these, noting that synthetic data is still under-adopted and organizations must build internal capability to validate and monitor utility. (<a href="https://www.ibm.com/think/insights/streamline-accelerate-ai-initiatives-synthetic-data" title="5 best practices for synthetic data use">IBM</a>)</p>
<h2>Sample Architecture: Synthetic Data Pipeline</h2>
<p>Below is a simplified pipeline architecture you might adopt.</p>
<pre><code class="language-txt">Raw / sensitive data  →  Preprocessing &amp; Feature Engineering  
    → Train generative model (GAN / diffusion / transformer)  
    → Synthetic dataset (with metadata, labels)  
    → Validation &amp; filtering  
         • Compare distribution against real  
         • Outlier / anomaly checks  
         • Privacy risk tests  
    → Hybrid dataset (real + synthetic)  
    → Model training / testing / validation  
    → Monitoring &amp; feedback loop  
</code></pre>
<p>Some pipelines integrate domain-checking modules or rules-based correction steps after generation to maintain consistency (e.g., enforce monotonicity, business rules). For temporal or time-series data, generative models may incorporate autoregressive or recurrent components to capture dynamics.</p>
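<p>As a concrete example of such a rules-based correction step, a post-generation hook can enforce domain invariants on every synthetic record. The schema below (ages, admission/discharge days, line-item totals) is purely hypothetical:</p>

```python
def enforce_constraints(row):
    """Apply domain rules to one synthetic record (hypothetical schema)."""
    fixed = dict(row)
    # Range constraint: age must lie in [0, 110]
    fixed["age"] = min(max(fixed["age"], 0), 110)
    # Monotonic constraint: discharge cannot precede admission
    if fixed["discharge_day"] < fixed["admit_day"]:
        fixed["discharge_day"] = fixed["admit_day"]
    # Consistency constraint: the total must equal the sum of line items
    fixed["total"] = fixed["item_a"] + fixed["item_b"]
    return fixed

# A generated record that violates all three rules
sample = {"age": -3, "admit_day": 10, "discharge_day": 7,
          "item_a": 40.0, "item_b": 15.0, "total": 99.0}
clean = enforce_constraints(sample)
assert clean["age"] == 0
assert clean["discharge_day"] == 10
assert clean["total"] == 55.0
```

<p>Rules like these are cheap to apply and catch exactly the kinds of violations (negative ages, impossible date orderings) that statistical similarity metrics tend to miss.</p>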
<h2>Outlook &amp; Trends for 2025 and Beyond</h2>
<p>Here are some emerging trends and predictions:</p>
<ul>
<li><strong>Widespread integration of synthetic data in AI pipelines</strong>: As generative AI matures, synthetic data becomes a mainstream tool—not just for augmentation, but as a core component of model training. (<a href="https://arxiv.org/abs/2501.18493" title="Examining the Expanding Role of Synthetic Data Throughout the AI Development Pipeline">arXiv</a>)</li>
<li><strong>Transformer-based structured-data models</strong>: More LLM-like architectures fine-tuned to generate tabular, time-series, and mixed-type data natively.</li>
<li><strong>On-the-fly synthetic data generation</strong>: Real-time synthetic augmentation during model inference or retraining (e.g. live data drift compensation).</li>
<li><strong>Better privacy guarantees</strong>: Integration of rigorous differential privacy, federated synthetic generation, and cryptographic techniques to bound leakage.</li>
<li><strong>Self-supervised or self-refining synthetic pipelines</strong>: Generative models that improve via closed-loop feedback from downstream tasks.</li>
<li><strong>Domain-specific synthetic platforms</strong>: Vertical solutions (finance, life sciences, IoT) offering synthetic generation tuned for domain constraints.</li>
<li><strong>Hybrid generative / refinement approaches</strong>: For example, Generative Data Refinement (GDR) methods that &quot;clean&quot; mixed or untrusted data rather than generating fully synthetic datasets, a direction that emerged in 2025. (<a href="https://www.businessinsider.com/google-deepmind-ai-training-data-shortage-researchers-harmful-2025-9" title="A key type of AI training data is running out. Googlers have a bold new idea to fix that.">Business Insider</a>)</li>
<li><strong>Guardrails and regulation</strong>: As synthetic data use proliferates, regulatory and auditing frameworks will evolve to ensure accountability, fairness, and privacy compliance.</li>
</ul>
<h2>Sample Starter Code Sketch (Python / PyTorch pseudocode)</h2>
<p>Below is a rough sketch showing how one might build a simple GAN-based synthetic tabular generator. (This is illustrative—not production-ready.)</p>
<pre><code class="language-python">import torch
import torch.nn as nn
import torch.optim as optim

class Generator(nn.Module):
    def __init__(self, latent_dim, output_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 256),
            nn.ReLU(),
            nn.Linear(256, output_dim),
        )
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    def __init__(self, input_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, 256),
            nn.LeakyReLU(0.2),
            nn.Linear(256, 128),
            nn.LeakyReLU(0.2),
            nn.Linear(128, 1),
            nn.Sigmoid(),
        )
    def forward(self, x):
        return self.net(x)

def train_gan(real_data, latent_dim=32, epochs=1000, batch_size=128):
    # real_data: FloatTensor of shape (n_rows, n_features), ideally standardized
    gen = Generator(latent_dim, real_data.shape[1])
    dis = Discriminator(real_data.shape[1])
    opt_g = optim.Adam(gen.parameters(), lr=1e-4)
    opt_d = optim.Adam(dis.parameters(), lr=1e-4)
    criterion = nn.BCELoss()
    for epoch in range(epochs):
        # sample real batch
        idx = torch.randperm(real_data.size(0))[:batch_size]
        real_batch = real_data[idx]
        # train discriminator
        z = torch.randn(batch_size, latent_dim)
        fake = gen(z).detach()
        d_real = dis(real_batch)
        d_fake = dis(fake)
        loss_d = criterion(d_real, torch.ones_like(d_real)) + criterion(d_fake, torch.zeros_like(d_fake))
        opt_d.zero_grad()
        loss_d.backward()
        opt_d.step()
        # train generator
        z2 = torch.randn(batch_size, latent_dim)
        fake2 = gen(z2)
        d_fake2 = dis(fake2)
        loss_g = criterion(d_fake2, torch.ones_like(d_fake2))
        opt_g.zero_grad()
        loss_g.backward()
        opt_g.step()
        if epoch % 100 == 0:
            print(f&quot;Epoch {epoch}, loss_d {loss_d.item():.4f}, loss_g {loss_g.item():.4f}&quot;)
    return gen
</code></pre>
<p>You would extend this with:</p>
<ul>
<li><strong>Conditional generation</strong> (so you can generate based on labels or categories)</li>
<li><strong>Post-processing &amp; clipping / rounding</strong> (to match feature domain)</li>
<li><strong>Privacy noise injection</strong></li>
<li><strong>Validation and rejection sampling</strong></li>
</ul>
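<p>As a sketch of the last two bullets, post-processing and rejection sampling can be wrapped around any generator. The code below substitutes a toy random sampler for <code>gen(z)</code> so it runs standalone; the bounds and the validity rule are hypothetical:</p>

```python
import random

def postprocess(row, bounds):
    """Clip each feature to its valid [lo, hi] domain."""
    return [min(max(v, lo), hi) for v, (lo, hi) in zip(row, bounds)]

def rejection_sample(sample_fn, is_valid, n, max_tries=10000):
    """Draw candidate rows, keeping only those that pass domain checks."""
    kept = []
    for _ in range(max_tries):
        if len(kept) >= n:
            break
        row = sample_fn()
        if is_valid(row):
            kept.append(row)
    return kept

# Toy stand-in for gen(z): two features, the second occasionally impossible
random.seed(1)
bounds = [(0.0, 1.0), (0.0, 100.0)]
sample_fn = lambda: [random.gauss(0.5, 0.3), random.gauss(50.0, 40.0)]
is_valid = lambda row: row[1] >= 0.0   # reject physically impossible negatives

rows = [postprocess(r, bounds) for r in rejection_sample(sample_fn, is_valid, 100)]
assert len(rows) == 100
assert all(0.0 <= a <= 1.0 and 0.0 <= b <= 100.0 for a, b in rows)
```

<p>Rejection filters out rows that break hard rules, while clipping handles soft boundary overshoot; together they keep the released synthetic set inside the feature domain.</p>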
<h2>Conclusion</h2>
<p>Generative AI–driven synthetic data is rapidly maturing into a core tool in the data scientist’s toolbox. With the right architecture, safeguards, and validation practices, synthetic data enables:</p>
<ul>
<li>Privacy-preserving model development</li>
<li>Rich augmentation for rare events</li>
<li>Safer data sharing</li>
<li>Faster experimentation</li>
</ul>
<p>But the hurdles remain: fidelity, bias, model collapse, and validation must be actively managed. As the field evolves in 2025, expect to see:</p>
<ul>
<li>more robust domain-specific synthetic data platforms</li>
<li>generative models that seamlessly produce structured, multimodal, temporal data</li>
<li>better regulatory frameworks and audit tools</li>
</ul>
]]></content:encoded>
            <author>hi@gopx.dev (Gopal Verma)</author>
        </item>
        <item>
            <title><![CDATA[Vibe Coding - Great for Founders, Risky for Future Pros]]></title>
            <link>https://gopx.dev/diary/blogs/vibe-coding</link>
            <guid>https://gopx.dev/diary/blogs/vibe-coding</guid>
            <pubDate>Wed, 14 May 2025 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>The software development landscape is continually evolving, with new methodologies and tools emerging to streamline the creation of digital products. Among the latest trends is the rise of AI-assisted coding, a paradigm shift that has given birth to concepts like &quot;vibe coding&quot;. This approach promises to democratize software development, allowing individuals with limited technical expertise to bring their ideas to life more rapidly than ever before.</p>
<h2>Decoding the &quot;Vibe&quot;: What Exactly is Vibe Coding?</h2>
<p>The term &quot;vibe coding&quot; gained prominence in February 2025 when renowned computer scientist <a href="https://karpathy.ai/">Andrej Karpathy</a> described it as a new kind of coding where one &quot;fully give[s] in to the vibes, embrace[s] exponentials, and forget[s] that the code even exists&quot;. This concept, aligned with advancements in artificial intelligence, particularly large language models (LLMs), refers to the practice of using natural language prompts to instruct AI agents to write code.</p>
<p>Here's a simple example to illustrate the difference:</p>
<p><strong>Traditional Coding:</strong></p>
<pre><code class="language-javascript">function calculateFibonacci(n) {
  if (n &lt;= 1) return n;
  let a = 0, b = 1;
  for (let i = 2; i &lt;= n; i++) {
    const temp = a + b;
    a = b;
    b = temp;
  }
  return b;
}
</code></pre>
<p><strong>Vibe Coding Prompt:</strong></p>
<pre><code class="language-txt">&quot;Write a function that calculates the nth Fibonacci number efficiently&quot;
</code></pre>
<blockquote>
<p>&quot;I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works&quot; - <em>Andrej Karpathy</em></p>
</blockquote>
<p>Instead of developers manually writing every line of code, they describe the desired functionality or problem they aim to solve in plain language, allowing the AI to handle the technical implementation. This signifies a fundamental shift in the programmer's role, moving from the detailed work of manual coding to the higher-level tasks of guiding, testing, and refining the code generated by AI.</p>
<h2>The Indie Advantage: Why Vibe Coding Resonates with Founders and Small Teams</h2>
<p>For indie developers and startup founders, who often operate with limited resources and tight deadlines, vibe coding presents a compelling set of advantages. The ability to rapidly translate ideas into functional prototypes can be a game-changer in the fast-paced world of technology.</p>
<h3>1. Speed and Efficiency</h3>
<p>The most significant benefit of vibe coding is the speed and efficiency it offers. This approach allows for near-instant prototyping and accelerates the overall development and launch process. Indie developers can go from a concept to a working demo or a minimum viable product (MVP) in a significantly shorter timeframe compared to traditional coding methods.</p>
<p>Here's an example of creating a simple API endpoint:</p>
<p><strong>Traditional Coding:</strong></p>
<pre><code class="language-javascript">const express = require('express');
const app = express();

app.use(express.json());

app.post('/api/users', async (req, res) =&gt; {
  try {
    const { name, email } = req.body;
    const user = await User.create({ name, email });
    res.status(201).json(user);
  } catch (error) {
    res.status(500).json({ error: error.message });
  }
});

app.listen(3000, () =&gt; {
  console.log('Server running on port 3000');
});
</code></pre>
<p><strong>Vibe Coding Prompt:</strong></p>
<pre><code class="language-txt">&quot;Create an Express.js API endpoint that handles user creation with name and email&quot;
</code></pre>
<h3>2. Lower Barrier to Entry</h3>
<p>Vibe coding democratizes software development by making it considerably easier for individuals who may not possess a traditional technical background. This empowers entrepreneurs, designers, and experts from various fields to bring their software ideas to fruition without requiring extensive coding knowledge or the immediate need to hire a team of developers. This democratization can lead to a surge of innovation as individuals with unique domain expertise can now directly participate in creating software solutions tailored to their specific needs and problems.</p>
<h3>3. Focus on Creativity</h3>
<p>By handling much of the underlying technical implementation, AI tools free up valuable time and energy, enabling creators to concentrate on the creative aspects of app development, such as user experience, functionality, and overall vision. This shift in focus allows founders to act more like product managers or directors, guiding the AI to build the application according to their desired outcomes, rather than getting bogged down in the intricacies of syntax and debugging. This can lead to a better alignment between the final product and the initial vision, ultimately resulting in a more user-centric application.</p>
<h3>4. Automation of Mundane Tasks</h3>
<p>Vibe coding tools can automate many of the mundane and repetitive aspects of programming, such as setting up basic files, handling simple data tasks, and writing standard code patterns. This automation allows developers to dedicate more of their time and mental energy to the more complex, challenging, and ultimately more rewarding aspects of software development, such as designing intricate features, solving unique problems, and enhancing the user experience.</p>
<h3>5. Rapid Prototyping</h3>
<p>Vibe coding is inherently suited for rapid prototyping and validation. It enables the swift creation of early versions of apps, also known as prototypes or MVPs, which can be used to test ideas and gather feedback from potential users. This iterative approach allows for faster validation of concepts and reduces the risk of investing significant time and resources into ideas that may not resonate with the target audience. Early feedback loops enable founders to make necessary adjustments and ensure that the final product is more likely to meet market demands.</p>
<p>The following table summarizes the key advantages of vibe coding for indie developers and founders:</p>
<table>
<thead>
<tr>
<th>Advantage</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Speed and Efficiency</td>
<td>Faster development cycles, quicker time to market for MVPs.</td>
</tr>
<tr>
<td>Lower Barrier to Entry</td>
<td>Enables individuals without deep technical skills to create software.</td>
</tr>
<tr>
<td>Focus on Creativity</td>
<td>Allows founders to concentrate on product vision and user experience.</td>
</tr>
<tr>
<td>Automation of Mundane Tasks</td>
<td>Frees up developers from repetitive coding tasks.</td>
</tr>
<tr>
<td>Rapid Prototyping</td>
<td>Facilitates quick validation of ideas and iterative development.</td>
</tr>
</tbody>
</table>
<h2>The Pitfalls for the Pro: When Vibe Coding Hinders the Journey to Mastery</h2>
<p>While vibe coding offers significant benefits for certain developer profiles, it also presents potential drawbacks, especially for those who aspire to become highly skilled and expert-level coders or developers.</p>
<h3>1. Code Quality and Performance Issues</h3>
<p>AI-generated code may not always adhere to best practices or produce the most efficient solution. The generated code might require significant optimization and refinement to achieve acceptable performance and scalability, particularly for complex applications.</p>
<p>Here's an example of a database query optimization:</p>
<p><strong>AI-Generated Code (Potentially Inefficient):</strong></p>
<pre><code class="language-javascript">async function getUsersWithPosts() {
  const users = await User.findAll();
  const usersWithPosts = [];
  
  for (const user of users) {
    const posts = await Post.findAll({ where: { userId: user.id } });
    usersWithPosts.push({ ...user, posts });
  }
  
  return usersWithPosts;
}
</code></pre>
<p><strong>Optimized Version:</strong></p>
<pre><code class="language-javascript">async function getUsersWithPosts() {
  return await User.findAll({
    include: [{
      model: Post,
      required: false
    }],
    attributes: ['id', 'name', 'email']
  });
}
</code></pre>
<h3>2. Debugging Challenges</h3>
<p>Code generated by AI can be dynamic and may lack a clear architectural structure, making it difficult to trace errors and understand the underlying logic. If a developer does not fully comprehend how the AI-generated code functions, the process of identifying and fixing bugs can become significantly more complex and time-consuming. Even Karpathy himself acknowledged that AI tools are not always capable of understanding or fixing bugs effectively, sometimes necessitating trial-and-error approaches. Aspiring expert developers need to cultivate strong debugging skills, which are honed through understanding the intricacies of code and systematically identifying and resolving issues.</p>
<h3>3. Security Risks</h3>
<p>AI-generated code might inadvertently contain security vulnerabilities if the developer lacks the expertise to identify and address them. Blindly trusting AI suggestions without a thorough understanding of security principles and best practices can lead to the introduction of security gaps that could be exploited. A deep understanding of security is paramount for any professional developer.</p>
<h2>Vibe Coding vs. Traditional Development: A Tale of Two Priorities</h2>
<p>Vibe coding and traditional software development methodologies represent distinct approaches with different priorities and trade-offs. Vibe coding places a strong emphasis on speed and rapid prototyping, allowing for quick iteration and validation of ideas. In contrast, traditional development often prioritizes a deeper understanding of the codebase and underlying principles, aiming for greater control and maintainability. Vibe coding operates at a higher level of abstraction, utilizing natural language prompts to instruct AI, whereas traditional development involves granular control over every aspect of the code. This makes vibe coding more accessible to individuals without extensive technical training, while traditional development demands significant technical expertise.</p>
<p>When using vibe coding for initial prototypes, developers might accept a higher level of risk regarding code quality and long-term maintainability. However, professional development for critical applications typically prioritizes robustness and reliability. The development lifecycle in vibe coding often aligns with rapid prototyping and iterative development, while traditional methods may follow more structured approaches like <strong>Waterfall</strong> or <strong>Agile</strong> with a strong emphasis on planning and documentation. Ultimately, the choice between these approaches depends on the specific goals of the project, the developer's skill level, and the long-term vision for the software.</p>
<h2>The Shadow of Technical Debt: Navigating the Risks of Rapid AI-Generated Code</h2>
<p>The rapid generation of code facilitated by vibe coding, while offering immediate benefits, carries the inherent risk of accumulating technical debt. Technical debt, in software development, refers to the future consequences of prioritizing speed of delivery over achieving an optimal solution. Essentially, shortcuts taken during the development process to meet deadlines or simplify implementation can lead to additional work and higher costs in the future.</p>
<p>In the context of vibe coding, technical debt can accrue through several avenues. Developers might hastily accept AI-generated code without conducting thorough reviews or fully understanding its implications. This lack of comprehension can make future modifications and debugging significantly more challenging. Furthermore, AI-generated code might exhibit inconsistent coding styles or fail to adhere to established best practices, contributing to a less maintainable codebase. Insufficient testing of AI-generated code can also lead to the accumulation of technical debt, as undetected bugs and vulnerabilities may persist within the application.</p>
<p>The consequences of unchecked technical debt can be substantial. It often leads to increased maintenance costs and longer development times as developers spend more time addressing issues arising from the initial shortcuts. The quality of service can suffer, with a higher likelihood of software malfunctions and system failures. Technical debt can also hinder the ability to scale the application to accommodate growing user bases or new features. As developers become increasingly occupied with fixing existing problems, the capacity for innovation diminishes.</p>
<p>Poorly managed technical debt can also introduce security vulnerabilities, making the application more susceptible to attacks. At a higher level, it can lead to unreliable project plans and erode customer trust, ultimately decreasing code quality and increasing the frequency of bugs. The burden of working with a debt-ridden codebase can even impact a team's ability to attract and retain talented developers. While vibe coding can offer an initial burst of speed, neglecting the resulting technical debt can create significant obstacles in the long run, potentially undermining the initial advantages and even leading to project failure.</p>
<h2>Building a Solid Foundation: Why Core Coding Skills Remain Essential for Aspiring Experts</h2>
<p>For developers with aspirations of achieving mastery in their field and building robust, scalable applications, a strong foundation in core coding skills remains indispensable, even in the age of AI-assisted development. This foundational knowledge encompasses areas such as software architecture, design patterns, coding standards, and thorough testing methodologies.</p>
<p>A deep understanding of software architecture is essential for designing systems that are not only functional but also scalable, maintainable, and reliable. Software architecture provides the high-level blueprint for the software, defining its components and their interactions. It involves comprehending various architectural patterns and their associated trade-offs. This knowledge allows developers to make informed decisions about the structure of their applications, ensuring they can adapt to future growth and complexity, a critical aspect that a purely vibe-driven approach might overlook. Without a solid grasp of architectural principles, developers risk creating systems that are difficult to evolve or manage effectively.</p>
<p>The importance of design patterns cannot be overstated. Design patterns are reusable solutions to commonly occurring problems in software design. They improve code reusability, enhance maintainability and scalability, promote consistency and best practices, and facilitate communication among developers. Design patterns represent the collective experience of the software engineering community, offering proven approaches to common challenges. Understanding and applying these patterns leads to more robust and well-structured code, a hallmark of professional development. Relying solely on AI for code generation might result in solutions that do not leverage these established patterns, potentially leading to less efficient or harder-to-understand code.</p>
<p>Adherence to coding standards is another crucial element of professional software development. Coding standards ensure that code is consistent and of high quality, improving readability and maintainability. They facilitate team collaboration and reduce the likelihood of errors. While AI might generate syntactically correct code, it may not always conform to the specific coding standards of a project or team. A deep understanding of coding standards allows developers to ensure consistency and clarity in their codebase.</p>
<p>Finally, the necessity of thorough testing is paramount for aspiring expert developers. This includes various types of testing, such as unit tests, integration tests, and end-to-end tests, which are essential for ensuring that the code functions as expected and for preventing regressions. Relying solely on AI-generated code without rigorous testing can lead to unstable and unreliable applications. A strong understanding of testing methodologies and the ability to write effective tests are fundamental skills for any developer aiming for expertise.</p>
<h2>Finding the Right Rhythm: A Balanced Approach to Vibe Coding</h2>
<p>For indie developers and founders, vibe coding can be a valuable tool when used strategically and with an awareness of its limitations. Here's how to make the most of it:</p>
<h3>1. Best Practices</h3>
<ul>
<li>Use vibe coding for rapid prototyping and initial idea validation</li>
<li>Always focus on understanding the generated code</li>
<li>Implement proper testing for all AI-generated code</li>
<li>Refactor AI-generated code to maintain coding standards</li>
<li>Invest in foundational learning of core programming concepts</li>
</ul>
<p>Here's an example of proper testing implementation:</p>
<p><strong>AI-Generated Component:</strong></p>
<pre><code class="language-javascript">function UserProfile({ user }) {
  return (
    &lt;div&gt;
      &lt;h1&gt;{user.name}&lt;/h1&gt;
      &lt;p&gt;{user.email}&lt;/p&gt;
      &lt;button onClick={() =&gt; user.logout()}&gt;Logout&lt;/button&gt;
    &lt;/div&gt;
  );
}
</code></pre>
<p><strong>Proper Testing Implementation:</strong></p>
<pre><code class="language-javascript">import { render, screen, fireEvent } from '@testing-library/react';

describe('UserProfile', () =&gt; {
  const mockUser = {
    name: 'John Doe',
    email: 'john@example.com',
    logout: jest.fn()
  };

  test('renders user information correctly', () =&gt; {
    render(&lt;UserProfile user={mockUser} /&gt;);
    expect(screen.getByText('John Doe')).toBeInTheDocument();
    expect(screen.getByText('john@example.com')).toBeInTheDocument();
  });

  test('calls logout function when button is clicked', () =&gt; {
    render(&lt;UserProfile user={mockUser} /&gt;);
    fireEvent.click(screen.getByText('Logout'));
    expect(mockUser.logout).toHaveBeenCalled();
  });
});
</code></pre>
<h3>2. For Expert Developers</h3>
<ul>
<li>Maintain a strong foundation in software architecture and design patterns</li>
<li>Use version control diligently to track all changes</li>
<li>Seek peer review for critical components</li>
<li>Be mindful of technical debt and continuously refactor</li>
</ul>
<h2>Conclusion</h2>
<p>Vibe coding represents an exciting evolution in software development, offering a powerful set of tools for indie developers and founders seeking to rapidly prototype and bring their ideas to life. Its ability to lower the barrier to entry and accelerate development cycles makes it a compelling option for individuals and small teams operating with limited resources.</p>
<p>However, for those who aspire to become expert-level coders or developers, relying solely on vibe coding carries significant risks. A deep understanding of core programming principles, software architecture, design patterns, coding standards, and rigorous testing methodologies remains essential for achieving mastery and building robust, scalable applications.</p>
<p>The most successful developers in the future will likely be those who can strategically integrate AI-assisted tools like vibe coding into their workflow while maintaining a strong foundation in traditional software development practices. As AI continues to reshape the software development landscape, a balanced approach that embraces innovation while upholding fundamental principles will be key to achieving excellence in the craft.</p>
<h2>References</h2>
<ol>
<li><a href="https://azumo.com/project-outsourcing-handbook/version-control-and-code-quality-in-development">Version Control and Code Quality in Development</a></li>
<li><a href="https://www.smashingmagazine.com/2023/05/impact-agile-methodologies-code-quality/">Impact of Agile Methodologies on Code Quality</a></li>
<li><a href="https://blog.replit.com/what-is-vibe-coding">What is Vibe Coding?</a></li>
<li><a href="https://devops.com/does-using-ai-assistants-lead-to-lower-code-quality/">Does Using AI Assistants Lead to Lower Code Quality?</a></li>
<li><a href="https://leaddev.com/software-quality/how-ai-generated-code-accelerates-technical-debt">How AI-Generated Code Accelerates Technical Debt</a></li>
<li><a href="https://www.reddit.com/r/SoftwareEngineering/comments/1kjwiso/maintaining_code_quality_with_widespread_ai/">Maintaining Code Quality with Widespread AI</a></li>
<li><a href="https://en.wikipedia.org/wiki/Vibe_coding">Vibe Coding - Wikipedia</a></li>
<li><a href="https://cloudsecurityalliance.org/blog/2025/04/09/secure-vibe-coding-guide">Secure Vibe Coding Guide</a></li>
<li><a href="https://www.reddit.com/r/AskProgramming/comments/1jfia88/does_ai_actually_improve_code_quality_or_just/">Does AI Actually Improve Code Quality?</a></li>
<li><a href="https://www.thoughtworks.com/radar/techniques/complacency-with-ai-generated-code">Complacency with AI-Generated Code</a></li>
<li><a href="https://blog.bonfy.ai/code-review-in-the-age-of-ai-best-practices-for-reviewing-ai-generated-code">Code Review in the Age of AI</a></li>
<li><a href="https://www.leanware.co/insights/best-practices-ai-software-development">Best Practices for AI in Software Development</a></li>
<li><a href="https://zencoder.ai/blog/how-to-use-ai-in-coding">How to Use AI in Coding</a></li>
<li><a href="https://www.reddit.com/r/ChatGPTCoding/comments/1fyti60/8_best_practices_to_generate_code_with_generative/">8 Best Practices to Generate Code with Generative AI</a></li>
<li><a href="https://www.reddit.com/r/AskProgramming/comments/1jtoyso/how_do_you_validate_aigenerated_code_in/">How to Validate AI-Generated Code</a></li>
<li><a href="https://blog.codacy.com/best-practices-for-coding-with-ai">Best Practices for Coding with AI</a></li>
<li><a href="https://stackoverflow.blog/2024/09/10/gen-ai-llm-create-test-developers-coding-software-code-quality/">Gen AI and Code Quality</a></li>
<li><a href="https://securetrust.io/blog/secure-ai-generated-code/">Secure AI-Generated Code</a></li>
<li><a href="https://www.readysetcloud.io/blog/allen.helton/tdd-with-ai/">TDD with AI</a></li>
<li><a href="https://www.sonarsource.com/blog/ai-code-assurance-sonar/">AI Code Assurance</a></li>
<li><a href="https://dev.to/volkmarr/is-generated-code-harder-to-maintain-1n1n">Is Generated Code Harder to Maintain?</a></li>
<li><a href="https://invozone.com/blog/ai-generated-code-maintenance-challenges/">AI-Generated Code Maintenance Challenges</a></li>
<li><a href="https://www.stauffer.com/news/blog/the-impact-of-ai-in-the-software-development-lifecycle">Impact of AI in Software Development Lifecycle</a></li>
<li><a href="https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/how-an-ai-enabled-software-product-development-life-cycle-will-fuel-innovation">AI-Enabled Software Product Development</a></li>
<li><a href="https://www.developer-tech.com/news/ai-impact-on-software-development-jobs/">AI Impact on Software Development Jobs</a></li>
<li><a href="https://www.ibm.com/think/topics/ai-in-software-development">AI in Software Development</a></li>
<li><a href="https://ieeechicago.org/the-impact-of-ai-and-automation-on-software-development-a-deep-dive/">Impact of AI and Automation on Software Development</a></li>
<li><a href="https://brainhub.eu/library/software-developer-age-of-ai">Software Developer in the Age of AI</a></li>
<li><a href="https://www.reddit.com/r/Futurology/comments/1jc6r40/will_ai_really_eliminate_software_developers/">Will AI Really Eliminate Software Developers?</a></li>
<li><a href="https://www.brookings.edu/articles/how-ai-powered-software-development-may-affect-labor-markets/">AI-Powered Software Development and Labor Markets</a></li>
<li><a href="https://www.reddit.com/r/ExperiencedDevs/comments/1hm8gxj/ai_wont_replace_software_engineers_but_an/">AI Won't Replace Software Engineers</a></li>
<li><a href="https://www.forbes.com/councils/forbestechcouncil/2025/04/04/the-future-of-code-how-ai-is-transforming-software-development/">The Future of Code: How AI is Transforming Software Development</a></li>
<li><a href="https://www.pluralsight.com/resources/blog/business-and-leadership/AI-in-software-development">AI in Software Development</a></li>
<li><a href="https://www.deloitte.com/uk/en/Industries/technology/blogs/2024/the-future-of-coding-is-here-how-ai-is-reshaping-software-development.html">The Future of Coding: How AI is Reshaping Software Development</a></li>
<li><a href="https://onlinecs.baylor.edu/news/will-ai-replace-SWEs">Will AI Replace Software Engineers?</a></li>
<li><a href="https://www.whitesmith.co/blog/evolution-of-programming-skills/">The Evolution of Programming Skills</a></li>
<li><a href="https://dev.to/bikashdaga/ai-and-the-evolution-of-coding-will-ai-tools-replace-programmers-13af">AI and the Evolution of Coding</a></li>
<li><a href="https://www.mixmax.com/engineering/the-evolution-of-coding">The Evolution of Coding</a></li>
<li><a href="https://www.oreilly.com/radar/ai-and-programming-the-beginning-of-a-new-era/">AI and Programming: The Beginning of a New Era</a></li>
<li><a href="https://kvytechnology.com/blog/software/ai-in-software-development/">AI in Software Development</a></li>
<li><a href="https://dzone.com/articles/from-algorithms-to-ai-the-evolution-of-programming">From Algorithms to AI: The Evolution of Programming</a></li>
<li><a href="https://www.signalfire.com/blog/ai-evolution-of-coding">AI: Evolution of Coding</a></li>
<li><a href="https://thebootstrappedfounder.com/the-evolution-of-coding-in-the-ai-era/">The Evolution of Coding in the AI Era</a></li>
<li><a href="https://www.atlassian.com/agile/software-development/code-reviews">Code Reviews in Agile Software Development</a></li>
</ol>
]]></content:encoded>
            <author>hi@gopx.dev (Gopal Verma)</author>
        </item>
        <item>
            <title><![CDATA[Clean my Mac]]></title>
            <link>https://gopx.dev/diary/notes/clean-my-mac</link>
            <guid>https://gopx.dev/diary/notes/clean-my-mac</guid>
            <pubDate>Mon, 28 Oct 2024 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>These notes provide a quick guide on how to clean system junk files on your Mac.</p>
<h2>Why clean your Mac?</h2>
<p>Regular system cleaning helps:</p>
<ul>
<li>Improve your Mac's performance</li>
<li>Free up valuable storage space</li>
<li>Remove unnecessary files and applications</li>
<li>Maintain system stability</li>
</ul>
<h2>Common areas to clean</h2>
<ol>
<li>User Cache Files</li>
<li>System Log Files</li>
<li>Language Files</li>
<li>User Log Files</li>
<li>Broken Login Items</li>
<li>System Cache Files</li>
</ol>
<h2>1. User Cache Files</h2>
<ul>
<li>
<p><strong>Why</strong>: Created to speed up apps and processes by storing frequently accessed data.</p>
</li>
<li>
<p><strong>Made of</strong>: Temporary app data, browser caches, preview thumbnails, and downloaded app components.</p>
</li>
<li>
<p><strong>Stored in</strong>: <code>~/Library/Caches</code> (per-user) and <code>/Library/Caches</code> (system-wide)</p>
</li>
<li>
<p><strong>Clear Command</strong>:</p>
<pre><code class="language-bash">rm -rf ~/Library/Caches/* &amp;&amp; sudo rm -rf /Library/Caches/*
</code></pre>
</li>
</ul>
<h2>2. System Log Files</h2>
<ul>
<li>
<p><strong>Why</strong>: Created to track system events, errors, and activities for troubleshooting.</p>
</li>
<li>
<p><strong>Made of</strong>: System reports, crash logs, diagnostic reports, and installation logs.</p>
</li>
<li>
<p><strong>Stored in</strong>: <code>/var/log</code> and <code>~/Library/Logs</code></p>
</li>
<li>
<p><strong>Clear Command</strong>:</p>
<pre><code class="language-bash">sudo rm -rf /var/log/* &amp;&amp; rm -rf ~/Library/Logs/*
</code></pre>
</li>
</ul>
<h2>3. Language Files</h2>
<ul>
<li>
<p><strong>Why</strong>: Stored to support multiple language interfaces in applications.</p>
</li>
<li>
<p><strong>Made of</strong>: Localization files (.lproj folders) for unused languages.</p>
</li>
<li>
<p><strong>Stored in</strong>: <code>/Applications/[AppName]/Contents/Resources</code></p>
</li>
<li>
<p><strong>Clear command</strong>:</p>
<pre><code class="language-bash"># Caution: stripping .lproj folders can break an app's code signature on recent macOS
find /Applications -name &quot;*.lproj&quot; -type d ! -name &quot;en.lproj&quot; ! -name &quot;Base.lproj&quot; -prune -exec rm -rf {} \;
</code></pre>
</li>
</ul>
<h2>4. User Log Files</h2>
<ul>
<li>
<p><strong>Why</strong>: Created to track user-specific application activities and errors.</p>
</li>
<li>
<p><strong>Made of</strong>: App-specific logs, crash reports, and usage data.</p>
</li>
<li>
<p><strong>Stored in</strong>: <code>~/Library/Logs</code> and <code>~/Library/Application Support</code></p>
</li>
<li>
<p><strong>Clear command</strong>:</p>
<pre><code class="language-bash">rm -rf ~/Library/Logs/* &amp;&amp; rm -rf ~/Library/Application\ Support/*/logs/*
</code></pre>
</li>
</ul>
<h2>5. Broken Login Items</h2>
<ul>
<li>
<p><strong>Why</strong>: Remnants of uninstalled applications that were set to launch at startup.</p>
</li>
<li>
<p><strong>Made of</strong>: Launch agents, daemons, and login item references.</p>
</li>
<li>
<p><strong>Stored in</strong>: <code>~/Library/LaunchAgents</code> and <code>/Library/LaunchAgents</code></p>
</li>
<li>
<p><strong>Clear command</strong>:</p>
<pre><code class="language-bash"># Caution: this removes ALL launch agents, not just broken ones; review the folders first
rm -rf ~/Library/LaunchAgents/* &amp;&amp; sudo rm -rf /Library/LaunchAgents/*
</code></pre>
</li>
</ul>
<h2>6. System Cache Files</h2>
<ul>
<li>
<p><strong>Why</strong>: Created by macOS to improve system performance and app loading times.</p>
</li>
<li>
<p><strong>Made of</strong>: System-level caches, kernel caches, and framework caches.</p>
</li>
<li>
<p><strong>Stored in</strong>: <code>/System/Library/Caches</code> and <code>/Library/Caches</code></p>
</li>
<li>
<p><strong>Clear command</strong>:</p>
<pre><code class="language-bash"># Note: System Integrity Protection (SIP) blocks writes under /System on modern macOS
sudo rm -rf /System/Library/Caches/* &amp;&amp; sudo rm -rf /Library/Caches/*
</code></pre>
</li>
</ul>
<h2>Important Notes</h2>
<ul>
<li>Always backup your data before running these commands</li>
<li>Some of these commands require administrator privileges (sudo)</li>
<li>Some system caches will be regenerated automatically after deletion</li>
<li>Be careful with these commands as they can affect system performance if not used properly</li>
<li>You can manually delete files by navigating to their respective storage locations using Finder (⌘ + Shift + G to enter the path)</li>
</ul>
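<p>Before running any of the <code>rm</code> commands above, it helps to see how much space each location actually uses. A minimal sketch (<code>preview_clean</code> is a hypothetical helper name, not a built-in command):</p>
<pre><code class="language-bash"># Print the size of each existing directory before deciding to clean it
preview_clean() {
  for path in &quot;$@&quot;; do
    if [ -d &quot;$path&quot; ]; then
      du -sh &quot;$path&quot; 2&gt;/dev/null
    fi
  done
}

preview_clean ~/Library/Caches ~/Library/Logs /var/log
</code></pre>
<p>Nonexistent paths are skipped silently, so the same call works whether or not a given cache directory is present on your machine.</p>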
]]></content:encoded>
            <author>hi@gopx.dev (Gopal Verma)</author>
        </item>
        <item>
            <title><![CDATA[N accounts on 1 machine]]></title>
            <link>https://gopx.dev/diary/notes/n-accounts-on-1-machine</link>
            <guid>https://gopx.dev/diary/notes/n-accounts-on-1-machine</guid>
            <pubDate>Thu, 10 Oct 2024 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>If you have two GitHub accounts, such as <strong><a href="http://github.com/gopx-office">github.com/gopx-office</a></strong> and <strong><a href="http://github.com/gopx-personal">github.com/gopx-personal</a></strong>, you can easily manage both on the same machine.</p>
<blockquote>
<p>Tip: This method works for more than two accounts as well!</p>
</blockquote>
<h2>Steps to follow</h2>
<ol>
<li>Generate SSH keys for each account</li>
<li>Add SSH keys to the SSH agent</li>
<li>Add SSH keys to GitHub</li>
<li>Create an SSH config file with host entries</li>
<li>Clone repositories using the correct account</li>
</ol>
<h2>1. Generate SSH Keys</h2>
<p>Navigate to your <code>.ssh</code> directory:</p>
<pre><code class="language-bash">cd ~/.ssh
</code></pre>
<p>Generate SSH keys for each account:</p>
<pre><code class="language-bash">ssh-keygen -t rsa -C &quot;office-email@gmail.com&quot; -f &quot;github-gopx-office&quot;
ssh-keygen -t rsa -C &quot;personal-email@gmail.com&quot; -f &quot;github-gopx-personal&quot;
</code></pre>
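<p>RSA still works, but GitHub's current documentation recommends Ed25519 keys, which are shorter and faster. An equivalent invocation (here with <code>-N &quot;&quot;</code> to set an empty passphrase up front and skip the prompt):</p>
<pre><code class="language-bash">ssh-keygen -t ed25519 -N &quot;&quot; -C &quot;office-email@gmail.com&quot; -f &quot;github-gopx-office&quot;
</code></pre>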
<p>After you run each command, the terminal will prompt for a passphrase; you can leave it empty and press Enter to proceed.</p>
<pre><code class="language-bash">Generating public/private rsa key pair.
Enter file in which to save the key (/Users/username/.ssh/github-gopx-office): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /Users/username/.ssh/github-gopx-office.
Your public key has been saved in /Users/username/.ssh/github-gopx-office.pub.
The key fingerprint is:
SHA256:XXXXXXXXXXXXXX your-office-email@gmail.com
The key's randomart image is:
+---[RSA 2048]----+
|       ...       |
|      .o .       |
|      o. .       |
|     ... ..      |
|      ... .      |
|     ...  .      |
|    .o..   .     |
|   ..o    ..     |
+----[SHA256]-----+
</code></pre>
<p>After generating the keys, your <code>.ssh</code> folder will contain both the public key <code>github-gopx-office.pub</code> and the private key <code>github-gopx-office</code>. The public key has a <code>.pub</code> extension, while the private key does not.</p>
<h2>2. Add SSH Keys to the Agent</h2>
<p>Add your SSH keys to the SSH agent:</p>
<pre><code class="language-bash"># macOS 12+; on older macOS versions use ssh-add -K instead
ssh-add --apple-use-keychain ~/.ssh/github-gopx-office
ssh-add --apple-use-keychain ~/.ssh/github-gopx-personal
</code></pre>
<p>To learn more about adding keys to the SSH agent, see <a href="https://docs.github.com/en/authentication/connecting-to-github-with-ssh/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent">Generating a new SSH key and adding it to the ssh-agent</a>.</p>
<h2>3. Add SSH Keys to GitHub</h2>
<p>Copy your public keys and add them to GitHub. To do this:</p>
<h3>1. Copy the public key</h3>
<p>You can copy the public key by opening the <code>github-gopx-office.pub</code> file in vim and copying its contents:</p>
<pre><code class="language-bash">vim ~/.ssh/github-gopx-office.pub
vim ~/.ssh/github-gopx-personal.pub
</code></pre>
<p><strong>OR</strong></p>
<p>Alternatively, copy the contents of the public key file directly to the clipboard:</p>
<pre><code class="language-bash">pbcopy &lt; ~/.ssh/github-gopx-office.pub
pbcopy &lt; ~/.ssh/github-gopx-personal.pub
</code></pre>
<h3>2. Paste the public key on GitHub</h3>
<ol>
<li>Sign in to your GitHub account</li>
<li>Go to <strong>Settings</strong> &gt; <strong>SSH and GPG keys</strong> &gt; <strong>New SSH Key</strong></li>
<li>Paste your copied public key and give it a title of your choice</li>
</ol>
<p><strong>OR</strong></p>
<ol>
<li>Sign in to GitHub</li>
<li>Open <a href="https://github.com/settings/keys">https://github.com/settings/keys</a> in your browser</li>
<li>Click <strong>New SSH Key</strong> and paste your copied key</li>
</ol>
<h2>4. Configure SSH for Multiple Accounts</h2>
<p>Create or open the SSH config file:</p>
<pre><code class="language-bash">touch ~/.ssh/config
open ~/.ssh/config
</code></pre>
<p>Add the following configurations:</p>
<pre><code class="language-bash"># Office account
Host github.com-gopx-office
    HostName github.com
    User git
    IdentityFile ~/.ssh/github-gopx-office

# Personal account
Host github.com-gopx-personal
    HostName github.com
    User git
    IdentityFile ~/.ssh/github-gopx-personal
</code></pre>
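<p>Before cloning anything, you can sanity-check that each alias picks up the right key: <code>ssh -G</code> prints the configuration OpenSSH would actually use for a given host.</p>
<pre><code class="language-bash">ssh -G github.com-gopx-office | grep -i identityfile
ssh -G github.com-gopx-personal | grep -i identityfile
</code></pre>
<p>You can also run <code>ssh -T git@github.com-gopx-office</code>; GitHub should greet you with the matching account name.</p>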
<h2>5. Clone Repositories</h2>
<p>To clone repositories, use the correct SSH host for the respective account:</p>
<pre><code class="language-bash">git clone git@github.com-gopx-personal:{owner}/{repo}.git
git clone git@github.com-gopx-office:{owner}/{repo}.git
</code></pre>
<h2>Final Configuration</h2>
<p>For each repository, set the appropriate user information:</p>
<pre><code class="language-bash">git config user.email &quot;office-email@gmail.com&quot;
git config user.name &quot;Gopx Office&quot;
</code></pre>
<p>Repeat for the personal account:</p>
<pre><code class="language-bash">git config user.email &quot;personal-email@gmail.com&quot;
git config user.name &quot;Gopx Personal&quot;
</code></pre>
<p>To ensure push and pull go through the correct account, set the remote origin for each repository, using whichever line matches that repository's account:</p>
<pre><code class="language-bash">git remote add origin git@github.com-gopx-personal:gopx-personal/repo.git
git remote add origin git@github.com-gopx-office:gopx-office/repo.git
</code></pre>
<blockquote>
<p>You can now use git push and git pull as usual.</p>
</blockquote>
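<p>As an optional refinement, Git 2.13+ supports conditional includes, which switch identities automatically based on where a repository lives, so you don't have to set <code>user.email</code> in every clone. A sketch, assuming your office repositories live under <code>~/work/</code> (both the directory and the <code>~/.gitconfig-office</code> filename are illustrative):</p>
<pre><code class="language-bash"># ~/.gitconfig
[includeIf &quot;gitdir:~/work/&quot;]
    path = ~/.gitconfig-office

# ~/.gitconfig-office
[user]
    email = office-email@gmail.com
    name = Gopx Office
</code></pre>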
]]></content:encoded>
            <author>hi@gopx.dev (Gopal Verma)</author>
        </item>
        <item>
            <title><![CDATA[MDX Hello World Showcase]]></title>
            <link>https://gopx.dev/diary/notes/notes-world</link>
            <guid>https://gopx.dev/diary/notes/notes-world</guid>
            <pubDate>Sat, 21 Sep 2024 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>This page demonstrates various MDX components and syntax to help you understand MDX parsing.</p>
<h2>Text Formatting</h2>
<p>You can use <em>italic</em>, <strong>bold</strong>, and <em><strong>bold italic</strong></em> text.</p>
<p>You can also use <s>strikethrough</s> and <code>inline code</code>.</p>
<h2>Links</h2>
<p>Here's a <a href="https://mdxjs.com/">link to MDX documentation</a>.</p>
<h2>Lists</h2>
<p>Unordered list:</p>
<ul>
<li>Item 1</li>
<li>Item 2
<ul>
<li>Subitem 2.1</li>
<li>Subitem 2.2</li>
</ul>
</li>
<li>Item 3</li>
</ul>
<p>Ordered list:</p>
<ol>
<li>First item</li>
<li>Second item</li>
<li>Third item</li>
</ol>
<h2>Blockquotes</h2>
<blockquote>
<p>This is a blockquote.<br>
It can span multiple lines.</p>
</blockquote>
<h2>Code Blocks</h2>
<p>Inline code: <code>console.log(&quot;Hello, MDX!&quot;);</code></p>
<p>JavaScript code block:</p>
<pre><code class="language-jsx">const greeting = &quot;Hello, MDX!&quot;;
console.log(greeting);
</code></pre>
<p>Python code block:</p>
<pre><code class="language-python">print(&quot;Hello, MDX!&quot;)
</code></pre>
<h2>Tables</h2>
<table>
<thead>
<tr>
<th>Feature</th>
<th>Supported</th>
<th>Example</th>
</tr>
</thead>
<tbody>
<tr>
<td>Tables</td>
<td>Yes</td>
<td>This table</td>
</tr>
<tr>
<td>Code</td>
<td>Yes</td>
<td><code>code</code></td>
</tr>
<tr>
<td>Math</td>
<td>Yes</td>
<td>$E = mc^2$</td>
</tr>
</tbody>
</table>
<h2>Task Lists</h2>
<ul>
<li>[x] Learn MDX basics</li>
<li>[ ] Master advanced MDX concepts</li>
<li>[ ] Build an MDX-powered website</li>
</ul>
<h2>Horizontal Rule</h2>
<hr>
<h2>Math Equations</h2>
<p>$y = mx + b$</p>
<h2>Footnotes</h2>
<p>Here's a sentence with a footnote.[^1]</p>
<p>[^1]: This is the footnote content.</p>
<h2>Conclusion</h2>
<p>This page showcases various MDX components and syntax. Happy MDX parsing!</p>
]]></content:encoded>
            <author>hi@gopx.dev (Gopal Verma)</author>
        </item>
        <item>
            <title><![CDATA[Windows Crash of July 19, 2024]]></title>
            <link>https://gopx.dev/diary/blogs/window-crash-190724</link>
            <guid>https://gopx.dev/diary/blogs/window-crash-190724</guid>
            <pubDate>Sat, 20 Jul 2024 00:00:00 GMT</pubDate>
<content:encoded><![CDATA[<p>On July 19, 2024, a significant global outage occurred affecting Microsoft Windows users, primarily triggered by a problematic update from <a href="https://www.crowdstrike.com">CrowdStrike</a>, a cybersecurity company. This incident led to widespread instances of the &quot;Blue Screen of Death&quot; (BSOD), a critical error screen displayed by Windows when it encounters a system failure. The repercussions of this outage were felt across various sectors, including airlines, banks, telecommunications, and emergency services, causing operational disruptions and significant public concern.</p>
<h2>Overview of the Incident</h2>
<p>The BSOD incidents began surfacing early on July 19, with users reporting their systems crashing and displaying error messages indicating that the device had encountered a problem and needed to restart. This situation escalated quickly as more users across multiple countries, including the United States, Australia, India, and Japan, experienced similar issues. As the day progressed, reports indicated that the outage had grounded flights, disrupted banking services, and caused chaos in supermarkets and other essential services.</p>
<h2>Causes of the outage</h2>
<p>The root cause of the BSOD was identified as a recent update to the <a href="https://www.crowdstrike.com/blog/statement-on-falcon-content-update-for-windows-hosts">CrowdStrike Falcon Sensor</a>, which is designed to provide endpoint protection and cybersecurity solutions. According to CrowdStrike, the update inadvertently led to stability issues in Windows systems, resulting in the BSOD errors. The company acknowledged the problem and stated that their engineering team was actively working to resolve the issue by reverting the changes made by the update.</p>
<p>The National Cyber Security Coordinator of Australia confirmed that the outage was due to a technical issue with a third-party software platform, ruling out the possibility of a cyber-attack. Microsoft also confirmed that they were in contact with CrowdStrike to address the situation and mitigate the impact on users globally.</p>
<h2>Impact of the Outage</h2>
<p>The outage's impact was extensive, with reports indicating that critical services were severely disrupted:</p>
<ul>
<li><strong>Airlines</strong>: Numerous flights were grounded or canceled, particularly in the United States and Australia, as airport systems became inoperable, preventing check-ins and other essential operations.</li>
<li><strong>Banking and Financial Services</strong>: Banking institutions faced significant challenges, with many users unable to access online banking services, leading to concerns about financial transactions and operations.</li>
<li><strong>Telecommunications</strong>: Major telecom companies reported outages, affecting communication services for millions of users.</li>
<li><strong>Emergency Services</strong>: Critical services, including hospitals and emergency response teams, experienced difficulties due to the reliance on Windows systems for operational management.</li>
</ul>
<h2>Technical Analysis of the Windows Crash</h2>
<h3>1. Understanding the Blue Screen of Death (BSOD)</h3>
<p>The BSOD is a critical failure screen that Windows displays when it encounters a fatal system error. This error can occur due to various reasons, including hardware failures, driver issues, or corrupted system files. The BSOD typically includes a stop code that helps identify the source of the problem.</p>
<p>Example of a BSOD Error Code:</p>
<pre><code class="language-shell">STOP: 0x0000007E (0xFFFFFFFFC0000005, 0xFFFFF8024A123456, 0xFFFFF8024A123456, 0xFFFFF8024A123456)
</code></pre>
<p>In this case, the stop code <code>0x0000007E</code> indicates a <code>SYSTEM_THREAD_EXCEPTION_NOT_HANDLED</code> error, often associated with driver problems. The parameters that follow provide additional context for debugging.</p>
<h3>2. Memory Management and Pointer Issues</h3>
<p>One of the prevalent causes of BSODs is improper memory management, particularly when dealing with pointers. As discussed in a recent Twitter thread by Zach Vorhies, a Google whistleblower, the misuse of pointers can lead to catastrophic failures. For example, consider the following C++ structure:</p>
<pre><code class="language-cpp">struct Obj {
    int a;
    int b;
};
</code></pre>
<p>When a pointer is created to this structure:</p>
<pre><code class="language-cpp">Obj* obj = new Obj();
</code></pre>
<p>The address of <code>obj</code> might be something like <code>0x9030</code>. Accessing its members would involve offsets from this base address:</p>
<ul>
<li><code>obj</code> is at <code>0x9030</code></li>
<li><code>obj-&gt;a</code> is at <code>0x9030 + 0x0</code> (the first member lives at offset zero)</li>
<li><code>obj-&gt;b</code> is at <code>0x9030 + 0x4</code> (offset by one 4-byte <code>int</code>)</li>
</ul>
<p>However, if the pointer is null:</p>
<pre><code class="language-cpp">Obj* obj = NULL;
</code></pre>
<p>Then attempting to access obj-&gt;a would lead to an invalid memory access:</p>
<pre><code class="language-cpp">print(obj-&gt;a); // This will cause a crash
</code></pre>
<p>The program attempts to read from an invalid memory address, leading to a BSOD. This scenario highlights the critical importance of null pointer checks before dereferencing pointers, especially in system-level programming.</p>
<h3>3. The Role of Drivers</h3>
<p>In the case of the CrowdStrike Falcon Sensor update, the driver responsible for system security likely interacted with the Windows kernel in a way that introduced instability. Drivers operate at a high privilege level, meaning that any bugs can lead to system-wide failures. As Vorhies pointed out, if a programmer forgets to check for null pointers, it can lead to accessing invalid memory regions, causing the operating system to crash out of caution.</p>
<p>When a system driver encounters an error, it cannot simply terminate like a user-mode application; instead, the operating system must crash to prevent further damage. This is why 95% of BSODs are attributed to issues within system drivers, as noted by Vorhies.</p>
<h3>4. Error Handling and Exception Management</h3>
<p>Proper error handling is crucial in preventing crashes. In languages like C/C++, developers often use structured exception handling (SEH) to catch exceptions and handle them gracefully. Here's an example:</p>
<pre><code class="language-cpp">// MSVC-specific structured exception handling (SEH)
#include &lt;windows.h&gt;
#include &lt;cstdio&gt;

int main() {
    __try {
        int *ptr = NULL;
        *ptr = 42; // This will cause an access violation
    } __except (EXCEPTION_EXECUTE_HANDLER) {
        printf(&quot;An exception occurred.\n&quot;);
    }
    return 0;
}
</code></pre>
<p>In the case of the CrowdStrike update, if the driver did not handle exceptions correctly, it could lead to a system crash.</p>
<h3>5. Computational Analysis of the Update Process</h3>
<p>When an update is deployed, it often involves downloading and installing new drivers. This process can be mathematically modeled to understand its impact on system stability. Let’s define some variables:</p>
<ul>
<li><strong>U</strong>: The update size (in MB)</li>
<li><strong>D</strong>: The download time (in seconds)</li>
<li><strong>I</strong>: The installation time (in seconds)</li>
<li><strong>R</strong>: The risk of failure (a function of <strong>U</strong>, <strong>D</strong>, and <strong>I</strong>)</li>
</ul>
<p>A simple model can be expressed as:</p>
<p>$ R = f(U, D, I) = k \cdot \left(\frac{U}{D + I}\right) $</p>
<p>Here, <strong>k</strong> is a constant representing the system's resilience to updates. A higher <strong>R</strong> indicates a greater risk of failure, highlighting the need for careful management of update processes.</p>
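<p>For illustration, plugging in hypothetical values $U = 500$ MB, $D = 60$ s, $I = 120$ s, and $k = 1$ gives $R = 1 \cdot \left(\frac{500}{60 + 120}\right) \approx 2.8$; splitting the same payload into smaller staged updates lowers $U$ per stage and, with it, the modeled risk.</p>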
<h3>6. Recovery Steps and Best Practices</h3>
<p>In response to the crisis, CrowdStrike outlined recovery steps for affected users. These include booting into Safe Mode, navigating to the CrowdStrike driver directory, and removing the problematic driver file. This process is critical for restoring functionality but poses challenges, especially for cloud environments and remote users.</p>
<p>Example Recovery Steps:</p>
<ol>
<li>Boot Windows into Safe Mode or the Windows Recovery Environment.</li>
<li>Navigate to <code>C:\Windows\System32\drivers\CrowdStrike</code></li>
<li>Locate and delete the file matching <code>C-00000291*.sys</code></li>
<li>Boot the host normally.</li>
</ol>
<p>For cloud environments like AWS and Azure, specific commands and procedures are required to revert to stable states.</p>
<h2>Conclusion</h2>
<p>The Windows crash on July 19, 2024, serves as a stark reminder of the vulnerabilities inherent in software updates, particularly in critical systems. The incident can be attributed to a combination of factors, including improper memory management, inadequate error handling, and the complexities of driver interactions.</p>
<p>By understanding these computational principles and examining the underlying code, developers can appreciate the importance of rigorous testing and validation in software updates. As organizations increasingly rely on interconnected systems, the need for robust cybersecurity measures and contingency plans becomes paramount to mitigate the impact of such outages in the future.</p>
<p>This incident not only highlights the challenges faced by software developers but also emphasizes the critical role of thorough testing and validation in ensuring system stability and security.</p>
<h2>References</h2>
<ol>
<li>
<p><strong>Blue Screen of Death (BSOD)</strong>:</p>
<ul>
<li>Microsoft Documentation on BSOD: <a href="https://support.microsoft.com/en-us/help/14238/windows-10-troubleshoot-blue-screen-errors">Microsoft Support - Troubleshoot blue screen errors</a></li>
</ul>
</li>
<li>
<p><strong>Memory Management and Pointers</strong>:</p>
<ul>
<li>C++ Pointers and Memory Management: <a href="https://www.geeksforgeeks.org/pointers-in-c/">GeeksforGeeks - Pointers in C/C++</a></li>
<li>Understanding Null Pointers: <a href="http://www.cplusplus.com/doc/tutorial/pointers/">Cplusplus.com - Pointers</a></li>
</ul>
</li>
<li>
<p><strong>Driver Development</strong>:</p>
<ul>
<li>Windows Driver Development: <a href="https://docs.microsoft.com/en-us/windows-hardware/drivers/">Microsoft Docs - Windows Drivers</a></li>
<li>Understanding System Drivers: <a href="https://wiki.osdev.org/Driver">OSDev Wiki - Drivers</a></li>
</ul>
</li>
<li>
<p><strong>Error Handling in C/C++</strong>:</p>
<ul>
<li>Structured Exception Handling: <a href="https://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/structured-exception-handling?view=msvc-160">Microsoft Docs - Structured Exception Handling</a></li>
</ul>
</li>
<li>
<p><strong>Buffer Overflow</strong>:</p>
<ul>
<li>Buffer Overflow Vulnerabilities: <a href="https://owasp.org/www-community/attacks/Buffer_Overflow_Attack">OWASP - Buffer Overflow</a></li>
</ul>
</li>
<li>
<p><strong>Mathematical Modeling in Software Engineering</strong>:</p>
<ul>
<li>Software Reliability Engineering: <a href="https://ieeexplore.ieee.org/document/6880151">IEEE - Software Reliability Engineering</a></li>
</ul>
</li>
<li>
<p><strong>CrowdStrike Falcon</strong>:</p>
<ul>
<li>CrowdStrike Official Website: <a href="https://www.crowdstrike.com/">CrowdStrike</a></li>
</ul>
</li>
</ol>
]]></content:encoded>
            <author>hi@gopx.dev (Gopal Verma)</author>
        </item>
        <item>
            <title><![CDATA[The End of Junior Engineers]]></title>
            <link>https://gopx.dev/diary/blogs/end-of-je</link>
            <guid>https://gopx.dev/diary/blogs/end-of-je</guid>
            <pubDate>Tue, 09 Jul 2024 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>In a dimly lit office, a fresh-faced junior engineer named Alex stared at the computer screen, watching in disbelief as lines of code wrote themselves. The once bustling room, filled with the hum of eager conversations and the clatter of keyboards, had fallen eerily silent. In the corner, an AI-driven robot arm effortlessly assembled circuit boards, a task that Alex and his colleagues used to handle. The transformation was swift and ruthless, as artificial intelligence (AI) had not just joined the workforce but had begun to dominate it. The promise of innovation had turned into a dystopian reality where entry-level engineering jobs were vanishing at an alarming rate. This blog delves into the unsettling reality of why AI is poised to wipe out these jobs, supported by chilling statistics, real-life horror stories, and unsettling daily news.</p>
<h2>The AI Invasion in Engineering</h2>
<p>AI is infiltrating every aspect of engineering, from design and simulation to predictive maintenance and quality control. Machine learning algorithms can analyze massive datasets, spot patterns, and make decisions with a speed and accuracy that humans cannot match. While this might seem like progress, it poses a significant threat to those just starting their engineering careers.</p>
<h2>Alarming Statistics</h2>
<h3>1. <strong>Automation and Job Loss</strong></h3>
<p>According to the World Economic Forum's 2020 report, AI and automation could displace 85 million jobs by 2025. While 97 million new roles may be created, these often require advanced skills that most junior engineers currently lack. As Martin Ford writes in his book &quot;Rise of the Robots: Technology and the Threat of a Jobless Future,&quot;</p>
<blockquote>
<p>&quot;As technology advances, it eliminates the need for many types of jobs, particularly those that involve routine tasks&quot;.</p>
</blockquote>
<h3>2. <strong>Decline in Entry-Level Positions</strong></h3>
<p>A 2022 McKinsey study revealed a 15% decrease in entry-level engineering positions over the past five years, primarily due to automation and AI technologies. Erik Brynjolfsson and Andrew McAfee, in &quot;The Second Machine Age,&quot; argue that</p>
<blockquote>
<p>&quot;technological progress is going to leave behind some people, perhaps even a lot of people, as it races ahead&quot;.</p>
</blockquote>
<h3>3. <strong>AI in Manufacturing</strong></h3>
<p>In the automotive industry, giants like Tesla and BMW have embraced AI in their manufacturing processes. PwC estimates AI could boost productivity by up to 40%, but at the cost of human jobs, particularly those of junior engineers who handle routine tasks. Jerry Kaplan states in &quot;Humans Need Not Apply,&quot;</p>
<blockquote>
<p>&quot;AI and robotics are accelerating the automation of routine tasks, which were once the domain of junior engineers and other entry-level professionals&quot;.</p>
</blockquote>
<h2>Real-Life Nightmares</h2>
<ol>
<li><strong>Automotive Industry</strong>: Tesla's Gigafactories are a dystopian glimpse into our future. AI runs the show, from quality control to predictive maintenance and even design. The result? Fewer jobs for junior engineers, as machines outpace human capabilities.</li>
<li><strong>Software Development</strong>: AI tools like GitHub Copilot are taking over coding. Junior software engineers, who used to spend their days writing and debugging code, now find these tasks increasingly automated. Efficiency is up, but the opportunities for hands-on learning and growth are vanishing.</li>
<li><strong>Construction</strong>: AI-driven project management tools are becoming the norm in construction. These tools can predict timelines, manage resources, and identify risks better than any human. The fallout? A reduced need for junior engineers who once handled these tasks.</li>
</ol>
<h2>Daily News and Trends</h2>
<ol>
<li><strong>AI in Recruitment</strong>: Companies are using AI-driven recruitment tools to screen candidates, quickly weeding out those without advanced skills. Entry-level candidates must now have a deep understanding of AI and machine learning to even stand a chance.</li>
<li><strong>Upskilling Initiatives</strong>: Companies like IBM and Google are pushing upskilling programs to help employees adapt. Junior engineers must scramble to develop new skills to stay relevant in an AI-dominated job market.</li>
<li><strong>Educational Shifts</strong>: Universities are frantically updating their curricula to include AI and machine learning. For example, MIT's comprehensive AI program prepares students for this new world, underscoring the necessity of AI literacy for future engineers.</li>
</ol>
<h2>The Grim Future for Junior Engineers</h2>
<p>While AI is casting a long, dark shadow over junior engineering jobs, there are still a few glimmers of hope. Here's how junior engineers can try to survive:</p>
<ol>
<li><strong>Embrace Continuous Learning</strong>: In an AI-driven world, staying still is falling behind. Junior engineers must constantly expand their skill sets by diving into AI, machine learning, and data science.</li>
<li><strong>Develop Soft Skills</strong>: Technical skills alone won't cut it. Creativity, problem-solving, and communication are more important than ever. These soft skills might be the last refuge in a world increasingly dominated by machines.</li>
<li><strong>Use AI to Your Advantage</strong>: Rather than fighting the inevitable, junior engineers should learn to work alongside AI tools to enhance their productivity. Understanding AI can provide a crucial edge.</li>
<li><strong>Seek Mentorship</strong>: Finding mentors experienced in AI and related fields is vital. Mentors can help junior engineers navigate this treacherous landscape and find opportunities for growth.</li>
</ol>
<h2>Conclusion</h2>
<p>AI is not just reshaping the engineering landscape; it's threatening the very existence of junior engineers. To survive this transition, junior engineers must embrace continuous learning, focus on soft skills, leverage AI tools, and seek mentorship. Viewing AI not as a replacement but as an enabler might be the only way to avoid being left behind in this relentless march of technology. The question remains: will junior engineers adapt quickly enough, or will they be swept away by the AI tide?</p>
<h2>References</h2>
<ol>
<li>World Economic Forum. (2020). <a href="https://www.weforum.org/reports/the-future-of-jobs-report-2020">The Future of Jobs Report 2020</a>.</li>
<li>McKinsey &amp; Company. (2022). <a href="https://www.mckinsey.com/featured-insights/future-of-work/the-future-of-work-after-covid-19">The Future of Work after COVID-19</a>.</li>
<li>PwC. (2018). <a href="https://www.pwc.com/gx/en/industries/technology/artificial-intelligence/ai-and-robotics.html">AI and Robotics</a>.</li>
<li>Tesla. (2023). <a href="https://www.tesla.com/gigafactory">Gigafactory</a>.</li>
<li>GitHub. (2023). <a href="https://github.com/features/copilot">GitHub Copilot</a>.</li>
<li>Kaplan, J. (2015). <em>Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence</em>. Yale University Press.</li>
<li>IBM. (2023). <a href="https://www.ibm.com/skills">IBM Skills</a>.</li>
<li>Google. (2023). <a href="https://ai.google/">Google AI</a>.</li>
<li>Massachusetts Institute of Technology. (2023). <a href="https://computing.mit.edu/">MIT Schwarzman College of Computing</a>.</li>
</ol>
]]></content:encoded>
            <author>hi@gopx.dev (Gopal Verma)</author>
        </item>
        <item>
            <title><![CDATA[Minimalistic vs Noob Programmers]]></title>
            <link>https://gopx.dev/diary/blogs/fake-minimalism</link>
            <guid>https://gopx.dev/diary/blogs/fake-minimalism</guid>
            <pubDate>Mon, 08 Jul 2024 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>Explore the crucial differences between true minimalistic programmers and those who merely imitate minimalism. This blog post delves into the characteristics of minimalistic artist programmers and novice programmers attempting to fake minimalism, using examples from React and Tailwind CSS. Learn about efficient state management, optimal use of frameworks, and best practices for writing clean, maintainable code. Discover how real pro programmers take minimalism to the next level, combining efficiency, readability, and performance optimization. Gain insights into identifying genuine minimalism in programming and avoiding common pitfalls of oversimplification.</p>
<h2>The Problem</h2>
<p>In the world of programming, minimalism is a highly valued principle. However, there's a fine line between a programmer who practices minimalism artfully and one who uses it as a facade to cover a lack of skill. This blog will help you identify the key differences between a minimalistic artist programmer and a noob programmer who fakes minimalism, using examples from React and Tailwind CSS.</p>
<h2>Understanding Minimalism in Programming</h2>
<p>Minimalism in programming means using only what is necessary to achieve the desired functionality, without overcomplicating the code. It involves clean, efficient, and readable code. Let's look at how this principle applies to React and Tailwind CSS.</p>
<h3>Minimalistic Artist Programmer</h3>
<p>A minimalistic artist programmer embraces minimalism without compromising functionality or readability. They focus on writing clean, efficient code that's easy to maintain. Here are some characteristics and examples:</p>
<ol>
<li>Efficient use of language features</li>
<li>Clear and descriptive naming conventions</li>
<li>Modular and reusable code structures</li>
<li>Thoughtful state management</li>
<li>Optimal use of libraries and frameworks</li>
</ol>
<p><strong>React Example:</strong></p>
<p>A minimalistic artist will use React hooks and concise state management techniques.</p>
<pre><code class="language-jsx">import { useState } from 'react';

function Counter() {
  const [count, setCount] = useState(0);

  return (
    &lt;div&gt;
      &lt;p&gt;You clicked {count} times&lt;/p&gt;
      &lt;button onClick={() =&gt; setCount(count + 1)}&gt;
        Click me
      &lt;/button&gt;
    &lt;/div&gt;
  );
}
</code></pre>
<p>In this example, the <code>useState</code> hook is used effectively to manage the state of the counter. The code is clear, concise, and easy to understand. Notice how:</p>
<ol>
<li>The state is declared with a meaningful name (<code>count</code>)</li>
<li>The <code>setCount</code> function is used correctly within the <code>onClick</code> handler</li>
<li>The component's logic is straightforward and focused on a single responsibility</li>
</ol>
<p><strong>Tailwind CSS Example:</strong></p>
<p>A minimalistic artist uses Tailwind CSS classes to style components efficiently.</p>
<pre><code class="language-html">&lt;button className=&quot;bg-blue-500 hover:bg-blue-700 text-white font-bold py-2 px-4 rounded&quot;&gt;
  Button
&lt;/button&gt;
</code></pre>
<p>Here, the button is styled using Tailwind CSS utility classes, achieving a modern look with minimal code. This approach:</p>
<ol>
<li>Eliminates the need for separate CSS files</li>
<li>Provides a consistent design system</li>
<li>Allows for quick adjustments and responsive design</li>
<li>Reduces the overall file size by only including used styles</li>
</ol>
<h3>Noob Programmer Faking Minimalism</h3>
<p>A noob programmer might misunderstand minimalism, leading to oversimplified or non-functional code. Here are some examples:</p>
<p><strong>React Example:</strong></p>
<pre><code class="language-tsx">function Counter() {
  let count = 0;

  return (
    &lt;div&gt;
      &lt;p&gt;You clicked {count} times&lt;/p&gt;
      &lt;button onClick={() =&gt; count++}&gt;
        Click me
      &lt;/button&gt;
    &lt;/div&gt;
  );
}
</code></pre>
<p>In this example, the programmer avoided using useState for state management, resulting in a non-functional component because React does not re-render the component when <code>count</code> changes. This demonstrates:</p>
<ol>
<li>Lack of understanding of React's rendering mechanism</li>
<li>Misuse of component-level variables instead of state</li>
<li>Ineffective event handling that doesn't trigger re-renders</li>
</ol>
<p><strong>Tailwind CSS Example:</strong></p>
<p>A noob programmer might misuse Tailwind CSS classes, leading to redundant or ineffective styles.</p>
<pre><code class="language-html">&lt;button className=&quot;text-center p-2&quot;&gt;
  Button
&lt;/button&gt;
</code></pre>
<p>Here, the button lacks essential styling to differentiate it from plain text. The minimal classes used do not achieve the desired effect. This shows:</p>
<ol>
<li>Insufficient understanding of Tailwind's utility classes</li>
<li>Lack of consideration for user experience and visual hierarchy</li>
<li>Missed opportunities for responsive design and interactive states</li>
</ol>
<h3>The Real Pro Programmer Approach</h3>
<p>A real pro programmer takes minimalism to the next level, combining efficiency, readability, and maintainability. Here's how they might approach our examples:</p>
<p><strong>React Example:</strong></p>
<pre><code class="language-jsx">import { useCallback, useState } from 'react';

const Counter = () =&gt; {
  const [count, setCount] = useState(0);
  const increment = useCallback(() =&gt; setCount(prev =&gt; prev + 1), []);

  return (
    &lt;div&gt;
      &lt;p&gt;You clicked {count} times&lt;/p&gt;
      &lt;button onClick={increment}&gt;Click me&lt;/button&gt;
    &lt;/div&gt;
  );
};
</code></pre>
<p>In this pro version:</p>
<ol>
<li>The component is defined as an arrow function for consistency.</li>
<li><code>useCallback</code> is used to memoize the increment function, optimizing performance.</li>
<li>The state update uses a functional update to ensure accuracy with rapid clicks.</li>
</ol>
<p><strong>Tailwind CSS Example:</strong></p>
<pre><code class="language-jsx">const Button = ({ children, onClick }) =&gt; (
  &lt;button
    className=&quot;bg-blue-500 hover:bg-blue-700 text-white font-bold py-2 px-4 rounded focus:outline-none focus:shadow-outline transition duration-300 ease-in-out&quot;
    onClick={onClick}
  &gt;
    {children}
  &lt;/button&gt;
);
</code></pre>
<p>Here, the pro programmer:</p>
<ol>
<li>Creates a reusable Button component for consistency across the application.</li>
<li>Adds focus and transition styles for better accessibility and user experience.</li>
<li>Uses props to make the button flexible and reusable.</li>
</ol>
<h2>Best Practices for Minimalistic Programming</h2>
<p>To truly embrace minimalism in programming, consider these best practices:</p>
<ol>
<li><strong>Understand the tools</strong>: Gain a deep understanding of the frameworks and libraries you're using.</li>
<li><strong>Prioritize readability</strong>: Write code that's easy for others (and your future self) to understand.</li>
<li><strong>Use built-in features</strong>: Leverage language and framework features instead of reinventing the wheel.</li>
<li><strong>Optimize for performance</strong>: Consider the performance implications of your code choices.</li>
<li><strong>Refactor regularly</strong>: Continuously improve your code to maintain its minimalistic nature.</li>
</ol>
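<p>To make point 3 (&quot;use built-in features&quot;) concrete, here is a small plain-JavaScript sketch — the <code>users</code> array is hypothetical — showing how built-in array methods express intent more directly than a hand-rolled loop:</p>
<pre><code class="language-js">// Hypothetical data for illustration
const users = [
  { name: 'Ada', active: true },
  { name: 'Bob', active: false },
  { name: 'Cleo', active: true },
];

// Built-in methods state the intent directly: keep active users, take their names
const activeNames = users.filter(u =&gt; u.active).map(u =&gt; u.name);

console.log(activeNames); // ['Ada', 'Cleo']
</code></pre>
<p>The built-in version is not just shorter; it removes the mutable temporary array and the index bookkeeping, which is where loop bugs usually hide.</p>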
<h2>Conclusion</h2>
<p>Distinguishing between a minimalistic artist programmer and a noob programmer faking minimalism requires looking at the effectiveness and clarity of their code. A true minimalistic programmer will use minimal code to achieve maximum functionality and readability, while a noob programmer might use minimalism as a cover for lack of skill, resulting in incomplete or ineffective code.</p>
<p>By examining examples in React and Tailwind CSS, you can better understand how to identify true minimalism in programming. Remember, minimalism is about simplicity and efficiency, not cutting corners.</p>
]]></content:encoded>
            <author>hi@gopx.dev (Gopal Verma)</author>
        </item>
        <item>
            <title><![CDATA[Pair programming lowers quality]]></title>
            <link>https://gopx.dev/diary/blogs/pair-prog-sucks</link>
            <guid>https://gopx.dev/diary/blogs/pair-prog-sucks</guid>
            <pubDate>Sat, 06 Jul 2024 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>Pair programming has long been touted as a panacea for improving code quality and reducing bugs. The idea of having two developers collaborating on a single task seems intuitive - more minds working together should lead to better results, right? However, the reality paints a different picture. Numerous studies and real-world experiences suggest that pair programming, in fact, lowers the overall quality of the codebase.</p>
<h2>The Negative Impact on Code Quality</h2>
<p>Contrary to popular belief, pair programming has been shown to have a detrimental effect on code quality. A study conducted by the University of Utah found that code produced through pair programming exhibited 15% more defects compared to code written by individual developers. This surprising result challenges the notion that pair programming leads to higher-quality code.</p>
<h2>Lack of Individual Accountability</h2>
<p>When working in pairs, developers may feel less accountable for the quality of their code. The presence of a partner can create a false sense of security, leading to a more casual approach to coding and a higher tolerance for technical debt. Developers may be more inclined to take shortcuts or implement suboptimal solutions, knowing that their partner will likely not catch or challenge these decisions.</p>
<h2>Increased Cognitive Load</h2>
<p>Pair programming places a significant cognitive load on developers. Constantly communicating, explaining their thought process, and navigating the codebase together can be mentally exhausting. This fatigue can lead to reduced focus, increased errors, and a lower overall quality of the code produced during pair programming sessions.</p>
<h2>Personality Clashes and Communication Issues</h2>
<p>Pair programming requires effective communication and collaboration between developers. However, personality clashes, differences in coding styles, and communication breakdowns can hinder this process. When developers struggle to work together harmoniously, the quality of the code suffers as a result.</p>
<h2>The Myth of Faster Development</h2>
<p>Another common misconception about pair programming is that it leads to faster development cycles. While pair programming may initially seem more efficient, with two developers working on a single task, studies have shown that this is not always the case.</p>
<p>A research experiment conducted by NASA found that pair programming took 15% more time to complete compared to individual programming. This is likely due to the increased overhead of communication, coordination, and the need to reach consensus on design decisions.</p>
<h2>The Importance of Individual Skill Development</h2>
<p>While pair programming can be a valuable tool for knowledge sharing and mentorship, it should not be the sole focus of a development team's efforts. Developers need to have opportunities to work independently, hone their skills, and take ownership of their code. Overreliance on pair programming can stifle individual growth and lead to a team of developers who are overly dependent on their partners.</p>
<h2>Conclusion</h2>
<p>The idea of pair programming improving code quality is a myth that has been perpetuated by the software development community for far too long. While pair programming can have its benefits, such as knowledge sharing and reduced bus factor, it should not be seen as a silver bullet for improving code quality. Developers need to focus on building their individual skills, taking responsibility for their code, and communicating effectively with their teammates. By striking a balance between collaborative and individual work, development teams can produce high-quality code that meets the needs of their users.</p>
]]></content:encoded>
            <author>hi@gopx.dev (Gopal Verma)</author>
        </item>
        <item>
            <title><![CDATA[GitHub Templates]]></title>
            <link>https://gopx.dev/diary/notes/gh-templates</link>
            <guid>https://gopx.dev/diary/notes/gh-templates</guid>
            <pubDate>Fri, 05 Jul 2024 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>GitHub templates streamline software development by providing structured formats for bug reports, commits, issues, pull requests, and feature requests. They ensure consistency, improve communication, and facilitate efficient problem-solving within teams. This document covers various template types, including detailed templates for bug reports, Git commit messages, issues, pull requests, and enhancement requests. Implementing these templates enhances workflow, reduces misunderstandings, and accelerates high-quality software delivery.</p>
<h2>How Does It Work?</h2>
<p>Templates provide a structured format for consistently capturing key information needed for common software development tasks like bug reports, commits, issues, pull requests (PRs), and enhancement/feature requests. Using templates ensures all relevant details are provided, streamlining the process and making it easier to prioritize, reproduce, and resolve the reported issues or requested changes.</p>
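<p>GitHub picks these templates up by convention: it looks for specific file names under the <code>.github/</code> directory (templates can also live in the repository root or <code>docs/</code>). A typical layout — the file names shown are the conventional defaults — looks like this:</p>
<pre><code class="language-md">.github/
├── ISSUE_TEMPLATE/
│   ├── bug_report.md
│   └── feature_request.md
└── PULL_REQUEST_TEMPLATE.md
</code></pre>
<p>Once these files exist on the default branch, GitHub pre-fills new issues and pull requests with their contents automatically.</p>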
<h2>Templates</h2>
<p>Following are the templates for bug reports, commits, issues, PRs, and enhancement/feature requests.</p>
<h3>Bug-Report Template</h3>
<pre><code class="language-markdown">### Overview

[Provide a brief summary of the bug.]

### How to Reproduce

1. [Step 1]
2. [Step 2]
3. [Step 3]
4. [Include as many steps as necessary to reproduce the issue]

### Environment Information

- **Operating System**: Windows 10
- **Browser**: Chrome 98.0.4758.102
- **Application Version/Commit**: v2.1.0

### Actual and Expected Behavior

- **Actual**: [Describe what happened]
- **Expected**: [Describe what you expected to happen]

### Additional Context

[Include any additional information that might be relevant]
</code></pre>
<h3>Git Commit Message Convention</h3>
<blockquote>
<p>This is adapted from <a href="https://github.com/conventional-changelog/conventional-changelog/tree/master/packages/conventional-changelog-angular">Angular's commit convention</a>.</p>
</blockquote>
<p>Messages must be matched by the following regex:</p>
<pre><code class="language-js">/^(revert: )?(feat|fix|docs|style|refactor|perf|test|workflow|build|ci|chore|types|wip)(\(.+\))?: .{1,72}/;
</code></pre>
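<p>As a quick check — runnable in Node, with made-up commit messages — the regex accepts conventional headers and rejects free-form ones:</p>
<pre><code class="language-js">const re = /^(revert: )?(feat|fix|docs|style|refactor|perf|test|workflow|build|ci|chore|types|wip)(\(.+\))?: .{1,72}/;

console.log(re.test('feat(compiler): add comments option')); // true
console.log(re.test('fix: handle empty input'));             // true
console.log(re.test('Fixed a bug'));                         // false — unknown type, no colon
</code></pre>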
<details>
<summary>Full Message Format</summary>
<br />
A commit message consists of a <strong>header</strong>, <strong>body</strong> and <strong>footer</strong>. The header has a <strong>type</strong>, <strong>scope</strong> and <strong>subject</strong>:
<pre><code class="language-html">&lt;type&gt;(&lt;scope&gt;): &lt;subject&gt;
&lt;BLANK LINE&gt;
&lt;body&gt;
&lt;BLANK LINE&gt;
&lt;footer&gt;
</code></pre>
<p>The <strong>header</strong> is mandatory and the <strong>scope</strong> of the header is optional.</p>
<h4>Revert</h4>
<p>If the commit reverts a previous commit, it should begin with <code>revert:</code>, followed by the header of the reverted commit. In the body, it should say: <code>This reverts commit &lt;hash&gt;.</code>, where the hash is the SHA of the commit being reverted.</p>
<h4>Type</h4>
<p>If the prefix is <code>feat</code>, <code>fix</code> or <code>perf</code>, it will appear in the changelog. However, if there is any <a href="#footer">BREAKING CHANGE</a>, the commit will always appear in the changelog.</p>
<p>Other prefixes are up to your discretion. Suggested prefixes are <code>docs</code>, <code>chore</code>, <code>style</code>, <code>refactor</code>, and <code>test</code> for non-changelog related tasks.</p>
<h4>Scope</h4>
<p>The scope could be anything specifying the place of the commit change.</p>
<h4>Subject</h4>
<p>The subject contains a succinct description of the change:</p>
<ul>
<li>use the imperative, present tense: &quot;change&quot; not &quot;changed&quot; nor &quot;changes&quot;</li>
<li>don't capitalize the first letter</li>
<li>no dot (.) at the end</li>
</ul>
<h4>Body</h4>
<p>Just as in the <strong>subject</strong>, use the imperative, present tense: &quot;change&quot; not &quot;changed&quot; nor &quot;changes&quot;.<br>
The body should include the motivation for the change and contrast this with previous behavior.</p>
<h4>Footer</h4>
<p>The footer should contain any information about <strong>Breaking Changes</strong> and is also the place to<br>
reference GitHub issues that this commit <strong>Closes</strong>.</p>
<p><strong>Breaking Changes</strong> should start with the word <code>BREAKING CHANGE:</code> with a space or two newlines. The rest of the commit message is then used for this.</p>
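<p>Putting the pieces together, a complete message in this format might look like the following (the scope, body text, and issue number are invented for illustration):</p>
<pre><code class="language-md">fix(tooltip): close tooltip when trigger is removed

Previously the tooltip stayed visible after its trigger element was
unmounted, leaving orphaned DOM nodes. Remove the tooltip together
with its trigger.

BREAKING CHANGE: the hideDelay option now defaults to 0.

Closes #123
</code></pre>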
</details>
<h3>Issue Template</h3>
<pre><code class="language-markdown">## Prerequisites

Please answer the following questions for yourself before submitting an issue. **YOU MAY DELETE THE PREREQUISITES SECTION.**

- [ ] I am running the latest version
- [ ] I checked the documentation and found no answer
- [ ] I checked to make sure that this issue has not already been filed
- [ ] I'm reporting the issue to the correct repository (for multi-repository projects)

## Issue Type

[ ] Bug Report
[ ] Feature Request
[ ] Question

## Description

[Provide a clear and concise description of the issue or request. If it's a bug, describe the unexpected behavior. If it's a feature request, explain the new functionality you're proposing. For questions, clearly state your question.]

## Steps to Reproduce (if applicable)

1. [Step 1]
2. [Step 2]
3. [Step 3]
4. [Include as many steps as necessary to reproduce the issue, if applicable]

## Expected Behavior

[Describe what you expected to happen.]

## Actual Behavior

[Describe what actually happened.]

## Environment Information

- **Operating System**: [e.g., Windows 10, macOS 11.3, Ubuntu 20.04]
- **Browser (if applicable)**: [e.g., Chrome 98.0.4758.102, Firefox 98.0]
- **Application Version/Commit (if applicable)**: [e.g., v2.1.0 or Git commit hash]

## Screenshots (if applicable)

[Include screenshots or images that help illustrate the issue, if applicable.]

## Additional Information

[Include any additional information that might be relevant to understanding or resolving the issue.]

---

**By submitting this issue, I confirm that I have read and understood the project's guidelines, and I believe this is not a duplicate of an existing issue.**
</code></pre>
<h3>Pull-Request Template</h3>
<pre><code class="language-markdown">## Description

[Provide a brief description of the purpose of this pull request.]

## Related Issues

[Include links to any related issues or discussions that are addressed by this pull request.]

## Changes Made

[Highlight the key changes made in this pull request. This can include new features, bug fixes, improvements, etc.]

## Screenshots (if applicable)

[Include screenshots or gifs demonstrating the changes made, if applicable.]

## Checklist

- [ ] I have tested these changes locally.
- [ ] I have updated the documentation accordingly.
- [ ] The code follows the project's coding standards.
- [ ] All tests pass successfully.
- [ ] The branch is up-to-date with the base branch.
- [ ] There are no merge conflicts.
- [ ] I have added necessary comments to the code, especially in complex areas.

## Additional Notes

[Include any additional information that might be relevant for the reviewers or that explains the context of the changes.]

## Reviewer Guidelines

[Provide guidelines for reviewers, such as specific areas to focus on or any particular concerns.]

## Definition of Done

[Specify the criteria that need to be met for this pull request to be considered &quot;done&quot; and ready for merging.]

## Closing Notes

[Include any closing notes, acknowledgments, or shoutouts to contributors.]

---

**By submitting this pull request, I confirm that my contributions are made under the terms of the project's guidelines.**
</code></pre>
<h3>Enhancement/Feature Request Template</h3>
<pre><code class="language-markdown">## Description

[Provide a clear and concise description of the enhancement or feature you are proposing.]

## Use Case

[Describe the specific use case or scenario where this enhancement or feature would be beneficial.]

## Proposed Solution

[Outline your proposed solution for the enhancement or feature. If applicable, include code snippets, diagrams, or any other details that help convey your idea.]

## Alternatives Considered

[If you've considered alternative solutions, list them here with a brief explanation of why you chose the proposed solution over the alternatives.]

## Benefits

[Explain the benefits and value that the proposed enhancement or feature would bring to users or the project.]

## Implementation Details

[If you have any suggestions on how the enhancement or feature could be implemented, provide high-level implementation details. If you are unsure, you can leave this section blank.]

## Additional Context

[Include any additional information that might be relevant to understanding or discussing the proposed enhancement or feature.]

---

**By submitting this enhancement or feature request, I confirm that I have read and understood the project's guidelines, and I believe this is not a duplicate of an existing request.**
</code></pre>
<h2>Conclusion</h2>
<p>Using these templates for bug reports, commits, issues, pull requests, and enhancement/feature requests can significantly improve the efficiency and clarity of your development process. They provide a structured approach to communication within your team and with external contributors, ensuring that all necessary information is captured consistently. By adopting these templates, you can streamline your workflow, reduce misunderstandings, and ultimately deliver better software more quickly.</p>
<p>Remember to adapt these templates to fit your specific project needs and team preferences. Regular reviews and updates of these templates can help keep them relevant and effective as your project evolves.</p>
]]></content:encoded>
            <author>hi@gopx.dev (Gopal Verma)</author>
        </item>
        <item>
            <title><![CDATA[Ollama on external drive]]></title>
            <link>https://gopx.dev/diary/notes/ollama-external</link>
            <guid>https://gopx.dev/diary/notes/ollama-external</guid>
            <pubDate>Fri, 21 Jun 2024 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>Explore the benefits of running Large Language Models (LLMs) locally using Ollama, with a focus on data security, reduced latency, and customization. Learn how to set up Ollama on an external drive for efficient model management and storage, giving users full control over their data and model deployment. This guide covers the advantages of local LLMs, Ollama's features, and step-by-step instructions for installation and configuration on external storage.</p>
<h2>Why run LLMs locally?</h2>
<ol>
<li>
<p><strong>Data security:</strong><br>
Local LLMs can process data on-site, reducing the risk of data breaches by eliminating the need to transmit data over the internet. This can also help meet regulatory requirements for data privacy and security.</p>
</li>
<li>
<p><strong>Reduced latency:</strong><br>
Running LLMs locally can reduce the response time between a request and the model's response. This can be especially beneficial for applications that require real-time data processing.</p>
</li>
<li>
<p><strong>Customization:</strong><br>
Local LLMs can be tailored to specific needs and requirements, allowing for better performance than general-purpose models.</p>
</li>
<li>
<p><strong>Control:</strong><br>
Local deployment gives users complete control over their hardware, data, and the LLMs themselves. This can be useful for optimization and customization according to specific needs and regulations.</p>
</li>
<li>
<p><strong>Flexibility:</strong><br>
Local deployment can also provide greater flexibility than working with third-party servers, which may limit businesses to pre-defined models and functionality.</p>
</li>
</ol>
<h2>But why Ollama?</h2>
<p>Ollama bridges the gap between large language models (LLMs) and local development, allowing you to run powerful LLMs directly on your machine. Here’s how Ollama empowers you:</p>
<ol>
<li>
<p><strong>Simplified LLM Interaction:</strong><br>
Ollama’s straightforward CLI and API make it easy to create, manage, and interact with LLMs, accessible to a wide range of users.</p>
</li>
<li>
<p><strong>Pre-built Model Library:</strong><br>
Access a curated collection of ready-to-use LLM models, saving you time and effort.</p>
</li>
<li>
<p><strong>Customization Options:</strong><br>
Fine-tune models to your specific needs, customize prompts, or import models from various sources for greater control.</p>
</li>
</ol>
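<p>Points 1 and 3 are easiest to see through Ollama's local HTTP API: once the Ollama server is running, you can query any installed model with a plain <code>curl</code> call against the default port (the model name below is just an example):</p>
<pre><code class="language-bash">curl http://localhost:11434/api/generate -d '{
  &quot;model&quot;: &quot;llama3&quot;,
  &quot;prompt&quot;: &quot;Why is the sky blue?&quot;,
  &quot;stream&quot;: false
}'
</code></pre>
<p>Because the request never leaves <code>localhost</code>, your prompts and the model's responses stay entirely on your machine.</p>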
<h2>How to set it up?</h2>
<h3>1. Download Ollama for your OS from <a href="https://www.ollama.com/download">here</a>.</h3>
<blockquote>
<p>To checkout the code base and community integrations, check out the <a href="https://github.com/ollama/ollama">Ollama GitHub repository</a>.</p>
</blockquote>
<h3>2. After downloading, run your first model.</h3>
<p>This will install the model at <code>~/.ollama/models/</code> and allow you to interact with it.</p>
<pre><code class="language-bash">ollama run llama3 # or any other model you want
</code></pre>
<h3>3. How to install models on an external drive?</h3>
<p>For this, follow these steps after connecting your hard drive to your machine.</p>
<h4>1. Execute this command to create a models directory on your external drive:</h4>
<pre><code class="language-bash">mkdir -p /Volumes/drive/ai_models/ollama/models
</code></pre>
<p>This will generate a directory structure like this on your external drive:</p>
<pre><code class="language-md">ai_models/
└── ollama/
    └── models/
</code></pre>
<h4>2. Create a symlink from ~/.ollama/models to your external drive:</h4>
<p>Ollama is designed to search for its models in the <code>~/.ollama/models</code> directory by default. However, if you want to store your Ollama models on an external drive (in this case, <code>/Volumes/drive/ai_models/ollama/models</code>), you can create a symlink to redirect Ollama's search to the external location.</p>
<ol>
<li>Remove the default models directory in your <code>~/.ollama</code> directory.</li>
</ol>
<pre><code class="language-bash">cd ~/.ollama
sudo rm -rf models
# enter the password 🔑
</code></pre>
<ol start="2">
<li>Create a symlink.</li>
</ol>
<pre><code class="language-bash">sudo ln -s /Volumes/drive/ai_models/ollama/models ~/.ollama/models
</code></pre>
<p>3. Check the <code>~/.ollama</code> directory for the symlink. Execute this command to list all items with details:</p>
<pre><code class="language-bash">ls -la ~/.ollama
</code></pre>
<p>You will see a similar output:</p>
<pre><code class="language-bash">total 48
drwxr-xr-x@   8 gopalverma  staff   256 Jun 20 00:38 .
drwxr-x---+ 107 gopalverma  staff  3424 Jun 22 10:57 ..
-rw-r--r--@   1 gopalverma  staff  6148 Jun 19 17:34 .DS_Store
-rw-------    1 gopalverma  staff  4976 Jun 20 00:38 history
-rw-------@   1 gopalverma  staff   387 Jun 19 14:40 id_ed25519
-rw-r--r--@   1 gopalverma  staff    81 Jun 19 14:40 id_ed25519.pub
drwxr-xr-x@   3 gopalverma  staff    96 Jun 19 14:40 logs
lrwxr-xr-x    1 root        staff    41 Jun 19 16:32 models -&gt; /Volumes/GopalSSD/ai_models/ollama/models
# above line means that the models directory is linked to the external drive
</code></pre>
<ol start="4">
<li>Install and run the model.</li>
</ol>
<pre><code class="language-bash">ollama run llama3
</code></pre>
<p>This will download the model to <code>/Volumes/drive/ai_models/ollama/models</code> and start an interactive session so you can chat with it right away.</p>
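<p>Before pulling a model, it can help to double-check that the symlink actually resolves to the external drive. A quick verification sketch (assuming the example mount point <code>/Volumes/drive</code> used above):</p>
<pre><code class="language-bash"># Confirm that ~/.ollama/models is a symlink and print its target
readlink ~/.ollama/models
# should print: /Volumes/drive/ai_models/ollama/models

# Confirm the target directory exists and is writable before downloading
test -w /Volumes/drive/ai_models/ollama/models &amp;&amp; echo "models dir is writable"
</code></pre>
<p>If <code>readlink</code> prints nothing, <code>~/.ollama/models</code> is a regular directory rather than a symlink, and Ollama will fill up your internal disk again.</p>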
<h2>Conclusion</h2>
<p>Running LLMs locally with Ollama on an external drive offers numerous benefits, including enhanced data security, reduced latency, and greater control over your AI models. By following the steps outlined in this guide, you can easily set up Ollama to use models stored on an external drive, giving you the flexibility to manage large model files without consuming your main system's storage.</p>
<p>This approach not only allows you to leverage the power of LLMs locally but also provides a scalable solution for storing and accessing multiple models. As AI continues to evolve, having a setup that allows for easy expansion and management of your model library will become increasingly valuable.</p>
<p>Remember, the key advantages of this setup include:</p>
<ul>
<li>Improved data privacy and security</li>
<li>Reduced dependency on cloud services</li>
<li>Flexibility in model selection and customization</li>
<li>Efficient use of storage resources</li>
</ul>
<p>By mastering the use of Ollama with external storage, you're well-positioned to explore and experiment with various LLMs while maintaining full control over your AI environment.</p>
]]></content:encoded>
            <author>hi@gopx.dev (Gopal Verma)</author>
        </item>
        <item>
            <title><![CDATA[Divergent branches reconciliation]]></title>
            <link>https://gopx.dev/diary/notes/divergent-branches-reconcile</link>
            <guid>https://gopx.dev/diary/notes/divergent-branches-reconcile</guid>
            <pubDate>Wed, 19 Jun 2024 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>When working with local and remote branches in Git, you might encounter this error when pushing or pulling from the remote (GitHub).</p>
<pre><code class="language-bash">hint: You have divergent branches and need to specify how to reconcile them.
hint: You can do so by running one of the following commands sometime before
hint: your next pull:
hint:
hint:   git config pull.rebase false  # merge
hint:   git config pull.rebase true   # rebase
hint:   git config pull.ff only       # fast-forward only
hint:
hint: You can replace &quot;git config&quot; with &quot;git config --global&quot; to set a default
hint: preference for all repositories. You can also pass --rebase, --no-rebase,
hint: or --ff-only on the command line to override the configured default per
hint: invocation.
fatal: Need to specify how to reconcile divergent branches.
</code></pre>
<h2>What happened?</h2>
<ul>
<li>Your local branch and the branch on GitHub (remote branch) have different changes.</li>
<li>Git doesn't know how to combine (reconcile) these changes.</li>
</ul>
<h2>Why does it matter?</h2>
<ul>
<li>You need to decide how to merge these changes so both branches are up to date.</li>
</ul>
<h2>How to fix it?</h2>
<h3>1. Merge</h3>
<p>Combine changes and keep all commits (default strategy).</p>
<pre><code class="language-bash">git pull --no-rebase origin main

# or

git pull origin main
</code></pre>
<h3>2. Rebase</h3>
<p>Move your changes on top of the remote changes, creating a straight line of commits.</p>
<pre><code class="language-bash">git pull --rebase origin main
</code></pre>
<h3>3. Fast-forward</h3>
<p>Only update if your local branch has no new commits of its own; otherwise the pull aborts instead of creating a merge.</p>
<pre><code class="language-bash">git pull --ff-only origin main
</code></pre>
<h2>Summary</h2>
<p>Choose a strategy (merge, rebase, or fast-forward) to update your branch.</p>
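<p>If you always want the same strategy, you can set it once per repository (or globally with <code>--global</code>) instead of passing a flag on every pull. For example, to default to merge:</p>
<pre><code class="language-bash"># Default `git pull` to merge (no rebase) for this repository
git config pull.rebase false

# Or make it the default for all repositories
# git config --global pull.rebase false

# Verify the setting
git config pull.rebase   # prints: false
</code></pre>
<p>Swap <code>false</code> for <code>true</code> to default to rebase, or use <code>git config pull.ff only</code> for fast-forward only.</p>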
]]></content:encoded>
            <author>hi@gopx.dev (Gopal Verma)</author>
        </item>
        <item>
            <title><![CDATA[MDX Hello World Showcase]]></title>
            <link>https://gopx.dev/diary/blogs/hello-world</link>
            <guid>https://gopx.dev/diary/blogs/hello-world</guid>
            <pubDate>Tue, 18 Jun 2024 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>This page demonstrates various MDX components and syntax, including text formatting, links, lists, blockquotes, code blocks, tables, task lists, horizontal rules, math equations, and footnotes. It serves as a comprehensive guide to help you understand MDX parsing and its capabilities.</p>
<h2>Text Formatting</h2>
<p>You can use <em>italic</em>, <strong>bold</strong>, and <em><strong>bold italic</strong></em> text.</p>
<p>You can also use <s>strikethrough</s> and <code>inline code</code>.</p>
<h2>Links</h2>
<p>Here's a <a href="https://mdxjs.com/">link to MDX documentation</a>.</p>
<h2>Lists</h2>
<p>Unordered list:</p>
<ul>
<li>Item 1</li>
<li>Item 2
<ul>
<li>Subitem 2.1</li>
<li>Subitem 2.2</li>
</ul>
</li>
<li>Item 3</li>
</ul>
<p>Ordered list:</p>
<ol>
<li>First item</li>
<li>Second item</li>
<li>Third item</li>
</ol>
<h2>Blockquotes</h2>
<blockquote>
<p>This is a blockquote.<br>
It can span multiple lines.</p>
</blockquote>
<h2>Code Blocks</h2>
<p>Inline code: <code>console.log(&quot;Hello, MDX!&quot;);</code></p>
<p>JavaScript code block:</p>
<pre><code class="language-jsx">const greeting = &quot;Hello, MDX!&quot;;
console.log(greeting);
</code></pre>
<p>Python code block:</p>
<pre><code class="language-python">print(&quot;Hello, MDX!&quot;)
</code></pre>
<h2>Tables</h2>
<table>
<thead>
<tr>
<th>Feature</th>
<th>Supported</th>
<th>Example</th>
</tr>
</thead>
<tbody>
<tr>
<td>Tables</td>
<td>Yes</td>
<td>This table</td>
</tr>
<tr>
<td>Code</td>
<td>Yes</td>
<td><code>code</code></td>
</tr>
<tr>
<td>Math</td>
<td>Yes</td>
<td>$E = mc^2$</td>
</tr>
</tbody>
</table>
<h2>Task Lists</h2>
<ul>
<li>[x] Learn MDX basics</li>
<li>[ ] Master advanced MDX concepts</li>
<li>[ ] Build an MDX-powered website</li>
</ul>
<h2>Horizontal Rule</h2>
<hr>
<h2>Math Equations</h2>
<p>$y = mx + b$</p>
<h2>Footnotes</h2>
<p>Here's a sentence with a footnote.[^1]</p>
<p>[^1]: This is the footnote content.</p>
<h2>Conclusion</h2>
<p>This page showcases various MDX components and syntax. Happy MDX parsing!</p>
<h2>Random Code Demo</h2>
<pre><code class="language-js">for (const el of enterElements) {
  el.classList.add('transition-enter')
  el.classList.add('transition-enter-from')
}

// Replace the children of the container with
// elements from the new code
container.replaceChildren(...newChildren)
// Force layout reflow
forceReflow()

for (const el of enterElements) {
  el.classList.remove('transition-enter-from')
  el.classList.add('transition-enter-to')
}

// Here the transition starts
// from `.transition-enter-from` to `.transition-enter-to`

for (const el of enterElements) {
  // Transition Finished
  el.addEventListener('transitionend', () =&gt; {
    el.classList.remove('transition-enter-to')
  })
}
</code></pre>
]]></content:encoded>
            <author>hi@gopx.dev (Gopal Verma)</author>
        </item>
        <item>
            <title><![CDATA[Renaming git remote]]></title>
            <link>https://gopx.dev/diary/notes/renaming-git-remote</link>
            <guid>https://gopx.dev/diary/notes/renaming-git-remote</guid>
            <pubDate>Sat, 01 Jun 2024 00:00:00 GMT</pubDate>
<content:encoded><![CDATA[<p>Let’s say that your current remote name is “origin_1”, and you want to change it to “origin_2”.</p>
<p><strong>1. Confirm the name of your current remote by running the following command:</strong></p>
<pre><code class="language-bash">git remote -v
</code></pre>
<p>You should see output like the following:</p>
<pre><code class="language-bash">origin_1  https://github.com/username/repository.git (fetch)
origin_1  https://github.com/username/repository.git (push)
</code></pre>
<p><strong>2. Now that the current remote name is confirmed — you can change it by running the following command:</strong></p>
<pre><code class="language-bash">git remote rename origin_1 origin_2
</code></pre>
<p>This command tells Git to rename the current remote. In this example, we’re changing the remote name to “origin_2”, but you can name your remote anything you want.</p>
<p><strong>3. Verify that your remote has changed from “origin_1” to “origin_2” by running the following command:</strong></p>
<pre><code class="language-bash">git remote -v
</code></pre>
<p>You should see output like the following:</p>
<pre><code class="language-bash">origin_2  https://github.com/username/repository.git (fetch)
origin_2  https://github.com/username/repository.git (push)
</code></pre>
]]></content:encoded>
            <author>hi@gopx.dev (Gopal Verma)</author>
        </item>
        <item>
            <title><![CDATA[90% of devs are damn bad!]]></title>
            <link>https://gopx.dev/diary/blogs/90pc-useless</link>
            <guid>https://gopx.dev/diary/blogs/90pc-useless</guid>
            <pubDate>Sat, 16 Dec 2023 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>Exploring the controversial claim that 90% of programmers are subpar, this article delves into the reasons behind widespread underperformance in the field. It examines key factors such as lack of fundamental skills, overreliance on online resources, and insufficient practice. The piece also emphasizes the critical need to raise industry standards, promote continuous learning, and foster a culture of code quality to address these issues and prepare for future technological challenges.</p>
<h2>Bitter Truth</h2>
<p>It's a bold statement, but one that many experienced developers would likely agree with: the majority of programmers out there are not particularly skilled or talented. While the field of software engineering has grown exponentially in recent years, the quality of code and programming ability has not kept pace.</p>
<h2>Why So Many Programmers Underperform</h2>
<p>There are a few key reasons why so many programmers fall short of expectations:</p>
<h3>1. Lack of Fundamental Skills</h3>
<p>Many programmers, especially those new to the field, lack a strong grasp of core computer science concepts like data structures, algorithms, and software design principles. They may be able to piece together working code, but their solutions are often inefficient, buggy, and difficult to maintain.</p>
<h3>2. Overreliance on Tutorials and Stack Overflow</h3>
<p>In the age of the internet, it's easy for programmers to find pre-written code snippets and solutions online. While this can be a helpful learning tool, too many developers simply copy-paste code without understanding how or why it works. This leads to a shallow knowledge base and an inability to solve novel problems.</p>
<h3>3. Insufficient Practice and Experience</h3>
<p>Programming is a skill that requires consistent, deliberate practice to master. Many programmers, however, are content to coast on the bare minimum, rarely challenging themselves with new technologies or complex projects. As a result, their skills stagnate, and they fail to develop the depth of experience needed to become truly proficient.</p>
<h2>The Importance of Raising the Bar</h2>
<p>The prevalence of mediocre programmers has serious consequences for the tech industry and the quality of software products. Poorly written code leads to bugs, security vulnerabilities, and technical debt that can haunt a project for years.</p>
<p>To address this issue, the programming community needs to raise the bar for what constitutes acceptable skill and expertise. This means:</p>
<ul>
<li>Emphasizing fundamental computer science education.</li>
<li>Encouraging continuous learning and skill development.</li>
<li>Fostering a culture of code quality, testing, and best practices.</li>
<li>Holding programmers accountable for the quality of their work.</li>
</ul>
<p>By elevating the standards of the profession, we can ensure that the next generation of developers is better equipped to tackle the complex challenges of the modern digital landscape.</p>
]]></content:encoded>
            <author>hi@gopx.dev (Gopal Verma)</author>
        </item>
        <item>
            <title><![CDATA[Healthy life as a Developer]]></title>
            <link>https://gopx.dev/diary/blogs/health-for-dev</link>
            <guid>https://gopx.dev/diary/blogs/health-for-dev</guid>
            <pubDate>Wed, 13 Dec 2023 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>As a software developer, it's easy to get caught up in the demands of your work - long hours hunched over a computer, deadlines looming, and the constant pressure to deliver new features and updates. However, neglecting your physical and mental health can have serious consequences, both for your career and your overall well-being.</p>
<h2>The Risks of an Unhealthy Lifestyle</h2>
<p>Many software developers fall into unhealthy patterns, such as:</p>
<ul>
<li>Sitting for extended periods without breaks</li>
<li>Skipping meals or relying on unhealthy snacks</li>
<li>Irregular sleep schedules and lack of sleep</li>
<li>High stress levels and burnout</li>
<li>Minimal physical activity</li>
</ul>
<p>These behaviors can lead to a host of issues, including:</p>
<ul>
<li>Musculoskeletal problems like back pain, neck strain, and carpal tunnel syndrome</li>
<li>Weight gain and increased risk of chronic diseases like diabetes and heart disease</li>
<li>Decreased cognitive function, mood, and productivity</li>
<li>Burnout, depression, and anxiety</li>
</ul>
<p>Maintaining good health is not only important for your personal well-being, but it can also positively impact your professional performance and career trajectory.</p>
<h2>Developing Healthy Habits</h2>
<p>Fortunately, there are several steps software developers can take to prioritize their health:</p>
<ol>
<li><strong>Incorporate Regular Exercise</strong>: Even a brief daily walk or a few minutes of stretching can make a significant difference. Consider joining a gym or trying out activities like cycling, swimming, or rock climbing.</li>
<li><strong>Improve Your Posture and Ergonomics</strong>: Invest in a comfortable, adjustable chair and desk setup to minimize strain on your body. Take regular breaks to stretch and move around.</li>
<li><strong>Maintain a Balanced Diet</strong>: Avoid relying on fast food or unhealthy snacks. Incorporate more whole, nutrient-dense foods into your meals and stay hydrated throughout the day.</li>
<li><strong>Prioritize Sleep</strong>: Aim for 7-9 hours of quality sleep each night. Establish a consistent sleep routine and create a relaxing bedtime environment.</li>
<li><strong>Practice Stress Management</strong>: Experiment with techniques like meditation, yoga, or mindfulness to help manage stress and prevent burnout. Don't be afraid to take breaks or ask for help when needed.</li>
<li><strong>Schedule Regular Checkups</strong>: Make time for annual physicals, dental cleanings, and other preventive healthcare appointments to catch any issues early.</li>
</ol>
<p>By making your health a priority, you can not only improve your quality of life but also enhance your productivity, creativity, and overall job performance as a software developer.</p>
<p>Remember, taking care of yourself is not a luxury - it's a necessity for a long, fulfilling career in the tech industry.</p>
]]></content:encoded>
            <author>hi@gopx.dev (Gopal Verma)</author>
        </item>
    </channel>
</rss>