Tech writers don’t write.

Not in the way most people think. We don’t sit down with a blank page and “make it up.” We’re not wordsmiths polishing clever sentences. We’re not decorators. We’re architects. And in the age of AI, our role has quietly evolved into something far more powerful—and far more essential.

Here’s what the new tech writer actually does:

1. We curate. We filter the noise. From dev notes, internal wikis, messy Notion pages, AI-generated drafts—we gather what matters and discard what doesn’t.
2. We verify. We don’t just copy and paste. We check, clarify, recheck. Because what’s written in the spec doc isn’t always what’s true in production.
3. We restructure. We’re not just editing for grammar. We’re rearchitecting information to match how real users actually read and retain it. Good docs don’t just inform. They guide.
4. We translate. We bridge the gap between engineering and end user. Between product complexity and business clarity. Between AI output and human understanding.
5. We strategize. We don’t “just write the docs.” We shape documentation ecosystems—mapping user journeys, designing content models, identifying gaps before they become support tickets.

If you’re hiring a writer to “clean up” your AI-generated documentation, you’re looking for the wrong skillset. You don’t need a cleaner. You need an operator. One who understands:
• How your product works
• What your users need
• What your GTM team is saying
• What your AI tools are missing
• And how to bring it all together—seamlessly

Because in 2025, tech writers aren’t just writers. We’re content strategists with dev-level instincts. And the companies that understand this? They’re the ones whose products get adopted faster, retained longer, and supported less.
Writing Code Documentation
-
𝟭. 𝗕𝘂𝘀𝗶𝗻𝗲𝘀𝘀 𝗥𝗲𝗾𝘂𝗶𝗿𝗲𝗺𝗲𝗻𝘁𝘀 𝗗𝗼𝗰𝘂𝗺𝗲𝗻𝘁 (𝗕𝗥𝗗):
A BRD captures high-level business needs and objectives from a stakeholder’s perspective. It focuses on why a project is being undertaken and what value it brings to the business.
𝗞𝗲𝘆 𝗘𝗹𝗲𝗺𝗲𝗻𝘁𝘀:
• Business objectives
• Stakeholder needs
• High-level business requirements
• Scope of the project
• Business rules
• Assumptions and constraints

𝟮. 𝗙𝘂𝗻𝗰𝘁𝗶𝗼𝗻𝗮𝗹 𝗥𝗲𝗾𝘂𝗶𝗿𝗲𝗺𝗲𝗻𝘁𝘀 𝗗𝗼𝗰𝘂𝗺𝗲𝗻𝘁 (𝗙𝗥𝗗):
An FRD translates high-level business needs into detailed functional requirements that describe how a system should behave. It focuses on system interactions, workflows, and features that will fulfill business requirements.
𝗞𝗲𝘆 𝗘𝗹𝗲𝗺𝗲𝗻𝘁𝘀:
• Functional requirements (detailed descriptions of features)
• System workflows
• Use cases and user stories
• UI/UX requirements (screens, wireframes)
• Data flow diagrams

𝟯. 𝗦𝗼𝗳𝘁𝘄𝗮𝗿𝗲 𝗥𝗲𝗾𝘂𝗶𝗿𝗲𝗺𝗲𝗻𝘁𝘀 𝗦𝗽𝗲𝗰𝗶𝗳𝗶𝗰𝗮𝘁𝗶𝗼𝗻 (𝗦𝗥𝗦):
An SRS is a comprehensive document that includes both functional and non-functional requirements, providing a complete specification of how the software should work. It is often used by developers and testers for system implementation.
𝗞𝗲𝘆 𝗘𝗹𝗲𝗺𝗲𝗻𝘁𝘀:
• Functional requirements (features & capabilities)
• Non-functional requirements (performance, security, scalability)
• System architecture & design constraints
• Data models
• Interfaces (API, external system interactions)

While the 𝗕𝗥𝗗, 𝗙𝗥𝗗, and 𝗦𝗥𝗦 serve different purposes, they all contribute to 𝗰𝗹𝗲𝗮𝗿 𝗮𝗻𝗱 𝘀𝘁𝗿𝘂𝗰𝘁𝘂𝗿𝗲𝗱 𝗿𝗲𝗾𝘂𝗶𝗿𝗲𝗺𝗲𝗻𝘁𝘀. In 𝗔𝗴𝗶𝗹𝗲 𝗲𝗻𝘃𝗶𝗿𝗼𝗻𝗺𝗲𝗻𝘁𝘀, these documents may be replaced with 𝗣𝗿𝗼𝗱𝘂𝗰𝘁 𝗕𝗮𝗰𝗸𝗹𝗼𝗴𝘀, 𝗨𝘀𝗲𝗿 𝗦𝘁𝗼𝗿𝗶𝗲𝘀, 𝗮𝗻𝗱 𝗘𝗽𝗶𝗰𝘀, but in 𝗪𝗮𝘁𝗲𝗿𝗳𝗮𝗹𝗹 𝗼𝗿 𝗵𝘆𝗯𝗿𝗶𝗱 𝗺𝗼𝗱𝗲𝗹𝘀, they are still widely used.

Which of these documents do you use in your projects? Let’s discuss in the comments! 👇

#BusinessAnalysis #IIBA #BRD #FRD #SRS #RequirementsEngineering #SoftwareDevelopment
-
Demystifying CI/CD Pipelines: A Simple Guide for Easy Understanding

1. Code Changes: Developers make changes to the codebase to introduce new features, bug fixes, or improvements.
2. Code Repository: The modified code is pushed to a version control system (e.g., Git). This triggers the CI/CD pipeline to start.
3. Build: The CI server pulls the latest code from the repository and initiates the build process. Compilation, dependency resolution, and other build tasks are performed to create executable artifacts.
4. Pre-deployment Testing: Automated tests (unit tests, integration tests, etc.) are executed to ensure that the changes haven't introduced errors. This phase also includes static code analysis to check for coding standards and potential issues.
5. Staging Environment: If the pre-deployment tests pass, the artifacts are deployed to a staging environment that closely resembles the production environment.
6. Staging Tests: Additional tests, specific to the staging environment, are conducted to validate the behavior of the application in an environment that mirrors production.
7. Approval/Gate: In some cases, a manual approval step or a set of gates may be included, requiring human intervention or meeting specific criteria before proceeding to the next stage.
8. Deployment to Production: If all tests pass and any necessary approvals are obtained, the artifacts are deployed to the production environment.
9. Post-deployment Testing: After deployment to production, additional tests may be performed to ensure the application's stability and performance in the live environment.
10. Monitoring: Continuous monitoring tools are employed to track the application's performance, detect potential issues, and gather insights into user behaviour.
11. Rollback (If Necessary): If issues are detected post-deployment, the CI/CD pipeline may support an automatic or manual rollback to a previous version.
12. Notification: The CI/CD pipeline notifies relevant stakeholders about the success or failure of the deployment, providing transparency and accountability.

This iterative and automated process ensures that changes to the codebase can be quickly and reliably delivered to production, promoting a more efficient and consistent software delivery lifecycle. It also helps in catching potential issues early in the development process, reducing the risk associated with deploying changes to production.
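The flow above can be sketched as a tiny pipeline driver. This is purely illustrative: real pipelines live in CI tooling (Jenkins, GitHub Actions, etc.), not in application code, and the stage names and functions below are hypothetical.

```python
def run_pipeline(stages, rollback):
    """Run stages in order; stop at the first failure.

    `stages` is a list of (name, func, is_post_deploy) tuples. If a stage
    that runs after the production deployment fails, call `rollback`
    (step 11) before reporting the failure.
    """
    for name, stage, is_post_deploy in stages:
        if not stage():
            if is_post_deploy:
                rollback()
            return ("failed", name)
    return ("success", None)

# Simulate a run where post-deployment testing (step 9) catches a problem.
rolled_back = []
stages = [
    ("build",                 lambda: True,  False),
    ("pre_deployment_tests",  lambda: True,  False),
    ("staging_tests",         lambda: True,  False),
    ("deploy_to_production",  lambda: True,  False),
    ("post_deployment_tests", lambda: False, True),
]
print(run_pipeline(stages, rollback=lambda: rolled_back.append("restore")))
# -> ('failed', 'post_deployment_tests'), and rollback has run
```

The key design point the post makes is visible here: every stage is a gate, and only a failure *after* go-live triggers the rollback path.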
-
It took me some extra late-night hours, but here you go. I have simplified an ideal GitHub Actions flow for you. 👇

1) 🧭 Triggers:
🧲 GitHub Event fires → Can be a push, PR, manual dispatch, or a scheduled trigger.
📜 Workflow file executes → GitHub reads the YAML config and starts the pipeline.
🔁 Workflow Trigger hits the CI Phase → We now jump into the first main section: CI.

2) 🔧 CI Phase:
📋 Lint & Validate → Checks formatting and file syntax — like YAML, Dockerfiles, Terraform, etc.
🏗️ Build Artifacts → Your app gets compiled or packaged (Docker images, binaries, etc.).
🧬 Unit Tests → Quick tests that verify individual components or logic.
🧪 Integration Tests → Validates whether your services/modules interact correctly.
📊 Code Coverage → Checks how much of your code is covered by tests — helps improve test quality.
🔒 Security Scanning → Tools like CodeQL or Trivy catch vulnerabilities early.

3) 🧮 Matrix + CI Result Evaluation:
🧮 Matrix Execution → Parallel jobs (across OS versions, Python/Node versions, etc.).
✅ CI Results → Only proceed if everything passes — block if even one test fails.

4) 🚀 CD Phase (Continuous Deployment):
🚀 CD Phase starts → If CI is clean, we move toward releasing.
🧪 Deploy to Staging → Ship to a safe sandbox environment that mirrors production.
🔥 Smoke Tests in Staging → High-level sanity checks (e.g., “Does the login page load?”).
🛑 Approval Required → Human checkpoint — usually from a senior engineer or release manager.
✅ Approval Granted → Deploy to Production → This is your official go-live moment.
🔍 Post-Deployment Tests → Sanity and health checks to ensure production is stable.

5) ♻️ Ops, Rollbacks, and Notifications:
🔁 Rollback Plan (if needed) → If post-deploy tests fail, we roll back to the last good version.
📣 Notify Engineers → The DevOps team gets pinged (Slack, Teams, PagerDuty, etc.).
📡 Monitoring & Logging → Live dashboards, alerts, and logs keep watch over the system.

6) ✅ Final Status Updates:
🟢 Update Status Badge → Those fancy CI badges on your README get updated.
📌 GitHub Repository Status reflects the build/deploy result → Shows up directly on your pull request for reviewers.

Get started with GitHub Actions the hands-on way: https://lnkd.in/gcReECUU

Consider ♻️ reposting if you have found this useful.

Cheers, Sandip Das
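The "Matrix Execution" step above fans one job definition out over every combination of axis values. A minimal sketch of that expansion, with example axis values (GitHub Actions does this internally from the `strategy.matrix` block of your YAML):

```python
from itertools import product

def expand_matrix(axes):
    """Return one job dict per combination of matrix axis values,
    the way a CI matrix strategy fans out parallel jobs."""
    keys = sorted(axes)
    return [dict(zip(keys, combo)) for combo in product(*(axes[k] for k in keys))]

matrix = {"os": ["ubuntu-latest", "macos-latest"], "python": ["3.11", "3.12"]}
jobs = expand_matrix(matrix)
print(len(jobs))  # -> 4: every os x python pairing runs as a parallel job
```

This is also why matrices get expensive fast: the job count is the product of the axis sizes, so two axes of four values each is already sixteen parallel runs.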
-
The Medical Device Iceberg: What’s hidden beneath your product is what matters most.

Your technical documentation isn’t "surface work". It’s the foundation that the Notified Body looks at first. Let’s break it down ⬇

1/ What is TD really about?
Your Technical Documentation is your device’s identity card. It proves conformity with MDR 2017/745. It’s not a binder of loose files. It’s a structured, coherent, evolving system. Annexes II & III of the MDR guide your structure. Use them. But make it your own.

2/ The 7 essential pillars of TD:
→ Device description & specification
→ Information to be supplied by the manufacturer
→ Design & manufacturing information
→ GSPR (General Safety & Performance Requirements)
→ Benefit-risk analysis & risk management
→ Product verification & validation (including clinical evaluation)
→ Post-market surveillance
Each one matters. Each one connects to the rest. Your TD is not linear. It’s a living ecosystem. Change one thing → it impacts everything. That’s why consistency and traceability are key.

3/ Tips for compiling TD:
→ Use one “intended purpose” across all documents
→ Apply the 3Cs:
↳ Clarity (write for reviewers)
↳ Consistency (same terms, same logic)
↳ Connectivity (cross-reference clearly)
→ Manage it like a project:
↳ Involve all teams
↳ Follow the MDR structure
↳ Trace everything
→ Use “one-sheet conclusions”:
↳ Especially in risk, clinical, and V&V docs
↳ Simple, precise summaries
→ Avoid infinite feedback loops:
↳ One doc, one checklist, one deadline
↳ Define “final” clearly

4/ Best practices to apply:
→ Add a summary doc for reviewers
→ Update documentation regularly
→ Create a V&V matrix
→ Maintain URS → FRS traceability
→ Hyperlink related docs
→ Provide objective evidence
→ Use searchable digital formats
→ Map design & manufacturing with flowcharts

Clear TD = faster reviews = safer time to market. Save this for your next compilation session.

You don't want to start from scratch? Use our templates to get started:
→ GSPR, which gives you a predefined list of standards, documents and methods. ( https://lnkd.in/eE2i43v7 )
→ Technical Documentation, which gives you a solid structure and concrete examples for your writing. ( https://lnkd.in/eNcS4aMG )
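The "URS → FRS traceability" and V&V matrix advice above can be backed by a simple automated gap check. The requirement IDs and the mapping structure below are hypothetical, purely to illustrate the idea:

```python
def untraced(urs_ids, trace):
    """Return user requirements with no linked functional requirement.

    `urs_ids` is a list of URS IDs; `trace` maps URS ID -> list of FRS IDs.
    Any hit here is a requirement with no verification path -- fix it
    before a reviewer finds it.
    """
    return [r for r in urs_ids if not trace.get(r)]

urs = ["URS-001", "URS-002", "URS-003"]
trace = {"URS-001": ["FRS-010"], "URS-002": ["FRS-011", "FRS-012"]}
print(untraced(urs, trace))  # -> ['URS-003']
```

The same check, run the other direction (FRS → V&V evidence), gives you the skeleton of a V&V matrix that stays current instead of being rebuilt before each audit.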
-
Master these strategies to write clean, reusable code across all data roles. Here is how you keep your code clean, efficient, and adaptable:

1. 𝗠𝗼𝗱𝘂𝗹𝗮𝗿 𝗗𝗲𝘀𝗶𝗴𝗻: Break down your code into distinct functions that handle individual tasks. This modular approach allows you to reuse functions across different projects and makes debugging far easier.
2. 𝗗𝗼𝗰𝘂𝗺𝗲𝗻𝘁𝗮𝘁𝗶𝗼𝗻: Comment your code clearly and provide README files for larger projects. Explain what your functions do, the inputs they accept, and the expected outputs. This makes onboarding new team members smoother and helps your future self understand the logic quickly.
3. 𝗣𝗮𝗿𝗮𝗺𝗲𝘁𝗲𝗿𝗶𝘇𝗮𝘁𝗶𝗼𝗻: Use parameters for values that could change over time, such as file paths, column names, or thresholds. This flexibility ensures that your code is adaptable without requiring major rewrites.
4. 𝗜𝗻𝘁𝗲𝗻𝘁𝗶𝗼𝗻𝗮𝗹 𝗡𝗮𝗺𝗶𝗻𝗴: Variable, function, and class names are your first layer of documentation. Make them descriptive and consistent.
5. 𝗖𝗼𝗻𝘀𝗶𝘀𝘁𝗲𝗻𝘁 𝗦𝘁𝘆𝗹𝗲: Adopt a coding standard and stick to it. Whether it’s the way you format loops or how you organize modules, consistency makes your code predictable and easier to follow.
6. 𝗘𝗿𝗿𝗼𝗿 𝗛𝗮𝗻𝗱𝗹𝗶𝗻𝗴: Include error handling in your functions. Use try-except blocks to catch exceptions, and provide informative messages that indicate what went wrong and how to fix it.
7. 𝗧𝗲𝘀𝘁𝗶𝗻𝗴: Implement unit tests to verify that each function performs as expected. This proactive approach helps identify issues early and ensures that changes don’t introduce new bugs.
8. 𝗩𝗲𝗿𝘀𝗶𝗼𝗻 𝗖𝗼𝗻𝘁𝗿𝗼𝗹: Use Git or another version control system to manage changes to your code. It allows you to track progress, roll back mistakes, and collaborate seamlessly.
9. 𝗖𝗼𝗱𝗲 𝗥𝗲𝘃𝗶𝗲𝘄𝘀: Encourage peer reviews to catch potential issues, share best practices, and foster a culture of collaborative learning.
10. 𝗥𝗲𝘃𝗶𝗲𝘄 𝗮𝗻𝗱 𝗥𝗲𝗳𝗮𝗰𝘁𝗼𝗿: Review your code after a break, seeking opportunities to simplify and improve. Refactoring is your path to more robust and efficient code.

Whether writing small SQL queries or building large Python models, a clean coding style will make you a more efficient analyst. It’s an investment that will pay off in productivity and reliability.

What’s your top tip for writing reusable code?
----------------
♻️ Share if you find this post useful
➕ Follow for more daily insights on how to grow your career in the data field

#dataanalytics #datascience #python #cleancode #productivity
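Several of the strategies above — modular design (1), documentation (2), parameterization (3), intentional naming (4), error handling (6), and unit tests (7) — can be seen together in one small function. A sketch with hypothetical column names:

```python
def mean_of_column(rows, column, default=0.0):
    """Return the mean of a numeric `column` across `rows` (a list of dicts).

    Parameters:
        rows: records to aggregate.
        column: name of the numeric field (parameterized, not hard-coded).
        default: value returned when `rows` is empty.
    """
    if not rows:
        return default
    try:
        return sum(float(r[column]) for r in rows) / len(rows)
    except KeyError as exc:
        # Informative error (strategy 6): say what went wrong and how to fix it.
        raise ValueError(
            f"column {exc} not found; available: {sorted(rows[0])}"
        ) from exc

# Unit tests (strategy 7) verifying the function behaves as expected:
assert mean_of_column([{"price": 2}, {"price": 4}], "price") == 3.0
assert mean_of_column([], "price") == 0.0
```

Because the column name is a parameter rather than a constant, the same function works unchanged across projects — exactly the reuse the post argues for.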
-
Pinterest revolutionised internal documentation by adopting a Docs-as-Code strategy. 🚀

Docs-as-Code: a simple yet powerful approach to scale documentation alongside code. Let's understand the strategy in simple words and dive into the related tools.

Pinterest engineering teams faced multiple challenges in managing technical documentation. Here were a few common ones:
1️⃣ Outdated documentation.
2️⃣ Lack of doc centralisation, resulting in knowledge fragmentation.
3️⃣ Learning curve to adopt a new documentation tool.
4️⃣ Absence of a review process.

They solved these challenges by:
👉 Leveraging the common Markdown format for docs in all projects.
👉 Keeping the doc files beside the code files in every repository.
👉 Using CI/CD tooling to validate the docs.
👉 Building a centralized layer for rendering and discovering the docs.

Doc centralization drew a boundary between content and style. This allowed developers to focus on the content rather than dealing with specific style nuances. The centralized tool was known as PDocs, and here's how it worked:
🎯 Each package had a YAML file that defined the doc structure.
🎯 After a successful check-in, the tool scanned the YAML file in each repo.
🎯 It then rendered the recently updated documentation.
🎯 The docs were then indexed to improve the search experience.

They also developed a Wiki-to-PDocs converter that increased the amount of documentation by 20%. 🔥

Within two years, PDocs resulted in the following company-wide impact:
🌐 Improved satisfaction in internal surveys.
🌐 A better experience than existing wiki-style tools.
🌐 140+ doc projects from 60+ GitHub repos written by 80+ teams.

With the emergence of AI, the team has also developed features to chat with the docs in PDocs. Updates are also pushed in real time to knowledge-store providers, ensuring AI tools have the latest info.

Similar to Infra-as-Code, I believe Docs-as-Code would be a game changer for software teams. One no longer needs to rely on a subject-matter expert or lead developer. 💡 With LLMs writing both the code and the docs, it would surely accelerate development velocity.

If you have used a similar tool in the past, share your experience in the comments below. 👇 Also, do you think the product has the potential to become a SaaS offering, given that multiple companies face the same problem? 🤔

#tech #softwareengineering #softwaredevelopment
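The "CI/CD tooling for validating the docs" step above could be as simple as failing the build when a package ships code with no Markdown beside it. This is a hypothetical sketch, not Pinterest's actual PDocs tooling; in a real pipeline the `tree` mapping would come from walking the repository:

```python
def packages_missing_docs(tree):
    """Return packages with no .md file beside their code files.

    `tree` maps package name -> list of file names in that package,
    enforcing the docs-next-to-code convention described above.
    """
    return sorted(
        pkg for pkg, files in tree.items()
        if not any(f.endswith(".md") for f in files)
    )

tree = {
    "ads":    ["handler.py", "README.md"],
    "search": ["ranker.py"],  # no docs beside the code
}
print(packages_missing_docs(tree))  # -> ['search']
```

Wiring a check like this into CI is what keeps docs from going stale: a missing or orphaned doc blocks the merge the same way a failing test does.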
-
Most documentation teams are underutilised. Not because they lack capability. But because we treat them as post-production support.

In many product organisations, technical writers are brought in after features are built. They depend on Engineers and Product Managers to “explain what was done.” So documentation becomes:
• A translation exercise
• A cleanup job
• A rushed afterthought

And then we wonder why docs lag.

Here’s the uncomfortable truth: The issue isn’t a lack of writing capacity. It’s structural exclusion.

One simple shift changes everything: Include writers in feature discussions early. What happens?
• Writers understand the feature at the intent level — not the summary level.
• They spot usability gaps before release.
• They question unclear flows.
• They influence structure, not just describe it.

Documentation improves. But more importantly, the product improves.

If your documentation team only publishes, you’re underutilising product intelligence. So the next time someone asks, “Why do writers need a product walkthrough?” Ask instead: "Why are they excluded from decision rooms?"

Structure determines contribution. Capability rarely fails — architecture does.

#TechWriting #TechWriters #Documentation #TechDocs #Product #SDLC #Redefine #Redesign #Structure
-
Bad documentation is a business killer. And by "bad," I mean incomplete. Missing pieces. And speaking only to the converted.

There are two kinds of documentation:
1. The kind that makes it easy for someone to try your product.
2. The kind that helps people already convinced go deep into your product.

We see a lot of companies get the second kind right. The technical stuff is nailed. "Here’s how to integrate it into your stack. Here’s the API doc. Here’s how to use advanced feature XYZ.” All good stuff. Necessary. It should be there, but if that’s all you got… you got a problem.

Imagine being pumped to try a new tool. You go to the docs site to understand some of the use cases and SLAP! You hit a wall of tech jargon. No use cases. No workflows. No clear examples of outcomes you can get with the tool. It’s like going to a restaurant and being handed the recipe book. "Oh, that's cool," you might think. "But can I just have the baked alaska, please?"

Docs have to cater to people who are already bought in, for sure. Docs should also give curious newbies a reason to try the product. Docs should woo and impress curious visitors. Show readers the “how,” sure. But more importantly, you gotta show the “why.” Use cases. Workflows. Real-world examples. These things belong in your marketing AND in your docs.

A good docs site is part conversion tool, part customer enabler. It’s underappreciated and often overlooked. Build them right and docs can be one of the hardest-working assets your business can have. Make your documentation help people fall in love with your product, or at the very least, make it dead simple for them to say, “I get it. Let’s try this.”

Don’t sleep on your docs, y'all.
-
Ever looked at old code and thought, "Who wrote this? And why?", only to realize it was YOU? 🤦♂️

That’s why internal documentation is a lifesaver! It turns cryptic code into clear, maintainable logic. Here’s how to document like a pro:

🔹 File-Level Documentation: Start with a high-level summary. What’s the purpose of this file? Is it handling authentication, processing payments, or managing user data? Give future developers (including yourself) a clear idea of what’s inside before they even start reading the code.

🔹 Function-Level Documentation: Each function should answer three key questions:
✅ What does this function do? (Describe its purpose)
✅ What inputs does it take? (List expected parameters & data types)
✅ What does it return? (Explain the output)
This way, anyone can understand what’s happening—without guessing! (see example in the image below 👇)

🔹 Line-Level Comments: Not every line needs a comment, but complex or non-obvious logic does. Example:

# Using bitwise AND to check if the number is even (performance optimization)
if num & 1 == 0:
    print("Even number")

Even if it seems obvious today, your future self (or a teammate) will appreciate the clarity. 🚀

The Goal? Make your code self-explanatory so that debugging, onboarding, and refactoring become painless.

This is just one of the many best practices I cover in my new Become a Better Data Engineer course. If writing cleaner, more maintainable code is on your to-do list, this course is for you 🚀 https://bit.ly/3CJN7qd

Who else has been saved by well-documented code? Share your stories below! 👇

#DataEngineering #CleanCode #InternalDocumentation
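Putting the three levels above together in one tiny module might look like this (the module name and validation logic are illustrative, not from the original post):

```python
"""payments.py -- file-level documentation (level 1).

High-level summary first: this module validates payment amounts at
checkout, before anything is charged.
"""

def is_valid_amount(amount, max_amount=10_000):
    """Function-level documentation (level 2), answering the three questions.

    What it does: checks whether a payment amount can be charged.
    Inputs: amount (int or float); max_amount (int, default 10_000).
    Returns: bool -- True if 0 < amount <= max_amount.
    """
    # Line-level comment (level 3) for non-obvious logic: NaN compares
    # unequal to itself, so this rejects it before the range check.
    if amount != amount:
        return False
    return 0 < amount <= max_amount

assert is_valid_amount(25.0) is True
assert is_valid_amount(-5) is False
```

Notice how each level serves a different reader: the file docstring orients someone browsing the repo, the function docstring serves the caller, and the inline comment protects the next maintainer from "simplifying" a line that only looks redundant.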