The Ruby programming language has been a great part of my software development toolkit over the years. A recent find triggered a reflection on how it all began for me.
While doing some spring cleaning I ran across a box of old shirts from storage. Some of them were from programming-related conferences, but the one that triggered this trip down memory lane was the one from a conference I opportunistically attended on a whim.
The year was 2003 (or perhaps late 2002). I was part of a software team working in Java and C++ targeting early consumer mobile devices (such as the iPAQ). We had our own mini-language in development, and the lead developer on the project was looking for inspiration for creating an array pack function. He pulled out the API docs for such a function in what he described as a little-known programming language that was already quite popular in Japan at the time: Ruby.
I was curious now, and started exploring what I could find on Ruby and trying it out for some basic scripting. With a dose of serendipity, I was happy to discover that later that year the Third International Ruby Conference would be held right on my doorstep in Austin. At that point I wouldn't have gone out of my way to attend, but with an inexpensive conference coming to me and time on my hands, I said, "Why not?" So I, along with the team lead and some other friends, signed up and went to see what this obscure little language and community was all about.

It was fun, and different. I was still new to the language, still learning, and not yet using it professionally. There were probably fewer than 50 people there (we fit in a single hotel meeting room). I thought I remembered someone asking for a show of hands on who used Ruby professionally, as David Heinemeier Hansson described in this CoRecursive interview, but I'm not sure if he was there. Looking back at the schedule, he didn't give a talk that year. Ruby on Rails hadn't been released at that point, but he had been working on building Basecamp and extracting the framework for a first public release the very next year. In fact, at RubyConf the following year, Rails would be one of the attractions.

It is fun to look back on that as a time of potential. Not only was there no widely known project in Ruby at that time, there wasn't even a package manager. It was at this very conference that the attendees hacked together the initial implementation of RubyGems.
At RubyConf 2003, when put on the spot by David A. Black about the topic, Matz basically said, "If you build it, it will be included in Ruby core." That led Jim Weirich, Rich Kilmer, Black, and Fowler to spend the next night coding what would become RubyGems. They even demoed it at the conference. It has grown quite a bit since then and could now be called the de facto standard for library distribution in Ruby.
I started using Rails as soon as it was released (pre-1.0) and have used Ruby for many other things over the years. While I don't use Ruby/Rails exclusively, I have seen them evolve and they have been a valuable set of tools over the years.
Most developers use many tools that are well established with broad communities. This was a chance to remind myself that if you look around and try some new things now and then, you might learn a little and experience (or get to help build) something that ends up becoming something much bigger.
---

Back in 2016 I wrote about Architectural Decision Records (ADRs) in my post "Why did we do it that way?". These are lightweight documents stored in a software project's codebase, alongside the code, that capture point-in-time decisions about the architecture for future reference. Since then I have had projects that used them and others that relied on different ways of documenting decisions.
I recently ran across a great post from Kelvin Meeks on ADRs that gathers a lot of useful information and resources. I also like the project he links to as an example: it contains a good number of records and is still active at the time I am writing this.
In fact, since my original post, a GitHub organization for ADRs has been created that includes articles, resources, guidance, and tooling for using these records.
If you have ever lamented not capturing why a choice was made or want to improve how decisions are documented by having them live right along with the project(s) they describe, see if ADRs might be a good fit for your situation.
---

One of the early conversations I have with most teams I work with is about decomposition. There is often a high level of variation in the size, complexity, and other aspects of work in progress. This makes it hard to project when things will complete, when we will be able to get to new things, and how much of the in-progress work was essential versus deferrable.
From its introduction, I have used Richard Lawrence's excellent How to Split a User Story flowchart. I keep recommending it because I have found few better resources.
The flowchart guides you through splitting work items into smaller items, with the emphasis on those items remaining valuable on their own.
This flowchart was created in the context of Scrum and Agile teams, and a little of the terminology may be unfamiliar to those who are not well versed in those areas. You will see terms like Story and velocity that may require some translation. If I could change one thing, I would genericize some of that language. But don't let that stop you from getting value from this chart.
| Term | Translation |
|---|---|
| Story | Short for User Story originating in agile disciplines, but could be a Work Item, issue, idea, or other placeholder of desired value |
| velocity | Another Agile term similar to throughput, or how much value (released work) is created in a given period of time |
Most people will find section two easy to pick up and start applying right away. Say you have a ticket someone has added, and you want to make sure that things being planned and worked on are kept small and able to flow. The questions on the chart easily help identify dimensions for splitting based on performance, workflow, steps, etc.
The above decomposition makes some assumptions that the work item was already in pretty good shape and assumes you want to find smaller pieces for iteration/feedback or that some parts will deliver value more quickly.
Section one, however, attempts to help ensure it is a well-formed work item in the first place. Included is a check against the INVEST criteria, an excellent set of criteria for work items that you should explore if you aren't familiar with them already. There is also a reminder to check the relative size of the story compared to the amount of work that normally gets completed.
As an example of the size check, let's say your throughput (one possible substitute for velocity) is 5 items per week. If a new candidate item feels bigger than two of those items put together, you probably want to look at splitting it, as it represents a significant chunk of your weekly work tied up in one thing.
Section three is mainly a reminder to step back after doing any splitting and make sure the results still meet criteria like INVEST and can't be decomposed further. Once you are familiar with following this flowchart, you will be surprised how quickly you can apply these checks, and pleased with the positive impact on your planning.
---

Have you ever joined a project and, not long after, questioned why things are the way they are? Why was this language (or framework) chosen over another? Why do we (or don't we) organize things into microservices? Why are errors handled the way they are? What drove the choice of database, or API design, or external service?
These questions always come up. It's a natural part of understanding a new work environment. It happens every time. And I don't like being in the position of saying "I'm not sure why this was done/used" in the course of fixing something or implementing something new.
It is just as much an issue for those who have been on a project for any length of time. Why did we make a particular decision six months ago? Not being able to recall enough detail about past decisions isn't usually a deal-breaker for moving forward, but that context significantly helps in understanding how the project evolved to its current state. That understanding helps us make further changes without breaking or unintentionally invalidating past decisions.
We make decisions all the time. Some are big, some small. Some have big implications, some not. Projects that try to design/plan a lot up front make decisions early (sometimes with extensive documentation) but validate them later. Other projects decide just-in-time and may not document them at all.
I have seen such decisions documented in formal Word documents, Wikis, User stories (as additional details to the story), email, etc. What has been most useful is making sure they are easy to find, easy to add, and reasonably consistent in format.
If, like me, you are looking for a lean approach to experiment with, you might consider the Architectural Decision Record (ADR). These are simply text files checked in to a project's repository with a simple format that can be as sparsely or densely detailed as needed.
As described, ADRs seem to have just enough information: Context of the decision, the Decision itself (once made), Status (proposed, accepted, rejected, superseded, or deprecated), and Consequences of the decision. Since there are potentially some relationships between decisions (such as superseding) there is some maintenance effort needed to properly revise past decisions so the worth of that tradeoff should be part of any experiment using them.
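As a sketch of that format, a minimal ADR might look like the following. The title, numbering, and decision here are invented for illustration; real records will vary in depth:

```markdown
# 7. Use PostgreSQL for persistent storage

## Status

Accepted

## Context

The service needs durable, relational storage with support for
transactions. The team already operates PostgreSQL for other systems.

## Decision

We will use PostgreSQL as the primary datastore for this service.

## Consequences

Operational knowledge can be shared with existing systems. Features
requiring document-style storage will need to use PostgreSQL's JSON
support or a separate store, which would be recorded in a future ADR
that supersedes or extends this one.
```

Each record stays small, lives next to the code it describes, and can be revised (or superseded by a new record) as decisions change.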
In my experience, they are probably just as valuable in making a decision as in documenting it for the future. The process of describing the Context helps frame the decision, and writing the Consequences for a proposal helps communicate information not only for approval but also for generating additional work items or changes that may result.
A very public example of these that I have seen recently is in the Arachne Framework, a web development framework for Clojure. The project has a dedicated repository of ADRs since the project itself is composed of multiple repositories. Some of the records already present include:
Note that since this is a relatively young open source project (at the time this article was written) many of these ADRs documented proposals. By the time you read this, some of these may be accepted or even superseded.
I have been recommending these as an experiment to teams I work with and using these on projects with overall positive results. There is even a handy command line tool by Nat Pryce to help create/manage ADRs.
So, whether you are a solo developer or a full team and have been considering a simple way to track decisions, consider experimenting with ADRs or something similarly lightweight.
I'd love to hear experience reports or other approaches that have worked for you.
---

So you heard about Coderetreat and have the opportunity to attend one. Or maybe you read [5 Reasons Why You Should Attend a Coderetreat]({{ site.url }}/5-reasons-why-you-should-attend-coderetreat/), heard about it from a friend or coworker, or saw it listed in a community events calendar. In any case, you are intrigued, but you have questions or concerns. Are you ready to participate in one?
First of all, I recommend that all attendees (whether this is your first Coderetreat or you have been to many) do some familiarization and preparation. We will go over this the morning of the retreat, but it helps to have prepared beforehand.
Read about the purpose and philosophy of Coderetreat. If you are interested, also consider reading about the history of how it was created.
Familiarize yourself with the problem we will be working on together--Conway's Game of Life. There are a number of great resources out there if you look, including:
Read up on the following. While we will be learning and exploring areas of the following during the Coderetreat, it will help to have some familiarity before the day of the event:
Get your development environment ready (if you are able to bring a computer)
Here are some typical questions and tips to help you prepare for a Coderetreat.
In most cases, yes. However, if you have no prior experience and have not written more than a "Hello World" program in any language you might have trouble keeping up. Coderetreat is probably not the best place to learn the basic fundamentals of programming, but you don't need to be an expert (or even intermediate). We try to help those with less experience pair up with those who have more as best we can.
In this situation, here are some suggestions:
It is common for many participants to not have experience with some or all of these. That is one of the great things to learn and practice during the event. However, trying or practicing it out on your own before certainly helps! Here are some suggestions for practicing:
Try working through code Kata and focusing on using tests and following Test-driven development. There are some great Katas listed in places like http://codekata.com/ and sites that make it easy to work on problems applying tests such as CodeWars and cyber-dojo. I recommend starting with writing tests if you aren't familiar with using them and then try applying TDD after you are comfortable.
Pairing is difficult to practice unless you can find friends or coworkers who are willing. You can, though, mentally prepare for sitting and working with others, or try an online community oriented around giving feedback, such as Exercism.io. You can also practice reading public code on GitHub or other sites and think about how you would ask questions about the code or provide feedback in a constructive way (and in real time). Just find something that interests you in a language you know or are learning. Code reading is a helpful learning activity even on your own!
While it is helpful to prepare in advance, don't worry too much. These events are intended to be an experience of learning and practice. Most Coderetreat facilitators will do their best to pair the right people together and adapt the activities and pace of the overall event to the participants in attendance. If you have further concerns that you don't see mentioned here, contact the host/facilitator of an event you are considering attending and ask.
---

As I write this, it's that time of year when I'm preparing for Global Day of Coderetreat! If you write code--even if you are just starting out--you should consider taking part in one of these events. If you can't attend a global event near you this year, look for one of the many held throughout the year. Chances are, no matter when you are reading this, there is a Coderetreat event not too far away. Why should you care? Please read on!
Maybe you aren't familiar with Coderetreat. I encourage you to go read up on it. For those short on time: Coderetreat is an event for developers, coders, programmers--whatever you choose to call yourself--where you work with other developers to solve a fun problem with the intent of practicing coding techniques, design, and collaboration. It is designed to be fun but to get you out of your comfort zone in order to encourage new ways of solving problems. This is done by introducing various strategies around design and implementation techniques and constraints that encourage you to approach problems differently than you normally might.
Ok. Still with me? Maybe you are thinking it sounds good but you are still skeptical about what you'll get out of it. Here are my top reasons that you'll be glad you attended.
Coderetreat gets you away from the everyday environment where you normally work. This allows you to focus on deliberate learning. While you may certainly learn and practice ways of improving in your normal work, Coderetreat allows you to improve fundamental skills without other constraints that may impede learning.
Another benefit of practicing outside your daily work: The pressure to deliver is removed. This allows you to focus on how you approach a problem and the effect on the solution. Take the time to think through a problem, but also pay attention to the feedback you get from code, tests, etc., and make adjustments. This is something we typically don't do well enough when under pressure to deliver. And if we can practice these things more outside of work and develop better skills, then they will be easier to apply when we have schedules and deadlines to meet.
Working in a group and in pairs with everyone intent on learning has a great effect. While you can certainly get benefit out of exercises like code katas on your own, working with people of different skill levels, experience, and ways of thinking is powerful, and Coderetreat packs a lot of that into a single day. Just hearing what some of the other attendees learned or how they approached things is extremely powerful.
Simply put, you'll get some hands-on coding experience with people you don't normally work with. You will build relationships and connect with other people who are interested in augmenting their skills, craft, and profession.
You may have only heard of things like Pair Programming, Test-driven Development, SOLID Principles, refactoring, etc., or you may use some of them regularly. Coderetreat gives you a chance to work with people who have real-world experience, and for those with experience to practice mentoring. Often people with deep experience will participate in languages new to them and practice working with a beginner's mindset.
And for those interested in learning a new language, this is a great chance to learn with others (if you can convince someone else to work in your chosen language).
Hopefully something here is encouragement enough to try a Coderetreat. They are typically free to attend, with food covered by sponsors. It is a great learning experience, it is fun, and you can attend them over and over and learn something new every time.
---

I like the term feedback. As Wikipedia describes it:
Feedback occurs when outputs of a system are routed back as inputs as part of a chain of cause-and-effect that forms a circuit or loop. The system can then be said to feed back into itself.1
As software developers, we should always consider the value we get out of the many activities we use to deliver software. All too often someone champions the latest tool or practice because it is new, interesting, or just seems right. Or perhaps it is because everyone -- or just the right people -- seem to be doing it (you may have heard this referred to as cargo culting). And so we try out whatever it is, maybe getting value, maybe not.
I certainly advocate experimenting with something in the hopes of improvement. But along with this should go some hypothesis about what this new tool or technique will provide, so that we can validate it. For me, feedback is at the top of the list. I like to think about when and how we get feedback and how it will be used to make things better.
I especially like the definition of 'feedback' above because it encourages a "systems thinking" view. Just as employees can benefit from peer feedback and companies benefit from customer feedback, we software developers can benefit from our own feedback mechanisms. Feedback can come in many forms about different aspects of the product: Design, behavior, performance, maintainability, etc. We are part of a complex system where just part of it is a person at a computer making code changes using an editor or IDE.
Simple Feedback 02 by Trevithj, CC BY-SA 3.0 (Attribution-ShareAlike 3.0 Unported)
In this, the first of a series on developer feedback mechanisms, I start with one of the most commonly encountered developer practices that provide a feedback loop: Example-based Testing.
What better topic than this to start with an example!
I would have a difficult time finding a Java programmer that isn't somewhat familiar with JUnit. Pick almost any language or stack and there is a similar testing framework available. Fundamentally, such frameworks support writing code that is not intended for production release that exercises other code that is meant for production.
Imagine we have a Calculator class designed to evaluate strings with mathematical expressions. To gain confidence in the code, we can think of a specific interaction we need to make and the expected result and then turn this into an executable test. In many languages it would look similar to this Java example using the JUnit framework2:
```java
public class CalculatorTest {
    @Test
    public void evaluatesTripleOperandAdditionExpression() {
        // Set up the test
        Calculator calculator = new Calculator();

        // Call the production code
        int sum = calculator.evaluate("1+2+3");

        // Verify that everything is as expected
        assertEquals(6, sum);
    }
}
```
This test has a context: a Calculator object with no explicit configuration or prior usage. As described in its name, it is focused on the example of triple-operand addition, so the test uses the calculator object that way by passing it a string to evaluate. Finally, it asserts that the calculator correctly determined the operators and operands in the string and calculated the correct result.
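To make the example concrete, here is one minimal Calculator that would satisfy this test. This is a hypothetical sketch for illustration only (the production implementation is not shown here), handling just addition expressions:

```java
// Hypothetical sketch of a Calculator sufficient for the test above.
// It handles only addition expressions like "1+2+3".
public class Calculator {
    public int evaluate(String expression) {
        int sum = 0;
        // Split on '+' and total the operands
        for (String operand : expression.split("\\+")) {
            sum += Integer.parseInt(operand.trim());
        }
        return sum;
    }
}
```

With this in place, `new Calculator().evaluate("1+2+3")` returns 6 and the test passes; any expression involving other operators would be a new example for a new test.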
If you have done any automated testing you were probably introduced to testing this way. Many of the most popular testing frameworks were created with this approach in mind: JUnit, NUnit, test-unit, etc. However, while most commonly used to test lower level units of code (classes, methods, etc), you can also target collaboration between objects, modules, components, etc.
What may be new to you is the label for this style of test: Example-Based Test. But that is exactly what is going on here. All the components of the test -- the setup, using the code, and verifying what happened -- all are based on some example of what is expected of the test subject.
A programmer using this style of testing iteratively constructs example after example (building a unique test for each) until she considers the set of tests to be complete enough for the task at hand.
What feedback did we get in the example above? I can think of a few types:
"1+2+3"Calculator to perform this task: She is taking on the perspective of a userWhat don't we know yet?
Certainly we will need some more examples (and thus tests) to get more confidence that the Calculator does the right thing. But we just listed out some examples above as a start to that. And we might even be able to sit down with a potential user (even a non-programmer) that could one day want to perform addition through an application that uses this code and get some more examples.
Sometimes the setup, execution, or verification steps become challenging to write in a test. The experience of using the API being tested is yet another form of feedback about things like design, usability, and testability of the code. Based on the feedback of creating the test, we might want to change the design of the external interface of the code under test.
There are other ways to help address some of these challenges, such as Test-Driven Development, which I'll cover in another post.
Feedback is most valuable when it is timely. The later a problem is discovered from when it is created, the more likely its resolution is to be rushed, postponed, or dismissed. Fortunately, example-based tests are well suited for creating and running early in development and then running frequently during the lifetime of the product features they describe.
Consider when a developer creates an example-based test alongside the production code they are working on. All the things we stand to learn from creating and running the test are discovered pre-commit, when they are least expensive to address. And when the passing test and code are checked in together, the commit is largely self-documenting.
Some developer discipline is needed to execute these tests frequently. We need to trust the developer making the change to run relevant tests and confirm they pass. Conveniently, many software stacks today have tooling that will run targeted tests in the background based on the code being edited. If care is taken to use good testing practices, lower-level tests (especially unit tests) that are example-based can normally be kept quick enough that their regular execution does not become a bottleneck.
These days most development teams hopefully use some form of Continuous Integration or build server. This is a great way to run a whole suite of tests more frequently, such as on every check-in. That way we get feedback on whether the behavior of any part of the system changed unexpectedly. This is helpful when it is prohibitively expensive for an individual developer to always run comprehensive tests before check-in, and this practice keeps the feedback loop relatively tight as long as tests aren't run too long after check-in.
With so many great options for common software stacks, this shouldn't be too hard. Basically, you need to:
Once you and your team begin using these tests and gaining confidence, you can add tasks to Continuous Integration builds to execute and notify if problems are found.
As with most activities, Example-based tests can be misused and be more hindrance than help. I like to say that these are situations where they fail to provide good, timely feedback. So after you start using Example-based testing look for the following:
It is really difficult to come up with exclusions here, but one of the most often cited places to avoid writing example-based tests is for code you don't own or control. That being said, I know people who use example-based tests (at least temporarily) when evaluating a new library they are exploring.
What about cases where there are many potential inputs/outputs, and thus a large number of examples are possible (or required) to gain confidence in system behavior? This is not uncommon when testing algorithms, and there are complementary styles of testing that can be helpful in these cases that will be described in later posts.
Hopefully a feedback-oriented view of example-based testing helps you experiment and set expectations of value. I will certainly cover other testing-related topics in looking at ways that developers can leverage feedback.
Footnotes:
---

I don't exactly remember the first time I encountered a Law. No, not the type intended to govern societal behavior. I'm referring to Laws like those of physics, mathematics, nature, the social sciences, etc. It was probably in a class or a textbook in my youth, and most likely was the "law of gravity" or joking references to Murphy's Law. Wikipedia defines this type of Law as:
a universal principle that describes the fundamental nature of something, the universal properties and the relationships between things, or a description that purports to explain these principles and relationships.1
One of the core problems in software development, I believe, is remembering and applying the lessons of the past. With the pace of change, experimentation, career path trends, and background diversity of new developers we are challenged to keep up with the knowledge and wisdom that has come before us. Many formal mentoring/apprenticeship programs are highly focused on practical, hands-on application. How do we leverage the wisdom of those who have come before us?
We encounter principles all the time in books, talks, classes, web sites, etc. I have used them in training, in printed works, and to help guide me in how to approach certain problems. They sound authoritative, and with good reason: they have (in most cases) stood the test of time. In some cases there has been testing and verification, while in others it is assumed. Whether they are truly conjecture, hypothesis, theory, or law is sometimes debatable; still, they are principles that are held in high regard.
So as someone engaged in gathering people and technology together to deliver products and services, I think it is valuable to occasionally reflect on the Laws that relate to software. How could this be valuable, you ask? While some of these may remain without thorough validation, they help guide and shape the thought of our industry. But they can only help us reach a shared understanding of the fundamental nature of software if we remember and apply them.
With Software, there are some different areas to consider. Let's not only look at the technical side of things, but those areas involved with conceiving of ideas, collaboration, and execution from the people that make things happen.
Software doesn't write itself, at least not yet.
"organizations which design systems ... are constrained to produce designs which are copies of the communication structures of these organizations""
—M. Conway2
This law is often referenced in the agile community and was named as one of the contributing laws to the Scrum framework in particular. With it we consider how organization of people, approval, hierarchy, etc. affects the design and architecture of the software systems we build.
Conway's law can be found applied in various contexts:
Little's Law concerns predictability. MIT professor Dr. John Little describes the relationship between average Arrival (or Departure/Throughput) rate, the average time in the system (Lead Time), and the average # of items in the system (Work In Progress):
The long-term average number of customers in a stable system L is equal to the long-term average effective arrival rate, λ, multiplied by the (Palm‑)average time a customer spends in the system, W; or expressed algebraically: \[ L = \lambda W \]
More commonly in Kanban this is adapted as follows:
\[ \text{Avg Lead Time} = \frac{\text{Avg WIP}}{\text{Avg Throughput}} \]
An often overlooked aspect of Little's Law is that in order to hold true, certain assumptions must also hold true:
Holding to these assumptions allows Little's Law to hold true and allows greater predictability from the system. Understanding the relationships between the quantities helps us make good decisions regarding each. In Kanban in particular, practitioners are often advised to focus on these assumptions to guide their policies and actions, and to use the underlying formulas to understand how things are related rather than for direct calculation.
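The Kanban form of the law above is easy to apply. A small sketch, with made-up numbers purely for illustration:

```java
// Sketch of the Kanban form of Little's Law with illustrative numbers.
public class LittleLaw {
    // Avg Lead Time = Avg WIP / Avg Throughput
    static double avgLeadTime(double avgWip, double avgThroughput) {
        return avgWip / avgThroughput;
    }

    public static void main(String[] args) {
        // 12 items in progress, 3 items finished per day:
        // items take about 4 days on average to flow through the system
        System.out.println(avgLeadTime(12.0, 3.0)); // prints 4.0
    }
}
```

Keep in mind the relationship only holds under the stability assumptions mentioned above, so in practice it is more useful for reasoning about trade-offs (for example, lowering WIP to shorten lead time) than for precise prediction.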
Little's law can be found applied in various contexts:
Parkinson's Law is usually stated as the adage:
Work expands so as to fill the time available for its completion
Several corollaries or restatements also exist with the same underlying message. This principle is usually attributed to human nature. In Lean/Agile development circles it inspires focus through concepts like:
Ziv's Law states that:
software development is unpredictable and that the documented artifacts such as specifications and requirements will never be fully understood.3
This has also been referred to as The Uncertainty Principle in Software Engineering:
Uncertainty is inherent and inevitable in software development processes and products
Again, those familiar with the Agile Manifesto should be familiar with this law's influence on manifesto principles and focusing on conversations and iteration over formal specification.
Anyone who has ever planned anything nontrivial should be familiar with this adage that states:
It always takes longer than you expect, even when you take into account Hofstadter's Law.4
The recursive nature of this law is inspiration for, among other things, breaking down tasks to hopefully minimize the optimism or uncertainty that is still present. It also highlights the frustration many experience with the process of estimating work.
This law describes an effect that has several other names: the centipede effect/syndrome (inspired by the poem The Centipede's Dilemma) or hyper-reflection.
For a new software system, the requirements will not be completely known until after the users have used it.[^humphry]
The attribution goes to English psychologist George Humphrey, who wrote about the difficulty of performing tasks with conscious thought once those tasks had become unconscious habit. This effect may be somewhat related to the concept of Analysis Paralysis, where over-thinking becomes the cause of delay or inaction.
In Agile software circles this law has also inspired the practice of iterative implementation of small slices of work with a high level of collaboration and review with the customer/stakeholder.
"adding manpower to a late software project makes it later" —Fred Brooks, The Mythical Man-Month5
One of the most commonly heard quotes, although all too often heard only after more people have been added to a late project. Brooks himself apparently referred to the law as an oversimplification, but it does force us to think about the impact of adding people, and when. People have ramp-up time, require knowledge from others, and add to the complexity and communication needs of the implementing team. The fundamental message is that the costs are usually much higher than is often estimated.
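Part of that cost comes from intercommunication, which Brooks notes grows on the order of n(n-1)/2 for n people. A quick Ruby sketch of that growth:

```ruby
# Pairwise communication paths between n people: n(n-1)/2
def communication_paths(team_size)
  team_size * (team_size - 1) / 2
end

[3, 5, 10].each do |n|
  puts "#{n} people: #{communication_paths(n)} paths"
end
# 3 people: 3 paths
# 5 people: 10 paths
# 10 people: 45 paths
```

Doubling a team from 5 to 10 more than quadruples the possible communication paths, which helps explain why added coordination overhead can swamp added capacity.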
This project management principle describes how uncertainty changes over time6. Uncertainty tends to decrease over the course of a project, narrowed by reductions in variability. Variability is typically reduced through research and through decisions that remove uncertainty.
The Cone of Uncertainty can be found in applications like:
Note that there are some interpretations that question the validity and empirical basis of the Cone of Uncertainty.
Software isn't very useful without hardware, is it? How does it impact us?
Moore's Law is:
the observation that the number of transistors in a dense integrated circuit has doubled approximately every two years.7
While this is more of an observation than a law, it has influenced product cycles that anticipated or expected a certain level of computing power to be available when planning large-scale products. In recent years the pace originally cited has slowed. This has influenced many software producers to shift toward better use of multiple cores/processors instead of anticipating increasing power in a single one. This changes the approach used when scaling for increasing computing workloads.
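The original observation is simple compounding. As a back-of-the-envelope Ruby sketch (not a forecasting tool, just the arithmetic):

```ruby
# Project a transistor count forward assuming doubling every two years.
def projected_count(initial_count, years, doubling_period = 2.0)
  initial_count * 2 ** (years / doubling_period)
end

# Ten years at the classic pace is five doublings: a 32x increase.
puts projected_count(1_000_000, 10)  # => 32000000.0
```

Stretch the doubling period out, as recent years have, and long-range plans built on the classic pace come up short; hence the shift toward scaling out across cores instead.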
"It is impossible to fully specify or test an interactive system designed to respond to external inputs"
-- Peter Wegner8
Similar to Ziv's Law and Hofstadter's Law, this lemma speaks directly to our ability to thoroughly describe a system, whether by document or by executable specification (test), when we aren't fully in control of what is sent to the system. Unfortunately, most useful systems require interactivity and some degree of external input.
No doubt that Wegner's Lemma inspired our next entry.
Stated by John Postel who wrote a specification for the Transmission Control Protocol (TCP), this law/principle says:
Be conservative in what you do, be liberal in what you accept from others (often reworded as "Be conservative in what you send, be liberal in what you accept")9
It is not surprising that this comes from a specifier of TCP. In fact, other works such as RFC 1122 went on to recommend against sending messages with legal but obscure protocol features that might expose issues on the receiving end.
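The principle applies beyond protocol design. As an illustration (a hypothetical date-handling example of my own, not from the RFCs), a Ruby sketch that is liberal about input formats but conservative about output:

```ruby
require 'date'

# Be liberal in what you accept: tolerate several common date formats.
ACCEPTED_FORMATS = ['%Y-%m-%d', '%m/%d/%Y', '%d %b %Y'].freeze

def parse_date_liberally(text)
  ACCEPTED_FORMATS.each do |format|
    begin
      return Date.strptime(text.strip, format)
    rescue ArgumentError
      next  # didn't match; try the next format
    end
  end
  nil
end

# Be conservative in what you send: always emit one canonical format.
def format_date_conservatively(date)
  date.strftime('%Y-%m-%d')
end

['2003-11-01', '11/01/2003', '1 Nov 2003'].each do |input|
  puts format_date_conservatively(parse_date_liberally(input))
end
# All three print "2003-11-01"
```

The receiver tolerates variation; the sender never produces it.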
Of interest to those of us concerned with maintaining software products over time are the laws of software evolution formulated by Lehman and Belady10.
These laws are said to apply to classes of systems written to perform some real-world activity and adapt to varying requirements and circumstances in that environment. This is in contrast to programs written to an exact (comprehensive) specification that completely determines what the program can do.
These laws can help us understand concepts like Evolutionary Design, Technical Debt, and Building Quality In.
Software evolves more rapidly as it approaches chaotic regions11
There are some others that might be worth considering, but I have spent less time thinking them through.
Born out of challenges in urban planning is a class of problem called Wicked Problems. From Wikipedia, a Wicked Problem is:
a problem that is difficult or impossible to solve because of incomplete, contradictory, and changing requirements that are often difficult to recognize. The use of the term "wicked" here has come to denote resistance to resolution, rather than evil.
On the surface this resonates with me. Some of the resources below talk about applicability to software development, so I consider this worth further exploration.
Resources:
I find value in occasionally stepping back and re-reading these as a reminder. Sometimes this helps me think through a challenge I have encountered in a different way. Other times I end up just reinforcing my understanding. It is also interesting to reflect on just how validated each of these are, and whether there is more that can be done to prove or disprove these principles.
What laws, lemmas, or other useful conjectures should be considered alongside these? What have I missed?
]]> Exam by Alberto G. on Flickr, cropped, CC BY 2.0
One of the great challenges of adopting an Agile methodology, Kanban, or a similar approach, whether for developing software or another type of work, is evaluating how you are doing and what to improve. Also challenging is understanding how well you are improving (or not) over time.
Beyond built-in ceremonies used to solicit improvement feedback, there are some evolving methods of self assessment that may be useful to generate feedback on the depth of your implementation. Through such an assessment, you or your team will likely find a different view of your implementation that will inspire change.
In my occasional role as a "coach," teams would often ask how they were doing compared to all the others out there. Many individuals attend user group meetings and participate in online communities, getting ideas and suggestions from others. These teams would retrospect and perform other activities that generate ideas on how they could make things better. But still there was often curiosity... "Are we a good Kanban (or Scrum) team?"
Sure, Scrum teams (and many Kanban teams) have their retrospectives, Kanban teams may have operations reviews, and done well these help identify ways to incrementally improve. Over time we can look at metrics like cycle time to get insight into improvement.
Maturity Models are a popular idea. You can search and find various proposals for an Agile Maturity Model. There are also those who question them or question maturity models in general. I tend to share concerns about measuring an overall maturity level due to the tendency to oversimplify and not recognize when there are elements of good value in the current implementation.
This type of assessment is often performed by an Agile Consultant. While they can be based on any method (such as a maturity model), they are often custom and involve interviews, questionnaires, and observation resulting in a report and recommendations. I have been a part of these and while they are valuable, they are often performed without deep understanding of the team's context.
A number of people have been looking at a variation of assessment that focuses on depth of the implementation. These assessment approaches tend to be multidimensional as well as relative and used as a tool for a team and coach to guide improvement.
Borrowing from the often used iceberg metaphor, many adoptions focus primarily on some surface practices while glossing over or ignoring aspects of the approach that provide depth and are not easily seen. A depth assessment tries to highlight what areas are possibly too shallow and may benefit from developing greater depth of implementation.
The first of these that I encountered was the Depth of Kanban assessment tool described by Christophe Achouiantz and based on work by David J. Anderson, Håkan Forss, and others. I found that this approach requires more coach guidance, as a team without a deep understanding of practices and outcomes may have trouble assessing themselves in some areas. The descriptions linked to here even describe it as more of a coaching tool. The result is a ranking of depth in each of the core practice areas of Kanban (plus a desired-effects dimension), giving a coach and team areas in which to focus.
More recently, Mike Burrows, author of Kanban from the Inside, created a values-based depth assessment for Kanban that focuses the dimensions and questions on Kanban values instead of practices. Rather than rating descriptions, he has a well-written set of questions suitable for a spreadsheet or survey tool to collect ratings from the whole team.
The result is a ranking of depth around values, meaning the team might focus more on improving Flow and Customer Focus than on just Limiting WIP or Making Process Policies Explicit. Mike is building on his values-based assessment and other work through his new company, Agendashift.
For Scrum teams there are similar self-assessments I have seen described:
I'm still experimenting with the use of depth assessments and intend to follow up with an experience report in the future. I encourage you to take a look at some of the options mentioned here and try one yourself (or with your team).
]]> bios (bible), Robotlab, 2007 by Mirko Tobias Schäfer (cropped), CC BY 2.0
I spend more and more time writing things other than code these days. As a developer, programming is important but communicating well with others is equally valuable. Depending on what kind of role and environment they are in, a developer can easily find themselves crafting or contributing to a number of types of written communication.
For example:
Some of these have not always been common activities for me and have become more frequent over the years as responsibilities, roles, and companies have changed. As I have progressed, the need to write well has become increasingly important. If this resonates, you might (as I have) look for feedback mechanisms that can help.
When I am writing code I rely heavily on feedback mechanisms to help me end up with a good result in a short period of time. I'm heavily influenced by the technical practices, workflows, and lean/agile approaches I have learned over the years. For example:
Since those types of feedback have been so valuable when writing code, I looked for similar mechanisms when writing prose. If you have ever used a word processor you have probably used a spelling or grammar checker. Those are useful tools, but what else is available?
What follows are some of the things I have used or have been experimenting with.
This is a simple one. After writing a bit, when I feel finished, stuck, or unsure, a good walk or short break often gives me just enough rest to spark a new perspective or fresh view when I resume writing. Sometimes, if time allows, just sitting on it overnight does even more good, as it gives my mind some background processing time to think about what I've written.
This approach requires no one else and is fairly low cost as long as you manage time well.
A review from another person is some of the best feedback you can get. You get help with understanding, clarity, different perspectives, and sometimes new ideas. If you can solicit multiple reviewers, this is even more effective.
This comes at a cost, however, as other people usually have plenty of time constraints themselves. Often you will only get one round of this type of feedback.
As with Pair Programming, the benefits of having another person working with you in real time are huge. You get more than one perspective on the content, in-line editorial comments, and get to toss around multiple ideas sometimes before a single word makes it on the page.
This is most commonly done at a single computer with one person typing and the other guiding with ideas and real-time review. As with Pair Programming, you need a strategy for switching the person at the keyboard, but the end result usually requires far less additional review.
An enjoyable variant of this requires two devices and a bit of tooling. This involves two writers working on the same document at once. I have mostly done this using a Google Doc which allows real-time editing/review from more than one user. I have worked on documents with two of us writing different sections while a third person reviews and comments. This is a great way to use a group of people to swarm on a document and get it written/reviewed quickly. This has been helpful for proposals, blog posts, web site copy, etc.
This requires more than one person but can get you to solid written content more quickly.
Somewhat similar to failing tests or static analysis metrics, having some quantitative feedback while writing can help indicate when things are off track. In addition to spelling and grammar checking assistants there are tools that provide statistics and readability tests on writing that can be easily used.
There are a number of these to explore. Since I write so much content using Markdown, I usually use Marked 2 to preview my document as I save it. Marked 2 includes a few different views that display statistics such as:
These are the statistics for this blog post: 
Each of these requires a little learning to understand what it is telling you, but might give you an indication that the content could be simpler or otherwise improved. Also, when you make edits you can see the effect of your changes. As with any metrics, these don't guarantee that your writing is good or communicates well, but they can certainly help indicate when things are off track.
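If you are curious what is behind such numbers, most readability tests are simple formulas over word, sentence, and character counts. Here is a rough Ruby sketch of the Automated Readability Index, one example of the kind of test these tools report (my tokenization is deliberately naive; real tools are more careful):

```ruby
# A rough sketch of the Automated Readability Index (ARI).
# The score approximates a US grade level.
def automated_readability_index(text)
  sentences = text.split(/[.!?]+/).reject { |s| s.strip.empty? }
  words     = text.scan(/[A-Za-z0-9']+/)
  chars     = words.join.length
  4.71 * (chars.to_f / words.size) +
    0.5 * (words.size.to_f / sentences.size) - 21.43
end

sample = "This is a short sentence. Readability scores reward brevity."
puts automated_readability_index(sample).round(1)  # => 7.0
```

Shorter words and shorter sentences pull the score down, which is exactly the kind of nudge this feedback gives while editing.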
When integrated into an editor, this feedback can be almost automatic and run frequently. Without direct editor or previewer support there are still a number of tools to provide this kind of analysis. These are just a few:
Has someone ever asked "Do you hear what you are saying?" Sometimes just reading aloud (to prevent skimming) can help identify problems in clarity, flow, grammar, etc. Give it a try!
I have been trying another approach to this by using speech synthesis on my computer to read back what I have written while I focus on listening. I'm sure I picked up this idea from somewhere, though I can't remember where, and I have been enjoying it. I get to experience what I have written as a listener, and usually I try not to read along. This seems to help for a number of reasons, most likely because it has the benefit of trying out the content using another sense.
There is another bit of feedback that comes from this approach. While speech synthesis has improved in recent years, it is still less capable when complexity or word choice get out of control. I have found that when the speech tool has trouble pronouncing a word or making a sentence seem fluid there probably is a simpler way to write the text.
Getting this kind of feedback can be pretty easy. There are a number of speech synthesis tools out there, many inexpensive. Since I am on a Mac I use the built-in Text-to-Speech tool, which lets me highlight certain sections, paragraphs, or even sentences and have them read aloud from most applications.
As the saying goes: "Practice Makes Perfect". Regular blog posts can help, as can writing a little every day. In addition to blogging, daily journaling can be another easy way to practice and can be as simple as using text files, tools like Evernote, or a dedicated journaling application like DayOne.
Writing (effectively) is an important skill for developers to have. Developers used to many different feedback mechanisms while writing code can find similar types of feedback when writing prose. Experiment with some and get in regular practice. Your teammates, collaborators, users, and customers will appreciate it!
]]>Have you Deliberately Practiced anything this week?
Practice does not make perfect. Only perfect practice makes perfect. -- Vince Lombardi
While perfectionism can impede or prevent pragmatically getting things done, striving for perfection in practice can help us get better at our regular performance of a skill or activity. The key idea behind Deliberate Practice is to build "muscle memory" by deliberately doing something with proper feedback and/or coaching to ensure learning and correctness. While you will probably slow down to practice this way, the result should be an improvement in normal performance.
Despite Vince's best intentions, "perfect" practice probably won't result in actual perfect performance but it should get closer.
As a software developer who also coaches various aspects of software practices and process, I don't always find myself writing code in any given week. I still want to keep my skills sharp when I'm not coding, and even if I were doing active development this week, I would still look for time to apply some deliberate practice outside the normal pressures of software delivery.
This week it was a Programming Kata called the Numbers In Words Kata since I needed to keep things short and only had time to practice on my own. I could have attended a Coding Dojo as well.
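For anyone curious what that kata involves, the goal is converting integers into their English words. A partial Ruby sketch of one possible starting point (covering 0 to 999; hyphenation and larger numbers are left for the next practice session):

```ruby
ONES = %w[zero one two three four five six seven eight nine ten eleven
          twelve thirteen fourteen fifteen sixteen seventeen eighteen
          nineteen].freeze
TENS = %w[_ _ twenty thirty forty fifty sixty seventy eighty ninety].freeze

# Convert an integer in 0..999 to English words.
def in_words(n)
  return ONES[n] if n < 20
  if n < 100
    words = TENS[n / 10]
    words += " #{ONES[n % 10]}" unless (n % 10).zero?
    return words
  end
  words = "#{ONES[n / 100]} hundred"
  words += " #{in_words(n % 100)}" unless (n % 100).zero?
  words
end

puts in_words(42)   # => forty two
puts in_words(107)  # => one hundred seven
```

Repeating a small exercise like this with tests, alone or in a dojo with others, is where the deliberate feedback comes in.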
For whatever it is you do, there are likely some great ideas for how you can practice. For example, a writer might find some of these ideas valuable.
Will you make time to try some deliberate practice this week? Most likely someone has already identified some forms of practice for what you do.
If you are a developer like me you might try Cyber-Dojo or Codewars. A writer would look for other forms of practice. There are even kata for coaching. Find something that works for you and make time this week to improve.
]]>Global Day of Coderetreat 2014 is fast approaching! As I write this there is only a little over a week till November 15, 2014 when groups of software developers and programmers around the world will participate in day-long sessions of Deliberate Practice and community known as Coderetreat.
This year I will be facilitating the session in Austin that Agile Velocity is hosting. Come join us, meet some good people, and improve your craft!
For more information and a link for FREE registration see this blog post.
]]>After a long hiatus, a good number of years now, I'm eager to do a little writing again if nothing else just for me. I don't think anything from my past blogging efforts (other than company blogging) is relevant or timely anymore, so I'm starting from scratch again.
 Rodeo USA by Rene Schwietzke, on Flickr
Here we go!
]]>