Intense Minimalism (https://intenseminimalism.com), by Erin Casali, on design, leadership, complexity, psychology, business, and more.

The data behind remote work
https://intenseminimalism.com/2025/the-data-behind-remote-work/
Mon, 21 Apr 2025

While some organizations are unfortunately still making decisions about on-site office work based on old management principles, or as a distraction, many are trying to evolve toward a better future of work. Since organizational design is one of my interests and remote work is my specialization, I often find it useful to have some numbers at hand that show impact and trends at a larger scale. Here are some data points that might be helpful.

Is remote work effective for companies?

For companies that adopt it, either fully remote or by employee choice (where the role type allows), remote work is correlated with general productivity gains.

📈
+5–10%
productivity

💶
up to $10k
savings

While in the benchmark economy, on-site and remote work are nearly equally productive, in the counterfactual remote work is 7–10% more productive.

M. Delventhal, A. Parkhomenko (2024) Spatial Implications of Telecommuting

The relative productivity of WFH imply a 5 percent productivity boost […] due to re-optimized working arrangements.

J. M. Barrero, N. Bloom, S. J. Davis (2021) Why Working from Home Will Stick

Companies can save up to $10,600 per employee that works remotely.

US Career Institute (2024) 50 Eye-Opening Remote Work Statistics for 2024

Is remote work effective for employees?

The benefits at an individual level are major. While we shouldn’t discount that not everyone can work from home, it’s also important to frame remote work as work from “anywhere”, not necessarily home: it gives many people flexibility, saves time, and on average reduces expenses.

ā±
72 min
avg saved / day

💶
up to $12k
savings

Workers who can telecommute experience welfare gains, and those who cannot suffer losses. Broader access to jobs reduces wage inequality across residential locations.

M. Delventhal, A. Parkhomenko (2024) Spatial Implications of Telecommuting

The worldwide average is 72 minutes in commute time saved every day.

The average employee can save up to $12,000 per year by working remotely.

US Career Institute (2024) 50 Eye-Opening Remote Work Statistics for 2024

Are there other benefits of remote work?

Commuting, work spaces, and fixed work hours are a limitation for many people, especially minorities. Evidence shows that remote work is also beneficial for these groups, and for society at large.

🧑‍🦽‍➡️
+9%
disability employment

🏁
+33%
minorities applications

🌳
-54%
greenhouse gases

WFH increased full time disability employment by 9%, implied increase of 36% in computer occupations. Many individuals with a disability have substantial work capacity, and WFH provides the means to realize this capacity.

WFH likely to not only benefit individuals with a disability, but also improve public finances through higher tax revenues and reduced expenditures.

N. Bloom, G. B. Dahl, D. Rooth (2024) Work from home and disability employment [slides]

Women are more likely to WFH after childbirth. Regions with greater remote work increase experienced a decrease in child penalties.

P. Zarate (2024) Remote Work and Child Penalties [slides]

Remote workers cut greenhouse gas emissions by up to 54% by not commuting to an office five days a week. One day of remote work cut emissions by 2% while 2 to 4 days of remote work cut emissions up to 29%.

US Career Institute (2024) 50 Eye-Opening Remote Work Statistics for 2024

A discrete change in job posting to remote status (holding all else constant) is associated with an approximately 15% increase in applicants who are female, a 33% increase in applicants with underrepresented minority (URM) status, and a 17% increase in applicant experience.

D. H. Hsu, P. Tambe (2025) Remote Work and Job Applicant Diversity

Other insightful data points

While not directly about remote work, there are some data points that inform remote work as well, to frame it better in the wider organizational and societal context.

80%
of employees who said they received meaningful feedback in the past week were fully engaged — regardless of how many days they worked in the office.

4x
The boost in engagement from meaningful feedback is four times greater than the boost from having the right number of days in the office.

Gallup Hybrid Work Study, USA, February 2025

This is particularly relevant as it shows that the factors that lead to engagement are the same, and can be acted on, even in a remote environment.

Erosion of employee connection to organization’s mission or purpose:
On-site (remote-capable): -6%
On-site (non-remote-capable): -5%
Hybrid: -12%
Exclusively remote: -9%

Gallup Hybrid Work Study, USA, February 2025

This shows how the cultural disconnection isn’t related to remote work, even if some managers like to put the blame there. The decrease is across the board, regardless of role, which indicates a cause that lies outside the remote variable. We could speculate that this is due to the generally adversarial stance a lot of companies took toward employees (e.g. layoffs).

What’s the impact of RTO mandates?

Surely all the companies enacting Return To Office (RTO) mandates do this to improve the company’s standing? Unfortunately, it’s routinely shown that not only is there no financial benefit, but the effects on people’s morale are significant.

⚖
0%
stock change

😓
29%
struggle recruiting

🧑‍💼
outflow
senior staff

No stock market reaction to deviation in policy choice [between fully remote and RTO companies].

S. Flynn, A. C. Ghent (2024) Determinants and Consequences of Return to Office Policies

We do not find significantly different financial profitability or stock market valuation for RTO firms after RTO.

Y. Ding, M. Ma (2024) Return to Office Mandates [slides] (University of Pittsburgh)

We find significant declines in employees’ job satisfaction after RTO mandates but no significant changes in financial performance or firm values.

Y. Ding, M. Ma (2024) Return to Office Mandates [slides] (University of Pittsburgh)

And if employees’ job satisfaction is lowered, companies are not going to have an easy time afterward:

Almost half (42%) of firms who mandated returns have experienced higher than normal employee attrition, with 29% now struggling to recruit.

Unispace survey (2023)

The return-to-office (RTO) mandate at Microsoft led to a significant outflow of senior employees to competitors.

D. Van Dijcke (2024) Return to Office and the Tenure Distribution (University of Michigan)

What are the real reasons for RTO mandates?

Given there are no major benefits in forcing people from remote work back to an office (hybrid or full-time), where are these mandates coming from? Let’s have a look.

📉
stock
blame shifting

šŸ¢
real estate
investments

Many firms issued their RTO mandates after stock price crashes, including UPS, Amazon, Boeing, Nike and SNAP.

Y. Ding, M. Ma (2024) Return to Office Mandates [slides] (University of Pittsburgh)

Our results are consistent with managers using RTO mandates to reassert control over employees and blame employees as a scapegoat for bad firm performance.

Y. Ding, M. Ma (2024) Return to Office Mandates [slides] (University of Pittsburgh)

See any correlation between these two data points?

Three in four business leaders surveyed (75%) indicated that they have increased their real estate portfolio in the last two years

Unispace survey (2023)

Office utilization decline: stabilized at only 50% of pre-pandemic levels, while office vacancy rates have nearly doubled.

S. Krause (2024) The Impact of Work from Home on Commercial Real Estate [slides]

40% of companies cited a desire to make better use of the office space they pay for as a reason behind their RTO policies.

Resume.org (2024) 1 in 3 Companies Are Forcing Return-to-Office Due to Existing Office Lease Agreements

What do you do if you have real estate in your portfolio, you are an executive at a company, and the vacancy rates are so high? Of course, RTO is an obvious answer.

What are the trends?

While it’s difficult to predict major trends, since remote work is also correlated with the global economy and the needs of the workforce, we still have some numbers from the past we can refer to.

🪓
28%
remote days

⛰
93–98%
remote preference

🌱
5–25%
remote + choice

Exclusively remote: 50% future expectation, 60% preference
Hybrid: 25% future expectation, 33% preference
On-site: 24% future expectation, 8% preference

Gallup Hybrid Work Study, USA, February 2025

1 in 10 companies will lessen or eliminate RTO policies upon lease expiration.

Resume.org (2024) 1 in 3 Companies Are Forcing Return-to-Office Due to Existing Office Lease Agreements

5% fully remote
20% employee’s choice
43% hybrid (some time in office)
32% full time in office

Flex Index Stats, USA, 8,675 companies, Q4 2024

In 2019, approximately 7% of paid workdays were worked remotely, while by 2025, this figure increased to 28%.

In 2019, remote work was primarily viewed as a perk; by 2025, it has become a standard expectation, with 98% of employees preferring some form of remote work.

K. McDermott (2025) The U.S. remote working statistics you need to know for 2025

Work-from-home numbers have held steady throughout most of 2023 [25%, a 5x increase from pre-pandemic’s 5%]. And according to remote-work experts, they’re expected to rebound.

B. Schulz (2023) 2023 was the year return-to-office died

Fully remote jobs have also increased over the last two years from 10% in Q1 2023 to 15% in Q4 2024.

We found that new, fully in-office job postings declined from 83% to 68% during 2023. And over the course of 2024, the rate of new, fully in-office jobs decreased to 61%.

R. Half (2025) Remote Work Statistics and Trends for 2025

What are some remote work myths that we know are false?

There are a lot of myths related to remote work beyond the basics of productivity and efficiency for companies.

Myth: leadership is difficult when remote
Ethical leadership is as effective, if not more effective, in remote work environments. These leaders have a different set of skills compared to in-office leaders, and value wellbeing and output more than presence and time.

Remote supervision does not negatively impact the relationship between ethical leadership and affective commitment and, in some cases, may be positive.

E. R. Serviss, et al. (2023) Ethical leadership in a remote working context

Myth: on-site employees are more engaged
Turns out, they are not. Instead, they are the least engaged group. While surely this can vary from organization to organization, and we need to recognize that remote employees in an organization with bad remote policies are likely to be dragged down, the numbers are quite clear as an overall trend.

Remote-capable, on-site employees have experienced the largest drop in engagement since 2020. These individuals have a job that could be performed with remote flexibility, but instead, they are required to work on-site every workday. On-site employees whose job is not remote-capable have the lowest engagement.

Gallup Hybrid Work Study, USA, February 2025

Are there downsides in adopting remote work?

No approach to organizational design is perfect. As much as there are big advantages, it’s also important to notice that the shift to remote work requires adopting new techniques and processes to make sure the limitations are actively compensated for.

36% of remote workers found their onboarding experience at a new remote company confusing.

US Career Institute (2024) 50 Eye-Opening Remote Work Statistics for 2024

It’s important to recognize that we can’t just reshuffle normal office onboarding into remote scenarios. The processes need to change, with a combination of clearer guidance, better documentation, more explicit connections with existing employees, and in-person meetups to get to know the team.

Problems with fully remote work arise when it’s not managed well.

K. Crawford (2024) Study finds hybrid work benefits companies and employees

This has been shown a few times now: while there can be rare exceptions, most of the time remote work fails to show positive results only when the company expects to adopt it without any change or effort. Onboarding, training, promotions, processes: all of these need to change in a remote environment.

Top Challenges:
31% less access to work resources and equipment
28% feel less connected to my organization’s culture
24% decreased collaboration with my team
21% impaired work relationship with coworkers
18% reduced cross-functional communication and collaboration
17% disrupted processes

Gallup Hybrid Work Study, USA, February 2025

This study is relevant as it outlines the challenges — mind, challenges, not blockers — to remote work. A good remote-first organization will assess this internally and identify strategies to address each of these areas accordingly.

The five levels of engagement for your organization’s AI strategy
https://intenseminimalism.com/2025/the-five-levels-of-engagement-for-your-organizations-ai-strategy/
Mon, 31 Mar 2025

A lot of organizations and decision makers these days are facing challenging choices in their strategies, having to balance all the variables and unknowns linked to the current developments in AI (LLMs, GenAI, etc.). It’s very easy to be overwhelmed, or to get swayed by news and the pace of innovation.

This is very understandable: the fear of not having invested enough, of not having the right skills and competences — and thus being beaten by competitors and new startups — is leading to a lot of reactive strategies. This is nothing new in tech and, more broadly, in innovation, but it’s at the forefront these days in the field of AI, given how quickly it happened and how fast money is being invested.

As different companies have different needs, it’s important to define different levels of engagement. The model I’m outlining here is a pragmatic one that I’ve found effective in helping move strategic discussions forward. While not groundbreaking, as it applies to many kinds of innovation, it’s still a solid foundation to start with. From here, different organizations can work to define their own goals, path, and level of granularity.

Levels:

  • Level 0 — No engagement
  • Level 1 — Consumer
  • Level 2 — Build with Libraries
  • Level 3 — Build and Extend
  • Level 4 — Research

Level 0 — No engagement

The organization has decided to not use anything AI related. This includes both internal tooling for employees, as well as building anything in the product or services that the company delivers.

When is this a good choice?
It’s a good choice when, after a thorough assessment and analysis, the company has decided that current AI tools have no benefit in either the short or the long term, and similarly that no AI tools on the market can boost employee productivity.
Some companies might also decide to wait for the winning market strategies to emerge, and invest later in order to be more careful and efficient.

What to pay attention to

  1. While this can be seen by some as an organization being in denial, there can be good reasons for this choice, and I’d always inquire about them before making a judgement call. The important thing is that the decision was made on solid ground, and not by being reactive or skeptical at a personal level.
  2. Have periodic checks that this decision is still the right one.

Level 1 — Consumer

The organization has approved the use of new AI tools, like coding assistants, general LLMs, etc., for general use by employees. There is, however, no active plan to include anything AI-related in the product or service of the organization.

When is this a good choice?
This is likely a good option for a lot of organizations, especially ones that don’t build software, as various tools are being rolled out and integrated in many productivity suites already being used.

What to pay attention to

  1. Keep focus on productivity, not adopting everything for the sake of adoption (but experimentation is good).
  2. Don’t just open access: support the adoption with onboarding and proper learning documentation that is specific to the organization’s needs and ideally to each individual role.
  3. Assessing risks is important, but try to strike a balance between the requirements of security and privacy, and the ability of employees to explore and adopt effective tools.

Skills needed
As this is at a more consumer level, it’s likely that all the tools introduced have their own learning and onboarding paths. As such, no major skills are required. However, it’s important to have people inside the company who have a deeper practical understanding and can support the internal onboarding strategy.

Examples

  • GitHub Copilot
  • General use LLMs (e.g. Claude, Gemini, ChatGPT, …)
  • Built-in augmentation of existing tools (e.g. Miro AI, …)

Level 2 — Build with Libraries

The organization has decided to incorporate some degree of AI tooling into their products, specifically using existing products and libraries. In this scenario the product teams have identified good customer-centric use cases, made a cost/benefit assessment, and decided that existing libraries fulfill the core needs.
This can mean either that external APIs are used (i.e. a contract with a third party) or that a library is deployed in-house and used largely as-is.

When is this a good choice?
There can be many reasons, but two are most common. In one case, the existing libraries are an ideal or almost ideal match for the customer use case. In the other, a more custom solution, while potentially better, has a cost that can’t be justified.

What to pay attention to

  1. Take some time to build a prototype and test out the libraries for the actual design that is going to be delivered.
  2. Don’t get too deeply paired up with a specific library or API: build with the flexibility to switch to a different one later.
  3. Make sure that the extra features are properly isolated when built, and possibly discuss whether it’s valuable to have a user-facing switch to turn them off (some customers may need them off for compliance reasons).
  4. Especially with LLMs, be careful about content moderation for the output.
  5. Note that deploying an LLM in-house has a whole different set of cost considerations compared to calling a third-party API. That’s why there are solutions that host and run LLMs for you if you don’t want to operate one yourself (e.g. the Claude API, ChatGPT API, etc.).
  6. Consider the ethics of the company providing the API, and possible legal implications.
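Point 2 above, keeping the flexibility to switch to a different library or API later, usually means putting a thin abstraction layer between product code and the vendor. Here is a minimal illustrative sketch in Python; the class and method names (`CompletionProvider`, `EchoProvider`, `SummarizeFeature`) are hypothetical, not any real SDK:

```python
from abc import ABC, abstractmethod


class CompletionProvider(ABC):
    """The minimal interface our product code depends on, instead of a vendor SDK."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...


class EchoProvider(CompletionProvider):
    """Stand-in provider for tests and local development; a real one would wrap
    a vendor API client behind this same interface."""

    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"


class SummarizeFeature:
    """Product code takes the interface, so swapping vendors is a one-line change
    where the provider is constructed."""

    def __init__(self, provider: CompletionProvider):
        self.provider = provider

    def summarize(self, text: str) -> str:
        return self.provider.complete(f"Summarize: {text}")


feature = SummarizeFeature(EchoProvider())
print(feature.summarize("quarterly report"))  # echo: Summarize: quarterly report
```

The point of the design is that none of the product code imports the vendor library directly: only the concrete provider does, which also makes the user-facing off switch from point 3 easy to implement (a provider that refuses requests).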

Skills needed
Everyone on the product side needs to know or learn how to build with the specific AI tool that has been identified: product people need to know how it works in detail, designers to explore the specific design patterns and edge cases, and engineers to explore libraries and integrations. Some general ML knowledge is ideal, but the libraries often provide enough guidance to implement them, as the plan is not to have any heavy customization.

Examples

  • Claude and other providers’ APIs
  • Third-party LLM providers (e.g. Lambda AI, Groq AI, …)
  • RAG libraries to optimize the LLM output (e.g. LlamaIndex)
  • Open standards for interoperability (e.g. ONNX, …)

Level 3 — Build and Extend

The organization is doing more R&D and composing existing approaches, for example taking a foundation model and fine-tuning it for a specific purpose. Libraries and APIs are likely still used, but there’s a deeper understanding of ML, AI in general, and the necessary tooling.
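The core pattern here, reusing a pretrained model whose weights stay frozen and training only a small task-specific component on top of it, can be illustrated with a toy sketch. Everything below is a synthetic stand-in (the random “backbone”, the data, and the label rule are invented for illustration, not a real foundation model):

```python
import math
import random

random.seed(0)

# Hypothetical stand-in for a pretrained foundation model: a frozen feature
# extractor whose weights we never update. In a real setup this would be a
# checkpoint loaded with an ML library such as PyTorch or TensorFlow.
W_frozen = [[random.gauss(0, 1) for _ in range(4)] for _ in range(8)]

def features(x):
    # Frozen forward pass: ReLU of fixed random linear projections.
    return [max(0.0, sum(w * v for w, v in zip(row, x))) for row in W_frozen]

# Trainable task-specific head: a single logistic unit fit on our own data.
head_w = [0.0] * 8
head_b = 0.0

def predict(x):
    z = sum(w * f for w, f in zip(head_w, features(x))) + head_b
    z = max(-60.0, min(60.0, z))  # numeric safety for exp
    return 1.0 / (1.0 + math.exp(-z))

# Toy labeled data standing in for the organization's real use case:
# label is 1 when the inputs sum to a positive value.
inputs = [[random.gauss(0, 1) for _ in range(4)] for _ in range(32)]
data = [(x, 1 if sum(x) > 0 else 0) for x in inputs]

for _ in range(200):  # plain gradient descent, head parameters only
    for x, y in data:
        g = predict(x) - y  # gradient of the log-loss w.r.t. the logit
        f = features(x)
        for i in range(8):
            head_w[i] -= 0.1 * g * f[i]
        head_b -= 0.1 * g

accuracy = sum((predict(x) > 0.5) == (y == 1) for x, y in data) / len(data)
print(f"train accuracy: {accuracy:.2f}")
```

The asymmetry is the whole point of this level: the expensive part (the pretrained backbone) is reused as-is, while the in-house effort and data go only into the small adaptable part.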

When is this a good choice?
This is ideal when there’s a competitive advantage in developing more custom solutions, or when the specific user need requires it, and the cost/benefit analysis shows this to be the right option.

What to pay attention to

  1. Extending requires much deeper AI and ML expertise, so while specialists are good to have at the previous levels, at this stage they are definitely a requirement.
  2. Assessing the cost/benefit can be difficult if there isn’t pre-existing expertise in the topic, both at strategic and implementation level.
  3. On-premise deployments are expensive and should be evaluated carefully against the target scale and performance of the platform, with attention to quantized models.
  4. The ethics of the data behind the foundation model used, and of any new training data, and possible legal implications.

Skills needed
At this level, deeper expertise in AI toolsets and theory is needed, especially on the development side. Ideally not just developers, but also data scientists with a specialization in this area.

Examples

  • Hosted LLM providers (e.g. Lambda AI, RunPod, …)
  • ML libraries (e.g. TensorFlow, PyTorch, …)
  • Orchestration of LLMs (e.g. dstack, …)

Level 4 — Research

Not many organizations have the funding and skills to operate at this level. This is where the company is actively competing to develop new models and solutions, either in specialist fields (e.g. healthcare) or generic ones (e.g. general-purpose agents). They are investing in the innovation of the AI field itself, and likely competing for top AI talent.
These companies are the usual big names: Anthropic, OpenAI, DeepSeek, Google, Meta, etc.

When is this a good choice?
Unless the company is competing directly in pushing the boundaries of innovation in the AI field, it’s unlikely to be a good idea. As this is pure R&D work, with the current cost of hardware and experts, it’s a very expensive bet and needs to be backed by a strategy that can afford to invest in it.

What to pay attention to

  1. The ethics of the data used, and possible legal implications.
  2. Expenses to build the infrastructure.
  3. Supply-chain planning to be able to deliver in time.
  4. Competitor innovation and announcements.
  5. Papers published by institutions and academies.

Skills needed
Best-in-class researchers, data scientists, and engineers with deep expertise in AI. Likely also PhDs in the field or similar.


Identifying the right level of engagement, and where necessary a plan to move from one level to another, is very important to cut through the noise of the hype cycles. The notes above try to summarize a very complex and evolving space: as such, it’s important to acknowledge that such an abstraction has limitations, both in the summarization and in the level of knowledge needed.

At the same time, it provides a pragmatic scaffolding to ground medium to long term strategic work in the field. You can build on it.

Thanks to Erlend Davidson for reviewing the content.

Learning and leveraging AI as interaction material in your product
https://intenseminimalism.com/2025/learning-and-leveraging-ai-as-interaction-material-in-your-product/
Fri, 28 Mar 2025

While there are many ways to work with different AI tools, one that originates from design can be particularly effective: AI as interaction material.

What do we mean by materials?

While knowledge of the materials used is prominent in all kinds of design in the physical space, the topic stays more in the background in the digital space. (Google’s Material Design hasn’t helped: it has made any search on the subject difficult.) Yet, I’ve always believed that with the right framing it’s an extremely useful concept.

An interaction material is an abstraction of the technical components of a digital interaction where we focus on the cognitive properties (how we perceive it), technical capabilities (what it can do), and technical limitations (what it cannot do).

This isn’t a general article so I won’t get into too much detail, but for example these can all be considered interaction materials:

  • Web browser rendering engines (Firefox, Chromium, WebKit, …)
  • Libraries to perform searches (SQL, Elasticsearch, Lucene, …)
  • Programming languages (JavaScript, PHP, Java, …)
  • Animation libraries (GSAP, Lottie, Three.js, …)
  • And of course: LLMs.

Why interaction materials matter

The simplest way to explain interaction materials is to think of a brick. To effectively use a brick to build things we need to know its properties and its limitations (e.g. max load, temperature limits, etc.). Knowing different kinds of bricks allows us to pick the right one for the thing we are building right now. We don’t need to know how a brick is made to know its properties and use it effectively in designing a building.

In the same way that good architects don’t need expertise in the building craft, product leaders don’t need to know every technical detail about how their materials are made to leverage them effectively. A good stopping point is when we know something as an interaction material: cognitive properties, technical capabilities, and technical limitations.

We don’t need to know how to build LLMs or other AI tools to make good decisions about using them. We do need to know them as an interaction material. Anything short of that isn’t enough.

How to learn about AI as a material

It might not be intuitive how to get a grasp of something at an interaction material level given it’s something intangible. But there are many techniques to gain the knowledge needed:

  • Try out as many variations as possible
  • Try it out… on something real(ish)
  • Read resources at one level deeper
  • Check the libraries
  • Talk with people with direct experience

Try out as many variations as possible

Especially now that we are in what can be framed as the pioneering years of GenAI, it’s important to try out as many tools as possible to find out exactly how they work and how they solve problems (and possibly find some that fit your needs). Check their free plans, check the open-source options, see directly what it takes to go from zero to using them.

To explore their cognitive properties, try to use them a bit. Follow their tutorials and documentation to start with, and ideally some writing by people that have used them before. See what choices they made in how to interact with these models and tools. Is it text only? Guided? Other modes? How fast? How does it feel using them? Try to get a sense of the choices they made to reach this specific interface.

To explore capabilities and limitations, try to do things that are very specific, or very generic. See what the tool does. Try things you wouldn’t do if you had a goal. The goal at this phase is stress-testing: see if it breaks, and if it does, in which ways.

Try it out… on something real(ish)

This might seem obvious, but the next step is to try the model or capability on something real in the product’s context, though not directly on your product. This means asking: “How would I use this for something actually productive?”. You can reach out and see how other people are using it, or look for examples in the industry. Try to find people who explain exactly how they use it in your context. Focus on real examples with documentation, not people who just say what would be ideal or stay too abstract.

While the goal is similar to the previous one, contextualizing it in your own work is important to grasp the nuances. Often I try things in a generic way and think “oh yeah, it does X”, but then when I actually try to apply “X” to something real… that’s where I find the limits of an LLM or GenAI product.

Even if you don’t plan to actually use the LLM or AI product long term, it’s a good thought experiment to find ways to think about horizons and strategic improvements. If you are working on a product, you can also think about personas and customers of that product and imagine how they would use AI or things outside your product to solve their problems.

Read resources one level deeper

While going all the way to learning how to “make” AI tools is not feasible for everyone, it’s often good to go one level deeper. Move from the “user” to the “last mile builder”. How are they building it into their own products? Can you find articles by people implementing these libraries that explain how to do it? Can you read forums where they discuss the challenges they are finding?

The key here is not to learn how to make the AI tool (again: if that’s your thing, please do!) but to see the rough edges just under the surface, and to understand how the thinking works. Something that might look easy and simple for the final user might instead be a lot of swearing on developer forums and months of work, rather than just a plug-in library.

Check the libraries

This is something you’ll likely find out by doing the previous step: what are the libraries people are using? Look into them, and the specific beginner tutorials in their documentation on how to get started.

Talk with people with direct experience

This can happen earlier too, but it’s important to talk to people who are using these tools, not just people who write about them. Is there any team inside your company already experimenting? How are designers who have done it before thinking about it? Engineers? Product managers? Of course, also discuss with more senior people who define strategies, but don’t miss the people with direct experience.


There are of course many other ways to explore and learn a new interaction material like AI; what’s important is that you focus on its three key factors: cognitive properties, technical capabilities, and technical limitations.

It can also be useful to prepare a short learning plan before starting. The process above can get very meandering given how open the discovery process is, so having an idea of the specific actions you want to take (e.g. “Try 4 different LLM customer-care SaaS products”, “Identify 2 libraries I could discuss with my engineering team”) and maybe time-boxing them can be very effective.

I also assume many people reading this article are already halfway through a similar learning path. In that case, my advice is to use the three key factors and the learning steps above to see if there are any gaps that might be useful to cover.

The important thing is to reach a rounded understanding of it: the knowledge of an interaction material.

Thanks to Saielle DaSilva for reviewing this article. 

A deep dive into trust with the trust equation
https://intenseminimalism.com/2025/a-deep-dive-into-trust-with-the-trust-equation/
Thu, 13 Feb 2025

We need to improve our trust. We are an organization based on trust. Trust is one of our values. Safe teams are based on trust. These are all claims that come up often in organizations. Many people intuitively realize the importance of trust, and similarly many leadership books and theories require trust to work.

55% of CEOs consider a lack of trust in business a key threat

PwC (2016) Redefining business success in a changing world

It's one of the soft skills — and yes, let's reclaim the term — that we need most for people to work together. And like all soft skills, it is, by definition, hard to measure.

Yet, trust is one of the hardest things to define accurately.

Measuring trust and its impact

Gallup's take is to use 'engagement' as a proxy for 'trust'. It's easier to measure, and at a large enough scale it's possible to get interesting data. They don't use the word trust, avoiding it almost to an extreme, but it's obvious that the things they measure are all underpinned by it: the employee-leader relationship, the employee-employee relationship, and the employee-customer relationship.

The differentiating factor in this variability is the quality of management, which explains 70% of the variance in team engagement.
Top-quartile business units achieved 23% higher profit than bottom-quartile units

J. Harter (2024) World’s Largest Ongoing Study of the Employee Experience

One of the elements that comes out of this is that money can't buy it. So trust, performance, and output, as highly important as they are, won't change by promising bonuses.

Engagement cannot be created through financial incentives.

R. Pendell (2023) Employee Engagement Strategies

The results indicate that the association between salary and job satisfaction is very weak. There is less than 2% overlap between pay and job satisfaction levels

T. Chamorro-Premuzic (2013) Does Money Really Affect Motivation?

Even more, incentives have a negative effect. To be clear: by incentives in this context we mean things like bonuses: if you reach X you'll get Y, if you are in the high-performance bracket you get an increase. Not base salary or foundational perks.

For every standard deviation increase in reward, intrinsic motivation for interesting tasks decreases by about 25%

T. Chamorro-Premuzic (2013) Does Money Really Affect Motivation?

But back to trust.

Compared with people at low-trust companies, people at high-trust companies report: 74% less stress, 106% more energy at work, 50% higher productivity, 13% fewer sick days, 76% more engagement, 29% more satisfaction with their lives, 40% less burnout.

P. J. Zak (2017) The Neuroscience of Trust

Those working in high-trust companies enjoyed their jobs 60% more, were 70% more aligned with their companies’ purpose, and felt 66% closer to their colleagues

P. J. Zak (2017) The Neuroscience of Trust

While it's not information directly usable in a workplace, we have found that it's oxytocin driving social and trust behaviour. And oxytocin lowers with stress and rises with a number of factors.

The eight behaviours to foster trust

The research on oxytocin led by P. J. Zak, together with further studies, helped identify eight factors that foster trust in organizations and groups:

  1. Recognize excellence — make sure recognition happens close to the goal achieved, and that it's tangible, unexpected, personal, and public. Not everyone enjoys public recognition, though (check with them), so find the right form for each person.
  2. Assign soft challenges — while a lot of stress is bad, we know from flow theory that there's a sweet spot: a goal that is challenging yet achievable.
  3. Give people autonomy — it should be obvious nowadays that micromanagement doesn't work, and yet, here we are. Autonomy is very tightly linked to trust. It's also important for managers to acknowledge that there are many ways to reach a goal. Unless they are mentoring someone, they should let people find their own way and help them stay focused on the end goal.
  4. Allow teams to self-organize — this is autonomy applied to teams. This is also the part that embraces diversity and makes it a powerful driver of performance. The team should be able to let people shine in their top skills, creating a mesh where everyone picks the parts of the job that suit them best.
  5. Embrace transparency — hiding things intuitively leads to distrust, and yet so many organizations have defaults that make documents "locked" and inaccessible. Change protocols, share openly, make everything except personal sensitive data and regulated data open. You all work for the same company; there's no need for documents to be locked.
  6. Support socialization — it's important to be intentional about relationship building. This is true at an individual level, but organizations can also put effort into helping people connect. It's especially important for remote workers, because offices tend to be passive about it: people just "bump into" each other.
  7. Facilitate growth beyond work — this is a hard one for many people who think that work and personal life should be separate. While this is true in the sense that one should not negatively affect the other, it's also very myopic of organizations and managers to care about the person only for as long as they stay with the company. Growth goes beyond the role and the workplace, and notably, retention is higher for managers who care more about the people reporting to them.
  8. Be vulnerable — this is a tough one, as lots of old management literature teaches the opposite, but here's the thing: trust thrives on vulnerability. Only secure people ask for help, and only good organizations are safe enough to allow people to ask for help (and not be marked as "underperforming"). This is anchored in one of the deeper human impulses: cooperation.

These factors represent a more granular take on the self-determination theory by E. Deci from the 1970s: autonomy, relatedness, competence, motivation, fears. This is unsurprising, and more importantly it's excellent cross-validation of both approaches.

The trust equation

If you prefer something shorter and easier to apply, and you feel the self-determination theory isn't specific enough about trust, you can use the trust equation from S. Drozdeck (2003). While not based on the same extensive research as the behaviours and theory above, it's still a useful shorthand for thinking through and exploring good practices.
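
Written out (my paraphrase, using the four variable names discussed below; Drozdeck's exact notation may differ), the equation puts the three trust-building traits in the numerator and the trust-destroying one in the denominator:

```latex
\text{Trust} = \frac{\text{Credibility} + \text{Reliability} + \text{Authenticity}}{\text{Self-interest}}
```

The division is the point: even strong credibility, reliability, and authenticity are quickly eroded when people suspect someone is acting for personal gain.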

Credibility is how much the person seems to have the skills and experience to perform the task or provide the support needed. It's a pretty simple criterion, but sometimes people with the right skills get stuck here because they aren't perceived as having them. The action-perception gap also usually grows larger in bigger organizations, as well as in remote organizations. For credibility it's not just a matter of doing the work, but of being seen doing the work. This is because trust is a relational dimension, not a transactional one. This links to 'competence' in the self-determination theory.

Reliability is the connection between saying something and completing it. Sometimes people think this means a 100% success rate, but that's not the case: raising a blocker or an issue with the team or the manager is also part of reliability. Adjusting and adapting is too. It's also often underestimated how much it's a two-part thing: a manager who doesn't set clear expectations and tasks is as much of an issue as someone not being reliable in completing them. Reliability is impossible if people aren't aligned. Reliability is also not only about the "big" things. If someone always completes the big tasks (e.g. the big quarterly project) but keeps dropping the smaller ones (e.g. they forgot to update the title of that guideline you briefly discussed), it might not have an impact on overall productivity, but they certainly won't be seen as someone who can be counted on. This links to 'autonomy' in the self-determination theory.

Authenticity is probably the most nebulous, but also something people pick up on intuitively and often unconsciously. It's also often confused with the misleading idea of "bring your whole self to work" (a risk for many minorities). This variable is about being aligned with oneself: showing a personality that is consistent between work and socialization, being transparent in one's actions, and so on. This links to 'relatedness' in the self-determination theory.

Self-interest is a peculiar variable, as it's meant to express whether the actions of an individual are done for the common good or for personal gain. In short: selfishness. There's no trust possible once the suspicion of self-interest makes its way between two people.

Mind that all these variables aren't absolute judgements of the people involved; they are all about perception. It's not a matter of being credible, reliable, authentic, or self-interested. It's how others perceive that trait. One could embody all of these traits in an exemplary way, but if they aren't perceived as such by others, all is in vain, and it's effectively as if they are not.


As is often said, it's also harder to gain trust than to lose it. Losing it can take a moment, but regaining it after such an event takes much more effort. The thought of the betrayal will likely linger in people's minds for a long time, and that leads to diverging interpretations when things are in the gray area. And most things are there. For example, forgetting to reply in time in a high-trust relationship is likely to be forgotten quickly and in practice leads to a reminder; but forgetting to reply in a low-trust relationship validates the pattern and likely won't get a reminder, with the other person instead doing it themselves.

Performance and shadow of brilliance
https://intenseminimalism.com/2025/performance-and-shadow-of-brilliance/
Mon, 10 Feb 2025

One pattern that sometimes emerges in feedback, and when assessing performance with more senior people, is what I call the "shadow of brilliance". What happens is that during feedback or reviews it emerges that there's something to work on: a piece of negative feedback, something that isn't going quite well, or maybe something that didn't land well with others. So far, nothing special. With the right support and in the right environment, these are good opportunities to grow.

Here’s the catch: this isn’t negative feedback like any other. When looking closely, it becomes clear that the issue is tied to one of their most developed skills, something they truly and deeply excel at.

Some examples I’ve encountered, anonymized:

  • A designer is wonderful at organizing workshops. People are almost excited when they run them, they really move the work ahead by miles, and the outcomes are always really good overall. At the same time, this person got some feedback that they seem to produce very limited design output, and some would appreciate more visually detailed blueprints.
  • A business analyst is incredibly detail-oriented. They work very efficiently too, and they can produce powerful and insightful analysis in very short amounts of time. At the same time, this person got some feedback that they seem to be missing the big picture, and sometimes their work overwhelms people.
  • A CPO is extremely efficient at reviewing processes and writing. Their work saved the company lots of time that the people reporting to them can use for more important things. Their preference for documentation kept transparency high at all times. At the same time, this person got some feedback that they felt distant and sometimes dismissive when the answer to a question was already written down somewhere.

At first glance, these all seem like easy problems to solve. Get the designer to prepare more visuals, have the analyst summarize more, and have the CPO stop writing so much. These would all be superficial fixes, however, and could turn detrimental, as they would negatively impact their most powerful skills.

Let’s dive deeper.

How to tell if it’s a shadow of brilliance

First of all, it’s important to notice that this usually emerges with senior people. The key difference is that normal feedback, when reviewed together with the full picture of the person, stands on its own and can be effectively addressed without impacting any positive skill. Even more, if the person isn’t performing well at all, then fixing and growing is what’s needed.

Instead, feedback that is a shadow of brilliance is feedback that, in the full picture, reveals itself to be a byproduct of one of their most developed skills.

In short, two criteria:

  • There’s a skill where the person excels
  • There’s negative feedback on something that is linked to that skill

Sometimes the link is evident, sometimes it takes more digging and more experience working together, but once surfaced, it becomes clear how they are related.

What to do with a shadow of brilliance

It seems like a case of "if the only tool you have is a hammer, treat everything as if it were a nail". But is the solution always just taking the hammer away, or stopping its use in some instances?

The first thing is to never try to fix this negative feedback in isolation. Doing so can make the skill everyone is happy about less effective in unpredictable ways. Instead, when discussing possible ways to address the feedback together, keep their skills in view and make sure that any change doesn't negatively affect them.

The second possible approach is to just embrace it. This advice is not something some people, especially managers, like to hear because it’s counter-intuitive: the simple approach is that if it’s a problem, fix it. But unless the issue is major and critically affecting people, are you really willing to trade a marginal improvement for lost brilliance in a desirable skill?

Let’s go back to our examples above:

  • For the designer, we can pair them with someone whose skill is creating detailed designs, or, if that's not possible, see if there are ways to create the kind of output some people expect, or… review with the people who provided the feedback what they are missing from the work. Sometimes even just a chat can solve a problem! At the same time, review whether there's a reason they shy away from detailed designs, or whether it's just external perception.
  • For the business analyst, we discuss and it surfaces that they wouldn't feel they had done a good job if they didn't explore all the detailed possibilities. So, instead of cutting their work short, we devise better ways to organize the content, maybe even splitting it into two documents, in order not to diminish the depth but to reframe it for different audiences. Instead of limiting their ability, we developed their writing and communication strategy.
  • For the CPO, we ran a workshop with their reports to identify the different preferences and styles of collaboration, and to elicit a mutual understanding of skills. At the same time, the CPO also started holding open office-hour slots that people can book, and instead of always letting their reports take the stage when presenting, they started to MC these events.

As these examples show, none of these actions requires them to become smaller and limit their brilliance: instead they find strategies, some direct and some indirect, that lead to solutions everyone is happy with. Yes, sometimes it might mean using that specific skill less, but that's not always a given.

You can also see from the examples that these actions aren't always fixes for the problem, nor do they grow the person's skills. Some are simply lateral-thinking ideas to address the other person's feedback in the best possible way.

Ultimately, the most important thing is not to dim the bright light, but to let it shine and support it. Things are rarely zero-sum games.

Thanks to Tutti Taygerly for giving me the spark for this article.

What are soft skills? Let's claim back the term
https://intenseminimalism.com/2025/what-are-soft-skills-lets-claim-back-the-term/
Sun, 09 Feb 2025

The discussion about "soft skills" being a bad word choice keeps coming up over and over. Sometimes it comes from a good place, sometimes from a place of misunderstanding. It's often paired with suggestions to find different definitions, different words. But everyone who thinks this, I'd like to invite to a deeper self-reflection on why they believe that "soft" is a bad word.

In short: let's claim soft skills back.

I have a few reasons to suggest this, which I hope will sound compelling to you as well. Let's get into it.

Soft is a good thing. Soft pillows. Soft landings. Soft touch. Soft approach. In many different fields and industries soft is not just a good thing, but a very desirable thing.

Soft is more difficult than hard. I know… WHAT!? This seems like a mind-twister, but that's where the definition came from: soft and hard were paired in 1972 because hard things are rigid, static, fixed — they are easy to measure! But soft ones? Try to measure the circumference of a pillow or the diameter of a cloud. Soft was picked exactly because it was difficult. They didn't know how to measure it, to the point that they even snarked:

"In other words, those job functions about which we know a good deal are hard skills and those about which we know very little are soft skills"

Soft is human. Why would we want to deny that? Isn’t it worrying that we are trying so hard to deny that humans are soft, squishy, and definitely not metallic robots? Being soft is a wonderful human trait. Let’s embrace it.

Soft personalities are better. All things equal, would you rather deal with someone soft, or someone hard? Most people will reply soft, and the others will likely be asking "do you mean soft as in spineless?" and "do you mean hard as in sincere?". Sure, exceptions exist, but there's a reason why if you search for "soft personality" it's all about vulnerability, humanness, and understanding, and if you search for "hard personality" you find challenges, difficulties, and drama. Which one would you like to work with?

Soft is vulnerable. And vulnerability is powerful. People who show their vulnerabilities are better managers, better at dealing with people, better at creating a great climate in a group. People who are hard and put up armor? Not so much. Everything becomes a struggle. Sharing is difficult. Feedback is difficult. Growth is difficult. Teamwork is difficult.

And finally, have you ever wondered why nobody ever blinked at "software" (even to the point of claiming "software is eating the world") and yet here we are talking about finding a different name for "soft skills"? Ask yourself: what's different between the two? Here's a hypothesis:

Soft is considered feminine. Woah, I know. Some of you will go "nah", some will go "duh", and some will go "why are you making this one about feminism too?!? Leave me alone!". Yet, we can't ignore this aspect. A lot of people find "soft skills" feminine and "hard skills" masculine, and that unfortunately brings in all the sexism and toxicity with it.

Some people reach this point and they decide to change the term. They don’t want to deal with toxicity, and they’d rather focus on the content of soft skills. It’s fair, I can’t blame them. Some of them start adopting the term Emotional Intelligence (EI) or Emotional Quotient (EQ), but even there, I’m not sure it’s considered more neutral.

And you might not think this is a reason, so this point doesn't matter to you. But if you believe it's right, then I would note that this is a small part of the same argument for equality. We won't achieve equality by working around the toxicity — insert here the poignant essay by Ursula K. Le Guin.

We achieve equality even with small choices of words, by raising awareness that these skills are good. And they are soft.

This article was originally published 1 Jul 2020 on Medium.

A quick primer on designing with AI
https://intenseminimalism.com/2024/a-quick-primer-on-designing-with-ai/
Sat, 12 Oct 2024

Where I need to say "AI" because it seems that almost the entire world has forgotten that AI is the field while LLM is the technology; but really, this article is about LLMs.

A short background

I've been working with chatbots for a long time: my MSc was a framework for an emotional virtual assistant, and it got converted into a patent (let's ignore for a second that it was stolen and published not under my name) that has been cited over 200 times by the likes of Apple, Google, Microsoft, Samsung, and Intel. I also have an interest in psychology and intelligence, and my current working theory is aligned with embodied cognition.

While I haven't done 'building' work in the current wave of LLMs, I still try to stay as up to date as I can on the field, reading sometimes complex papers, or the explanations of them written by many good people, with a product and design angle. This is because I believe that to design, it's important to know the material we work with. And LLMs are a new material.

So what are LLMs?

This question can have many, many answers, but for the goal of using them as a material I think one good definition is: LLMs are language black boxes that output a prediction of what follows everything that was written before.

This is why I prefer this phrasing:

  • Language — they work because they were trained on a lot of text data of people talking with each other (e.g. the web and chats)
  • Black Box — people who are deep in the work on LLMs know how they work (even if the understanding of how everything contributes to the output isn't quite there, i.e. explainability), but for any practical product use case — for people who build "on" and not "in" — it's better to think of it as a black box.
  • Prediction — at their core, LLMs are predictive: the next statement is a best guess at what is expected to come (expected by the training data) after what has been written. Think of this: generative models are all predictive; what changes is only what they are predicting (i.e. what comes after your text, what image comes up from the noise, etc).
  • Written Before — the reason why LLMs took the shape of chatbots is almost incidental, due to the training data being a lot of people talking with each other. But it's in many ways a "trick": you can imagine a single conversation as a single flow of text that occasionally stops being predicted so you can add another piece of text before the prediction continues.
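
A toy sketch can make that last point concrete. Here `toy_predict` is a stand-in I made up, not a real model; a real LLM would return the most likely continuation of the text it is given:

```python
def toy_predict(text: str) -> str:
    # Stand-in for an LLM: a real model would return the most likely
    # continuation of `text`; here we just return a fixed placeholder.
    return "Assistant: ...predicted continuation...\n"

def chat_turn(transcript: str, user_message: str) -> str:
    # A "chat" is one growing flow of text: each turn appends the user's
    # message, then asks the model to continue the whole stream.
    transcript += f"User: {user_message}\n"
    transcript += toy_predict(transcript)
    return transcript

t = ""
t = chat_turn(t, "Hello!")
t = chat_turn(t, "Tell me more.")
# `t` now holds the entire conversation as a single continuous text.
```

The chat UI only makes it look like separate messages; underneath, the model is always completing one long text.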

A term that I find very effective is also stochastic parrot. This is because LLMs don’t really know what they are doing. They are just predicting, repeating a blurb of words from the training data. There’s no intelligence or reasoning in it (at least, in the 2024 generation of LLMs, maybe in the future they will find a way as it’s the current next problem they are trying to solve).

Another way to think about it is to realize that LLMs don't give answers (they can't reason); they give examples of what an answer to that question would look like. It's like someone creating an impressive prop for a theatre play. It might be so close to the real thing that it can be used in place of the real thing, but occasionally it becomes obvious it's just a prop.

Designing with LLMs

Before starting

The question any designer or product person should ask before adding LLMs into a product is: does it add real value to our customers?

This might be challenging as in some cases this feature has been pushed to the forefront by market forces, expectations, and marketing, but in general we should always try to connect to the real value they can add.

The core LLM design principle

The first and core principle of designing with this technology is: LLMs hallucinate.
(Well, lots of things based on neural networks do, but let's stop here for the scope of this article.)

This means that any solution that leverages this tool needs to account for the fact that the output might not be accurate, which usually means one of two things:

  • Make sure to hand the output back to the user for review before taking any action.
  • Make sure that errors in the outcomes are as inconsequential as possible, can be undone cleanly and easily, and won't cause distress to the user.

If we understand this, we realize that we need to set the right expectations with our users, as well as be accountable for any error that might occur.

When we say that they hallucinate, we mean that the core technology (LLMs) hallucinates. On top of that, different products take different approaches to creating guardrails. For example, search engines verify the output statements with their own search queries to show sources, generative outputs use the framework of the software itself to limit their range (i.e. dashboard outputs, or design systems, or theming modules, etc), automation integrations show a draft to review before activating, and so on.

The other LLM design principles

  • Quick Undo — make it safe to try things and roll back (i.e. a clear undo action, or a drafting space, or an unpublished item).
  • Enhance Features — instead of thinking of the LLM as the product on its own, think about how it could augment existing features (i.e. have the LLM write a filter for you inside the filter feature, not an LLM chat interface for your existing product).
  • Show Workings — sure, LLMs feel almost like "magic" when they work, but trust is low, so it can be useful to show how the output was reached when possible (i.e. show source links for an answer).
  • Quick Feedback — have an effective feedback loop where people can quickly mark outputs that aren’t effective and have a pipeline to review them internally (i.e. dislike button on an answer, then review monthly to improve the answers).
  • Output/Refine Loop — for more complex problems, don’t provide one-and-done actions but create iterative loops where the output can be refined with multiple back and forth (i.e. generate an item, then ask to change and tweak that item until it’s good enough).
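
To make the review-before-acting and Quick Undo principles concrete, here is a minimal sketch of a draft-review-undo pattern. All names, including `call_llm`, are hypothetical placeholders, not a specific product's API:

```python
def call_llm(prompt: str) -> str:
    # Hypothetical placeholder: a real implementation would call an LLM API.
    return f"Draft reply for: {prompt}"

class Draft:
    """Wraps an LLM output so nothing is applied before human review."""
    def __init__(self, content: str):
        self.content = content
        self.accepted = False

    def accept(self) -> None:
        self.accepted = True

applied: list[str] = []  # the "real" system state

def propose(prompt: str) -> Draft:
    # Hand the output back to the user as a draft: nothing is applied yet.
    return Draft(call_llm(prompt))

def apply_if_accepted(draft: Draft) -> None:
    # Only reviewed-and-accepted drafts touch real state.
    if draft.accepted:
        applied.append(draft.content)

def undo_last() -> None:
    # Quick Undo: rolling back must be cheap and safe.
    if applied:
        applied.pop()
```

Usage would look like `d = propose("summarize this ticket")`, then, after the user reviews it, `d.accept()` and `apply_if_accepted(d)`; `undo_last()` cleanly reverses the change. The shape matters more than the names: the LLM output never mutates real state directly.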

One LLM, many LLMs

There are different LLMs; they can be trained on different data, with different performance, and work differently. They can be augmented, fine-tuned, pre-prompted, etc. This is the step after: once you have designed the experience you want, the kind of interaction and use, you are likely to work with a specialist who can create the black box best suited for that design — and of course, this can be an iterative loop between design and engineering (or R&D). This is also why it's advisable to try out different LLMs and see how they respond — again, understand the material.

Why do we add AI?

In general, try to remember that, hype aside, people want solutions to their problems, to save time, and to get results. LLMs, and anything else that comes out of the AI field in the future, are tools to get there, not the end story. Yes, in the short term it might be valuable to highlight "AI features", but leave that to marketing.

And as always, work closely with good engineers who have a grasp of this material, and prototype in short loops.

Further readings

Thanks to Erlend Davidson for reviewing the AI background of this article.

The Myth of the 'Missing' Remote Work Culture
https://intenseminimalism.com/2024/the-myth-of-the-missing-remote-work-culture/
Thu, 26 Sep 2024

The data about the effectiveness of letting people choose remote work if they want to is at this point overwhelmingly in favor of remote work. Yet, a lot of managers and organizations like to say it's "better for the company culture" if people are in the office together. They can claim that because culture is inherently a word with no specific definition, and while everyone has a general idea of what it means, ultimately it can't be measured: this makes it difficult to prove them wrong.

Yet, it's also obviously wrong to anyone able to look beyond the short-sighted boundaries of old organizations. We now have decades of evidence that online communities exist, and they exist at every scale: from the small guild of friends who meet exclusively online to play, to major cultural phenomena that travel across the whole planet. Remote culture can obviously thrive.

A quick primer on culture

While I don’t want to dive into a deep exploration on the meaning of culture, given the topic it’s also important to give it at least some scope, in a way that can be used effectively in organizations.

First of all, it's important to realize that when we talk about "company culture" we don't mean the general meaning of culture. That's just too wide. We need something useful within the scope of an organization.

Lots can be said, but here’s a simple way to frame culture in an organization:

  • People relationships — culture needs connections to spread
  • Implicit and explicit shared knowledge — culture shares information (memes)
  • Implicit and explicit shared beliefs and values — culture needs foundational principles to base itself on
  • Implicit and explicit behaviours — culture is how people act and interact, not just information

The emphasis on both implicit and explicit is important: there might be a major disconnect between the two (which is a problem), like a company that says it cares about people but then reduces their flexibility and doesn't invest in them.

With this definition we can more clearly see what we can work on to support the creation and spread of it.

Building remote culture

Let's start with a critical point: remote community and culture are different from office community and culture. I know it sounds like an obvious statement, but in my work as a remote consultant I still often find people going "but we used to have almost 100% participation in our 'Friday Fun Hour' and now people barely show up". Yes, that's because before, they were captive in the office. Remote culture can't be fostered in the same ways, even if some of the foundational aspects are the same.

Remote also doesn't have the decades (centuries?) of evolution that the office space has in its favor, so we are still shaping these ideas. One of the main aspects is that offices are also designed to facilitate encounters and discussions — we could open a sad note on open spaces, but you get the point.

So, going beyond the part of culture that is implicitly created by working together, remote culture needs to be curated more explicitly. In this sense there are many things to work with. The general advice is to frame everything as an "experiment": run it for a sensible amount of time, assess its success, run a retro, and then tweak, proceed, or dismiss. And on to the next.

Here are some ideas to play with:

  1. Run "team forming" workshops where principles, values, and commitments are established by the team itself. Of course you'll be guiding this and writing the final copy, but the idea is that people feel part of the team and committed to it by having co-created the team's foundation.
  2. Be very explicit about distinguishing work time from socialization time. Remote needs explicit socialization time, while in small offices this often happens naturally, with people filling the gaps when they are not working.
  3. Run a weekly meeting where you can check work and so on, but where there’s also some time for socialization (5-10 minutes).
  4. Use some automation for "social" messages. This could be, for example, a Donut automation that posts a question every week. Adoption might not be great at the beginning, but it sometimes builds up later (it took one of my teams about 6 months before they started replying… now they reply every week).
  5. Run a monthly "Connect" activity. This could be half an hour with some lightweight activity that gets people to share who they are (like "Would you rather" questions; just keep them work-appropriate). Often this can just be general chat for people who are already well connected, or it can have some guided activity.
  6. Get people to connect 1:1. This can happen naturally through work, or through a tool like Donut again, but it also helps build connections across the team. For example, I have a Lead who is really good at socializing, and I’ve nudged him to have regular 1:1s with everyone in the team. It’s been very effective at making everyone feel more connected, as they are more comfortable when he’s around.
  7. Create ā€œsurpriseā€ one-offs (not surprise in the sense of sudden, but in the sense of non-regular). For example, right now I’m planning a ā€œget to know how I like workingā€ activity with the team. We did it a while ago, but now it’s time to do it again. I’m going to shuffle its content though: the goal is the same, but a different structure and new questions will keep it interesting for everyone (last time we also shared our ā€œdesktop photos,ā€ which people then talked about, and it was great, so I’ll likely replicate that).
  8. Acknowledge neurodiversity. Not everyone likes video, not everyone likes writing, not everyone likes drawing. Mixing up the types of activities helps. For example, at some point we created big boards in Miro with background photos of landscapes and interior spaces, and people were asked to ā€œdecorateā€ them in groups, discussing what to do. It usually takes a turn for the fun/absurd, but it’s very open and lets people add cropped images from the web like a collage (excellent for people ā€œnot good at drawingā€).
  9. Organize meetups, ideally once a quarter or three times per year. Remote teams work better when they can meet in person a few times a year. The idea is twofold: dedicate 50% of the time to some kind of socialization activity, and 50% to work that is most effective in person (not just any work; be focused and purposeful).
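
The ā€œweekly social questionā€ automation from the list above (item 4) is simple enough to sketch. Tools like Donut handle this for you; the snippet below is just an illustration of the rotation logic, with a made-up question bank and function name, not Donut’s actual behavior.

```python
# A minimal sketch of a weekly rotating social question: pick the
# question deterministically from the ISO week number, so whatever
# posts the message (a cron job, a chat bot) always agrees on
# "this week's question". The question bank is purely illustrative.
import datetime

QUESTIONS = [
    "What's a small thing that made you smile this week?",
    "Share a photo of your workspace (or your pet's).",
    "What's one tool or trick you recently discovered?",
    "Would you rather work from a beach or a mountain cabin?",
]

def question_for(date: datetime.date) -> str:
    """Return the question for the ISO week containing `date`."""
    week = date.isocalendar()[1]  # ISO week number, 1..53
    return QUESTIONS[week % len(QUESTIONS)]

# Any two days in the same week yield the same question:
assert question_for(datetime.date(2024, 9, 9)) == question_for(datetime.date(2024, 9, 13))
```

Keeping the selection deterministic (rather than random) means the bot can be stateless, and reposting after a failure never produces a different question mid-week.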

Note also that sometimes some people just… won’t be interested. That’s ok. It doesn’t mean we are doing anything wrong; people are people, that’s all. Even if it might feel difficult when we aren’t getting the expected response. In this case I often reach out and have that conversation: ā€œHey, I noticed you aren’t very engaged… is there anything we can do to make these things interesting for you, or would you rather stay on the side because that’s your preference? Either way is ok, but it would help me to know.ā€

All of this will also start compounding. A short activity here, a recurring sharing there, a meetup, and over time things start happening. Don’t expect to do one ā€œbigā€ thing and call it done. It’s the compound contribution of a lot of small things… like bumping into each other during a coffee break.

]]>
Existing Calls are the Primary Obstacle to Adopt Async Ways of Working https://intenseminimalism.com/2024/existing-calls-are-the-primary-obstacle-to-adopt-async-ways-of-working/ Mon, 09 Sep 2024 08:08:48 +0000 https://intenseminimalism.com/?p=2625 In my work with organizations trying to move to remote work and specifically async work, I’ve noticed they often encounter the same initial challenge: getting rid of calls.

Here’s what usually happens: the company is heading toward being remote-first, and has decided to adopt async work practices to gain more flexibility, efficiency, and work-life balance, and to get ready to scale up further. They have good discussions, identify good tools, reshape some guidance, identify incremental steps to introduce the change, and maybe they already have a pilot program that has been successful. Then… the adoption stalls.

What is happening is that people have tried to adopt async practices without first identifying and reducing the amount of sync practices, which usually means: meetings and calls. Because of their sync nature, meetings take precedence: the people present live get all the attention. The meetings are already on the calendar. Recurring meetings keep recurring. Some people have days with so many meetings that they are already struggling to do the work they are expected to complete.

Ironically, the very thing that async would help solve is the very thing standing in the way. It’s a vicious loop, where one call follows another and steals all the time needed to read and respond to async requests.

How to get over the calls obstacle

If your organization or team is at this stage, we can assume that there’s some desire to make the change. Unfortunately, there’s no silver bullet. Fortunately, there are many techniques that can be adopted — always with consent and discussions — in order to facilitate the adoption of async work practices.

  • Reserve Time — Give everyone the right to reserve time each day when no call can be scheduled. Some tools, like Google Calendar, can also auto-decline any invite during these times.
  • Revise Recurring Calls — Run a workshop with the team and create a list of all recurring calls, no matter how trivial. For each of these, assess as a group which ones the team is willing to remove first. While it could be tempting to switch everything to async at once, it might be wise to do this in phases to help the team improve over time.
  • Calls Type Mapping — Do some research and collaborate to identify different categories of calls, then create a map of which async practice (if any) can replace each of them.
  • Calls to Text — I’m not a fan of ā€œtranscriptsā€ or ā€œnotesā€ for calls, but I think every call should have at least ā€œdecisionsā€ and ā€œnext actionsā€ as outputs, written down somewhere rather than left to the people involved to remember. Once people realize they are spending time on the call and then more time writing up its outcomes, they will also start to think that maybe they could have done everything in a discussion thread.
  • Enabling Call Etiquette — Can anyone in the company say ā€œnoā€ and refuse a call if they don’t think they should be included? If not, empower people to reject calls they don’t think are useful to them, and instead make sure that all the people invited receive the call’s decisions and next actions.
  • Top-Down Example — Leadership should keep reminding people of the change, and set the example themselves. To be clear: by setting the example I don’t mean they need to be perfect, but they need to be trying and pushing. Change isn’t easy, and the example is in the progress, not in the perfection.
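
To make the ā€œCalls Type Mappingā€ technique above more concrete, the artifact it produces can be as simple as a table from call category to replacement practice. The categories and replacements below are hypothetical examples, not a prescription; every team’s map will look different.

```python
# A hypothetical calls-type map: each recurring call category points
# to the async practice that could replace it, or None if the team
# decided it should stay synchronous. All entries are illustrative.
CALL_TO_ASYNC = {
    "daily standup": "written check-in thread",
    "status update": "dashboard + weekly summary post",
    "brainstorming": "shared doc with a comment deadline",
    "decision review": "proposal doc with async approvals",
    "1:1": None,  # keep sync: relationships benefit from live conversation
}

# Split the map into what can move async now and what stays sync.
replaceable = {call: practice for call, practice in CALL_TO_ASYNC.items()
               if practice is not None}
keep_sync = [call for call, practice in CALL_TO_ASYNC.items()
             if practice is None]

print(f"{len(replaceable)} call types can move async; keep sync: {keep_sync}")
# → 4 call types can move async; keep sync: ['1:1']
```

Writing the map down, even in a format this plain, gives the team something explicit to revise in phases rather than an all-at-once switch.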

This is not an exhaustive list, but it should give you an idea of a menu of activities you can take and remix for your own use. Be incremental. Be patient. But don’t let it slide. Slowly people will start to see the benefits, and will want more.

]]>
In defense of corporate lingo https://intenseminimalism.com/2024/in-defense-of-corporate-lingo/ Wed, 04 Sep 2024 10:41:13 +0000 https://intenseminimalism.com/?p=2623 It’s absolutely true: corporate speak, or corpspeak, often sounds ridiculous from the outside and it really deserves the level of mocking it receives. Not just for the language itself, but of course for what it represents beyond the signifier.

That said, I think that, as with many things, there’s a balance. There’s a use for purposeful language inside a company that helps clarify and increases precision in the work.

In this sense, we can simplify and assess three levels:

  • Corpspeak to feel part of the group
  • Corpspeak to increase accuracy and clarity
  • Corpspeak to look the part

Corpspeak to feel part of the group

We shouldn’t ignore that humans by their very nature have a strong pull toward group dynamics and belonging, separating the world into ā€œin groupā€ and ā€œout group.ā€

Organizations aren’t less affected by group dynamics just because they have a more organized boundary (marked by hiring and contracts). For this reason, it’s not just reasonable but also quite natural that people adopt the peculiarities of how other people in their group speak.

Trying to strip out this level wouldn’t just be hard; it would probably be impossible.

Corpspeak to increase accuracy and clarity

This is where language specificity can help enormously. Using a specific acronym that exists in only one company can feel obnoxious, especially when heard from the outside or as a new starter, but on the other hand, it can really help communication clarity.

Let’s imagine that this company has a very specific moment in the way they work where they want to make sure customer support is involved before a major release. If the company is trying too hard to avoid corpspeak, the outcome might be that they call it something generic like ā€œcheckpoint.ā€ But then you are going to have discussions like ā€œOh, have you reached the checkpoint yet?ā€ where you’re not sure if they are talking about ā€œaā€ checkpoint or ā€œthatā€ specific checkpoint. A clarification is then needed to move the conversation forward, with the additional risk that the two people have two different checkpoints in mind and the answer is ā€œyesā€ even if that’s not the case.

If instead they called that special checkpoint something like ā€œMarketing Greenlight Checkā€ (MGC), the precision when two people are talking is high: ā€œOh, have you reached the MGC yet?ā€ now either gets a clear and precise answer, or… ā€œWhat the hell is an MGC?ā€, prompting the people involved to share knowledge.

And yes, this might lead to sentences that are horrible to the human ear, like ā€œWe should make sure to include the SLT in our next MGC to make sure we are clear for FY25 L1 OKRsā€ā€¦ but these are also precise statements.

So when is corpspeak actually a problem? Well…

Corpspeak to look the part

This is what I consider the problematic side of corpspeak. It happens for two possible reasons: either the person is trying to look knowledgeable, effectively using corpspeak to obscure the message, or the person is defaulting to corpspeak even when it’s not necessary.

Both scenarios, in hindsight, are easy to spot: they both lower clarity instead of increasing it.

If it’s the first, you might be able to spot it because when you ask ā€œWhat the hell is an MGC?ā€ you get a passive-aggressive answer, or get scoffed at for not knowing.
If it’s the second, it’s likely a situation where a more generic word would convey the same level of clarity, because the specific word wasn’t needed. This is the case, for example, for sentences like ā€œwe should make sure these mission-critical projects are completed on timeā€: are these actually mission-critical, or do they just mean these are important?


I personally think it’s essential to make these distinctions, because there is corpspeak, and there is specialized corporate lingo that can be useful, increase clarity and precision, and save time. Attempts to ā€œspeak a more common languageā€ would be misguided in that second case.

As long as neologisms, acronyms, and keywords:

  1. Increase specificity
  2. Share key knowledge
  3. Have an easy way to be looked up

Then they are likely having a positive impact on the organization. As long as we aren’t synergizing but collaborating, it’s all good.

]]>