Ben At Work (https://benatwork.cc)
“I have no special talent. I am only passionately curious.” — Albert Einstein

Agile Discovery & Delivery Toolkit
https://benatwork.cc/agile-discovery-delivery-toolkit/ (Mon, 21 Jun 2021)

Releasing a toolkit for agile integration of research and design insights

Context & Contributions

This project was born out of the realization that our organization’s agile practice was not lining up with our research and design philosophy. We wanted to talk to customers constantly and bring new research insights back to our teams more frequently. More importantly, we wanted a way to bridge the gap between research and design on one hand, and the engineering process on the other.

PROJECT DETAILS

ROLE: LEAD RESEARCHER
PROJECT TYPE: PROCESS & PRACTICE
TEAM MEMBERS: 5
DURATION: 3 MONTHS

The team consisted of our design discipline lead, the product owner discipline lead, two other designers, and myself. My role in this project as the research lead was to determine and influence the ways in which researchers were involved in our proposed process, to help design templates and deliver workshops to teams onboarding to the agile discovery and delivery process, and to help build and create content for the wiki space where the toolkit lives.

Problem & Hypothesis

As a research discipline, one of our tenets was for research to ‘feel fast’ — to be able to deliver actionable insights quickly is core to organizational agility. We had processes in place, but they emphasized spinning up each research project individually as needed, when the time was ‘right’. This resulted in research taking a long time to get to users and back to the teams, largely due to the time it takes to find and recruit customers.

Our hypothesis was that if we inverted the process — pre-scheduling customer contact with every sprint — and simply brought whatever research questions, prototypes, etc. we had to the customer, we could create a system that was more in tune with the rest of our product development practice. We would be able to speak to more customers, with more frequency, meaning more insights about the most relevant work. Fortunately, we were not forced to reinvent the wheel: someone within Autodesk, John Schrag, had co-developed a process called ‘dual-track’ agile years before that did just this. Our job then became to adapt it to our organization and support it with structure, artifacts, and a rollout plan — we called it Agile Discovery & Delivery (D&D).

An intro slide from my cross-company research presentation on the Agile Discovery & Delivery framework

Methods and Process

When attempting to gather organizational buy-in for a new process, especially one that changes (some of) the ways in which people work, a careful, structured plan is crucial to success. With dual-track agile as our blueprint, we sought to adapt it to how our organization was structured, and asked ourselves what we needed to do in order for teams to feel confident adopting this new set of methodologies. This essentially boiled down to three stages:

  • Communication — deliver presentations to teams to help communicate our vision and intent
  • Scaffolding — create supporting materials for the different rituals within the process
  • Rollout — define a rollout plan for teams to onboard onto the new process

Communication

In order to get buy-in from product teams, we knew we were going to have to make it clear what we were trying to do, and how it would benefit them. We produced a video and slide deck providing a high-level introduction to the process and the toolkit, held Q&A sessions with interested teams, and most importantly, we picked a team to be the pilot and first adopter so we could test and evaluate each step of our rollout plan.

Scaffolding

From the beginning of the project we planned to make a toolkit, not just a process. The tools in this case would live on a wiki, and would be tied to specific phases in the Agile D&D process. My responsibilities in this area focused on the steps where research would be heavily involved — things like defining research questions and associated risks, turning the important questions that resulted into hypotheses with measurable criteria for success, and devising instrumentation strategies to ensure that teams were getting the data they needed to evaluate themselves against the metrics they created.

Rollout

We knew that simply dumping a bunch of information on a wiki was not a recipe for successful adoption, nor did we want to take a lot of precious time away from teams doing their jobs — so instead, we walk each team through the entire process using their own backlog of issues and user stories. This keeps things moving forward for them while introducing the process in a high-touch way that helps ensure understanding and consistency across multiple teams. Because of the high-touch nature of onboarding, we then ask teams who have gone through the process to help onboard one to three other teams.

A process diagram with real-life examples to help teams understand the process

Results

As of writing (June 2021), the process has begun rollout to 5 squads, with plans to onboard the rest of our organization (~600 people) by the end of the year. We’ve been able to present this process at several cross-company meetings, including for the entire Autodesk UX Research organization.

Choosing a UX Research Method
https://benatwork.cc/choosing-a-ux-research-method/ (Wed, 16 Jun 2021)

A simple flowchart as an exercise in understanding

Early in my time as a UX Researcher at Autodesk, there were only two of us on the team (there are 10 at the moment I’m writing this), and we needed ways to engage folks in research and help our stakeholders understand it better. I was teaching myself Figma at the time, so I decided that creating an (admittedly incomplete and oversimplified) flowchart might be a good way to ‘get the word out’ that our research practice was expanding and that we could help guide teams’ research efforts — even if we didn’t always have the bandwidth to lead the research ourselves.

I started with some research — of course. How had others visualized this? What frameworks or methodologies could I leverage to organize the information? (Many of you will notice the double diamond below.) Which techniques and research methods should I include? I created a messy mood board in Figma to help organize my thoughts, and began connecting pieces together.

A mood board of influences and early thoughts

After some iteration and critique, I had my final version, shared below. In the internal version at Autodesk, each of the methods is hot-linked to an example and explanation of that particular method.

A UX Method Flowchart, by me, circa 2019

If you’d like a copy or to use it in your organization, here’s a PDF link – my only ask is that you leave me a comment below to let me know a bit about your use case – I’m a researcher, after all!

Cloud Platform Analytics
https://benatwork.cc/cloud-platform-analytics/ (Tue, 15 Jun 2021)

Laying a robust groundwork for platform analytics across an entire organization

Context & Contributions

Autodesk’s Cloud Platform (Forge) has several applications and services. Understanding how our customers work and how they use those applications and features is accomplished through analytics: instrumenting the applications and services to collect data so that we can generate reports, slicing and dicing the data along different dimensions to gain important insights into user behavior.

This data allows us to develop applications and services better, to create and validate product changes and assumptions, to understand patterns, and to make decisions regarding future product changes or improvements. This is critical for several operations, like monetization, where we need to report customer usage of specific APIs for business reasons.

As Forge matures as an organization, we will be building a business and a platform on top of it. As a business, we must be ready to measure our performance — having to develop the instrumentation to do so reactively would be disastrous. This research focuses on what exactly we need to do as an organization in order to provide proper metrics, analytics, and insights for the products, services, and customers using our platform.

For this project, I was the lead researcher on a team of five – with 2 product managers, 1 designer, and 1 other researcher. I was responsible for creating the moderator guides for interviews, conducting interviews, taking notes, performing secondary research, analyzing transcripts, extracting job stories, performing synthesis, and presenting our results to stakeholders, along with the rest of the team.

PROJECT DETAILS

ROLE: LEAD RESEARCHER
PROJECT TYPE: FOUNDATIONAL RESEARCH
TEAM MEMBERS: 5
DURATION: 4 MONTHS

Problem

Forge has several different systems we support. Each of these systems has a different level of analytics, if any at all. As it stands now, our Product Management, Marketing, and Development teams are not able to gain customer insights or validate assumptions. Regardless of whether or not these are public-facing services, each team needs to instrument its services as if they were. We needed to align all of our internal and external systems so that we could easily compare ‘apples to apples’ across all of our products and services.

Hypotheses

In order to arrive at a solution, we believed we needed to understand the analytics needs of our organization. From senior leadership through every individual contributor, different individuals need different levels of analytics, at differing levels of fidelity, tracking a wide variety of different metrics. By understanding these needs, we would then be able to develop a coherent, prioritized roadmap and recommendations for how to organize our analytics.


Methods & Process

One of our first tasks was making sure we were all speaking the same language

As with most foundational research, we started by understanding the problem space: conducting interviews and performing secondary research into how other organizations structured their analytics practices, what kinds of metrics and data we needed to pay particular attention to, and so on. By analyzing and synthesizing this qualitative data, we moved towards defining several ‘meta’ personas, or categories of analytics users. This allowed us to prioritize different use cases and develop a roadmap towards analytics excellence, with recommendations on how to get there.

Understanding

  • Explore existing research and literature about analytics organizations
  • Interview key stakeholders across our organization to gain a baseline understanding of their analytics needs

Refining

  • Analyze the interview data, grouping common statements and attitudes
  • Synthesize the data into meta groupings that typify different personas of analytics users

Delivering

  • Prioritize the persona groupings to inform the project roadmap
  • Construct a set of recommendations to achieve analytics excellence within the organization
  • Deliver a report detailing our process, findings, and recommendations to key stakeholders

Understanding

Due to the wide scope of the project (we needed to define an analytics strategy for a 600-person organization), we conducted a total of 32 hour-long interviews with stakeholders in many different roles across the company, from our SVP on down.

In each of these interviews, we asked questions like:

  • What are the key analytics insights you use related to your business/product/team in order to drive important outcomes?
  • Why are these insights important as opposed to others?
  • How do these insights impact business outcomes?
  • When do you need these insights?
  • How do you get the data you need?

We also asked about what an ‘ideal’ analytics solution might look like from their point of view, in order to give us some features to investigate and usability guidelines to shoot for. We paired this with secondary research concerning analytics to create a roadmap of analytics ‘maturity’ for our organization, which helped us better describe where we were now and what things would look like several years down the road.

Through a number of secondary sources, we compiled a roadmap for analytics ‘maturity’

The interviews and secondary research provided us with a solid collection of quotes, use cases, and needs statements from which to begin our analysis.


Refining

We took the interview data, broke it up by each user’s role/division within the company, and looked for key needs statements using the Jobs To Be Done (JTBD) framework. We wrote job stories, captured quotes, and kept track of related metrics and data sources, ending up with a wealth of insights about how analytics was currently being done at Autodesk.

Using the initial grouping we did by role / division, we noticed a flow of analytics information between groups. We wanted to understand these relationships between organizations better, so we made a rough concept map:

Our rough concept mapping of the analytics flow within our company

What emerged from the concept map were distinct ‘meta-personas’, each representing a group of users with similar Jobs To Be Done, whose relationships in consuming and producing different analytics artifacts could be understood in relation to the rest of the organization.

This allowed us to define six ‘meta job stories’ representing the different types of analytics usage across the company:

  • Business Health Monitoring & Strategic Planning
  • Customer Acquisition & Support
  • Cost Management and Resource Planning
  • Product Development
  • Product Operations
  • Internal Team Efficiency

We were then able to categorize the job stories according to these personas, which allowed us to flesh out the users, specific analytics needs, key metrics, and data types for each persona.

The six meta job stories, with associated jobs to be done, key metrics, etc.

Delivering

With our six meta job stories and associated information, we were able to look at the different stories with an eye towards creating a roadmap. Since each of our six meta stories related to certain kinds of analytics data, we asked ourselves and our stakeholders which ones were most urgent — they told us (in no uncertain terms) that the data and metrics related to strategic planning and business health monitoring were the most critical.

This included key metrics about the health of our cloud platform, such as:

  • Number of unique applications that have made at least 1 API call (day/month)
  • Top 10 unique applications by number of API requests
  • Number of calls per API end point
  • Request success rate
  • Average request time
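To make these metrics concrete: given raw API request logs, each one reduces to a few lines of aggregation code. The sketch below is purely illustrative; the record fields (`app_id`, `endpoint`, `status`, `latency_ms`) are assumed stand-ins, not Forge’s actual log schema.

```python
from collections import Counter
from statistics import mean

# Hypothetical API log records; field names are invented for illustration.
logs = [
    {"app_id": "app-a", "endpoint": "/models", "status": 200, "latency_ms": 120},
    {"app_id": "app-a", "endpoint": "/models", "status": 500, "latency_ms": 340},
    {"app_id": "app-b", "endpoint": "/derivatives", "status": 200, "latency_ms": 95},
    {"app_id": "app-c", "endpoint": "/models", "status": 200, "latency_ms": 210},
]

# Number of unique applications that made at least 1 API call
unique_apps = len({r["app_id"] for r in logs})

# Top 10 unique applications by number of API requests
top_apps = Counter(r["app_id"] for r in logs).most_common(10)

# Number of calls per API endpoint
calls_per_endpoint = Counter(r["endpoint"] for r in logs)

# Request success rate (2xx responses / all requests)
success_rate = sum(1 for r in logs if 200 <= r["status"] < 300) / len(logs)

# Average request time
avg_latency = mean(r["latency_ms"] for r in logs)

print(unique_apps, top_apps, calls_per_endpoint, success_rate, avg_latency)
```

In practice these aggregations would run over a data warehouse rather than in-memory lists, but the shape of each metric is the same.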

Our next step was to deliver recommendations and a roadmap for how to get there.

Findings & Results

Based on our analytics maturity model (see below), we needed to collaborate with engineering leads to architect how we would get from our current, siloed system, to at least a minimum viable product for our first use case (business strategy analytics).

Given what we knew about the jobs that needed to be done with this set of analytics, we were able to trace back where all those pieces of information lived, and work with engineering to create a map of how a more unified data flow might be constructed, stage by stage, and what kind of resources we might need to make it happen.

We were then able to propose to leadership the creation of an engineering team responsible for building out this new streamlined infrastructure and delivering the associated data and metrics that our interviews had uncovered as necessary.

This was approved by leadership, the team was created, and the first version of the ‘Forge Business Analytics Dashboard’ was recently released (April 2021), with a planned iteration coming in July 2021.


The 10-Step Customer Research Lifecycle
https://benatwork.cc/the-10-step-customer-research-lifecycle/ (Thu, 10 Jun 2021)
Defining the UX research process for Autodesk’s Cloud Platform group

Context & Contributions

I was the second member of a two-person research team when I joined Autodesk’s cloud platform group. Part of our charter was to build out the practice of research across the (~600 person) organization. Part of this effort involved defining what exactly our research process was, step-by-step.

While my manager and I discussed the steps broadly, I delivered the timeline, artifacts, tips, visual design, and content myself.

PROJECT DETAILS

ROLE: LEAD RESEARCHER
PROJECT TYPE: PROCESS & PRACTICE
TEAM MEMBERS: 1
DURATION: 2 MONTHS

Problem

The research team was tiny, and organizational awareness of who we were and what we did was minimal. We needed to define our process for ourselves and for our stakeholders. For ourselves, so that we could deliver a consistent style of research and, as the team grew, have an easy way to communicate the ways we worked. For our stakeholders, so that they would know what to expect when engaging in research with our team, and would gain a deeper understanding of the research process.

Hypotheses

Our hypothesis was that by solidifying our own process and producing a visual artifact for others, we would be able to not only deliver a more consistent and comprehensive research approach to our stakeholders, but that our stakeholders would be able to better understand our process and their role within it, thus leading to better research outcomes across the organization.


Methods & Process

The process was fairly straightforward, and can be broken down into:

Define

  • Examine our working patterns, draw on past experience and market analysis
  • Brainstorm our views as to what the customer research process should be

Iterate

  • Refine the different steps and stages in the journey
  • Create a visual framework for communicating the lifecycle
  • Visualize each step of the lifecycle
  • Enrich the lifecycle with additional information (e.g., deliverables/artifacts, tips and tricks)

Deliver

  • Present the artifact to some initial stakeholders
  • Revise the lifecycle based on feedback
  • Build supporting materials to scaffold the process (e.g., templates, wiki pages)
  • Finalize the visual artifact and supporting documents and deliver them to stakeholders and new team members

Define

Drawing on our past experiences as researchers as well as some comparative analysis of other research practices, we broke down our process into ten discrete steps:

  • Pre-planning
  • Kickoff Meeting
  • Logistics Review
  • Session Prep
  • Customer Sessions
  • Debrief
  • Topline Report
  • Analysis
  • Research Summary
  • Shareout

An early draft of the lifecycle, showing the 10 stages of customer research

Iterate

Refine the different steps and stages in the journey

After defining the steps, I went to work iterating on what exactly each step meant. What happens during this step? What is the expected output? Who is involved? How do you know you’ve completed each step? What are some best practices and ‘gotchas’ relating to each step?

Create a visual framework for communicating the lifecycle

Using the more robust definitions of each step, I was able to create a visual scaffolding to help transform the list of steps we had into something that felt more like a process or a framework. For each step I included:

  • The step name
  • Tagline
  • Actions (what happens at each stage)
  • Deliverables (the artifacts / documents produced)
  • Tips & Tricks (common mistakes & best practices)

I also realized that the 10-step process could be abstracted into four phases: preparation, execution, distillation, and communication. These four themes helped give the process a narrative and provided a deeper level of meaning behind each step.

Visualize the lifecycle

For the visual design, I used simple iconography-style visuals and some color to indicate phase groupings, but mostly worked on refining the language for each piece of text to be as concise (yet descriptive) as possible. Through this process it became clear that I was essentially creating a reference guide, one that needed more supporting material to augment and enrich the content, since there was so much that couldn’t fit on a single visual.

The final visual artifact

Enrich the lifecycle with additional information

To support the visual reference, I created a wiki page for each of the steps, a main page explaining more about the process, and a series of templates and examples of the artifacts referenced in the lifecycle (e.g., kickoff meeting guide, moderator guide, sample debrief workshops, research summary templates, etc.)


Deliver

Build supporting materials to scaffold the process (e.g., templates, wiki pages)

After presenting the initial work to stakeholders and iterating based on their feedback, we were able to build out and deliver the final set of wiki pages, templates, guides, examples, and visual reference.

A page from the wiki, providing more information and context around each step

Findings & Results

This set of materials has been presented to our larger organization and to numerous stakeholders across the company. It has also become part of how we onboard new members and interns to the research team – they are not only given a presentation of the lifecycle as part of their training, but are encouraged to contribute to it and to challenge it. The lifecycle is a key artifact in telling our story, and having new eyes on it frequently has kept it fresh and engaging.

A research analysis template for the ‘Analysis’ step of the lifecycle, added during subsequent revisions of the lifecycle

The Six Islands of Developer Experience
https://benatwork.cc/the-six-islands-of-developer-experience/ (Wed, 09 Jun 2021)

A meta-study to make three years of research understandable, important, and actionable

Context & Contributions

The main focus for this project was Autodesk’s cloud platform (Forge) and better understanding the body of research that had been done on the experience of developers using Forge.

My contributions to this project included conceptualizing the six islands framework (including its visual design and the final interactive infographic), creating the initial journey maps for all the islands, finding and matching the customer quotes and job stories, drafting the prioritization framework and hierarchy of needs, and drafting the high-level recommendations along with their hypotheses and measures of success.

PROJECT DETAILS

ROLE: TEAM LEAD & LEAD RESEARCHER
PROJECT TYPE: META-STUDY
TEAM MEMBERS: 7
DURATION: 6 MONTHS

Problem

The research and design teams had done a significant amount of research regarding the developer portal, but much of the research knowledge was scattered throughout the organization with different teams.

Consequently, we were repeating a lot of work, asking the same questions and finding the same results over and over. No one had tried to look at the entire corpus of work, produce an easily comprehensible artifact, and communicate it out to the organization.

Hypotheses

By performing a meta-study on the developer experience, we believed we could produce an artifact compelling enough to finally move the needle on a number of important issues that were impeding the developer experience but were not being given the attention they deserved.


Methods & Process

Our process consisted of three main stages:

Stage 1: Synthesize

  • Gather and review previous research on developer experience
  • Synthesize the findings into a common form

Stage 2: Visualize

  • Extract the major touchpoints of the developer journey
  • Create a visual framework for understanding (the six islands)
  • Visualize each step of the developer experience using the framework
  • Enrich the journey map with direct insights from customer research

Stage 3: Strategize

  • Distill our learnings from each island, prioritize areas of need
  • Create strategic recommendations to address greatest weaknesses
  • Draw out tactical implementations, hypotheses, and measures of success
  • Plan an initial area of focus to implement our recommendations

Synthesize

Gather and review previous research on developer experience

One of the major drivers for this effort was the knowledge that our collective organization had a lot of existing research about the developer experience of our platform, yet the research was failing to move the platform in the right direction — not because of the quality of the research, but because the research insights were living in silos within teams scattered throughout the organization. So naturally, our first step was to gather and review every bit of research relating to the developer experience that we could get our hands on.

Synthesize the findings into a common form

We used both Mural and Figma to help us organize our thoughts — some screenshots of our work-in-progress are below. We discussed what the general structure of our deliverable should be, what ‘data types’ or ‘currency’ we were operating with, and how best to categorize previous research insights in a way that would allow us to present a cohesive point of view.


Visualize

Extract all the significant touchpoints of the developer journey

As we continued to cluster and organize the previous research findings, we were able to start mapping the precise steps our customers were taking.

Create a visual framework for understanding

Our clustered journey maps became ‘islands’ of experiences – but as parts of the same overall experience, we needed to map the journeys between islands – what factors were causing movement amongst the islands?

Visualize each step of the developer experience using the framework

For each of the six islands we defined, we iterated on the steps in the journey, the flows and branches within each island, the exit points that led users away from our service, and the connections to the other islands.

Enrich the journey map with direct insights from customer research

Finally, wherever possible we found direct evidence to place next to each journey step, such as customer quotes and job stories, research data and insights from our own analysis. We also pulled out hero quotes and summary statements communicating a few of our conclusions.

This quote is emblematic of the kinds of feedback we were encountering. We decided we couldn’t stop with just our current map. We had to form a plan to help improve the developer experience across our organization.


Strategize

Distill our learnings from each island, prioritize areas of need

After completing the initial map representing the current state of our developer experience, we moved towards thinking strategically about what changes should be made to address the numerous pain points we uncovered. We started by focusing in on the most problematic sub-section of each island, and extracting a simple customer problem and summary needs statement from that.

We then thought about the problem in terms of pervasiveness — how many of our developers were likely to encounter this particular problem — and mapped it onto a hierarchy of needs (a la Maslow). This gave us a way to think systematically about which areas to prioritize.
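As a rough sketch of how that prioritization can be made systematic, one can weight each problem by its pervasiveness and by how foundational its level is on the hierarchy of needs. Everything below (the need levels, weights, and example problems) is hypothetical, invented to illustrate the idea rather than to reproduce our actual framework.

```python
# Hypothetical sketch: score each customer problem by pervasiveness times a
# weight for its level on a Maslow-style hierarchy of needs. Lower (more
# foundational) levels get higher weights. All values here are invented.
NEED_WEIGHT = {"functional": 3.0, "reliable": 2.0, "usable": 1.5, "delightful": 1.0}

problems = [
    {"island": "Onboarding", "need": "usable", "pervasiveness": 0.9},
    {"island": "Documentation", "need": "functional", "pervasiveness": 0.7},
    {"island": "Support", "need": "reliable", "pervasiveness": 0.4},
]

# Score = fraction of developers likely to hit the problem x need weight
for p in problems:
    p["score"] = p["pervasiveness"] * NEED_WEIGHT[p["need"]]

ranked = sorted(problems, key=lambda p: p["score"], reverse=True)
print([p["island"] for p in ranked])
```

The point of a scheme like this is not the exact numbers but forcing an explicit, comparable rationale for why one area is addressed before another.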

Create strategic recommendations to address greatest weaknesses

With our prioritized areas in mind combined with our research data, we drafted five recommendations we felt would address the majority of the issues we were aware of. We mapped these recommendations back to the prioritized needs to communicate where in the process certain needs should be addressed.

Draw out tactical implementations, hypotheses, and measures of success

For each of our high-level recommendations, I drew out some examples of the recommendation in action — tactical implementations. We also stated our hypotheses about what would change if our recommendations were followed as well as how we ought to measure them.

Plan an initial area of focus to implement our recommendations

From here it became clear that we needed a testing ground — a specific area where we could implement our recommendations, measure the results, and evaluate whether or not we were correct in our hypotheses. We put together a project roadmap with a particular team in mind, with the initial goal of dramatically improving the self-serviceability of their API’s documentation in a six-week sprint.


Findings & Results

We put together an interactive infographic in Figma with all of our island maps, analysis, recommendations, and roadmap as a means to have a living artifact for others to digest the work. (If you cannot access the infographic below, please contact me for access).

So far, the research has gained some momentum and been presented at both our company-wide research meetup and our larger discipline (UX, research, and PM) quarterly meeting. It has also been incorporated into how we tag and classify new research insights within our research database.

As of June 2021, we are planning the launch of our focused six-week sprint for next quarter, and the project has large organizational support and attention from higher level management. I will update this post as results from that project become available.

Project Soli
https://benatwork.cc/project-soli/ (Tue, 06 Aug 2019)

Project Soli is a radar sensor platform enabling gestural interaction with embedded devices. I worked on the team for about 18 months as a UX researcher and prototyper. My personal responsibilities ranged from user research of core needs (think-aloud, Wizard of Oz, surveys) to building code prototypes (C++, openFrameworks) of new features based on user research. Other responsibilities included data analytics for UX testing, protocols for sensor integration (motion capture, depth cameras), managing research initiatives (such as building an anechoic test chamber for signal validation), and contributing to Project Soli’s software development kit.

https://atap.google.com/soli/

Photon SIK
https://benatwork.cc/photon-sik/ (Sun, 27 Nov 2016)

The SparkFun Inventor’s Kit for Photon is an introductory Internet-of-Things experiment kit, with a Particle Photon-compatible microcontroller, a breadboard, and a variety of sensors. Along with the other members of SparkFun’s IoT taskforce, I helped design the kit, the BOM, the experiments, and the instruction guide from top to bottom. Below are a few of the experiments I was personally responsible for: a color-picker integration with Twitter, and a micro-OLED pong game with a web-based scoreboard.

Particle Photon Shields
https://benatwork.cc/particle-photon-shields/ (Sun, 27 Nov 2016)

As part of a taskforce investigating how SparkFun could support the Internet of Things, our group designed and released six add-on shields for the Particle (formerly Spark) Photon board. Of the six, I was personally responsible for the three pictured and described below.

From SparkFun.com:

The Photon IMU Shield gives the Photon motion-sensing ability by connecting it to an all-in-one 9DOF IMU (a lot like the LSM9DS0). With this shield, the Photon will be able to sense linear acceleration, angular rotation, and magnetic fields. The Photon OLED Shield connects the Photon up to a blue-on-black OLED. The display is small, but it’s perfect for visualizing IMU data or printing readings from any of the other shields. Finally, no ecosystem is whole without a Prototyping Shield. This simple little board adds some prototyping space in proximity to the Photon’s I/O pins and power buses.

Makers in American Spaces — State Department Webcast Series
https://benatwork.cc/makers-in-american-spaces-state-department-webcast-series/ (Fri, 24 Apr 2015)

Over the past few months, I’ve been lucky enough to be invited by Kylie Peppler to participate in a couple of webcasts for the State Department (the Bureau of International Information Programs at the U.S. Department of State, to be precise) as part of a series called “Makers in American Spaces”. Kylie, the other guests, and I discuss the ins and outs of creating and maintaining a maker community under a variety of circumstances and constraints.

Here’s one we did with the State Department in Moscow:

And another one with some folks from Taipei:

Jumbotron!
https://benatwork.cc/jumbotron/ (Thu, 23 Apr 2015)

So, SparkFun has these awesome 32×32 RGB LED panels. They also have a cheap webcam and a powerful-enough microcontroller (a Teensy 3.1 in this case) to potentially drive live video from the webcam onto the RGB LED array. I set off to see if it could be done.

Turns out, it can!

Here’s the finished project (see above) — the little blue thing clamped to the stick is the webcam, facing down at the improvised SparkFun flame. As you can see, a fairly good representation of the flame is on the LED panel. Success! At some point I’ll take some video to prove it works in real-time.

I put up a full tutorial on SparkFun’s Learn website, including a parts list, code examples, and even a hardware hookup guide. Check it out: https://learn.sparkfun.com/tutorials/rgb-panel-jumbotron
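The heart of the pipeline — reducing a full-resolution camera frame to the panel’s 32×32 grid by averaging blocks of pixels — can be sketched in plain Python. This is a hypothetical host-side illustration, not the project’s actual code (which ran against a Teensy 3.1); the frame below is a synthetic nested list rather than a live webcam capture.

```python
# Downsample a frame to a 32x32 RGB grid by averaging pixel blocks.
# Illustrative sketch only; the real project drove an LED matrix via a Teensy.

def downsample(frame, out_w=32, out_h=32):
    """frame: list of rows, each row a list of (r, g, b) tuples."""
    in_h, in_w = len(frame), len(frame[0])
    bh, bw = in_h // out_h, in_w // out_w  # input pixels per output pixel
    out = []
    for oy in range(out_h):
        row = []
        for ox in range(out_w):
            # Average every pixel in this block, channel by channel
            block = [frame[oy * bh + y][ox * bw + x]
                     for y in range(bh) for x in range(bw)]
            n = len(block)
            row.append(tuple(sum(px[c] for px in block) // n for c in range(3)))
        out.append(row)
    return out

# Synthetic 64x64 frame: left half red, right half blue
frame = [[(255, 0, 0) if x < 32 else (0, 0, 255) for x in range(64)]
         for y in range(64)]
panel = downsample(frame)
print(panel[0][0], panel[0][31])
```

Each 32×32 result maps one-to-one onto the LED panel; the remaining work is just streaming those RGB triples to the microcontroller fast enough to feel live.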
