The Confidence Interval Economy: Mistakes and Career
https://stringfestanalytics.com/the-confidence-interval-economy-mistakes-and-career/
Sat, 01 Oct 2016 15:45:29 +0000

The post The Confidence Interval Economy: Mistakes and Career first appeared on Stringfest Analytics.


Excellent piece yesterday by Rob Collie at PowerPivotPro about seeing yourself as a Michelangelo of data.

This is a topic I’ve discussed many times, and one I’m fully on board with. I’ve argued for looking at Excel as a medium of expression and a way to find beauty at your cubicle.

Rob points out that back in the day painting was only for the elites.

But once the cost of paint fell, art became possible for the masses.

The same is happening with data. And data is the analyst’s paint. 

It used to take massive computing power and technical know-how to analyze data. Now computing power is nearly free. 

A corollary is that if computing power is cheap, then the cost of making mistakes is cheap.

In a way, making mistakes is the cost of creativity. Which brings me to the confidence interval.   

We live in the confidence interval economy

One of the amazing things I learned in my first statistics class is that manufacturers don’t aim to make every product perfect. 

Instead, they agree on an acceptable error rate and confidence interval. In service of the larger goals of the business, they accept that nothing can be perfect.

This is a mindset that spreadsheet reporters (and their managers) need to adopt.
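To make this concrete, here is a minimal sketch of how a manufacturer might check an acceptable defect rate with a confidence interval. The numbers are my own illustration (not from any real manufacturer), and the interval uses a simple normal approximation:

```python
import math

# Hypothetical example: a manufacturer samples 500 units and finds 12 defects.
n = 500
defects = 12
p_hat = defects / n  # observed defect rate

# 95% confidence interval for the true defect rate (normal approximation)
z = 1.96
margin = z * math.sqrt(p_hat * (1 - p_hat) / n)
lower, upper = p_hat - margin, p_hat + margin

print(f"Defect rate: {p_hat:.1%}, 95% CI: ({lower:.1%}, {upper:.1%})")

# If the agreed tolerance were, say, 4%, this entire interval sits below it:
# the batch is acceptable even though it is not perfect.
```

The point is not the arithmetic but the mindset: the decision rule is “within tolerance,” not “zero errors.”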

Bean-counting is not bean-predicting

There is the old “bean-counting” stereotype: everything has to balance, or it is wrong.

But what if we aren’t bean-counting, but “bean-predicting” or “bean-analyzing”? That is a different exercise.

Some analysts want reductive or predictive models to have the same accuracy as full-blown financial reports. But a model is not a report, and models actually lose relevance and usefulness the more complex they become.

So what does this mean for your career?

This has been a rambling post, but the idea of data as a medium of expression with a low cost of mistakes should shape your career path.

1 See your job as a creator

You are a Michelangelo of data. Stop with the “I’m an analytic type. I am not creative.” You are a designer whose medium of expression is the spreadsheet.  

2 Use rapid computing to your advantage

Don’t spend too much time building the perfect solution. Use rapid prototyping to your advantage. I don’t wait to write the perfect blog post. Instead I release the “minimum viable product,” and test to see how it does.

If I see positive trends, I expand on them. If not, I pitch the result. Don’t get too hung up on duds or mistakes — it’s part of the process.  

3 Find a boss who thinks in confidence intervals, not equilibria

This is the tricky part: find a boss who understands how statistics works and the role of the error term. Bean-counting is quite different from bean-predicting.

 

Go Researching Waterfalls
https://stringfestanalytics.com/go-researching-waterfalls/
Thu, 15 Sep 2016 09:30:52 +0000

The post Go Researching Waterfalls first appeared on Stringfest Analytics.


One of my courses this year is a seminar on information systems. Before each class, we each write a “conversation starter” incorporating our thoughts on the papers.

A topic that has greatly interested me lately is waterfall vs. agile development. “Waterfall” projects hinge on an all-or-nothing strategy: spend weeks and months developing something, then release it in one fell swoop (like a waterfall building at the top and then crashing down).

In contrast, agile releases smaller pieces of the product to be tested by the audience, with feedback incorporated in an iterative process.

I argue that academic publishing is still built largely on the waterfall approach (spend all our time on one big paper submission and hope it gets in) rather than lean (use digital media to develop a minimum viable research proposal).

I develop this (and other questions of IS research) below:

——-

Keen in “MIS Research: Reference Disciplines and a Cumulative Tradition” offers a definition of the field from MIT’s Center for Information Systems Research: “a study of the effective design, delivery, and usage of information systems in organizations.” Given that widespread computing in organizations is new, we should not be too surprised that MIS is still finding its bearings as a research discipline. But is this novelty stifling the development of a unique discipline with its own frameworks and agendas?

Yes and no. The adoption of information systems has made good organizations better and bad organizations worse. Information systems can deliver timely, actionable data or keep bright people mired in the weeds of information. So these problems of information and organization are not new – but digitization of information has exacerbated the effects.

IS as a discipline can answer how organizations can use information as an asset – but how do we do this? This requires that IS frame itself both inside and outside academia.

My first job after graduation was in demand planning at a national specialty retailer. I was ready to be the best forecaster in the company’s history. Exponential smoothing, EOQ – you name the theory, I would be using it.

It turns out that very little of my job was built on theories of finance and economics, but instead on principles of information. What data do we need? How are we going to get it? Much of my time was spent on fruitless reporting, unheeded requirements analyses, and so forth. I ended up leaving the demand planning job to go into operations finance at a public hospital, where the information problem was so much worse! It was this frustration with how companies use information that, in part, led me to the program.
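For readers unfamiliar with the forecasting theory mentioned above, simple exponential smoothing fits in a few lines. This is an illustrative sketch with made-up demand numbers, not code from my demand planning days:

```python
def exponential_smoothing(series, alpha=0.3):
    """Simple exponential smoothing: each smoothed value blends the latest
    observation with the previous smoothed value, weighted by alpha."""
    level = series[0]  # seed with the first observation
    smoothed = [level]
    for actual in series[1:]:
        level = alpha * actual + (1 - alpha) * level
        smoothed.append(level)
    return smoothed

# Hypothetical weekly demand for one SKU; the last value serves as the
# forecast for the next period.
demand = [120, 135, 128, 150, 142]
print(exponential_smoothing(demand, alpha=0.3))
```

A small alpha smooths aggressively and reacts slowly; a large alpha chases the most recent observation.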

A theme of the readings is how to place IS within the “practitioner-scholar” continuum. Can research be rigorous and not esoteric? How can IS establish itself as a discipline with a well-defined research agenda while still benefitting non-academics? One model I see here is economics. As the field has become more rigorous, it has become less understood by practitioners. I have noticed that many of the pioneers of MIS have a background in economics. Maybe they were frustrated with the way things were going in that profession? I was – when looking at graduate programs, so little of the research I saw seemed like useful social science.

So maybe we need to model IS on economics with caution. And the comparison of IS to law and medicine in Davenport and Markus can only go so far. The boundary between professional and user is not as distinct in IS: most people in direct contact with legal or medical professionals are themselves lawyers or doctors.

Not so with information – everyone is in contact with management information systems. And most of those people do not want to be data architects. They want to get back to their jobs of being an accountant, a non-profit coordinator, a teacher, etc. Those people may also have valuable insights on information and organization, while perhaps not being so knowledgeable about the “IS artifact.” So it is more difficult to see IS as a profession than law or medicine.

This tension of how to fit IS inside management research becomes even more difficult with Big Data. There is an implicit conclusion here that “big = more.” But Ackoff had a great retort to this decades ago: do managers really need all the information they think they do? Many managers, under the false pretense of doing “big data,” have simply added more variables to their models. And the more complex these calculations become, the more effort it takes to maintain them. Again, we see people without the necessary skill (or passion) doing “shadow IS” work under a misunderstanding of Big Data.

So, how do these tensions resolve? One meeting point between academia and trade could be publishing. While peer-reviewed journals absolutely have their place, maybe we need to look at other media to help the profession outside academia. I see traditional publishing as a “waterfall” form of development, where all the work is grouped into one result whose outcome is either acceptance or rejection. Can we adopt more “agile” research development via digital media, consulting, practitioner-focused conferences, etc.? Maybe the breakthroughs won’t come from a blog post, but it could be a place for non-academics to collaborate. This allows practitioners to learn from and collaborate with academic research, while setting the boundaries and guidelines for IS academics.
