Archive for the 'people & systems' Category

Lean Summit in Noordwijk

Tuesday, June 22nd, 2004

I’m going to the Lean Summit in Noordwijk tonight, hoping to meet practitioners from disciplines other than software development. After reading drafts of Lean Software Development, other people in the Benelux and I got interested in lean manufacturing and product development (as you could see from the Value Stream Mapping exercise we did at the most recent xp-nl).

One of the interesting things for me is that eXtreme Programming already eliminates some ‘waste’ for you, and helps in decreasing batch size (the amount of functionality that can be planned and delivered) and increasing quality without adding quality-checking after the fact. I’m curious to see how, in the future, we can further improve speed, quality and customer satisfaction by applying lean techniques to our project (ehm, oh yes, that’s becoming products now).
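To make the batch-size point concrete, here is a toy sketch (all numbers invented, not from any real project) of why smaller batches get individual features to the customer sooner, even when the total amount of work stays the same:

```python
# Toy model: n features, each taking one day of work; a feature only
# reaches the customer when its whole batch is finished. Smaller
# batches mean each feature waits less before it is delivered.

def average_delivery_time(n_features, batch_size, days_per_feature=1):
    """Average time (in days) from project start until a feature ships."""
    total = 0
    for i in range(n_features):
        batch_index = i // batch_size  # which batch this feature is in
        # work completed by the time this feature's batch ships:
        features_before = min((batch_index + 1) * batch_size, n_features)
        total += features_before * days_per_feature
    return total / n_features

# 12 features: one big batch vs. batches of 3
big = average_delivery_time(12, 12)   # every feature waits for all 12
small = average_delivery_time(12, 3)  # batches of 3 ship as they finish
print(big, small)  # → 12.0 7.5
```

The model ignores per-batch overhead, which is exactly what lean tries to drive down so that small batches become affordable.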

Mary Poppendieck’s talk at XP2004 was also quite interesting in that respect – if you want your software project to live a long, healthy life, it is probably wise to look at it as product development, and manage it as such.

Interesting stuff at the end of XP2004

Friday, June 11th, 2004

As XP2004 was drawing to a close, many interesting ideas came up. The ones that have stuck so far were:

  • solving problems by daydreaming
  • research on Presencing performed at MIT, which is supposed to be a kind of ‘sixth discipline’ for the learning organisation: the ‘flow of the learning organization’
  • Many German companies are reverting to becoming an ‘unlearning organization’ – they hire army officers as managers in order to fight their crises with a more directive, command-and-control style of management. Someone at lunch was actually working for such an organisation.
  • the importance of cultural training, or at least cultural awareness, in organisations. Departments such as sales, software development, systems support and operations all have a distinct (sub)culture, including preferred ways of communication.

Moralistic Programming

Wednesday, June 9th, 2004

I am still enjoying the XP2004 conference. One of the tenets of Extreme Programming is doing the simplest thing that could possibly work. This means, amongst other things, that it is the responsibility of a programmer to focus on what the customer really needs, and to choose the simplest tool available to get the job done.

Today I was reminded again of a reason for not using or creating the simplest possible software. The Shoulds are so powerful that even experienced Extreme Programmers can find them hard to resist. Examples of the shoulds are: you should separate content from presentation, or its opposite (from ‘naked objects’: all objects should be representationally complete); everything should be an object (which has been a favourite of mine for a long time); or you should use design patterns.

The previous instances of moralistic programming are somewhat easy to recognize, because the ‘should’ or universality assumption is explicit in the language. It becomes harder to recognize in situations where, for example, a database is added to a program because the programmers in question always use databases (so it is an assumption), or the choice of a programming language is not consciously made, because it is more convenient to always use the same tool for the job.

I attended a session on XP Tools. It is easy to assume that such a session is about software tools, whereas XP has re-introduced a number of very powerful non-software tools, such as index cards and face-to-face communication. Many people apparently are still using Ant (an automated build tool).

To a certain extent, I don’t think it matters very much which tool one chooses, as long as the choice is conscious, and the tradeoffs for choosing one over the other are clear. Unlearning moralistic programming, and doing situational programming more often, is a lengthy process for me. What I think helps is to keep trying new things (I am exploring the Seaside web-application framework again, which differs in interesting ways from other approaches to developing web-applications), to listen to what other people are doing (conferences provide an excellent opportunity for that) and to start new projects regularly, so it is more often time to select a technology and process that is appropriate for the problem at hand.

I’m off to XP2004

Wednesday, June 2nd, 2004

Until June 21st I’ll be in Germany, first going to XP2004 in Garmisch-Partenkirchen, where I’ll be hosting a Systems Thinking workshop together with Marc Evers (and attending other interesting sessions :-) ). After that, I’ll be taking a brief holiday. If I can hook up my laptop to the web, I might just keep you posted.

Value Stream Mapping Explored

Thursday, May 20th, 2004

At the most recent xp-nl (the Dutch Extreme Programming users’ group) I facilitated a session on value stream mapping. This was an interesting case of team learning – if everyone brings a piece of the puzzle, it is possible to get a complete picture!

None of the participants, including myself, had much prior experience with Value Stream Mapping. Everyone involved was just very keen to learn.

After a five-minute introduction to the technique, we just started by doing, working on a value stream from one of the participants. Everyone chipped in, and as we went along we developed our ‘diagramming technique’. Since our new office didn’t even have a flipchart yet, we settled on using index cards to create our map.

value stream map with index cards

As we went along, we clarified the meaning of the stations and queues, and discussed using average times, probability distributions or numbers from just one project.
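Numerically, a map of stations and queues like ours boils down to something like the sketch below (station names, touch times and queue times are purely invented for illustration). The striking part is usually how much of the lead time sits in the queues rather than in the work itself:

```python
# Hypothetical value stream: (station, touch time, queue time before
# the station), all in days. Every number here is made up.
stream = [
    ("specify", 2, 0),
    ("develop", 5, 10),
    ("test",    3, 7),
    ("deploy",  1, 4),
]

touch_time = sum(work for _, work, _ in stream)           # time spent working
lead_time = sum(work + wait for _, work, wait in stream)  # door-to-door time
efficiency = touch_time / lead_time                       # process cycle efficiency

print(f"lead time: {lead_time} days, working: {touch_time} days, "
      f"efficiency: {efficiency:.0%}")  # → 32 days, 11 days, 34%
```

Whether to fill in averages, distributions or single-project numbers (the question we debated) only changes where these figures come from, not the arithmetic.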

In the end, all of us understood much more about value stream mapping (you can read our findings in Dutch).

You can talk to me, and You can solve all your problems

Wednesday, May 12th, 2004

I often get stuck in my work, and then I need someone to ask a question to – I’m not afraid to ask for help if I need it ;-) .

It often happens to me, especially when sending an e-mail with a question, that I come up with the answer right after having asked. If that happens to you, you don’t really need a person to ask; you could just as well ask a teddy bear for help. Often the answer becomes clear once you have articulated the question properly.

Recently, my colleague Thijs Janssen presented Erik and me with a teddy bear: one wearing a t-shirt with the cq2 logo and the words ‘talk to me and _you_ can solve your problems’.

Erik and I often thank Thijs for helping us solve our problems, while actually we solved them ourselves right after asking him a question…

Applying Value Stream Mapping to software

Wednesday, May 12th, 2004

Value Stream Mapping is a technique that is already used in the context of Lean Manufacturing. At OT2004 I attended the workshop “Understanding the Software Value Stream” by David Harvey and Peter Marks.

Borrowing metaphors from other domains, and applying them to the creation of software can be, from my experience, a dangerous thing. Even adding the word ‘development’ after ‘software’ is dangerous, since that also implies a metaphor. I consider borrowing from manufacturing especially dangerous – I consider software ‘development’ to be a creative activity best carried out inside a ‘living company’ not a kind of machine.

Value Stream Mapping seems worth exploring though, so the main question that remained in my head after the workshop was: how can we map Value Stream Mapping to software ‘development’ without implicitly taking over the ‘production metaphor’?

As described on http://www.mamtc.com/lean/building_vsm.asp:

Value Stream Mapping is a method of visually mapping a product’s production path (materials and information) from “door to door”. VSM can serve as a starting point to help management, engineers, production associates, schedulers, suppliers, and customers recognize waste and identify its causes. The process includes physically mapping your “current state” while also focusing on where you want to be, or your “future state” blueprint, which can serve as the foundation for other Lean improvement strategies.

Interesting results that could be achieved with the above are:

  • Enabling all stakeholders in the process to see the whole picture. In my experience people usually only see their own part of the process. Optimizing one part without considering other parts might cause the whole process to deliver less value instead of more.
  • Both materials and information are taken into account. In software development we work with information only, but since VSM covers information as well, this could be sufficient.
  • Focusing on ‘what is’ and ‘what could be’. Reflecting on a process is a useful, and often difficult, activity, since assumptions have to be made explicit. Imagining what could be without taking the current situation into account, however, is much more difficult. There are other techniques (such as Diagrams of Effects) that are useful; VSM could be a useful addition, since it takes information flows (rather than actions) into account.

Continuing the quote:

A value stream is all the actions (both value added and non-value added) currently required to bring a product through the main flows essential to every product:

  • the production flow from raw material into the arms of the customer
  • the design flow from concept to launch

The idea of including ‘non value added’ steps in a value stream is to identify steps that are wasteful and can be eliminated. Including everything makes brainstorming about the process easier, because different stakeholders might find different activities valuable. Separating production and design flows, however, is not generally applicable to software.
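To make the value-added/non-value-added distinction concrete, here is a minimal sketch (step names and durations are hypothetical, chosen only for illustration) of tagging every step and totalling the time spent on waste candidates:

```python
# Each step is tagged value-added (True) or non-value-added (False);
# the non-value-added steps are the first candidates for elimination.
# All names and durations below are invented.
steps = [
    ("write story card",       0.5, True),
    ("wait for approval",      5.0, False),
    ("implement and test",     3.0, True),
    ("hand-off to operations", 2.0, False),
    ("deploy",                 0.5, True),
]

waste = [(name, days) for name, days, value_added in steps if not value_added]
waste_days = sum(days for _, days in waste)
total_days = sum(days for _, days, _ in steps)

print(f"{waste_days} of {total_days} days spent on non-value-added steps:",
      [name for name, _ in waste])
```

Because stakeholders disagree on which steps add value, the interesting discussion is usually about the True/False tags, not the arithmetic – which is exactly why listing everything first helps.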

For instance, in my daily practice of software development there is no visible distinction between design flow and production flow, because each step in a process based on Extreme Programming encompasses both production and design. One of the things that came up after analyzing the value stream we created at OT was that not only is value produced at multiple steps in the process, but value can also be realized very early on. Ordinarily, for a production line, I would think value is created at the end of the production process, when a product is handed to a customer and (possibly) money is exchanged.

In a process with short iterations, typical of agile software development, value is also created during planning meetings with the customers. For instance, customers get to re-think their business process while brainstorming their next wishes for software (and, as a consequence, perhaps changing the value). At the next xp-nl meeting (http://www.xp-nl.org/Wiki/XpBijeenkomst4.5) we’re probably going to explore Value Stream Mapping further.

Preparations for XP Day Benelux 2005 started

Thursday, February 26th, 2004

We had a programme committee meeting for XP Day Benelux 2005 this week. Ideas from Agile Open are already cross-pollinating into the XP2005 workshop selection process, as well as (even more strongly) XP Day Benelux 2005.

For XP Day Benelux 2005 we are working on a more agile session selection process. So far, we have had a semi-traditional process, where we put a lot of thought into the text of the call for sessions and made sure the candidate sessions received adequate feedback on the description as well as the intended working of the session. The selection itself was BDUF (Big Design Up Front).

You could send in a session (I’m looking for a different word than ‘submit’, which suggests submission to the conference organisers – we are slowly but surely coming down from our mountain ;-) ), and then you’d wait for a month or two to hear whether your session was accepted or not (with an indication of why or why not). After that, we shepherded the accepted sessions.

Now we are trying to come up with a more SDUF-like process. The S can stand for Small as well as Strategic. We want to start shepherding long before the session deadline, and we want to open up the review and shepherding process, so session organisers don’t feel they are judged by an invisible elite. Last year we invited quite a number of people from outside the program committee. This year, we’re taking this one step further, by inviting everyone who sends in a session to be a reviewer as well. So we have four roles per session, three of which can be assumed by anyone involved: one or more organisers, a shepherd and three reviewers.

There is but one restriction: anyone can fulfil only one of these roles per session at the same time. Shepherds will announce their availability, interests and skills on a wiki somewhere (location to be defined yet). Session organisers are free to ask any shepherd they want – as long as the shepherd agrees. Reviewers can choose any sessions they wish to review – as long as there is no conflict of interest.

The fourth role is that of program committee member. Since the determination of quality has already been made in the review process prior to session selection, the program committee can focus on creating a balanced program. The program committee is semi-open. We’ve accepted a few new members this year after they asked to participate.

In a way, we are open-sourcing the workshop creation pipeline we’ve had with the people from Agile Systems. For a few years we’ve been collaborating on each other’s sessions. It is happening this weekend again, as the deadline for XP2005 workshops and tutorials is approaching. I can hardly explain how great it is to work with like-minded people on workshops, e.g. recently with Rob Westgeest on Rightsizing your unittests and with Marc Evers on a series of Systems thinking workshops. This weekend I’m co-working on a session on congruent communication and interactions with Nynke Fokma and Marc Evers, as well as a session on value stream mapping with Marc. I may tag along on some other sessions as well. I hope we can scale this up through Agile Open and the new XP Days session generation process, so more people can experience it, and we get an even greater diversity of workshop and training material.

What we envision for the process itself is to start ten weeks before the final acceptance deadline. In these ten weeks, we offer session organisers the possibility to organise three iterations around their sessions. We are going to recommend that organisers take a period of three weeks and do all the iterations in that period. Such a short timespan enables the organisers to focus on their session, without having to swap thoughts about the session in and out too often. Three iterations is, in my opinion, a good number. If it doesn’t work after three iterations, it is probably best to leave it and try another idea – not all ideas translate well into sessions. If the idea does work, after three iterations the organiser, shepherd, reviewers and program committee will have a sufficient idea of how it will work. Fleshing out the session further can be done between acceptance and running the actual session.

At the end of the first three weeks, we’ll already have some sessions in the conference pipeline, and the session organisers will already have some idea whether their session stands a chance or not, since it has already been reviewed. At the end of ten weeks, we hope to have more than enough sessions for the programme. We’ll find other venues for the high-quality sessions we can’t fit in the programme (e.g. we moved a session to an Agile Seminar last year). That is the end of the pipeline – finished, high-quality sessions come out, and only scheduling remains.

I’m curious to see how this will develop. If worst comes to worst, we can always revert to the old big up-front session acceptance. Something tells me we’re not going to have to – as the new process involves everyone much more.

What happens in year three of “Good Software Takes Ten Years”…

Monday, February 23rd, 2004

I recently stumbled across Good Software Takes Ten Years on Joel Spolsky’s blog. In this article, he recounts, from his own experience and the brief history of software development, a number of stories and common pitfalls of successful commercial software products. Granted, these applications might not be my favourites to use (Lotus Notes and Microsoft Word), but they were successful in the sense that they are used by millions of people.

Lotus Notes, for instance, was released five years after development started, and it took another six years before the user base started to really grow. As an example from my own experience, we’re using Linux on our laptops and servers, an open-source variant of Unix. Unix has existed since at least the 70s. In the first twenty years of its existence, high-priced Unix workstations and servers were mainly to be found in universities and corporate research and development environments. Now, after over thirty years, through Linux (and to a lesser extent FreeBSD and its derivative, Mac OS X) it is spreading to a much larger user base.

Since last December, I have been programming on the e-laborate project. Not exactly a long-running project. It is, however, based on i-tor, open-source software that has been developed through several projects starting in January 2002. So i-tor is now in its third year. Having been involved as a coach and programmer, I can recognize in the project several of the pitfalls Joel Spolsky mentions.

Having been there, I can say, that it is possible to survive those pitfalls, even if you’ve stepped right in them.

I’m off, again…

Monday, December 1st, 2003

I’m on my way to XP Day London (I never get up before 5 AM otherwise…). The remainder of the week will be filled with giving a course on a mixture of XP and (change) management.