Blogs

Scrum and Kanban – Which one to use?

Agilitrix - Michael Sahota - Mon, 11/10/2014 - 21:28

Scrum is the most popular Agile methodology with Kanban a growing second choice. Learn about the core parts of each one as well as how they differ so that you can find the best fit for your team or organizational context. For example, Scrum is great when you want to shake up the status quo and transform the way you work. Kanban is great when small changes are a better fit for the environment. Learn how they work and how you can use them in your environment.

Slides: Scrum and Kanban – Getting the Most from Each (Michael Sahota) | How to Choose Between Scrum and Kanban | Choosing Between Scrum And Kanban

Scrum will be more successful in environments where its requirements are met. If you have all six in place, Scrum is great. When you start losing important bits of context, it becomes more difficult.

Kanban is much more fault-tolerant and works in many more contexts.

The post Scrum and Kanban – Which one to use? appeared first on Catalyst - Agile & Culture.

Related posts:

  1. Beyond Roles in Scrum In this post we will explain how we can move...


Categories: Blogs

Navigating Politics in Agile Teams

TV Agile - Mon, 11/10/2014 - 20:07
Have you ever wondered why, when you’ve finally got Agile/Lean working nicely in your software engineering team, managers or ‘outsiders’ can come in and create politics ‘unnecessarily’, despite your best efforts? Have you ever sat there in meetings frustrated and exasperated at the seemingly unnecessary dramatics some participants go through? Or, when you start an […]
Categories: Blogs

Scrum saves lives

Henrik Kniberg's blog - Mon, 11/10/2014 - 19:18

I was deeply moved by this letter. I’ve seen how Scrum and similar pull-based approaches not only improve productivity, but reduce stress and improve quality of life for people, and this is a powerful example. I asked the sender if I may share it with the world, and thankfully he agreed. Here it is:

Recently I picked up a version of your free online edition of “Scrum and XP from the Trenches, How we do Scrum” and I have to say it changed my personal and professional life.

I have been a software developer working on interactive voice response systems for close to 20 years now.

A few months ago I was speaking with a colleague and mentor of mine about his efforts to become a certified Scrum master. Up until this point I had never really been exposed to Agile and Scrum in detail and only knew some of the jargon. My colleague suggested that I research and learn more about the Agile philosophy, and in particular Scrum. Since I have been suffering from a poor work-life balance for almost my whole career, I decided to pay it some attention.

I read your paper on a Saturday night and decided that Sunday that I would start implementing Scrum on Monday morning. So I quickly pulled together a spreadsheet with what I had that night and formalized the Excel sheet that was our product backlog. That Monday I held my normal morning meeting with my development team, and the rest is history.

The short of it is that my team is finishing up its 3rd sprint next week and we all love it.  A lot of the stress that was keeping me up at night has completely gone away.  I feel in complete control when I come to the end of my work day.  In the past two months I have even hung out with my family on Saturdays and Sundays.  I have begun to add more of a balance back to my life.

I really wanted to thank you for writing this paper and putting it out in the world for free.  The tips that your paper offered have literally saved my marriage and probably my life.

Thank you again for your effort.

 

 

Categories: Blogs

Milestones and Scrum

Scrum 4 You - Mon, 11/10/2014 - 08:36

To make their product development process plannable, many companies use some form of milestone planning. The details differ from company to company, but the principle is always the same: milestones are points along a timeline at which the development project is evaluated internally. The periods between the milestones often map to the development phases of the V-Model. In such cases, a feasibility study is followed by a concept phase, which is followed in turn by design, verification and validation phases. To “pass” a milestone, defined documents and reviews are used to check whether the preceding phase was completed successfully.

At the same time, more and more product-developing companies (long since not only in software development) are opting for agile frameworks such as Scrum. The question then quickly arises whether and how Scrum can be “reconciled” with milestone planning, or at least made “compatible” with it. In my experience, this very choice of words rests on the mistaken idea that Scrum and milestones are two variants of a process. If that were the case, you would have to decide for one of the two, or find a compromise that realizes a little of the one and a little of the other (whatever that would mean in practice).

Scrum is a method. Scrum says that products are developed iteratively (in regularly recurring sprints) and incrementally (feature by feature), so that the organization is able to deliver continuously rather than only at the end of the project. Milestone plans prescribe which documentation must be produced and approved at which point in the course of a project. Scrum is designed to align development with the needs of the market. Milestones, by contrast, are internal review mechanisms for keeping projects on track.

At the start of my current project, we proceeded as follows:

  • First we went through the building blocks of the milestone plan one by one and asked ourselves, for each of them, whether it is strictly necessary for a successful product delivery.
  • The building blocks that are strictly necessary for a successful product delivery (e.g. risk analysis, product validation, testing of electromagnetic compatibility (EMC)) go into our Definition of Done. There we clarify at which level (story, sprint, release) we can satisfy them and whom we need to involve, and when (e.g. QA when writing test cases during sprint planning).
  • As a rule, the consequence is that the required building blocks are delivered much earlier than the milestone plan would have demanded.
  • The building blocks that are not strictly necessary for a successful product delivery (e.g. writing the requirements and functional specification documents) we leave aside and do not consider further.

In summary: it would be naive to ignore milestone plans completely. Some of the deliverables they prescribe are indeed essential for a successful product delivery. It would be just as naive, however, to implement the prescriptions one-to-one simply because the plan says so. Above all, Scrum offers the chance to make the rigid ticking-off of milestones superfluous by tackling the critical aspects of the project right at the start, so that the purpose of milestone planning (safeguarding the project) is fulfilled anyway.

Categories: Blogs


Scott Prugh on DevOps, Lean and SAFe in Legacy Environments

Agile Product Owner - Sun, 11/09/2014 - 16:58

We have a lot of fun with SAFe exploring how theory and practice come together in the field to get business results. One example is how we have worked with Scott Prugh at CSG since well before SAFe was SAFe. In many ways, we’ve grown up together with Scott, Mark Fuller and the CSG team. Scott is a contributor to SAFe, and is the author of the Continuous Delivery Guidance article.

In this new 20-minute video from the DevOps Enterprise Summit 2014, Scott describes how they have applied SAFe, and more importantly the Lean and Flow principles that underlie it, to substantially improve productivity and throughput from development to deployment.

If you have ever wondered how, specifically, principles like cadence and synchronization, cross-functional teams, visualizing work, backlog management, reducing batch size, synchronized release planning and more can increase the quality, throughput and delivery of large-scale software in a seriously complex legacy environment, you have to watch this video!


Categories: Blogs

Upcoming SPC Certifications by Dean Leffingwell

Agile Product Owner - Sun, 11/09/2014 - 16:52

Hi Folks,

My next upcoming SAFe SPC Certifications are as follows:

Dec 9-12, 2014, Boulder, Colorado (with Alex Yakyma)

January 27-30, 2015, Boulder, Colorado (with Alex Yakyma)

March 2-5, Vienna, Austria (with Michael Stump)

March 10-13, Boulder, CO (with Alex Yakyma)

Also, I’ll be delivering an SPC certification in Sydney, Australia, the week of June 22, but that one isn’t yet open for registration. Please ping Garren Watkins (garren.watkins@scaledagile.com) if you’d like to pre-register.

I look forward to meeting some of you in person at these upcoming events.

In addition, in response to increased market demand, we’ve also increased our capacity for 2015, with classes occurring roughly weekly worldwide. The first half of 2015 has now been scheduled, as you can see at Scaled Agile Academy.

Come join the league of over 1,700 SPCs who are making a huge impact in the world with SAFe!

Categories: Blogs

R: dplyr – Ordering by count after multiple column group_by

Mark Needham - Sun, 11/09/2014 - 11:30

I was recently trying to group a data frame by two columns and then sort by the count using dplyr, but it wasn’t sorting in the way I was expecting, which was initially very confusing.

I started with this data frame:

library(dplyr)
 
data = data.frame(
  letter = sample(LETTERS, 50000, replace = TRUE),
  number = sample(1:10, 50000, replace = TRUE)
  )

And I wanted to find out how many occurrences of each (letter, number) pair exist in the data set. I started with the following code:

> data %>% count(letter, number, sort = TRUE)
Source: local data frame [260 x 3]
Groups: letter
 
   letter number   n
1       A      4 205
2       A      9 201
3       A      3 197
4       A      1 195
5       A     10 191
6       A      2 189
7       A      8 184
8       A      7 183
9       A      5 181
10      A      6 173
..    ...    ... ...

As you can see, it’s only showing A’s, which is interesting as I wouldn’t expect there to be a bias towards that letter. Let’s filter out the A’s:

> data %>% filter(letter != "A") %>% count(letter, number, sort = TRUE)
Source: local data frame [250 x 3]
Groups: letter
 
   letter number   n
1       B      8 222
2       B      9 212
3       B      5 207
4       B      6 201
5       B     10 200
6       B      7 192
7       B      2 189
8       B      3 189
9       B      1 187
10      B      4 181
..    ...    ... ...

Now all we see are B’s and we can see that both (B,8) and (B,9) have a higher ‘n’ value than any of the A’s.

I put the code back into the more verbose form to see if it was the count function that behaved unexpectedly:

> data %>% group_by(letter, number) %>% summarise(n = n()) %>% arrange(desc(n))
Source: local data frame [260 x 3]
Groups: letter
 
   letter number   n
1       A      4 205
2       A      9 201
3       A      3 197
4       A      1 195
5       A     10 191
6       A      2 189
7       A      8 184
8       A      7 183
9       A      5 181
10      A      6 173
..    ...    ... ...

Nope, still the same behaviour.
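
The clue is the “Groups: letter” line in the output: summarise drops only the last grouping variable, so the result is still grouped by letter and arrange(desc(n)) sorts within each letter rather than across the whole data frame. As a quick check (this snippet isn’t from the original post), we can ask dplyr what grouping is left:

> data %>% group_by(letter, number) %>% summarise(n = n()) %>% groups()
[[1]]
letter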

At this point I vaguely remembered there being a function called ungroup which I hadn’t used and wondered if now was the time.

> data %>% group_by(letter, number) %>% summarise(n = n()) %>% ungroup() %>% arrange(desc(n))
Source: local data frame [260 x 3]
 
   letter number   n
1       L      2 236
2       V      1 231
3       Y      8 226
4       J      4 225
5       J     10 223
6       Q      7 223
7       B      8 222
8       O      9 222
9       Q     10 221
10      Z      9 221
..    ...    ... ...

Indeed it was and now we can go back to our original version of the code using count and handle the sorting afterwards:

> data %>% count(letter, number) %>% ungroup() %>% arrange(desc(n))
Source: local data frame [260 x 3]
 
   letter number   n
1       L      2 236
2       V      1 231
3       Y      8 226
4       J      4 225
5       J     10 223
6       Q      7 223
7       B      8 222
8       O      9 222
9       Q     10 221
10      Z      9 221
..    ...    ... ...
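
If you find yourself doing this a lot, the ungroup step is easy to forget, so it may be worth wrapping the final pipeline in a small helper. A sketch (count_sorted is my own name, not a dplyr function):

count_sorted = function(df, ...) {
  # count the given columns, drop the leftover grouping, then sort globally
  df %>% count(...) %>% ungroup() %>% arrange(desc(n))
}
> data %>% count_sorted(letter, number) %>% head(3)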
Categories: Blogs

R: Refactoring to dplyr

Mark Needham - Sun, 11/09/2014 - 02:11

I’ve been looking back over some of the early code I wrote using R before I knew about the dplyr library and thought it’d be an interesting exercise to refactor some of the snippets.

We’ll use the following data frame for each of the examples:

library(dplyr)
 
data = data.frame(
  letter = sample(LETTERS, 50000, replace = TRUE),
  number = sample(1:10, 50000, replace = TRUE)
  )

Take {n} rows

> data[1:5,]
  letter number
1      R      7
2      Q      3
3      B      8
4      R      3
5      U      2

becomes:

> data %>% head(5)
  letter number
1      R      7
2      Q      3
3      B      8
4      R      3
5      U      2

Order by numeric value descending

> data[order(-(data$number)),][1:5,]
   letter number
14      H     10
17      G     10
63      L     10
66      W     10
73      R     10

becomes:

> data %>% arrange(desc(number)) %>% head(5)
  letter number
1      H     10
2      G     10
3      L     10
4      W     10
5      R     10

Count number of items

> length(data[,1])
[1] 50000

becomes:

> data %>% count()
Source: local data frame [1 x 1]
 
      n
1 50000

Filter by column value

> length(subset(data, number == 1)[, 1])
[1] 4928

becomes:

> data %>% filter(number == 1) %>% count()
Source: local data frame [1 x 1]
 
     n
1 4928

Group by variable and count

> aggregate(data, by= list(data$number), function(x) length(x))
   Group.1 letter number
1        1   4928   4928
2        2   5045   5045
3        3   5064   5064
4        4   4823   4823
5        5   5032   5032
6        6   5163   5163
7        7   4945   4945
8        8   5077   5077
9        9   5025   5025
10      10   4898   4898

becomes:

> data %>% count(number)
Source: local data frame [10 x 2]
 
   number    n
1       1 4928
2       2 5045
3       3 5064
4       4 4823
5       5 5032
6       6 5163
7       7 4945
8       8 5077
9       9 5025
10     10 4898

Select a range of rows

> data[4:5,]
  letter number
4      R      3
5      U      2

becomes:

> data %>% slice(4:5)
  letter number
1      R      3
2      U      2

There’s certainly more code in some of the dplyr examples, but I find it easier to remember how the dplyr code works when I come back to it and hence tend to favour that approach.
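
One thing the individual snippets above don’t show is how naturally the dplyr verbs compose once you start chaining them. As a rough sketch that isn’t part of the original comparison, counting the letters for the rows where number is 1, largest first, becomes a single pipeline (count drops its only grouping variable here, so no ungroup is needed):

> data %>% filter(number == 1) %>% count(letter) %>% arrange(desc(n)) %>% head(5)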

Categories: Blogs

R: dplyr – Group by field dynamically (‘regroup’ is deprecated / no applicable method for ‘as.lazy’ applied to an object of class “list” )

Mark Needham - Sun, 11/09/2014 - 00:29

A few months ago I wrote a blog post explaining how to dynamically/programmatically group a data frame by a field using dplyr, but that approach has been deprecated in the latest version.

To recap, the original function looked like this:

library(dplyr)
 
groupBy = function(df, field) {
  df %.% regroup(list(field)) %.% summarise(n = n())
}

And if we execute that with a sample data frame we’ll see the following:

> data = data.frame(
      letter = sample(LETTERS, 50000, replace = TRUE),
      number = sample(1:10, 50000, replace = TRUE)
  )
 
> groupBy(data, 'letter') %>% head(5)
Source: local data frame [5 x 2]
 
  letter    n
1      A 1951
2      B 1903
3      C 1954
4      D 1923
5      E 1886
Warning messages:
1: %.% is deprecated. Please use %>% 
2: %.% is deprecated. Please use %>% 
3: 'regroup' is deprecated.
Use 'group_by_' instead.
See help("Deprecated")

I replaced each of the deprecated operators and ended up with this function:

groupBy = function(df, field) {
  df %>% group_by_(list(field)) %>% summarise(n = n())
}

Now if we run that:

> groupBy(data, 'letter') %>% head(5)
Error in UseMethod("as.lazy") : 
  no applicable method for 'as.lazy' applied to an object of class "list"

It turns out the ‘group_by_’ function doesn’t want to receive a list of fields so let’s remove the call to list:

groupBy = function(df, field) {
  df %>% group_by_(field) %>% summarise(n = n())
}

And now if we run that:

> groupBy(data, 'letter') %>% head(5)
Source: local data frame [5 x 2]
 
  letter    n
1      A 1951
2      B 1903
3      C 1954
4      D 1923
5      E 1886

Good times! We get the correct result and no deprecation messages.

If we want to group by multiple fields we can just pass in the field names like so:

groupBy = function(df, field1, field2) {
  df %>% group_by_(field1, field2) %>% summarise(n = n())
}
> groupBy(data, 'letter', 'number') %>% head(5)
Source: local data frame [5 x 3]
Groups: letter
 
  letter number   n
1      A      1 200
2      A      2 218
3      A      3 205
4      A      4 176
5      A      5 203

Or with this simpler version:

groupBy = function(df, ...) {
  df %>% group_by_(...) %>% summarise(n = n())
}
> groupBy(data, 'letter', 'number') %>% head(5)
Source: local data frame [5 x 3]
Groups: letter
 
  letter number   n
1      A      1 200
2      A      2 218
3      A      3 205
4      A      4 176
5      A      5 203
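
If you’d rather pass the field names around as a single character vector, the underscore version also takes a .dots argument (at least in dplyr 0.3, if I’m reading the standard-evaluation documentation correctly). A sketch:

groupBy = function(df, fields) {
  df %>% group_by_(.dots = fields) %>% summarise(n = n())
}
> groupBy(data, c('letter', 'number')) %>% head(5)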

I realised that we can actually just use group_by itself and pass in the field names without quotes, something I couldn’t get to work in earlier versions:

groupBy = function(df, ...) {
  df %>% group_by(...) %>% summarise(n = n())
}
> groupBy(data, letter, number) %>% head(5)
Source: local data frame [5 x 3]
Groups: letter
 
  letter number   n
1      A      1 200
2      A      2 218
3      A      3 205
4      A      4 176
5      A      5 203

We could even get a bit of pipelining going on if we fancied it:

> data %>% groupBy(letter, number) %>% head(5)
Source: local data frame [5 x 3]
Groups: letter
 
  letter number   n
1      A      1 200
2      A      2 218
3      A      3 205
4      A      4 176
5      A      5 203

And as of dplyr 0.3 we can simplify our groupBy function to make use of the new count function which combines group_by and summarise:

groupBy = function(df, ...) {
  df %>% count(...)
}
> data %>% groupBy(letter, number) %>% head(5)
Source: local data frame [5 x 3]
Groups: letter
 
  letter number   n
1      A      1 200
2      A      2 218
3      A      3 205
4      A      4 176
5      A      5 203
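
For what it’s worth, the count-based version copes with a single field just as well; on the same data frame the following should give the same letter totals we saw at the start of the post:

> data %>% groupBy(letter) %>% head(5)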
Categories: Blogs

Airbnb and Uber – From Batch to One Piece Flow

Evolving Excellence - Sat, 11/08/2014 - 21:42

Some of the more interesting internet-driven companies these days are the likes of Airbnb and Uber. They call themselves part of the “sharing economy.”

But let’s take a look at the word “share.” From the MacMillan dictionary, share is “to allow someone to have something you own.” Is that what’s happening here? Not really. It’s not free.

There is a small unit of demand in the form of a single room or a single car ride, and there’s a small unit of availability in the form of a room in a house, an apartment, an empty cab, or even someone with a car going in the same direction. They are matched, a value is transferred, and there’s also a financial transaction representative of that value. That’s really micro supply and demand management.

Previously such small units of supply and demand were never taken advantage of, let alone optimized, unless aggregated. Now internet connectivity and computational power can dynamically and efficiently track, match, and transact such small units.

The results are rather astounding.

Since its founding, in 2008, Airbnb has spearheaded growth of the sharing economy by allowing thousands of people around the world to rent their homes or spare rooms. Yet while as many as 425,000 people now stay in Airbnb-listed homes on a peak night, the company’s growth is shadowed by laws that clash with its ethos of allowing anyone, including renters, to sell access to their spaces.

Over 400,000 rooms – on a single night. That’s the equivalent of over 3,000 average-sized hotels. Empty space that was being wasted, now going to good use, eliminating the need to build 3,000 hotels. Think about the positive impact on the environment, urban sprawl, energy use, and so forth.

Similarly, in March 2014, seven months ago and a lifetime in the timescale of a hypergrowth company, Uber was providing 1.1 million rides per week. In this case it is only partially displacing the required capacity of the old business model, taxis, as many taxi drivers are switching to the Uber platform. Still, think about the impact of that optimized micro capacity and demand utilization on the required supply of taxis – and hence steel, plastics, and gas.

I happen to be a big fan of Uber, and use their service almost every time I travel. The speed, convenience, and ease of transaction create significant value. Yes, their business practices may make me hold my nose a bit.

Other companies are looking at similar concepts. Amazon is looking at using micro units of delivery capacity in the form of taxis and Uber competitor Flywheel to provide same day – and perhaps same hour – delivery.

Very large numbers of previously wasted supply units being matched with demand in a very efficient manner. The batch unit of a large delivery truck, a bus, or a hotel is being broken down into units approaching one.  Obviously any change like this doesn't come easily, and cities - often dependent on bed taxes - are pushing back on Airbnb while traditional taxi services are pushing back on Uber.  But value is being created in an efficient and popular manner, therefore change will occur.

Where have I heard about that concept before?

Categories: Blogs

Here Be Dragons – Scaling Agile

ScrumSense.com - Peter Hundermark - Sat, 11/08/2014 - 08:38

I gave a talk at the global Scrum Gathering in Berlin and two weeks later this revised (and improved?) version at the regional Scrum Gathering in Cape Town.

Scaling agility in any organisation requires attention to both cultural and structural dimensions. The cultural dimension rests on providing modern leadership and change management, and I make only passing reference to it here. My presentation is focussed on the structural and process dimension.

I describe 3 “laws” of scaling. These are observations or empirical laws. Then I set out 10 patterns that I have found helpful over nearly a decade of doing this with many organisations as a consultant and coach.

The post Here Be Dragons – Scaling Agile appeared first on ScrumSense.

Categories: Blogs

Dual-Speed IT Drives Digital Business Transformation and Improves IT-Business Relationships

J.D. Meier's Blog - Fri, 11/07/2014 - 19:24

Don’t try to turn all of your traditional IT into a digital unit.  

You’ll break both, or do neither well.

Instead, add a Digital Unit that’s optimized to operate in a Cloud-First, Mobile-First world, and meanwhile continue to simplify and optimize your traditional IT.

This is the Dual-Speed IT approach, and with it you can choose the right approach for each job and get the best of both worlds.

Some projects involve more extensive planning because they are higher-risk and have more dependencies.

Other projects benefit from a loose learning-by-doing method, with rapid feedback loops, customer impact, and testing new business waters.

And, over time, you can shift the mix.

In the book, Leading Digital: Turning Technology into Business Transformation, George Westerman, Didier Bonnet, and Andrew McAfee, share some of their lessons learned from companies that are Digital Masters that created their digital visions and are driving business change.

Build Digital Skills Into One of Your Existing Business Units

You can grow one of your existing business units into a Digital Unit.  For example, marketing is a pretty good bet, given the customer focus and the business impact.

Via Leading Digital:

 

“Changing the IT-business relationship is well worth the effort, but doing so takes time.  Your company may not have the time to wait before starting your digital transformation.  Rather than improving the IT unit, some companies try to build digital skills into another unit, such as marketing.  They try to work around IT rather than with it.”

Don’t Mix Your Digital Unit with Your Traditional IT

Don’t throw away your existing IT or break it by turning it into something it’s not, too quickly.   Instead, leverage it for the projects where it makes sense, while also leveraging your new Digital IT unit.

Via Leading Digital:

“Although building digital skills is useful, trying to work around IT can be fraught with challenges, especially if people do not understand the reasons for IT's systematic, if sometimes ponderous, processes.  This kind of flanking action can waste money, make the digital platform more complex, and even worse, open the company to security and regulatory risks.”

Create a Dual-Speed IT to Support Both Traditional IT and Faster-Speed Digital Transformation

You can have the best of both worlds, while both evolving your traditional IT and growing your Digital Unit to thrive at Cloud speed.

Via Leading Digital:

“A better approach is to create a dual-speed IT structure, where one part of the IT unit continues to support traditional IT needs, while another takes on the challenge of operating at digital speed with the business.  Digital activities--especially in customer engagement--move faster than many traditional IT ones.  They look at design processes differently.  Where IT projects have traditionally depended on clear designs and well-structured project plans, digital activities often engage in test-and-learn strategies, trying features in real-life experiments and quickly adding or dropping them based on what they find.”

Optimize the Digital Unit for Digital World

Your Digital Unit needs to be very different from traditional IT in terms of the mindset and the approaches around the people, processes, and technology.

Via Leading Digital:

“In a dual-speed approach, the digital unit can develop processes and methods at clock-speeds more closely aligned with the digital world, without losing sight of the reasons that the old IT processes existed.  IT leaders can draw on informal relationships within the IT department to get access to legacy systems or make other changes happen.  Business leaders can use their networks to get input and resources.  Business and IT leaders can even start to work together in the kind of two-in-a-box leadership method that LBG and other companies have adopted.”

Choose the Right Leadership Both in the Business and in IT

To make it work and to make it work well, it takes partnerships on both sides.   The business and IT both need skin in the game.

Via Leading Digital:

“Building dual-speed IT units requires choosing the right leadership on both sides of the relationship.  Business executives need to be comfortable with technology and with being challenged by their IT counterparts.  IT leaders need to have a mind-set that extends beyond technology to encompass the processes and drivers of business performance.  Leaders from both sides need to be strong communicators who can slide easily between conversations with their business- or IT-focused people.”

Great IT Leaders Know When to Choose Traditional IT vs. the Digital Unit

With both options at your disposal, Great IT Leaders know how to choose the right approach for the job.   Some programs and projects will take a more traditional life-cycle or require heavier planning or more extensive governance and risk management, while other projects can be driven in a more lightweight and agile way.

Via Leading Digital:

“Dual-speed IT also requires perspective about the value of speed.  Not all digital efforts need the kind of fast-moving, constantly changing processes that digital customer-engagement processes can need.  In fact, the underlying technology elements that powered LBG's new platform, Asian Paints' operational excellence, and Nike's digital supply chain enhancements required the careful, systematic thinking that underpins traditional IT practices.  Doing these big implementations in a loose learning-by-doing method could be dangerous.  It could increase rework, waste money, and introduce security risks.  But once the strong digital platform is there, building new digital capabilities can be fast, agile, and innovative.  The key is to understand what you need in each type of project and how much room any project has to be flexible and agile.  Great IT leaders know how to do this.  If teamed with the right business leaders, they can make progress quickly and safely.”

Dual-Speed IT Requires New Processes within IT

It takes a shift in processes to do Dual-Speed IT.

Via Leading Digital:

“Dual-speed IT also takes new processes inside IT.  Few digital businesses have the luxury to wait for monthly software release cycles for all of their applications.  Digital-image hosting business Flickr, for example, aims for up to ten deployments per day, while some businesses require even more.  This continuous-deployment approach requires very tight discipline and collaboration between development, test, and operations people.  A bug in software, missed step in testing, or configuration problem in deployment can bring down a web site or affect thousands of customers.”

DevOps Makes Dual Speed IT Possible

DevOps blends development and operations into a more integrated approach that simplifies and streamlines processes to shorten cycle times and speed up fixes and feedback loops.

Via Leading Digital:

“A relatively new software-development method called DevOps aims to make this kind of disciplined speed possible.  It breaks down silos between development, operations, and quality assurance groups, allowing them to collaborate more closely and be more agile.  When done properly, DevOps improves the speed and reliability of application development and deployment by standardizing development environments.  It uses strong methods and standards, including synchronizing the tools used by each group.”

DevOps Can Help IT Release Software Better, Faster, Cheaper, and More Reliably

DevOps is the name of the game when it comes to shipping better, faster, cheaper and more reliably in a Cloud-First, Mobile-First world.

Via Leading Digital:

“DevOps relies heavily on automated tools to do tasks in testing, configuration control, and deployment—tasks that are both slow and error-prone when done manually.  Companies that use DevOps need to foster a culture where different IT groups can work together and where workers accept the rules and methods that make the process effective.  The discipline, tools, and strong processes of DevOps can help IT release software more rapidly and with fewer errors, as well as monitor performance and resolve process issues more effectively, than before.”

Driving Digital Transformation Takes a Strong Link Between Business and IT Executives

In order for your Digital Transformation to thrive, it takes building better bridges between the business leaders and the IT leaders.

Via Leading Digital:

“Whether your CIO takes it upon himself or herself to improve the IT-business relationship, or you decide to help make it happen, forging a strong link between business and IT executives is an essential part of driving digital transformation.  Strong IT-business relationships can transform the way IT works and the way the business works with it.  Through trust and shared understanding, your technology and business experts can collaborate closely, like at LBG, to innovate your business at digital speeds.  Without this kind of relationship, your company may become mired in endless requirements discussion, filing projects, and lackluster systems, while your competitors accelerate past you in the digital fast lane.”

If you want to thrive in the new digital economy while driving digital business transformation without breaking your existing business, consider adding Dual-Speed IT to your strategies and shift the mix from traditional IT to your Digital Unit over time.

You Might Also Like

10 High-Value Activities in the Enterprise

Cloud Changes the Game from Deployment to Adoption

Drive Business Transformation by Reenvisioning Operations

Drive Business Transformation by Reenvisioning Your Customer Experience

How To Improve the IT-Business Relationship

Management Innovation is at the Top of the Innovation Stack

Think in a Series of Sprints, Not Marathons

Categories: Blogs

NeuroAgile Quick Links #6

Notes from a Tool User - Mark Levison - Fri, 11/07/2014 - 17:52
Interesting reports from the world of Science that can be applied (or not) to Agile Teams

 

 

Categories: Blogs

Retrospective Exercise: Vital Few Actions

Ben Linders - Fri, 11/07/2014 - 15:03
The aim of an agile retrospective is to define actions for the next iteration that will improve the way of working and help teams to deliver more value to their customers. This retrospective exercise can be used within agile frameworks like Scrum, SAFe, XP or Kanban to have teams agree upon the vital few improvement actions that they will do. Continue reading →
Categories: Blogs

The Agile Reader – Weekend Edition: 11/07/2014

Scrumology.com - Kane Mar - Fri, 11/07/2014 - 05:59

You can get the Weekend Edition delivered directly to you via email by signing up at http://eepurl.com/0ifWn.

The Weekend Edition is a list of some interesting links found on the web to catch up with over the weekend. It is generated automatically, so I can’t vouch for any particular link but I’ve found the results are generally interesting and useful.

  • Agile Data Scientist, A Disciplined Hiker or Reckless Hunter? #agile #scrum http://t.co/TuCFkpqwAA
  • Leading Scrum Experts, Braintrust Consulting Group, Return to Memphis to Host … – IT Business Net #Agile #Scrum
  • RT @yochum: The Guide on the Side #agile #scrum
  • RT @MasterScrum: Agile Data Scientist, A Disciplined Hiker or Reckless Hunter? #agile #scrum http://t.co/TuCFkpqwAA
  • RT @MasterScrum: Agile Data Scientist, A Disciplined Hiker or Reckless Hunter? #agile #scrum http://t.co/TuCFkpqwAA
  • 8 Ways To Overcome Resistance To An #Agile Process Rollout via @jonathanlevene #scrum
  • RT @MasterScrum: Agile Data Scientist, A Disciplined Hiker or Reckless Hunter? #agile #scrum http://t.co/TuCFkpqwAA
  • Are you #Agile? Seriously who puts a scrum shirt on a baby #geekout
  • Planning, Tracking & Managing Agile Web Development Sprints w/ Scrum & Intervals by @intervals http://t.co/nhBGUjXO3R
  • RT @iranfleitas: Infographic about “The Scrum framework in 30 seconds” #agile #scrum @ScrumAlliance http://t.co/1Ypa…
  • Less than 20 tickets left for the 6th annual GIVE THANKS FOR SCRUM event 11/25 in #Boston: #scrum #agile #innovation
  • #Scrum was born in #Boston. And that is why we GIVE THANKS FOR SCRUM every year here: #agile #lean #collaboration
  • RT @DanielMezick: #Scrum was born in #Boston. And that is why we GIVE THANKS FOR SCRUM every year here: #agile #lean…
  • RT @MasterScrum: Agile Data Scientist, A Disciplined Hiker or Reckless Hunter? #agile #scrum http://t.co/TuCFkpqwAA
  • Lookin for a #ProductOwner in #WilmingtonDE that has #agile and #scrum experience check out for info #career #job
  • The dangers inherent when Key Performance Indicators (KPIs) are used as a target to drive behavior #Scrum #Agile
  • Xebia Blog: Mutation Testing: How Good are your Unit Tests? #agile #scrum
  • SolutionsIQ: Automating Application Development, SAFe, and Other Takeaways from AgilePalooza Seattle #agile #scrum
  • RT @yochum: SolutionsIQ: Automating Application Development, SAFe, and Other Takeaways from AgilePalooza Seattle #ag…
  • RT @yochum: Xebia Blog: Mutation Testing: How Good are your Unit Tests? #agile #scrum
  • RT @yochum: SolutionsIQ: Automating Application Development, SAFe, and Other Takeaways from AgilePalooza Seattle #ag…
  • RT @yochum: Xebia Blog: Mutation Testing: How Good are your Unit Tests? #agile #scrum
  • RT @yochum: SolutionsIQ: Automating Application Development, SAFe, and Other Takeaways from AgilePalooza Seattle #ag…
  • Confirmado @dbassi no #AgileTourBH 2014 #Agile #PMOT #AgileBrazil #AgileBR #ExtremAgile #SCRUM http://t.co/QHqbq0MCY1
  • Scrum Expert: User Stories for Agile Requirements #agile #scrum
  • RT @AgileTourBH: Confirmado @dbassi no #AgileTourBH 2014 #Agile #PMOT #AgileBrazil #AgileBR #ExtremAgile #SCRUM http…
  • Studie: Agile Methoden im Höhenflug – #Scrum #Kanban #DesignThinking via @heiseonline
  • FIRST LEGO League Team Sponsored By Scrum Alliance In Virginia – PR Newswire (press release) #Agile #Scrum
  • RT @scrum_coach: #Agility In All Things #scrumterms #agile #mentalagility #physicalagility #strategicagility
  • RT @yochum: Xebia Blog: Mutation Testing: How Good are your Unit Tests? #agile #scrum
  • RT @MasterScrum: Agile Data Scientist, A Disciplined Hiker or Reckless Hunter? #agile #scrum http://t.co/TuCFkpqwAA
  • RT @iqberatung: Studie: Agile Methoden im Höhenflug – #Scrum #Kanban #DesignThinking via @heiseonline
  • RT @yochum: On Software Development, Agile, Startups, and Social Networking – Isaac Sacolick: Agile Data Sci… #agi…
  • Uzility now prompts you on your team’s activity, so you can track progress even easier. Check it out #agile #scrum
  • Agile by McKnight, Scrum by Day is out! Stories via @BLupano @AgileUniversity
  • Continuing the mission… and continually improving #Scrum #Agile
  • RT @MasterScrum: Agile Data Scientist, A Disciplined Hiker or Reckless Hunter? #agile #scrum http://t.co/TuCFkpqwAA
  • RT @MasterScrum: What Does QA do on the First Day of a Sprint? #agile #scrum http://t.co/180ciIpkkt
  • RT @MasterScrum: Agile Data Scientist, A Disciplined Hiker or Reckless Hunter? #agile #scrum http://t.co/TuCFkpqwAA
  • RT @MasterScrum: Agile Data Scientist, A Disciplined Hiker or Reckless Hunter? #agile #scrum http://t.co/TuCFkpqwAA
Categories: Blogs

    People over Process – Win with People

    Agilitrix - Michael Sahota - Fri, 11/07/2014 - 04:44
    Success comes from Valuing People

    When we simplify the Agile Manifesto’s “Individuals and Interactions over Processes and Tools” we get “People over Process”. Agile is about people. It’s about a people-first culture.

    Sadly, many organizations are mired in organizational debt: mistrust, politics and fear. Changing the process won’t fix this. We need to go to the root of it – to find a way to talk about and shift to a healthier culture: one that values people.

    The VAST model (Vulnerability, Authentic Connection, Safety and Trust) shows us how we can make our workplaces more human.

    We outline a fundamentally different approach for organizational change: one where valuing people is integral to building lasting success.

    Slides from my Keynote at Lean Into Agile Conference

    Video Summary (7 minute PechaKucha)

    The post People over Process – Win with People appeared first on Catalyst - Agile & Culture.

    Related posts:

    1. Letting Go of Agile (Culture) “If you want something very, very badly, let it go...
    2. WholeHearted Manifesto: We Value People The WholeHearted Manifesto consists on one value statement: We Value...
    3. Transformation Case Study – Video Interview At Agile 2014, many people were inspired by this case...


    Categories: Blogs

    R: Joining multiple data frames

    Mark Needham - Fri, 11/07/2014 - 03:29

    I’ve been looking through the code from Martin Eastwood’s excellent talk ‘Predicting Football Using R‘ and was intrigued by the code which reshaped the data into that expected by glm.

    The original looks like this:

    df <- read.csv('http://www.football-data.co.uk/mmz4281/1314/E0.csv')
     
    # munge data into format compatible with glm function
    df <- apply(df, 1, function(row){
      data.frame(team=c(row['HomeTeam'], row['AwayTeam']),
                 opponent=c(row['AwayTeam'], row['HomeTeam']),
                 goals=c(row['FTHG'], row['FTAG']),
                 home=c(1, 0))
    })
    df <- do.call(rbind, df)

    The initial data frame looks like this:

    > library(dplyr)
    > df %>% select(HomeTeam, AwayTeam, FTHG, FTAG) %>% head(1)
      HomeTeam    AwayTeam FTHG FTAG
    1  Arsenal Aston Villa    1    3

    And we want to get it to look like this:

    > head(df, 2)
                    team    opponent goals home
    HomeTeam     Arsenal Aston Villa     1    1
    AwayTeam Aston Villa     Arsenal     3    0

    So for each row in the initial data frame we want to have two rows: one representing each team, how many goals they scored in the match and whether they were playing at home or away.

    I really like dplyr’s pipelining function so I thought I’d try and translate Martin’s code to use that and other dplyr functions.

    I ended up with the following two sets of function calls:

    df %>% select(team = HomeTeam, opponent = AwayTeam, goals = FTHG) %>% mutate(home = 1)
    df %>% select(team = AwayTeam, opponent = HomeTeam, goals = FTAG) %>% mutate(home = 0)

    I’m doing pretty much the same thing as Martin except I’ve used dplyr’s select and mutate functions to transform the data frame.

    The next step was to join those two data frames together and with Nicole’s help I realised that there are many ways we can do this.

    The functions that will do the job are rbind, union, join (from the plyr package) and merge.

    We decided to benchmark them to see which was able to transform the data frame the fastest:

    # load data into data.frame
    dfOrig <- read.csv('http://www.football-data.co.uk/mmz4281/1314/E0.csv')
     
    original = function(df) {
      df <- apply(df, 1, function(row){
        data.frame(team=c(row['HomeTeam'], row['AwayTeam']),
                   opponent=c(row['AwayTeam'], row['HomeTeam']),
                   goals=c(row['FTHG'], row['FTAG']),
                   home=c(1, 0))
      })
      do.call(rbind, df)
    }
     
    newRBind = function(df) {
      rbind(df %>% select(team = HomeTeam, opponent = AwayTeam, goals = FTHG) %>% mutate(home = 1),
            df %>% select(team = AwayTeam, opponent = HomeTeam, goals = FTAG) %>% mutate(home = 0))  
    }
     
    newUnion = function(df) {
      union(df %>% select(team = HomeTeam, opponent = AwayTeam, goals = FTHG) %>% mutate(home = 1),
            df %>% select(team = AwayTeam, opponent = HomeTeam, goals = FTAG) %>% mutate(home = 0))  
    }
     
    newJoin = function(df) {
      join(df %>% select(team = HomeTeam, opponent = AwayTeam, goals = FTHG) %>% mutate(home = 1),
           df %>% select(team = AwayTeam, opponent = HomeTeam, goals = FTAG) %>% mutate(home = 0),
          type = "full")  
    }
     
    newMerge = function(df) {
      merge(df %>% select(team = HomeTeam, opponent = AwayTeam, goals = FTHG) %>% mutate(home = 1),
           df %>% select(team = AwayTeam, opponent = HomeTeam, goals = FTAG) %>% mutate(home = 0),
           all = TRUE)  
    }
    > library(microbenchmark)
     
    > microbenchmark(original(dfOrig))
    Unit: milliseconds
                 expr   min    lq  mean median    uq max neval
     original(dfOrig) 189.4 196.8 202.5    201 205.5 284   100
     
    > microbenchmark(newRBind(dfOrig))
    Unit: milliseconds
                 expr   min    lq  mean median    uq   max neval
     newRBind(dfOrig) 2.197 2.274 2.396  2.309 2.377 4.526   100
     
    > microbenchmark(newUnion(dfOrig))
    Unit: milliseconds
                 expr   min    lq  mean median   uq   max neval
     newUnion(dfOrig) 2.156 2.223 2.377  2.264 2.34 4.597   100
     
    > microbenchmark(newJoin(dfOrig))
     
    Unit: milliseconds
                expr   min    lq  mean median   uq   max neval
     newJoin(dfOrig) 5.961 6.132 6.817  6.253 6.65 11.95   100
     
    > microbenchmark(newMerge(dfOrig))
    Unit: milliseconds
                 expr   min    lq  mean median    uq   max neval
     newMerge(dfOrig) 7.121 7.413 8.037  7.541 7.934 13.32   100

    We actually get about a 100-times speed-up over the original function if we use rbind or union, whereas with merge or join it’s around 35 times quicker.

    In this case using merge or join is a bit misleading because we’re not actually connecting the data frames together based on any particular field – we are just appending one to the other.
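
    As a side note that isn’t covered in the post, dplyr also ships an rbind_list function which does the same append-style combination. A rough sketch (rbind_list may convert factor columns to character, so check the resulting types if that matters):

    newRBindList = function(df) {
      # same reshaping as newRBind, but combined with dplyr's rbind_list
      rbind_list(df %>% select(team = HomeTeam, opponent = AwayTeam, goals = FTHG) %>% mutate(home = 1),
                 df %>% select(team = AwayTeam, opponent = HomeTeam, goals = FTAG) %>% mutate(home = 0))
    }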

    The code’s available as a gist if you want to have a play.

    Categories: Blogs

    Continuing our journey together to focus on YOUR One Shiny Object…

    Mike Vizdos - Implementing Scrum - Thu, 11/06/2014 - 21:25
    www.implementingscrum.com -- Cartoon -- May 6, 2008

    Hi.

    It’s been a while [again] since posting anything new at www.implementingscrum.com.

    I see people sharing the information (all around the world) in cubicles and Scrum team rooms.  People are linking to the comic strips from their blogs, using them in presentations, and including them in books.

    The Chicken and Pig cartoons here are now firmly established as folklore in the Scrum and Agile communities today.

    People either love or hate the story; however, an amazing thing is happening — people still talk about them AND they are still used to help start tough conversations in software development.

    Cool.

    And yet I ask myself, why am I not posting *regularly* here anymore?

    The answer is pretty simple.  I’ve moved on and evolved my interests over the years.  The comic strips are obviously not my area of *passion* anymore; they don’t get me out of bed excited to take on the world daily anymore.

    And.

    That. Is.  OK.

    There is still a boat load of information out here to share, consume, and learn with one another.

    So.

    Heading into 2015 (and beyond), I’ll still be maintaining this site and will continue to push out regular reminder postings with the old comic strips (and possibly some new information as a topic is relevant).  If you want to help me on this, please let me know (the ball is in your court on this one, and YES I am serious with this invitation).

    Where else am I headed into 2015?

    A concept I am calling, “One Shiny Object.”

    I’ve been working with people (all around the world) for many years and have found that my passion today is helping people figure out their own “One Shiny Object.”

    It’s about keeping things simple.  Removing complexity.

    It’s very similar to the concepts we live by in the Scrum and Agile world.  It goes beyond that now (and probably always has); I’ll be taking the tools in that box out beyond just the software development world.

    Out on twitter (@mvizdos there) you can catch me doing this (almost daily) now.  At conferences, this is something I am passionately talking to audiences about.

    With clients, it’s all about this:

    Focus.  #deliver

    Interested in following me (Mike Vizdos) on this new journey?

    Please jump over to www.OneShinyObject.com and let me know your name and e-mail address; from there, we’ll branch off from the topic of this blog about Implementing Scrum and then you’ll see what we can do together to focus on your One Shiny Object.

    Amazing things are already happening.  Join me (still FREE!).

    You’ll still be getting information here about Implementing Scrum, and I reiterate my humble request that you get involved here to keep things relevant in your world of Scrum today.

    It’s all a journey!

     

    Categories: Blogs

    Advanced Topics in Agile Planning

    TV Agile - Thu, 11/06/2014 - 18:27
    Velocity is perhaps the most useful metric available to agile teams. In this session we will look at advanced uses of velocity for planning under special but common circumstances. We will see how to forecast velocity in the complete absence of any historical data. We will look at how a new team can forecast velocity […]
    Categories: Blogs