Jonathan Regehr: One of the huge reasons that Garmin started down the journey with Tanzu Application Service and adopted that platform was because we knew our deployment frequency was way too low. We knew we needed to bring it up, and we wanted to track that.
And we found that this was a lot more difficult to track than we wanted. I think one of our audience members also asked, “How do you track the source data residing in multiple places?” That’s actually one of our problems. We had some teams that were making tickets when they would release, and they would detail all the things they did. Other teams weren’t doing that and were following a little more of an agile flow. So it is very difficult to track deployment frequency, especially if you think of the agile methodology, where to truly be agile, your process is constantly changing. As that process changes, you have to constantly figure out how you’re going to track what a deployment is and what is in a deployment. There are challenges to doing all that, and it does make deployment frequency difficult to track. I think at some point, we sort of decided we were just going to look at how many times we called cf push [the command developers use to deploy their applications to the Tanzu Application Service, which is based on Cloud Foundry].
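As a rough illustration of the "count cf push invocations" approach Jonathan describes, push timestamps (the data here is invented, and the source of such timestamps, such as CI logs or platform audit events, is an assumption) can be tallied into a weekly deployment-frequency trend:

```python
from collections import Counter
from datetime import date

# Hypothetical timestamps of `cf push` invocations, e.g. pulled from CI
# logs or platform audit events (the exact data source is an assumption).
push_events = [
    date(2023, 1, 2), date(2023, 1, 4), date(2023, 1, 4),
    date(2023, 1, 11), date(2023, 1, 12), date(2023, 1, 25),
]

# Group pushes by ISO (year, week) to get a deployment-frequency trend.
per_week = Counter(d.isocalendar()[:2] for d in push_events)

for (year, week), count in sorted(per_week.items()):
    print(f"{year}-W{week:02d}: {count} deployments")
```

The point of the sketch is that once you settle on a single, mechanical definition of "a deployment" (here, one push event), the metric becomes a simple aggregation rather than a per-team judgment call.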
Michael: Several years ago, I had someone ask me a similar question, and they were a Tanzu Application Service user. To agree with you: as they dug deeper trying to find release frequencies, they encountered exactly what you’re saying: “Well, how do we define a release? There are so many things going on.” It’s difficult, especially across teams, to say, “this is a release versus this is a patch.” So definitely, you have to know what your terminology means and what the items are. Paul, do you have any reflections on these metrics?
Paul Pelafas: Yeah, I do. In my previous role, the company had recently been through a digital transformation: moving from a waterfall world to more agile methodologies and setting up product teams centered around very specific domains. As you can imagine, this was a huge investment for the company. We wanted to know:
• Were we getting a return on investment? | |
• Were we making things better? Worse?
Metrics were very much at the center of trying to answer those questions. In our case, our product teams were often the guinea pigs for the new way of working in the organization. So seeing how quickly and how efficiently these teams could work with these new tools, workflows, and processes was really important.
We focused very much on DORA metrics: how quickly we could move from ideation into production and see real value from the code our engineers were producing. We could also see how quickly we responded to different issues that arise, because in software there’s a human element, and there are always going to be defects or bugs. How quickly we can respond to those and make changes that have a positive impact on our consumers is really, really important. The DORA metrics were our guiding light for our product teams.
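The DORA metrics Paul refers to (deployment frequency, lead time for changes, change failure rate, and time to restore service) can be computed from deployment records. The record shape and all numbers below are invented for illustration; this is a minimal sketch, not a description of Paul's actual tooling:

```python
from datetime import datetime, timedelta
from statistics import median

# Hypothetical deployment records: when the change was committed, when it
# shipped, whether it caused an incident, and when service was restored.
deploys = [
    {"committed": datetime(2023, 1, 2, 9),  "deployed": datetime(2023, 1, 2, 15),
     "failed": False, "restored": None},
    {"committed": datetime(2023, 1, 3, 10), "deployed": datetime(2023, 1, 4, 11),
     "failed": True,  "restored": datetime(2023, 1, 4, 13)},
    {"committed": datetime(2023, 1, 5, 8),  "deployed": datetime(2023, 1, 5, 9),
     "failed": False, "restored": None},
]

# Lead time for changes: commit-to-deploy, summarized by the median.
lead_time = median(d["deployed"] - d["committed"] for d in deploys)

# Change failure rate: share of deployments that caused an incident.
failure_rate = sum(d["failed"] for d in deploys) / len(deploys)

# Time to restore service: deploy-to-restore for failed deployments.
restore_times = [d["restored"] - d["deployed"] for d in deploys if d["failed"]]
time_to_restore = median(restore_times) if restore_times else None

print(lead_time, failure_rate, time_to_restore)
```

Deployment frequency is then just the count of records per time window, as in the earlier example; the value of tracking all four together is that it discourages optimizing speed at the expense of stability.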
[Sidebar: Tech Debt Metrics · The Human Side of Measurement · The (im)possible task of measuring customer experience]
Jonathan: You mentioned cycle time, and to me that feels a lot like deployment frequency. You also mentioned release size, and that was one of our big things. We had these huge releases and long cycle times, and then you hit production with those things. Let’s say something goes wrong. Everyone all of a sudden says, “I’ve got to figure out what the problem is.” It’s very difficult to find issues in that regard, because your release is so big that many different developers have code running in there.
Maybe something got lost in a merge somewhere, and so it made bugs a lot more difficult to squash. So I put a high value on short cycle times. Short cycle times benefit the customer, but they benefit your code quality as well, because you’re that much closer to when the code was written, and therefore that much closer to remembering what the problem might be.
Paul: Yes, that’s a great point, Jonathan.
“Our organization went from quarterly releases to multiple times daily. You can’t get there overnight. So measuring how much more efficient you are over a period of time, you can see trends, numbers, and data to help influence where you’re going.”
Paul Pelafas | |
From "Voices of the Vanguards: Our Guide to Measuring Software Delivery Through Metrics," Dec 15th, 2022. https://tanzu.vmware.com/content/ebooks/vmware-voices-of-the-vanguards |