
Nicole Forsgren

16 Podcast Episodes

Latest: 26 Nov 2022


The DEVOPS Conference replay: Nicole Forsgren on making our days better with DevOps

DevOps Sauna from Eficode

HOW TO MAKE YOUR DAYS BETTER WITH DEVOPS - KEYNOTE BY NICOLE FORSGREN

How can we support our work and improve our teamwork? DevOps originally started as a way to take care of the people doing the work while they built great software. In this talk, Nicole Forsgren discusses how DevOps practices can not only help us ship software with speed and stability, but also reduce burnout, improve our culture, and help us communicate better.

The DEVOPS Conference has been organized annually as a virtual event. On November 1, 2022, the event is going to Copenhagen, Denmark, and will be live-streamed too. Join us for a full day to discuss today's DevOps: cloud native, platform engineering, GreenOps, psychological safety and more.

Book your early bird ticket by September 12: https://hubs.li/Q01l8PZ70
Sign up for the free live streaming: https://hubs.li/Q01l8Qsy0
Watch all talks from The DEVOPS Conference 2022: https://hubs.li/Q01l9hFt0
Follow Nicole on Twitter: @nicolefv


6 Sep 2022


Behind The State of DevOps Research, Favorite Aha Moments, and Where They Are Now: Interviews with The DevOps Handbook Coauthors (Part 2 of 2: Dr. Nicole Forsgren and Jez Humble)

The Idealcast with Gene Kim by IT Revolution

In part two of this two-part episode on The DevOps Handbook, Second Edition, Gene Kim speaks with coauthors Dr. Nicole Forsgren and Jez Humble about the past and current state of DevOps. Forsgren and Humble share with Kim their DevOps aha moments and the most interesting things they've learned since the book was released in 2016. Humble discusses the architectural properties of the programming language PHP and what it has in common with ASP.NET. He also talks about the anguish he felt when Mike Nygard's book, Release It!, was published while he was working on his own book, Continuous Delivery. Forsgren talks about how it feels to see the findings from the State of DevOps research so widely used and cited within the technology community. She explains the importance of finding the link between technology performance and organizational performance, as well as what she's learned about the importance of culture and how it can make or break an organization. Humble, Forsgren, and Kim each share their favorite case studies in The DevOps Handbook.

ABOUT THE GUEST(S)

Dr. Nicole Forsgren and Jez Humble are two of the five coauthors of The DevOps Handbook, along with Gene Kim, Patrick Debois and John Willis. Forsgren, PhD, is a Partner at Microsoft Research. She is coauthor of the Shingo Publication Award-winning book Accelerate: The Science of Lean Software and DevOps and of The DevOps Handbook, 2nd Ed., and is best known as lead investigator on the largest DevOps studies to date. She has been a successful entrepreneur (with an exit to Google), professor, performance engineer, and sysadmin. Her work has been published in several peer-reviewed journals.

Humble is co-author of Lean Enterprise, the Jolt Award-winning Continuous Delivery, and The DevOps Handbook. He has spent his career tinkering with code, infrastructure, and product development in companies of varying sizes across three continents, most recently working for the US Federal Government at 18F. As well as serving as DORA's CTO, Jez teaches at UC Berkeley.

YOU'LL LEARN ABOUT

- Projects Jez and Gene worked on together before The DevOps Handbook came out.
- What life is like for Jez as a site reliability engineer at Google and what he's learned.
- The story behind his DevOps aha moment in 2004, working on a large software project involving 70 developers.
- The architectural properties of his favorite programming language, PHP, what it has in common with ASP.NET, and the importance of being able to get fast feedback while building something.
- The anguish Jez felt when Mike Nygard's book, Release It!, came out, wondering if there was still a need for the book he was working on, Continuous Delivery.
- "Testing on the Toilet" and other structures for creating distributed learning across an organization, and why this is important to create a genuine learning dynamic.
- What Dr. Forsgren is working on now as a Partner at Microsoft Research.
- Some of Dr. Forsgren's goals for the State of DevOps research and how it feels to have those findings so widely used and cited within the technology community.
- The importance of finding the link between technology performance and organizational performance, and why it was so elusive for at least 40 years in the research community.
- What Dr. Forsgren has learned about the importance of culture, how it can make or break an organization, and the importance of great leadership.

RESOURCES

- Personal DevOps Aha Moments, the Rise of Infrastructure, and the DevOps Enterprise Scenius: Interviews with The DevOps Handbook Coauthors (Part 1 of 2: Patrick Debois and John Willis)
- The DevOps Handbook: How to Create World-Class Agility, Reliability, and Security in Technology Organizations, Second Edition, by Gene Kim, Patrick Debois, John Willis, Jez Humble, and Dr. Nicole Forsgren
- Nudge: Improving Decisions About Health, Wealth, and Happiness by Richard H. Thaler and Cass R. Sunstein
- Nudge vs Shove: A Conversation With Richard Thaler
- The Visible Ops Handbook: Implementing ITIL in 4 Practical and Auditable Steps by Kevin Behr, Gene Kim and George Spafford
- FlowCon
- Elisabeth Hendrickson on the Idealcast: Part 1, Part 2
- Cloud Run
- Beyond Goldilocks Reliability by Narayan Desai, Google
- Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation by Jez Humble and David Farley
- Release It!: Design and Deploy Production-Ready Software (Pragmatic Programmers) by Michael T. Nygard
- DevOps Days
- On the Care and Feeding of Feedback Cycles by Elisabeth Hendrickson at FlowCon San Francisco 2013
- Bret Victor
- Inventing on Principle by Bret Victor
- Media for Thinking the Unthinkable
- Douglas Engelbart and The Mother of All Demos
- 18F
- Pain Is Over, If You Want It at DevOps Enterprise Summit - San Francisco 2015
- Goto Fail, Heartbleed, and Unit Testing Culture by Mike Bland
- Do Developers Discover New Tools On The Toilet? by Emerson Murphy-Hill, Edward Smith, Caitlin Sadowski, Ciera Jaspan, Collin Winter, Matthew Jorde, Andrea Knight, Andrew Trenk and Steve Gross (PDF)
- Study: DevOps Can Create Competitive Advantage
- DevOps Means Business by Nicole Forsgren Velasquez, Jez Humble, Nigel Kersten and Gene Kim
- Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations by Nicole Forsgren, PhD, Jez Humble, and Gene Kim
- DevOps Research and Assessment (DORA) on Google Cloud
- GitLab Inc. takes The DevOps Platform public
- Paul Strassmann
- The Idealcast with Dr. Ron Westrum: Part 1, Part 2
- Building the Circle of Faith: How Corporate Culture Builds Trust at Trajectory Conference 2021
- The Truth About Burnout: How Organizations Cause Personal Stress and What to Do About It by Christina Maslach and Michael P. Leiter
- Maslach Burnout Inventory
- Understanding Job Burnout at DevOps Enterprise Summit - Las Vegas 2018
- Understanding Job Burnout at DevOps Enterprise Summit - London 2019
- Workplace Engagement Panel at DevOps Enterprise Summit - Las Vegas 2019
- Expert Panel - Workplace Engagement & Countering Employee Burnout at DevOps Enterprise Summit - London 2019
- The Idealcast with Trent Green
- Kelly Shortridge's tweets about the GitLab S-1

TIMESTAMPS

[05:22] Intro
[05:34] Meet Jez Humble
[10:19] What Jez is working on these days
[15:56] What informed his book, Continuous Delivery
[24:02] Assembling the team for the project
[26:30] At what point PHP's properties became important
[31:56] The most surprising thing since The DevOps Handbook came out
[35:07] His favorite pattern that went into The DevOps Handbook
[43:40] What DevOps worked on in 2021
[44:46] Meet Dr. Nicole Forsgren
[50:32] What Dr. Forsgren is working on these days
[52:18] What it's like working at Microsoft Research
[55:58] The response to the State of DevOps findings
[59:18] The most surprising finding since the findings' release
[1:05:59] Her favorite pattern that influences performance
[1:08:49] How Dr. Forsgren met Dr. Ron Westrum
[1:11:06] The most important thing she's learned in this journey
[1:14:46] Her favorite case study in The DevOps Handbook
[1:19:12] Dr. Christina Maslach and work burnout
[1:20:46] More context about the case studies
[1:25:32] The Navy case study
[1:29:04] Outro

1hr 29mins

27 Jan 2022



MasterTips: DevOps panel with Nicole Forsgren, Kelsey Hightower and Greg Wilson

The DevRelX Podcast (ex Under the Hood of Developer Marketing)

Trick question of the week: DevOps or not?

Today's Under the Hood of Developer Marketing podcast episode is called "DevOps or not?". And who is more suited to talk about it than the leaders in the field? This episode is part of our MasterTips series, which includes tips from industry leaders from the Future Developer Summit, an exclusive event for developer marketing industry leaders.

The panel discussion focuses on DevOps and how to implement it and measure its success, with the following industry leaders:
- Greg Wilson, Director of Cloud Developer Relations at Google
- Nicole Forsgren, VP of Research and Strategy at GitHub
- Kelsey Hightower, Staff Developer Advocate at Google Cloud

Some of the topics discussed are:
- What is DevOps?
- Is DevOps something that small companies could consider too, or is it only for big ones?
- Where do you start the discussion to implement DevOps?
- What do you implement? Does it always work?
- What does "tooling" mean when it comes to DevOps?
- How can you measure success in a DevOps implementation?
- What are your suggestions to leaders and practitioners in the field?

All these questions are answered in the latest Under the Hood of Developer Marketing episode, "DevOps or not?".


19 Nov 2020


Devops and High Performing Teams: A Q&A with Nicole Forsgren, GitHub

Empirical Software Engineering Banter

Dr. Nicole Forsgren, VP of Research and Strategy at GitHub and coauthor of "Accelerate: The Science of Lean Software and DevOps", shares her journey from NLP and ethnographies of sysadmins to DevOps and her studies of team performance. In the Q&A, she discusses how high-performing teams do not trade off speed against stability; they do well on both. She also discusses the impact of culture on DevOps, how high-performing teams may be less susceptible to burnout, and how to think about individual developer productivity.

This Q&A was recorded live as part of a workshop on Continuous Software Engineering at a senior topics course in Empirical Software Engineering at the University of Victoria on Oct 16th, 2020. In preparation for the workshop we read/watched the materials posted on this page: https://github.com/margaretstorey/EmseUvic2020/blob/master/resources/contSE.md

This Q&A is also available on YouTube.


22 Oct 2020



Episode 16: Dr. Nicole Forsgren & Dr. Denae Ford Robinson on Breaking Developer Persona Stereotypes

Mik + One: The Official Project to Product Podcast by Dr. Mik Kersten

In this episode, Mik is joined by two guests: Dr. Nicole Forsgren, VP of Research & Strategy at GitHub, and Dr. Denae Ford Robinson, Senior Researcher in the SAINTes group at Microsoft Research. Mik, Nicole and Denae had an energetic and insightful conversation about many topics, including:
- Measuring productivity, and how leaders can utilize these insights to build diverse systems and measures to strengthen inclusion and diversity at scale
- The importance of making the workflow, conversations and collaborations visible
- Creating a company culture that embraces diversity and inclusion, and encourages employees to bring their authentic selves and ideas to the workplace
- Breaking the stereotypes of developer personas and creating a safe working environment that encourages the intersection of personal and professional identities
- The social fabric of an organization, and how evolving technological architecture can increase flow and visibility and create better outcomes

Subscribe to the Mik + One podcast today so you never miss an episode, and don't forget to leave your review. Follow Mik on Twitter: @mik_kersten #MikPlusOne
www.tasktop.com

For more information about Dr. Nicole Forsgren & Dr. Denae Ford Robinson, to view the full list of resources featured in this episode, and to download the episode transcript, visit: https://projecttoproduct.org/podcast/dr-nicole-forsgren-dr-denae-ford-robinson/


8 Sep 2020


2019 Accelerate Report with Dr. Nicole Forsgren

DevOps Chat

There are many interviews you do as part of the role of editor in chief of DevOps.com. Then there are some that make it all worthwhile. Anytime I have the pleasure of speaking with Dr. Nicole Forsgren, it makes everything else I do worthwhile. She has a clarity of vision borne from the foundation of verifiable metrics she has measured and surveyed over the last six years. Just about every DevOps presentation you will see has some reference to her Accelerate: State of DevOps Report, and for good reason: it has become the "authority" on DevOps metrics. We sat down with Nicole to go over what she thinks are some of the key findings in this year's report. Have a listen and enjoy.


6 Oct 2019


How to Grade DevOps Teams with Nicole Forsgren, PhD

Screaming in the Cloud

About Nicole Forsgren, PhD

Dr. Nicole Forsgren does research and strategy at Google Cloud following the acquisition of her startup DevOps Research and Assessment (DORA) by Google. She is co-author of the Shingo Publication Award-winning book Accelerate: The Science of Lean Software and DevOps, and is best known for her work measuring the technology process and as the lead investigator on the largest DevOps studies to date. She has been an entrepreneur, professor, sysadmin, and performance engineer. Nicole's work has been published in several peer-reviewed journals. Nicole earned her PhD in Management Information Systems from the University of Arizona, and is a Research Affiliate at Clemson University and Florida International University.

Links Referenced:
Twitter: @nicolefv
LinkedIn: https://www.linkedin.com/in/nicolefv/
Personal site: nicolefv.com
Company site: cloud.google.com/devops
X-Team: x-team.com/cloud

Transcript

Announcer: Hello and welcome to Screaming in the Cloud, with your host, cloud economist Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey Quinn: This week's episode of Screaming in the Cloud is sponsored by X-Team. X-Team is a 100% remote company that helps other remote companies scale their development teams. You can live anywhere you like and enjoy a life of freedom while working in first-class company environments. I gotta say, I'm pretty skeptical of "remote work" environments, so I got on the phone with these folks for about half an hour, and let me level with you: I've gotta say I believe in what they're doing, and their story is compelling. If I didn't believe that, I promise you I wouldn't say it. If you would like to work for a company that doesn't require that you live in San Francisco, take my advice and check out X-Team. They're hiring both developers and DevOps engineers. Check them out at the letter x, dash, team, dot com, slash cloud. That's x-team.com/cloud to learn more. Thank you for sponsoring this ridiculous podcast.

Corey Quinn: Welcome to Screaming in the Cloud. I'm Corey Quinn. I am joined this week by Dr. Nicole Forsgren. Nicole, welcome to the show.

Nicole Forsgren: Thanks so much for having me.

Corey Quinn: Thank you for joining me. So, you work at Google Cloud these days as VP of Research and Strategy.

Nicole Forsgren: I mean, let's call that aspirational. I'm not a VP just yet.

Corey Quinn: I understand Google's org chart has not caught up with your magnificence. Other people are willing to cut them slack. I am not. You are a VP to me. You will remain a VP, and eventually the business cards will reflect that very bright reality.

Nicole Forsgren: I'll take it. Yeah. Right now, my title is Research and Strategy.

Corey Quinn: Yes, you've done so much that it's difficult to even figure out where to start with what you've done and who you are, so let's take it in stages. You somewhat recently wrote the book Accelerate: The Science of Lean Software and DevOps, which is a fascinating book. I recommend that people check it out if they're at all interested in, I guess, putting a little bit of data to anecdata, but that's not where you really began. To do that, let's go back to the very beginning. Who are you?

Nicole Forsgren: That involves me just starting out in a small farm town in Idaho, but maybe we want to go farther back than that. It's interesting, because some people are like, "Oh, you're just a researcher. You're just an academic." But I'm glad you asked this, because I started out as a software engineer. Well, I guess I started as a programmer.
Nicole Forsgren: I was on mainframe systems, and then I was a software engineer at IBM. So, I was developing systems, and then, as I swear happens so often, I had to maintain my own systems. So, then I was a sysadmin. I was running my own systems, and then I ended up doing some consulting for a bit, because I wanted to help other people run their systems, and build their systems, and solve more interesting problems.

And then, I actually ended up in hardware for a bit. I was running RAID, which is kind of a blast from the past, right? We don't do RAID the same way we used to do RAID.

Corey Quinn: Well, not on purpose anyway.

Nicole Forsgren: I know, right? And then, I ended up going to get my PhD, because, cycling through some of these consulting problems, and even solving some of the problems in larger organizations, since I was bouncing back and forth between consulting and IBM for those last several years, it felt like I was answering many of the same complex organizational problems in the same way. And in particular, when I was going to management and suggesting solutions, many times they were saying, "Oh, well that won't work here," or, "Well, I know that worked there, but that won't solve this problem."

Nicole Forsgren: And I was thinking, "Well, there has to be some way to solve this that's more generalizable. I wonder if there's a classic problem that can be solved in similar ways." So, that kind of led to the PhD and doing some research.

Corey Quinn: What is your PhD in?

Nicole Forsgren: So, my PhD is in MIS, Management Information Systems. And the reason I chose MIS as opposed to computer science is I liked the fact that I could link technology and computering things with business outcomes, right? MIS is inherently an interdisciplinary field, and back in the day it was unique because it really specifically was linking and tying computer science concepts to business outcomes. That really is what I've done for over a decade now: find ways to deliver business outcomes, or organizational outcomes, or team outcomes from computer types of things, like capabilities and practices. So, this is such a hipster term, but it's like, "I was doing it before it was called DevOps."

Nicole Forsgren: And really, I kind of was. I started doing my research in this area in '07, which is pretty parallel to a lot of the DevOps movement. And then, I finished my PhD in '08.

Corey Quinn: Excellent. So, one could say almost that you've brought ivory tower academia into the streets?

Nicole Forsgren: Actually, yeah. In many ways I did. And also that was in parallel with a handful of other academically rigorous research. There were a handful of people, about the same time I was doing my research, at IBM Watson Labs, right? So Cadigan, and Maglio, and Haber, a handful of people there, were studying sysadmins specifically and some of their work practices. I started a bunch of my research with sysadmins as well, going to the LISA Conference, and a few years later I chaired LISA.
Nicole Forsgren: Then, I expanded my research to include developers and other engineers, software engineers, and a bunch of my work was focusing on how capabilities and practices in tooling, or automation, or process, or culture had impacts at the team, individual, and then organizational level, which, if we think about it, kind of is how we think about and define DevOps now, right?

Nicole Forsgren: It's tooling and automation, it's process, and it's culture, and how that has impacts at largely the software development and delivery, and then organizational level, how we deliver value.

Corey Quinn: All of that is made manifest in this year's State of DevOps Report, an incredibly thorough, academically researched paper, except that a human being can read it. That's probably the best way to frame it from my perspective.

Nicole Forsgren: Yes. I often joke that I speak two and a half languages: English, academic English, and a little bit of Spanish.

Corey Quinn: Also, add math to that list.

Nicole Forsgren: Yes, yes, a little bit of math, more statistics than other types of math. And what we try to do is take this really academically rigorous work and translate it, not just translate it, but also make it very, very accessible to people so that they can use it. Right? So, I've been leading, running, and conducting the State of DevOps Reports for six years now, starting in 2014, now through 2019, so these reports are super accessible. I joke it's like an adult picture book, right? Like, we have large type, we have graphics, we have pictures. It's very easy to flip through. It's about 80 pages, but it's very large print. This is not dense text.

Corey Quinn: Oh, and it's so gorgeously designed. I had to triple-check to validate that you folks were still part of Google.

Nicole Forsgren: I have to say my copy editor and my designer are fantastic. Cheryl Coupe and Siobhan Doyle are unbelievable, unbelievable to work with. I will say the last couple of weeks of copyedit and design are a little intense. They're a little rough, but they turn around the most gorgeously designed work, and they really helped me. We worked very closely together to make sure that it's very accessible. It's easy to read, it's easy to navigate. We're working to put out a couple of pages of an executive summary as well. So, if you just want to flip through and find something really quick, that's available as well.

And then, in addition to this, like you mentioned, my co-authors for the book, Jez Humble and Gene Kim, and I also pulled together the first four years of the research into something a little more detailed, right? That includes additional descriptions of the capabilities we've researched, additional information about the outcomes we've measured, more detailed information on the statistical methods and what they mean, and the methodology and where the data comes from, and why we choose the statistical methods that we do. And then, part three included a contribution by previous Shingo winners Karen Whitley Bell and Steve Bell, a case study out of ING Netherlands. And then, the book itself just won a Shingo. And I will take them out of-

Corey Quinn: Congratulations.

Nicole Forsgren: Thank you. It's the first time, as far as we can tell, that a Shingo has ever been awarded to anything in technology. Now, I will say that came out of 2014 to 2017, so we have two more State of DevOps reports, research projects, that have been published since then. So, my editor keeps pinging me, asking for a second edition. So, as soon as I take a few naps, we will work on that. And I did want to mention really quickly, I highlighted the authors for the book, but also the authors for this year's report. I led, I was first author. Dustin Smith is a researcher who joined this year's report. He was fantastic. He has a PhD and taught stats for five years. So, he was wonderful. And joining this year's report, Jez Humble was third author, and then Jessie Frazelle joined as an author this year as well. She was wonderful, wonderful to work with.

Corey Quinn: She's been a previous guest on this show, and we'll absolutely have to have her back to talk more about some of this.

Nicole Forsgren: Yeah, I think she's going to join us on another podcast where we will dig into all sorts of cloud and open source excitement that we covered this year.

Corey Quinn: Excellent. Excellent. So, before we dive in too far into the intricacies of this year's report-

Nicole Forsgren: Oh, there are so many things.

Corey Quinn: And there are, but the problem I've seen in most reviews and most discussions around the State of DevOps Report is that no one starts off with a primer for someone who's never heard of it before. So, from that perspective, guide me through it. What is the Accelerate State of DevOps Report? Where did it come from? What is it for, and why do I care?

Nicole Forsgren: So, what would you say it is you do here, Nicole?

Corey Quinn: Exactly.

Nicole Forsgren: So, the nice thing about this report, and the thing that makes it so unique and so different, is that this is not just another vendor report, right? We're not selling a technology; we do not talk about vendor tooling or products anywhere in the report. I think there's one line that lists a whole bunch of tools as an example, right? What we do instead is we investigate the capabilities and practices that are predictive. So, if someone says, "I'm doing the DevOps," or whatever you want to call it, find and replace, whatever your company is doing, whether it's technology transformation, or digital transformation, or DevOps, you say, "I want to know what types of things are actually impactful, which things are actually predictive of success in a statistically meaningful way."

Now, go back a little bit, right? Hit rewind on this podcast.
Nicole Forsgren: Remember how I said I used to do consulting, or I used to do these things in my organization, and my manager always said, "Ah, that's not gonna work here"? Well, this helps answer that. It says, in a statistically meaningful way, these things will actually have an impact. There's a high likelihood this will work. So, this research takes an academically rigorous approach; I designed it from a research design, PhD-level standpoint. We designed this research to test a bunch of hypotheses, to say, "According to the research, according to existing literature, according to lots of other things, these types of things have a good likelihood of making a difference in lots of different types of organizations. What will actually work?"

Then, we collect a bunch of data, and then we see, "Okay, what works in a statistically meaningful way? What does the evidence show?" Now, I'm going to break that down a bit. I say capabilities and practices, but we don't test tools. The reason we don't test tools is because, well, first of all, there are a million different tools; that's going to be too hard. Also, tools change, right? Feature sets change, capabilities change, lots of different things change. So, instead what we do is we test capabilities and practices, because what that does is give you an evaluative framework. So, then you can go back. You can go back to your organization. You can go back to your team, whether you're an IC or you're a leader, and you can say, "Okay, these types of things will work. These types of things have a high likelihood of working."

Okay, so take CI: CI has a high likelihood of meaning that you will be more successful in developing and delivering software with speed and stability. What does CI mean? Everyone has redefined CI to be their own special thing, so in order for CI to be impactful, what does it mean? It means when you check in code, it results in a build of software. When you check in code, automated tests are run. You need to have automated builds and tests running successfully every day, and developers need to see those results every day. Those four things need to be happening. Now, anyone can go back to their CI tool set of choice and they can say, "Are these four things happening?"

Corey Quinn: What I find fascinating about all of this, as I read it: first, you brought the data. So, every time I see someone starting to argue from an anecdote, or pull a "well, actually" against anything that you ever list in these reports, it's screamingly funny to me. I just immediately cringe and hide behind the tarp, because there's going to be a bloody red mist where that person used to be by the time you're finished with them, metaphorically speaking. You bring the data and they're [crosstalk 00:15:58]-

Nicole Forsgren: I can be polite.

Corey Quinn: You are.

Nicole Forsgren: But, yeah, I've got data.

Corey Quinn: Yes, and what you say is right.

Nicole Forsgren: And we retest many things every year. We revalidate things; some things have been revalidated for six years. Now, not everything needs to be revalidated every single year. We rotate them in and out, but we also do the revalidation thing, right? So, when something really has been revalidated for several years, you can fight with me if you want, but if it's not working for you, maybe you're not actually doing it. Maybe it's not actually automated. Maybe it's hidden behind a manual gate. Like, you're putting it in ServiceNow and you're waiting for a person to click it. I love you, but I award you no points, may God have mercy on your soul.

Corey Quinn: Exactly.

Nicole Forsgren: Like, citing Billy Madison. What part of this thing is not actually working? What part does not match?

Corey Quinn: Right.
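The four CI practices Forsgren lists form a simple evaluative checklist. As a minimal sketch of that idea (the practice names and the dict-of-booleans data shape are illustrative assumptions, not any particular tool's API):

```python
# The four CI practices described in the interview, encoded as a checklist.
CI_PRACTICES = [
    "checkins_trigger_build",       # checking in code results in a build
    "checkins_trigger_tests",       # checking in code runs automated tests
    "builds_and_tests_pass_daily",  # builds and tests succeed every day
    "developers_see_results_daily", # developers see those results every day
]

def ci_health(team_practices: dict) -> list:
    """Return the practices a team is missing; an empty list means all four are in place."""
    return [p for p in CI_PRACTICES if not team_practices.get(p, False)]

# Hypothetical team self-assessment:
team = {
    "checkins_trigger_build": True,
    "checkins_trigger_tests": True,
    "builds_and_tests_pass_daily": False,
    "developers_see_results_daily": True,
}
missing = ci_health(team)  # -> ["builds_and_tests_pass_daily"]
```

A team could run this kind of check against its own CI setup to answer exactly the question posed above: "Are these four things happening?"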
What I like about this before is that I did a lot of digging into it last year when I saw this, and really paid attention to it, is you come up with this idea of performance profiles, where you talk about high performing teams, elite performing teams, low performing teams, and I always wondered, didn't get the time-Nicole Forsgren: People get real defensive, people get real defensive.Corey Quinn: Well, that's what I wanted to ask you about, to some extent. Very few people self identify as, "Yeah, as far as performance goes, are company is complete crap. Thank you for asking." People like to speak aspirationally about their own work and unless you wind up working at Uber, generally you don't show up hoping to do a crappy job today at most companies. So, there's a question around, how do you wind up assessing whether a team is high performing, low performing, et cetera. Since this is all based on survey responses, you don't get to actually look at output of teams other than what people self-report. Correct?Nicole Forsgren: Right, or do you know what also is interesting is occasionally these bands change, and the people are like, "Why did it change? How did it change? This should be a static low, medium, high elite performance category. I need to have a goal to point to because then I can arrive and I could be done." I've had people tell me that, and I'm like, "But that's not how the world works. The industry is changing, the industry is moving. We don't make software today like we made software 20 years ago. Why would that make sense?" And so, I love this question because what we do is we collect data along four key metrics. These have been termed the four key metrics. 
So, we've been actually collecting this data for six years now, and it's interesting. ThoughtWorks actually started calling them the four key metrics, and enterprises around the world, across all types of industries, have started tracking these and using these as outcome metrics to track their technology transformation. Now, these four metrics fall into two categories: speed metrics and stability metrics. Now, I'm going to come back to these, but I'll explain the process really quickly and then we'll come back.
What I do every year ... so, what I said is, I don't just arbitrarily decide this is low performance, here's a line, this is medium performance and here's where you are, and this is high performance and here's where we are. And then it's like, set it and forget it, and let everyone decide where they are. Because the industry changes. So, why would it make sense for me to just make something up and let everyone set themselves according to that? We are very data-driven. We want to see what's happening. What's important is for us to set and collect the metrics that are outcome metrics.
So, we use speed and stability. The reason we choose speed and stability is because they are system-level outcome metrics. We're talking about DevOps, right? We're talking about pulling together groups with seemingly opposing goals. Developers want to push code as often as possible, which introduces change and possibly instability in the systems. You have operators, sysadmins, who want to have stability in systems, which means they might want to reject changes. They may want to reject code. So, can you see how it makes sense, Corey, that we may want to have these two metrics? Because the goal of an organization is to deliver value, but you also want to have stable systems. So, we want to have both of those metrics in place, right? It's like a yin and a yang. So, we capture both of these, because if you're only pushing code, that doesn't help.
But if you only have stable systems, if I only ever say no, then I never get changes. It's not just features, it's things like keeping up with compliance and regulatory changes. It's keeping up with security updates, keeping up with patches. So, I capture these four metrics, and what I do ... Okay, I'm going to tell you what these four metrics are. Here are my four metrics. I've got deployment frequency: how often do I push code? This is important to developers. It's important to infrastructure engineers, right? I also have lead time for changes: how long does it take me to get code through my system? I measure this as code commit to code running in production. Now, from the stability point of view, I've got time to restore service. So, how long does it generally take to restore service anytime I have any type of service incident or a defect that impacts my service users, like an unplanned outage or a service impairment? And then I've got change failure rate. That's my fourth metric, my other stability metric. So, what percentage of changes to production result in any kind of degraded service, anytime it requires someone's attention? So, a service impairment, a service outage, anytime it requires remediation, like a hotfix, or a rollback, a fix forward, a patch.
So, what I do is I take, like I mentioned, a very data-driven approach. I take these four metrics, I throw them in the hopper, and I see how they group. It's called cluster analysis, because I want to see how they cluster. And what I have seen for the last six years in a row is that these four metrics cluster in distinct groupings. This year, they fell into four distinct groups. So, you've got a group at the high end, where all four metrics group well, I'll say, where they group well together.
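As a rough illustration (not part of the DORA methodology itself, which works from survey responses), two of the four metrics Forsgren defines here can be computed directly from a deployment log. The log format and field names below are hypothetical:

```python
from datetime import date

# Hypothetical deployment log: (deploy date, whether the change degraded
# service and needed remediation such as a hotfix, rollback, fix-forward,
# or patch).
deploys = [
    (date(2024, 1, 2), False),
    (date(2024, 1, 9), True),
    (date(2024, 1, 16), False),
    (date(2024, 1, 23), False),
    (date(2024, 1, 30), True),
]

def deployment_frequency_per_week(deploys):
    """Average deploys per week over the span of the log."""
    days = (deploys[-1][0] - deploys[0][0]).days or 1
    return len(deploys) / (days / 7)

def change_failure_rate(deploys):
    """Fraction of production changes that required remediation."""
    return sum(1 for _, failed in deploys if failed) / len(deploys)

print(deployment_frequency_per_week(deploys))  # 1.25 deploys/week
print(change_failure_rate(deploys))            # 0.4
```

Lead time for changes and time to restore service would similarly need commit timestamps and incident records joined to this log, which is why Forsgren notes that fully instrumenting all four can be a multi-year effort while rough versions are straightforward.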
You're deploying on demand, your lead time for changes is less than a day, your time to restore service is less than an hour, and your change failure rate is low, between zero and 15%. So, your elite performers are optimizing for all four, right? You're going fast and your stability is good. Okay, so I've got a group up there. Then, I've got a gap. Then, I've got a group. Then, I've got a gap. Then, I've got a group, a cluster. Then, I've got a gap. Then, I've got a cluster. By the way, all of these groups, these clusters, were statistically significant. They're significantly similar to each other and different from the other groups. So, what that tells me is that speed and stability don't have trade-offs. You don't have to sacrifice speed for stability, or stability for speed. Now, that's not necessarily what we heard for a long time. We used to think that in order to be stable, you had to slow down, but that's not what we see, and that's not what we've seen for six years now. The low performance group, their deployment frequency is between once a month and once every six months. Lead time for changes to get through that pipeline, the same thing: between one month and six months. Their time to restore service is between one week and one month. And then, that change failure rate is in that area between 46% and 60%.
Okay, so now, I'm going to get back to a question you just asked me. How can people answer these questions for me when they're survey questions? You'll notice that I'm asking things in ranges. I'm not asking for millisecond response times. I'm asking for things on a scale, a log scale. People can tell me if they're deploying on demand, or they can tell me if they're deploying about once a week, or deploying about quarterly, or deploying just a couple of times a year, right? People can tell me that, or they can tell me, when things go down, how long it takes to restore service. About a day, about a month.
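The thresholds Forsgren quotes can be sketched as a simple classifier. Note this is a toy: the real report derives its bands each year via cluster analysis, not fixed cutoffs, and only the elite and low extremes mentioned in this conversation are encoded here (the parameter encoding is my own assumption):

```python
def classify(deploy_on_demand, lead_time_days, restore_hours, change_failure_rate):
    """Toy two-band classifier using only the elite and low thresholds
    quoted in the interview; the real bands come from clustering."""
    # Elite: deploy on demand, lead time < 1 day, restore < 1 hour, CFR 0-15%.
    if (deploy_on_demand and lead_time_days < 1
            and restore_hours < 1 and change_failure_rate <= 0.15):
        return "elite"
    # Low: lead time one to six months, restore one week to one month,
    # CFR 46-60%.
    if (not deploy_on_demand and lead_time_days >= 30
            and restore_hours >= 24 * 7
            and 0.46 <= change_failure_rate <= 0.60):
        return "low"
    return "somewhere in between"

print(classify(True, 0.5, 0.5, 0.10))     # elite
print(classify(False, 90, 24 * 10, 0.5))  # low
```

The log-scale survey ranges she describes next map naturally onto such coarse buckets: respondents pick "on demand" versus "about quarterly" rather than exact figures.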
So, what I'm asking is in those time increments that go up on a log scale; people can answer those questions. Does that answer it?
Corey Quinn: No, that absolutely does. The question that I have then is, when you assimilate all of that and you read this, there's an awful lot of data in here, and there's an awful lot that, shall we say, inspires passion in people who are reading it. For example, last year there was a kerfuffle that generally low-performing teams tend to outsource an awful lot of technology. This was hotly debated and found to be completely without merit by outsourcing companies.
Nicole Forsgren: By outsourcing companies.
Corey Quinn: Exactly.
Nicole Forsgren: Now, I will say that it was highly correlated, and we did make a careful distinction that that was outsourcing by function. And so, what happens there is, it's outsourcing if you take an entire batch of something and you throw it over a wall, and you let them disappear for a while, and then they throw it back to you later. So, if you take all of development and you let them go do something and come back later, or if you take all of operations and you throw it away and you never, ever see it. That is not what happens if you have a vendor partner that operates with you at the cadence of work. Because what often happens with outsourcing by function is you have introduced delay. Introducing delay, I love that you brought this up here, what we've seen is that introducing delay can introduce instability. Because when you have delay, it leads to batching up of work. Batching up of work leads to a larger blast radius. A larger blast radius, when you finally push to production, leads to greater instability. And that higher likelihood of downtime also means the larger piece of code you pushed is harder to debug.
So, it's harder to restore service.
Corey Quinn: You used to be a programmer, as you said at the beginning of this show, so it's always easier to think about what the bug could be in the code that broke the build three minutes ago instead of that code you wrote three weeks ago.
Nicole Forsgren: Yup, exactly. And now, you've got this giant ball of mud that you pushed instead of this nice, tiny, little tight package that you pushed.
Corey Quinn: Exactly. And this is really, I guess, the point that I'm getting to here: if people want to read something and then feel bad and not change anything, we have something for that already. It's called Twitter. What impact do you find that these reports have in the world? What changes are companies making based upon these findings?
Nicole Forsgren: So, we've seen huge impact. As I mentioned, we're actually seeing several organizations using these four key metrics as a way to guide their transformation. The nice thing is that it's actually really difficult to fully instrument a full metrics platform to capture and correlate metrics that reflect your full tool chain. People are like, "Oh, we'll capture system-based metrics." That can be a two-to-four-year journey. Capturing, in broad strokes, your four key metrics of deployment frequency, lead time for changes, mean time to restore, and change failure rate can be at least relatively straightforward. You can capture these on a team level to see how well you're doing, and if you're at least generally moving in the right direction. So, that helps. And then, what you can do is you can say, "Okay, what types of things should I be focusing on to improve?" And then, you can identify the capabilities that have generally been shown to help improve, and come up with that list. We actually outlined in this year's report what types of things ... It's sort of choose your own adventure, right?
So, in this year's report we have the performance model, which is helping you improve your software delivery performance. And then, we have a productivity model, but start with this model. If this is software delivery performance, and that's what you want to improve, great. Then work backwards. Which types of things, which capabilities, improve it? Start with that list. Once you have that list, no, that does not mean that you start working on every single capability that improves it, because after six years of research, that list is 20 or 30 capabilities long. But that's your candidate list. This is the list of all the possible things you could improve. But you look at that list and you say, "Which things are my biggest problems right now?" So, adopt a constraint-based approach. What's my biggest constraint? What's my biggest hurdle right now? Pick three or four. Devote resources there. Now, I say resources; that doesn't always mean money, although money is nice. That could be time. That could be attention. That can be anything, right? Focus there first, spend six months there, and then come back and reevaluate: "Is this still my hardest challenge?" It can be automation. It can be process, like, "Am I having a really hard time with WIP limits? Am I having a really hard time breaking my work into small batch sizes? Can I deliver something in a week or less?" It could be that.
Corey Quinn: Well, ask any software engineer, "Oh, I can build that in a weekend." You can deliver anything in a week. It's easy. Just ask them.
Nicole Forsgren: But can I do it without burning myself out?
Corey Quinn: Oh, now you're adding constraints.
Nicole Forsgren: I know, heaven forbid.
Corey Quinn: You've been doing this for six years. As you look at this year's State of DevOps report, what new findings, or I guess old findings for that matter, surprised you the most?
Nicole Forsgren: We had a couple.
So, an additional thing that we asked this year was about scaling strategies. What types of things are you seeing in your organization to help you scale DevOps? That's a big question I get constantly: how do I scale? What's the best way to scale? A couple of things aren't big surprises, right? Centers of excellence: not great. Big bang: not great. Big bangs are used most often by low performers. It doesn't necessarily mean that it's a bad thing, it's just that it's usually only used in the most dire of circumstances. When you really have to wipe the slate clean and start over, you need to be most prepared for a long-term transformation. Something that was a bit of a surprise, but also not, can I answer it that way? A surprise, but also not a surprise, is that dojos aren't commonly used among the highest performers. What we see is that the highest performers, so those that are high performers and elite performers, the top 43% of our respondents, focus on structural solutions that build community. So, what does that mean? What that means is that those types of solutions focus on things like building up communities of practice, building up grassroots efforts, and building up proofs of concept, because these types of things will be resilient to re-orgs and product changes. We don't see things like dojos, like training centers and centers of excellence, as much, because they require so much investment. They require so many resources. We do see them, but we only see them 9% of the time. When we share this finding with a handful of people, they're shocked, because they hear about it so much. The thing is, though, they only hear about it among a handful of cases that have been successful, and those successful cases had tons of resources. They had entire buildings set out, they had entire education teams, they had curriculum teams, they had training teams.
They also had amazing PR.
Corey Quinn: Absolutely.
Nicole Forsgren: I think that was something that at first was surprising, because I'm like, "It's so low." But then I realized I've only heard about it in a couple of cases, and it's the cases where they have immense, immense resources.
Corey Quinn: One of the things I always found incredibly valuable about the reports is, if you go to conferences and listen to people talk about whatever it is they're doing at their own workplaces, everything sounds amazing and wonderful, and it's all a ridiculous fantasy. Everyone's environment is broken, everyone works in a tire fire, and there's not a lot of awareness, I think, in some circles that that's the case. So, whenever someone looks at their own environment and compares it to what they see on stage, it looks terrible. This starts putting data to some of those impressions and, I guess, contextualizing that in the larger sense. A question that I do have, and I don't know if the study gets into this in any significant depth: is it possible for an organization to simultaneously be high performing and low performing, either along different axes, or in different divisions?
Nicole Forsgren: Oh, absolutely, and I'm glad you asked that. We try to highlight this and we never do a good enough job. We do reiterate it throughout the report. The analysis and the classification for performance profiles is always done at the team level. That's because, particularly in large organizations, team performance is different throughout an organization. As I'm sure you've seen, because when you go to really large organizations, some teams are working at a super fast pace and other teams are at a very, very different place.
And so, we always do the analysis at the team level.
Corey Quinn: There's an entire section in the report that talks about cloud computing, which is generally what people tune into this podcast to talk about, and we're not going to talk about it today. We're going to have a second podcast episode about that.
Nicole Forsgren: It's so good, though. It's so good. Is this where I get to tell people that you did a pre-read on the report for me, and you're like, "Hey, Nicole, you missed this whole section of nuance that you talk about in one sentence, but you have to expand it because otherwise people are gonna scream at you," and I get to thank you for it?
Corey Quinn: I don't think that I framed it quite that way, or if you want to say-
Nicole Forsgren: It's not polite, but it's real.
Corey Quinn: Or, take it the other direction. I practiced that whole statement, "Well, idiot," and then went from there. Yeah, you've got to double down on those things.
Nicole Forsgren: By the way, thanks.
Corey Quinn: No, thank you for asking my opinion on this. I'm astonished that anyone cares what I have to say, that it isn't a ridiculous joke or a terrible pun.
Nicole Forsgren: I mean, it's real though.
Corey Quinn: Well, thank you so much for taking the time to speak with me today.
Nicole Forsgren: Yeah.
Corey Quinn: There will be another episode.
Nicole Forsgren: Can I get a quick teaser on the cloud stuff, though?
Corey Quinn: You may indeed.
Nicole Forsgren: Okay, so cloud's important, and it does help you develop and deliver software better, but only if you do it right. You can't just buy a membership to the gym and then not go to the gym and expect to be in amazing shape. That's what we find.
Corey Quinn: Excellent. And I'm sure that the correct answer to solving that problem is to buy the right vendor tool instead.
Nicole Forsgren: Something like that.
Corey Quinn: Yes.
So, I will put a link to the report in the show notes so people can download this wonderful work of art/science, I consider it both, and go from there. Thank you. If people care, additionally, beyond that, about what you have to say and how you say it, where can they find you?
Nicole Forsgren: So, they can find all of DORA's research at cloud.google.com/devops, and if they want to snark on me, I am online at nicolefv.com.
Corey Quinn: Excellent. Nicole, thank you so much for taking the time to speak with me today. I appreciate it.
Nicole Forsgren: Hey, thanks so much.
Corey Quinn: Thank you for listening to Screaming in the Cloud. If you've enjoyed this episode, please leave it five stars on iTunes. If you didn't like this episode, please leave it five stars on iTunes. I'm Corey Quinn and this is Screaming in the Cloud.
Announcer: This has been this week's episode of Screaming in the Cloud. You can also find more of Corey at screaminginthecloud.com, or wherever fine snark is sold.
Announcer: This has been a HumblePod production. Stay humble.


28 Aug 2019


Accelerate, The State of DevOps Report w Dr. Nicole Forsgren

DevOps Chat

Accelerate: The State of DevOps Report, by Dr. Nicole Forsgren and the folks at DORA (now part of Google), is far and away the most widely cited research in the DevOps field. Dr. Nicole and her team have brought scientific rigor to the survey over these past six years, and the results show it. Here is your chance to shape the future of the DevOps market by taking 25 minutes of your time to take this year's State of DevOps survey. These reports are only as good as the info they gather. So have a listen to our conversation, and then please go take the survey! https://bit.ly/2UzLMH2


19 Apr 2019


The Science Behind DevOps with Dr. Nicole Forsgren

Real World DevOps

About the Guest
Dr. Nicole Forsgren does research and strategy at Google Cloud following the acquisition of her startup DevOps Research and Assessment (DORA) by Google. She is co-author of the book Accelerate: The Science of Lean Software and DevOps, and is best known for her work measuring the technology process and as the lead investigator on the largest DevOps studies to date. She has been an entrepreneur, professor, sysadmin, and performance engineer. Nicole's work has been published in several peer-reviewed journals. Nicole earned her PhD in Management Information Systems from the University of Arizona, and is a Research Affiliate at Clemson University and Florida International University.
Links Referenced:
2019 State of DevOps Survey
Previous State of DevOps Reports
Transcript
Mike Julian: This is The Real World DevOps Podcast, and I'm your host, Mike Julian. I'm setting out to meet the most interesting people doing awesome work in the world of DevOps. From the creators of your favorite tools to the organizers of amazing conferences, and the authors of great books to fantastic public speakers, I want to introduce you to the most interesting people I can find.
Mike Julian: Ah, crash reporting. The oft-forgotten piece of a solid monitoring strategy. Do you struggle to replicate bugs, or elusive performance issues you're hearing about from your users? You should check out Raygun. Whether you're responsible for web or mobile applications, Raygun makes it pretty easy to find and diagnose problems in minutes instead of what you usually do, which, if you're anything like me, is ask the nearest person, "Hey, is the app slow for you?" and getting a blank stare back because, hey, this is Starbucks, and who's the weird guy asking questions about mobile app performance? Anyways, Raygun, my personal thanks to them for helping to make this podcast possible. You can check out their free trial today by going to raygun.com.
Mike Julian: Hi folks.
I'm Mike Julian, your host for the Real World DevOps Podcast. My guest this week is Dr. Nicole Forsgren. You may know her as the author of the book Accelerate: The Science of Lean Software and DevOps, or perhaps as a researcher behind the annual State of DevOps report. Of course, that's not all. She's also the founder of DevOps Research and Assessment, recently acquired by Google, was a professor of Management Information Systems and Accounting, and has also been a performance engineer and sysadmin. To say I'm excited to talk to you is probably an understatement here. So, welcome to the show.
Nicole Forsgren: Thank you. It's a pleasure to be here. I'm so glad we finally connected. How long have we been trying to do this?
Mike Julian: Months. I think I reached out to you ... it's March now. I reached out in November, and you're like, "Well, you know, I have all this other stuff going on, and by the way, my company was acquired."
Nicole Forsgren: Well, back then, I had to be sly, right? I had to be like, "I've got this real big project. I'm sorry. Can we meet later?" And, God bless, you were very gracious and kind, and you said, "Sure-"
Mike Julian: Well, thank you.
Nicole Forsgren: ... "we can chat later." And then I think you actually sent me a message after saying, "Oh, congrats on your 'big project'." I said, "Thank you."
Mike Julian: That sounds about right.
Nicole Forsgren: I appreciate it. Yeah. And then, you reached out again, and I said, "Oh, I'm actually working on another big project. But, this time ..."
Mike Julian: It's not an acquisition.
Nicole Forsgren: Yeah, it's not an acquisition. This time, it's a normal big project, and it's this year's State of DevOps report. And we just launched the survey, so I'm super excited we're collecting data again.
Mike Julian: So we can get that right out of the way: where can you find the State of DevOps report?
Nicole Forsgren: All of the State of DevOps reports are hosted at DORA's site. We still have the site up.
And all of the reports that we've been involved in from, I want to say we started in 2014, I'm so old I already forgot. All the reports that we've done are hosted there. We'll post them in the show notes. If you can, grab yourself a Diet Coke or coffee or a tea or a water, or, if you want, a bourbon. Get comfortable. Sit back; it takes about 25 minutes. I know, right? Everyone's like, "Girl, 25 minutes?"
Mike Julian: That's a big survey.
Nicole Forsgren: I know. It is. But it's because the State of DevOps report is scientific, right? We study prediction, and not just correlation. But sit back, get comfy, and let me know what it's like to do your work. Because we're digging into some additional things this year: productivity, tool chains, additional things around burnout and happiness, and how we can get into flow, and really what that looks like. And some really great things are, a bunch of people have already chimed in after taking the survey in really thoughtful ways. Also, by the way, I love you all for taking it if you have. Share it with your colleagues, share it with your peers. But they've said that just by taking the survey, they've already come away, even before the report has come out, they've already walked away with really interesting ideas and tips and insights about how they can make their work better.
Mike Julian: Yeah, that's wild to think about, that the act of taking a survey actually improves my work. Because most surveys I take, I'm finished, and I'm like, "Well, that was kind of a waste of time." It feels like I just gave away a bunch of stuff without getting anything.
Nicole Forsgren: Yeah, and I think the reason it works that way is because we're so careful about the way we write questions that sometimes just the act of taking the survey helps you think about the way you do your work. So just the act of taking some of these questions helps people think about what they're doing.
And then, of course, like I joked already, it's my circle of life: the survey will be open until May 3rd, and then I will go into data analysis and report writing. And we expect the report itself to come out about mid-August.
Mike Julian: Well, why don't we take a few steps back and say ... Everyone loves a good origin story. I believe you and I met at a LISA many, many years ago. You were giving a joint workshop with Carolyn Rowland on-
Nicole Forsgren: Oh, I love Carolyn.
Mike Julian: Yes, she's also wonderful. I should have her on here.
Nicole Forsgren: My twin. Yes. Absolutely.
Mike Julian: So, you were a professor then, when I first met you. I'm like, you know, that's kind of interesting, that a professor's hanging out at a LISA and giving all this great advice on how to understand business value, which I thought was absolutely fascinating. A professor, hanging out in the DevOps world. How'd that happen?
Nicole Forsgren: Oh my gosh. Okay, so, the interesting thing is, I actually started in industry. My very first job was on a mainframe, writing medical systems, and then writing finance systems. So I was a mainframe programmer. And then I supported my mainframe systems, right? Which is how so many of us in ops got our start in ops: someone was like, "Well, somebody's gotta run this nonsense," right? I was still in school, and then I ended up as a dev, right? I was a software engineer at IBM for several years, and then pivoted into academia. Went and got a PhD, where I started asking questions about how to analyze systems. So I was actually doing NLP, natural language processing.
Mike Julian: Interesting.
Nicole Forsgren: Yeah, I was doing…
Mike Julian: Yeah, that's a weird entry point into that. Definitely not what I would have expected.
Nicole Forsgren: Yeah, so the crazy thing, my first year was actually deception detection.
Mike Julian: I bet that's awesome.
Nicole Forsgren: It was really interesting, it was super fun.
But I leveraged so much of my background from systems work, right? Because what do we do? We analyze log systems.
Mike Julian: Right.
Nicole Forsgren: Right? We're so used to analyzing a ton of data in a messy format, many times text-based, super noisy, can't always trust it, right? Right now people are like, "I can't trust surveys. People lie." Kids, so do our systems.
Mike Julian: All the time.
Nicole Forsgren: Right? And so, they loved me for a bunch of this work. All of a sudden, I randomly did a usability study with sysadmins. We wrote up the results, gave them back to IBM, and IBM was like, "Well, what do you mean? We followed UCD guidelines, user-centered design guidelines. This should be applicable." And I was like, "Wait, whoa, whoa, whoa, whoa, what?" At the time, they had one set of UCD guidelines for all users. Super, super advanced, high-level advanced sysadmins, who were doing backup, disaster recovery, everything. And people who had bought a laptop and were using email for the first time in their lives.
Mike Julian: I'm sure that went over super well.
Nicole Forsgren: What? I'm like, "That's it. Changing my dissertation." Which, of course, panicked my advisors. They were like, "You're gonna what?" So I started doing what, at the time, was kind of the groundwork for DevOps. Which is, how do you understand and predict information systems? And by information systems: technology, automation, usage and prediction, and then outcomes and impacts at the individual team and organizational level. Which, now I say all that, that's big words, that's academic words, for basically what's DevOps. How do I understand when people use automation and process and tooling and culture, and how do I know that it rolls up to make a difference and add value? Which now we're like, "Oh, that's DevOps." This is late 2007.
Mike Julian: Oh wow. So you were early days with us.
Nicole Forsgren: Yeah.
It was a really interesting parallel track, because now we look back and we're like, oh, this is about 10 years ago. That was kind of the nascent origins, about the same time as DevOps, right? So many of us kind of stumbled into it at about the same time. I had no idea this was happening in industry. I kept plugging away, I kept doing it, stumbled into LISA, trying to connect data, of course, like every good academic does. Desperately trying to find data. Stumbled into, bumped into a group collecting similar things but using different, rough methods. A team from a cute little configuration management startup called Puppet, right? Started working with them, invited myself onto the project. God bless them, I have so much love and respect for them, because they basically let this random, random academic tear apart their study and redo it, and lovingly tell these two dudes I had never met before, on the phone, named Gene and Jez, that they were doing everything wrong and that this word they were using wasn't the right word. Redid, in late 2013, the State of DevOps report, made it academically rigorous, and then kept going for several years, right? And then suddenly, we redid a bunch of stuff after a couple of years. I left academia, walked away from what was about to be tenure, to go to another cute little configuration management startup called Chef. That was fun, right? So I'm working on the report with Puppet, and working for Chef, and continuing to do research and work with organizations and companies. And I left academia in part because I was seeing this crazy DevOps thing make a difference. But in academia, they weren't quite getting it yet.
And I wanted to make sure I could make a bigger difference, because I'd started working in tech in college, in '98, '99, 2000; we lived through this crazy dot-com bust. And it wasn't a bust because everything crashed and the world ended like people thought, but companies failed, and it had huge implications and impacts for what happens to people. They lose their jobs, it breaks apart families, they get depressed, it impacts their lives; some people were committing suicide. And I was so worried about what happens when we hit this wave again, and we're starting to see that hit again. So what happens if companies and organizations don't understand smart ways to make technology? Because you can't just keep throwing people at the problem, or throwing the same people at the problem. And when I say throwing the same people, I mean seven-day forced marches. I was at IBM when they made us do that, right? They got pulled into a class action lawsuit. You can't do that. That's not a way to live.
Mike Julian: Yeah, I've been on many of those, they're brutal. And they don't result in anything useful.
Nicole Forsgren: It's just broken hearts and broken lives, right? And so, some people say, you really care about this. I'm just this nerd academic who just cares too much about what I do. And so if we really can fundamentally change the way that people make software, because it will in fact, actually, fundamentally make their lives better ... let's do it. And then, thank God, what we found is that it really does. Sure, it's nice that it delivers value to the business, but that matters because, what it does is it helps them make smarter investments, which in turn reduces burnout. It makes people happier, it makes their lives better, and I think that's the part that's important.
Mike Julian: So what you've been finding is that by a company implementing all these better practices of continuous deployment, and faster time to delivery, faster time to value ...
it makes the lives of the people doing the work better?

Nicole Forsgren: Yeah, and John Shook has found this as well, right? He did this great work in Lean. Some people have said, "How do you change culture?" Let's find ways to change culture. Sometimes the best way to change culture is to change the way you do your work, and I'm sure we've seen that ourselves, right? In other aspects of our lives. To change the way we feel, to change the way our family works, to change the way our relationships work, you actually physically change your lived experience, or some aspect of your lived experience.

And so if we change the way that we make our software, we will change the way that our teams function, which changes the culture. So, said another way, if we can tell our organizations which smart investments to make in technology and process, then we can also improve the culture. We can also change the lives of the people, right? And the Microsoft Bing team found this, right? They wanted to make smart investments in continuous delivery. And in one year, they saw work-life scores go from, I'm pulling this off the top of my head, but I want to say it went from 38% to 75%. That's huge.

Mike Julian: That's an incredible jump.

Nicole Forsgren: Right. And it's because people are able to leave work at work and then go home. You can go see your families, you can go to a movie, you can go eat, you can have hobbies, or you can go binge-watch Grey's Anatomy. You can do what you want.

Mike Julian: That's one of the most incredible things to me. There's this idea that in order for a company to be successful, they have to push their employees, kind of put them through the wringer. Intuitively, that's never felt right. And you actually have data that shows that's not right. Doing these things actually makes everyone better.
The business improves dramatically, the people's lives improve dramatically, and everything's awesome.

Nicole Forsgren: Right, and if we want to push people, that's not sustainable. If anything, we want to push people to do things that they're good at, and we want to leverage automation for things that automation is good at. So what does that mean? We want to have people doing creative, innovative, novel things. Let's have people solve problems; let's have automation do things that we need consistency for, reliability for, repeatability for, auditability for. Let's not have people bang a hammer and do manual testing constantly. Let's have people figure out how to solve a problem, do it once or twice to make sure that's the right thing, automate it, delegate that to the automation and the machines and the tooling, hand it off, be done, and then pull people back into the loop, into the cycle, to figure out something new.

I think it was Jesse Purcell that said, "I want to automate myself out of a job constantly." Right? Automate yourself out of your current job, and then find a new job to automate yourself out of again. We will never be out of work.

Mike Julian: Yeah, I used to worry about that when I first started getting into DevOps. Actually, when I first started working on automation, it wasn't DevOps at the time; it was automating Windows desktop deployments at a university. This is in the early 2000s. And one of my big worries was, well, because I spend half my week doing this, if I were to automate it I'd spend an hour doing this, what am I gonna do the rest of the time? They're just gonna fire me 'cause they don't need me anymore. As it turns out, no, that's not what happened at all. Higher-value work became my work, because I wasn't focused so much on the toil.

Nicole Forsgren: Right, and those types of things, machines and computers can't do. And the other thing, I used to tell all my friends, don't think about that in terms of job security, right?
Don't try to paint yourself into a thing that no one else can ever do, because then you can't be replaced, and that also means that you can never get promoted. If we always make sure that there are aspects of our job that can be automated, so that there are opportunities for us to pick up new work, that only creates more opportunities for amazing things. There are always going to be problems, there are always problems for us to solve. I don't want to be stuck doing boring work.

Mike Julian: Yeah, God knows that's the truth.

Nicole Forsgren: Oh my gosh, I know. I don't want to be stuck doing boring, repetitive work. That's just a headache. If we can find, especially, really challenging, complex things, and if we can find ways to automate that, trust me, we will never dig ourselves to the bottom of that hole. That is always there.

Mike Julian: So I want to talk about the State of DevOps report, and I want to start off by asking a question about something you mentioned earlier. You mentioned this phrase, academic rigor. What is that? What does that mean?

Nicole Forsgren: Academic rigor includes a few things, okay? So one part of academic rigor is research design. It's not just yoloing a bunch of questions ... sorry, yolo is my shorthand for "your methodology is questionable."

Mike Julian: I've been seeing a lot of those surveys come out recently.

Nicole Forsgren: Yeah. So one is research design. And some people say, "Nicole, what do you mean by research design?" Research design is: are the types of questions you're asking appropriately matched to the method that you're using to collect the data? Right? Are these things matched? And for some things, a survey is appropriate. A one-time, so one-time is cross-sectional, one slice in time, survey across a whole industry. Some things this is appropriate for.
Some things this is not appropriate for. One good example: a whole bunch of people really want me to do open-spaces questions in the State of DevOps report.

Mike Julian: What does that mean? Like open-ended questions?

Nicole Forsgren: No, open spaces. So a lot of people have a lot of feels about open office spaces. Should I work in an open office space? Does an open office space influence productivity? Or pair programming ... does pair programming affect productivity? Does pair programming affect quality? People have a lot of feels about these things. The type of research design employed in the State of DevOps report, a survey that is deployed completely anonymously across the entire industry at a single point in time, is not the appropriate research design to answer either of those questions.

Mike Julian: Why is that?

Nicole Forsgren: Because what you would need to do is have a much more controlled research design. So I would need to know, for example, who you were working with. So let's go with the pair-programming one: I would need to know the types of problems you're working on, the types of code problems, I would need to know the complexity of the problems, I would need to know how long it's taking you, right? If you're wanting to know productivity, right? 'Cause I would need a measure of productivity. I would need to know what the outcome is. So if my outcome's productivity, I would need to measure productivity, because I'm gonna need to control for complexity, right? Because things that are more complex, we expect to take longer. Things that are less complex, I expect to take not as long, right?

And then I would need to match and control. Right? So even things like open office spaces, right? Because if you're doing pair programming in an open office space versus not an open office space, if you're doing it at an office, I would need to know seniority of the person, or some proxy of seniority.
I would need to know how you're paired: are you paired with someone at your approximate experience level, if not seniority level? I would need to know how the pair programming works, I would need to know the technology involved, I would need to know if you're remote or if you're actually sitting next to each other. I would need to know if you're both able to input text at the same time or if one person is typing and the other person is not. So that when I do comparisons, I know what the comparisons are like.

Mike Julian: That's an incredible amount of information. I never expected that you would have to know so much in order to get a good answer out of that.

Nicole Forsgren: And that's off the top of my head. Right, I'm spitballing, because you asked me a good question. And that's just on research design, and then you move on to analysis, right? When you move on to analysis, then we need to get into the types of questions that you have asked. Are we looking at correlation? Are we looking at prediction? Are we looking at causation? What types of data do we have available, and which types of analysis and questions are they appropriate for?

Again, they need to match up the right way. Some types of data are not appropriate for certain types of analysis or questions. So you really need to make sure that each one is appropriate for the right types of things. Right? Certain types of analysis, like mechanistic: survey questions will never be appropriate for mechanistic analysis, right? Although, quite honestly, no one's ever gonna be doing mechanistic analysis. Never. And by the way, if anyone comes to me and says they're doing mechanistic analysis, I'm gonna sit back and listen to you very intently, very interested, because I don't think anyone's doing mechanistic ...
it's not a thing.

Mike Julian: So when you're analyzing the results of the survey, what we're seeing is one question followed by another question, followed by another question, you know, hundreds of questions. When you're analyzing this stuff, are you looking at a question at a time, or are you looking at multiple questions and then interpreting the answers based on what you're seeing across several different questions?

Nicole Forsgren: So when I'm writing up the report, I am writing up the results of my analysis, and my analysis is taking into account a very, very careful research design. What that means is my research design has been very carefully constructed to minimize misunderstandings. It tries to minimize drift in answers. So, one way that we do that, and this is outlined in Part Two of Accelerate if there are any stats nerds that want to read up on this, is through things called latent constructs.

So, you asked about having only a few questions or several questions. One way we do this, I mentioned, is called latent constructs. If I want to ask you about culture, right, I could ask 10 people about culture and I would get 15 answers. 'Cause culture could mean so many different things, right? In general, when we talk about culture in a DevOps context, people will say very common things like breaking down silos, having good trust, having novelty, right?

So what we do is we start with a definition, and then we will come up with several items, questions, that capture each of those dimensions. So you might want to think about a dope Venn diagram, where each of the questions is overlaid, and then all of the things where they have the biggest, or the perfect, overlay, that very center, that little nut, that is what the construct is. That is what culture is; that is what's represented by culture. And then each of the individual circles is each question. That's what we do in research design.
One part of research design. When I get to stats analysis mode, I take all of the questions, all of the items, across not just culture but every single thing that I'm thinking about. So in years past I've done monitoring and observability, I've done CI, I've done automated testing, I've done version control, I've done all of these things, and I throw all of them into the hopper, right?

Mike Julian: Which is probably your massive Excel spreadsheet, I'm sure.

Nicole Forsgren: No, it's SPSS. I use SPSS, but you can use several different stats tools. And we do principal components analysis. And what we do is we say, how do they load? Basically, how do they group together? And do we have convergent validity? Do they converge? Do they only measure what they're supposed to measure? And do we have discriminant validity? Do they not measure what they're not supposed to measure? And do we have reliability? Does everyone who's reading these questions read them in a very, very similar way?

Once we have all of those things, and there are several statistical tests for all of those, then I say, "Okay, these several items, usually three to five items, all of these items together are culture," or "all of these items together are CI," or "these groupings of items represent this." Okay, now, now I can start looking at things like correlations, or predictions, or something else, and then I get to the report, and now I will just talk about it as culture.

So I talk about it as one thing, but it's actually several things, and when I talk about culture, I can say, "This is what culture is," and I can talk about it in this nuanced, multidimensional way. I know what those dimensions are because it's made up of three to five, six, seven questions, and by the way, if one of those questions didn't fit, I know from the stats analysis, I can toss it, and I know why. And I always have several items.
That's the risk if you only have one question or if you only have two questions: if one of them doesn't work, which one is the wrong one? You don't know. Right? Is it A or is it B? I don't know. At least if I start with three and one falls out, then it's probably the two that are good.

Mike Julian: Yeah. Many listeners on here have taken a lot of the surveys run by marketing organizations, except the surveys are also designed by people in marketing …

Nicole Forsgren: They're designed by people who want a specific answer.

Mike Julian: Exactly.

Nicole Forsgren: And that's the challenge.

Mike Julian: Right, whereas, to make this very clear, the State of DevOps report is not that at all. There's a lot, as you said, of rigor that goes into this.

Nicole Forsgren: So the nice thing is that we have always been vendor- and tool-agnostic.

Mike Julian: You're not looking for a very particular answer to come out; you want to know what is actually out there.

Nicole Forsgren: And we're not looking for an answer to a product. So, in the example of CI, what is CI? I don't care about a tool. I'm saying, if you're doing CI, continuous integration, in a way that's predictive of smart outcomes, you will have these four things. The power in that is that anyone can go back and look at this as an evaluative tool. If you are a manager, or a leader, or a developer, you can say, "Any tool that I use, any tool in the world, I should look for these four things," or "Any tool I build myself, or if I'm doing CI, I should have these four things."

If you're a vendor, you should say, "If I think I'm building or selling CI, I better have these four things." Right? So that's the great thing, and I've gotta say, God bless my new team. They're letting me run this the same way. It's still the same way. It's still vendor- and tool-agnostic, it's still capabilities-focused.
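[Editor's note: for the stats nerds Dr. Forsgren mentions, the latent-construct checks she describes, grouping several survey items and testing whether they hang together as one construct, can be sketched in a few lines of Python. This is an illustrative sketch with simulated data, not DORA's actual instrument or pipeline; Cronbach's alpha stands in for the reliability test, and the first principal component's variance share stands in for "how do they load?"]

```python
import numpy as np

def cronbach_alpha(items):
    """Internal-consistency reliability for a block of survey items.

    items: array of shape (n_respondents, n_items), one column per question.
    Values near 1.0 suggest respondents read the items in a very similar way.
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def first_component_share(items):
    """Fraction of variance captured by the first principal component.

    A high share suggests the items "load" together on a single construct.
    """
    corr = np.corrcoef(np.asarray(items, dtype=float), rowvar=False)
    eigenvalues = np.linalg.eigvalsh(corr)  # ascending order
    return eigenvalues[-1] / eigenvalues.sum()

# Simulate 200 respondents answering 4 items that all reflect one
# underlying construct (say, "culture") plus independent noise.
rng = np.random.default_rng(0)
construct = rng.normal(size=(200, 1))
responses = construct + 0.5 * rng.normal(size=(200, 4))

alpha = cronbach_alpha(responses)         # high for a coherent item block
share = first_component_share(responses)  # first component dominates
```

If one of the four items were unrelated noise, both numbers would drop, which is exactly the "toss the item that doesn't fit" signal described above.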
Every single thing you look for, whether it's automation or process or culture or outcomes, it's vendor- and tool-agnostic, it's capabilities-focused, and again, the power is that you can use it as an evaluative tool. Is my team doing this? Is my tooling doing this? Is my technology doing this? Am I able to do this? If I'm not, what is my weakness? What is my constraint? Because if I take us back to the beginning, what is it that drives me and the DORA team? What is it that we want to get out of this? We want to make things better. And how do we do that? We can give people easy evaluation criteria. And I'm not saying it's easy, because none of this is easy; it takes work. But if there are clear evaluation criteria, we've got somewhere to go.

Mike Julian: Since I know that you love talking about what you found in your several years of doing this, what are some of the most interesting results you've come up with?

Nicole Forsgren: Oh, there's so many good ones.

Mike Julian: Let's pick your top three.

Nicole Forsgren: Okay, I think one of my favorites is, and I'm gonna do this in cheesy marketing speak …

Mike Julian: Please have at it. We have prepared ourselves.

Nicole Forsgren: As someone who had a little startup and had to fake it as a marketer for a minute, we'll see how I do at this.

Architecture matters, technology doesn't. Number one. Okay. So what does that mean? What that means is, we have found that if you architect it the right way, your architectural outcomes have a greater impact than your technology stack. So architectural outcomes, some key questions are: Can I test? Can I deploy? Can I build without fine-grained communication and coordination?

Mike Julian: What does fine-grained mean?

Nicole Forsgren: Do I have to meet and work with and requisition something, do I have to spin up some crazy new test environment, or do I have to get approvals across 17 different teams? Notice, I just mentioned teams.
Communication and coordination can be a technology limitation or a people limitation. This harkens very much back to Conway's law.

Mike Julian: One of my favorite laws.

Nicole Forsgren: Right? This is very much a DevOps thing. But it's very true: whatever our communication patterns look like, we usually end up building them into our tech. Now, I will say this is very often easier to implement in cloud and cloud native environments, but it can absolutely be achieved in legacy and mainframe environments as well. We did not see statistically significant differences between brownfield and greenfield respondents in previous years.

Mike Julian: That's good to know.

Nicole Forsgren: Yeah, so I love that one. That one's super fun.

Okay, number two. Cloud matters, but only if you're doing it right.

Mike Julian: Oh, what does right mean?

Nicole Forsgren: Dun dun duh. So, this was one of my favorite stats. We found that you are 23 times more likely to be an elite performer if you're doing all five essential cloud characteristics, the five essential characteristics of cloud computing according to NIST, the National Institute of Standards and Technology. So I didn't make this up; this comes from NIST, okay?

So it was interesting, because we asked a whole bunch of people if they were in the cloud. They're like, of course we're in the cloud, we're totally in the cloud, right? But only 22% of people are doing all five things. So what are these five? One is on-demand self-service: you can provision resources without human interaction, right? If you have to fill out a ticket and wait for a person to do a ticket, this doesn't count. No points.

Another one is broad network access: you can access your cloud stuff through any type of platform; mobile phones, tablets, laptops, workstations. Most people are pretty good at this. Another one is resource pooling, so resources are dynamically assigned and reassigned on demand.
Another one is rapid elasticity, right, bursting magic. We usually know this one. Now the last one is measured service: you only pay for what you use. The ones people most often look at are usually broad network access and on-demand self-service.

Mike Julian: Yeah, what's interesting about that, to me, is there's nothing in there that prevents, say, an internal OpenStack cluster from qualifying.

Nicole Forsgren: Exactly, right. So this could be private cloud. I love that you pointed that out. The reason that this is so important to call out is, it just comes down to execution. It can be done. And the other challenge is, so often organizations, executives, or the board say you have to go to the cloud, and so someone says, "Oh yes, we're going to the cloud." But then someone has redefined what it means to be in the cloud. Right? And so you get there, someone checks off their little box, puts a gold star on someone's chart, they walk away, and they're like, "Well, we're not seeing any benefits." Well, yeah, 'cause you're not doing it.

Mike Julian: Right. Yep.

Nicole Forsgren: It's like, "I bought a gym membership, I'm done." No. And again, I'm not saying it's easy, right? There's some work involved. Now, the other thing that I love is that, let's say you're not in the cloud, for some reason you have to stay in a legacy environment: you can look at these five things, implement as many as possible, and still realize benefits.

Mike Julian: Right. It's not an all-or-nothing approach. You can do some of these and still get a lot of benefit from it.

Nicole Forsgren: It's almost like a cheater back to number one, which was architecture matters, technology doesn't.
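[Editor's note: the five NIST essential characteristics work well as exactly the kind of evaluative checklist described above. Here is a minimal, illustrative sketch in Python; the field names and the example answers are the editor's, not DORA's survey instrument.]

```python
# The five essential characteristics of cloud computing, per NIST,
# used as a simple self-assessment checklist.
NIST_CHARACTERISTICS = [
    "on-demand self-service",  # provision resources without human interaction
    "broad network access",    # reachable from phones, tablets, laptops, workstations
    "resource pooling",        # resources dynamically assigned and reassigned
    "rapid elasticity",        # capacity bursts up and down on demand
    "measured service",        # you pay only for what you use
]

def cloud_assessment(answers):
    """Return the characteristics a team is still missing.

    answers: dict mapping characteristic name -> bool.
    Unanswered characteristics count as missing (no points).
    """
    return [c for c in NIST_CHARACTERISTICS if not answers.get(c, False)]

# A hypothetical team that files tickets for provisioning
# and is on flat-rate billing:
answers = {
    "on-demand self-service": False,  # tickets and waiting: no points
    "broad network access": True,
    "resource pooling": True,
    "rapid elasticity": True,
    "measured service": False,
}
missing = cloud_assessment(answers)
# missing == ["on-demand self-service", "measured service"]
```

As the discussion notes, this applies to private cloud too: the checklist measures execution, not which vendor you bought.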
How can I do a cheat sheet to see some really good tips on how to get there?

Mike Julian: So what's your number three here?

Nicole Forsgren: My number three would probably be: outsourcing doesn't work.

Mike Julian: Yeah.

Nicole Forsgren: Which some people hate me for, and they shoot laser beams out of their eyes. So let's say outsourcing doesn't work*.

Mike Julian: Okay, what's the asterisk?

Nicole Forsgren: The asterisk is going to be that functional outsourcing doesn't work.

Mike Julian: Okay, so, say, outsourcing my on-call duties probably isn't going to work so well.

Nicole Forsgren: Taking all of dev, shipping it away. Taking all of test, shipping it away. Taking all of ops, shipping it away. Now, why is that? Because all you've done is create another set of handoffs; you've created another silo. You've also batched up a huge set of work, and you're making everyone wait for that to happen. The goal is to create value and not make people wait. If now everyone has to wait for everything to come back, if you're making high-value work wait on low-value work, because it all has to come back together, which is usually the way it works, you're boned.

Now, functional outsourcing: if you have an outsourcing partner that collaborates with you and coordinates with you and delivers at the same cadence, that's not functional outsourcing. That's the asterisk.

Mike Julian: Okay, gotcha.

Nicole Forsgren: Also, if they're part of your team and they're part of your company but they basically disappear for three months at a time: sorry, kids, that's functional outsourcing. No points, may God have mercy on your soul. It's not helpful.

Mike Julian: Right. It seems to me, how you could tell if you're in this predicament is, if there is a noticeable handoff between your team and whoever you have given these items to, you have functional outsourcing.
Would that be about right?

Nicole Forsgren: Yes, and especially if there's a noticeable handoff and then a black box of mystery.

Mike Julian: Of, like, how is the work getting done?

Nicole Forsgren: Step one, something; step two, question mark; step three: profit.

Mike Julian: Maybe. So the first two, it's all good, because we can kind of see where to go from there, but this third one actually seems a bit harder, because if I'm a sysadmin, I have absolutely no control over this functional outsourcing. I may hate it just as much, I may hate it myself, but I don't have any control over it. What can I do as a sysadmin, or someone in ops, someone in dev? How can I improve that situation?

Nicole Forsgren: So some ideas might include things like seeing if there's any way to improve communication or cadences in the interim. Right? You might still have that outsourcing partner, because that's just the way it's gonna be. But let's say that you've batched up work in three-month increments: is there any way to increase handoffs to once a month? Is there any way that we can take capabilities that we know we import, working in small batches, and just increase that handoff? Is there any way that we can integrate them into our cadence, into our teams?

Now, I realize there is some challenge here, because from a legal standpoint we can't treat them like our team: at least from a United States standpoint, once we treat them like an employee, then we're liable for employment taxes and all of that other legal stuff. But if we can integrate them into our work cadence, or more closely into our work cadence, then our outcomes improve.

Mike Julian: Okay, cool. That makes a lot more sense. That doesn't sound nearly as hard as I was fearing.

Nicole Forsgren: So it can be starting to decrease the delay on the cadence, asking for slightly more visibility into what's happening, if it's a complete black box, looking for that.

Mike Julian: Nicole, this has been absolutely fantastic.
Thank you so much for joining me. I have two last questions. Where can people find the State of DevOps report and take the survey? Where is the survey at?

Nicole Forsgren: Oh, we've got the survey posted. Can I include it in the show notes?

Mike Julian: Absolutely. Alright, folks, check the show notes for the link. And my last question for you is: where can people find out more about you and your work, aside from this survey?

Nicole Forsgren: I'm a couple of places. My own website is at nicolefv.com, and I'm always on Twitter, usually talking about ice cream and Diet Coke; that's @nicolefv.

Mike Julian: I do love your Twitter feed. It's one of my favorites.

Nicole Forsgren: Yeah, everybody come say hi. My DMs are open.

Mike Julian: What I love most about your Twitter feed is, roughly around the time that you're writing the report, you're saying, "Oh my God, why did I do this?"

Nicole Forsgren: Yeah, I try to keep it locked down, but every once in a while something will slip, like, "Oh my gosh, everybody, something good is happening," or "Oh, I forgot this one thing," or "So much good is happening."

Mike Julian: Yeah, I remember last year it was like, "Oh my God, this is so cool, but I can't tell you about it."

Alright, well, thank you so much for coming on, and thanks to everyone else listening to the Real World DevOps podcast. If you want to stay up to date on the latest episodes, you can find us at realworlddevops.com and on iTunes, Google Play, or wherever it is you get your podcasts. I'll see you on the next episode.


11 Apr 2019


Dr. Nicole Forsgren on DevOps: 'You Are What You Measure'

The New Stack Podcast

Dr. Nicole Forsgren is the CEO and Chief Scientist of DevOps Research and Assessment (DORA), and the principal author of the annual State of DevOps Report. On many occasions, she's asserted the case that "you are what you measure": a company's capabilities may be either enabled or constrained by the extent to which it is capable of perceiving how it does what it does. In a recent conversation for The New Stack, I presented Dr. Forsgren with the notion that an organization that automates its processes in a cloud native manner must also measure those processes cloud-natively as well. In so doing, I suggested, it would create a business that operated under very different principles, since it operates on a platform where measurement has a very different context. "I think I'll agree to the point that, when we change our technology stack and applications in order to be cloud native," the CEO responded, "that will likely change the way we do our work, whether it's through a direct change because we've changed our work, or through the measurements that we now employ to track and evaluate that work."


16 Oct 2018