© 2019 OwlTail. All rights reserved. OwlTail only owns the podcast episode rankings. Copyright of the underlying podcast content is owned by the publisher, not OwlTail. Audio is streamed directly from Wesley Noble servers. Downloads go directly to the publisher.
#12 Jesse. Four years ago, Jesse was hit by a car and nearly died. Now he wants to find the driver. And thank him. Credits: Heavyweight is hosted and produced by Jonathan Goldstein. This episode was also produced by Kalila Holt. The senior producer is Kaitlin Roberts. Editing by Jorge Just, Alex Blumberg, and Wendy Dorr. Special thanks to Emily Condon, Saidu Tejan-Thomas, and Jackie Cohen. The show was mixed by Kate Bilinski. Music by Christine Fellows, John K Samson, and Edwin, with additional music by Chris Zabriskie, Blue Dot Sessions, Michael Charles Smith, Visager, Graham Barton, and Katie Mullins. Our theme song is by The Weakerthans courtesy of Epitaph Records, and our ad music is by Haley Shaw.
83 - Heyoon. Growing up in Ann Arbor, Michigan, Alex Goldman was a misfit. Bored, disaffected, and angry, he longed for a place to escape to. And then he found Heyoon. The only way to find out about Heyoon was for someone to …
Rahul Moodgal - Master Fund Raiser (Capital Allocators, EP.87). Rahul Moodgal has spent 20 years as a fund raiser across long only strategies, hedge funds, fund of funds, customized solutions, start-ups, and non-profits. Collectively, Rahul has raised and helped raise $60 billion for firms since 2005. He started his career in the industry at powerhouse TT International, and later joined The Children’s Investment Fund (TCI), where he led the marketing effort that raised $20 billion in just 3½ years. Within TCI’s affiliate model, Rahul also was responsible for the largest India fund raise in history ($1 billion for TCI New Horizon Fund), and the largest sector fund launch in history ($1.1 billion for Algebris Investments). Our conversation covers capital raising lessons learned from teaching, the value of transparency, the gold rush before 2008, the lean times afterwards, modern fee structures, the three key points to effective marketing, the three traits that will kill you, the two biggest issues start-up funds face, the best questions asked by leading allocators, and some of the worst horror stories in attempted capital raising. We close by comparing fund raising for charities and investment firms. Learn More. Discuss the show and read the transcript. Join Ted's mailing list at CapitalAllocatorsPodcast.com. Join the Capital Allocators Forum. Write a review on iTunes. Follow Ted on Twitter at @tseides. For more episodes go to CapitalAllocatorsPodcast.com/Podcast
Vanguard's Joe Davis Discusses Global Economics (Podcast). Bloomberg Opinion columnist Barry Ritholtz interviews Joseph H. Davis, global chief economist at The Vanguard Group. Davis is also head of Vanguard's investment strategy group and a member of the senior portfolio management team for Vanguard's fixed income group, which oversees more than $500 billion in assets under management. He earned his doctorate in macroeconomics and finance at Duke University.
Rank #1: 4 Things that Will Make You a Better UX Designer. In this episode, Kate and Laura discuss four things that will make you a better UX Designer. Learn why getting better at taking feedback and understanding business can help improve your career. Also, Laura gets shouty. Again. Music: The Future Soon by Jonathan Coulton Drink Pairing: Single Malt Scotch
Rank #2: Data is Why We Can't Have Nice Things. In this episode, Kate and Laura UXplain GDPR (probably incorrectly) and talk about some good approaches to treating your users' data a little more respectfully. Drink Pairing: Vodka Martini (I'd say "shaken, not stirred" but that bruises the booze, so knock it off with that nonsense)
Rank #1: In Conversation with Alan Cooper. We talk with Alan Cooper about skateboarding, fatherhood, design, ethics, and the responsibility that comes with getting our seat at the table.
Rank #2: In Conversation with Joe Natoli. We talk with Joe Natoli, enterprise UX consultant and author of the new book, Think First: My No-Nonsense Approach to Creating Successful Products, Memorable User Experiences and Happy Customers.
Rank #1: Designing for Mobile Devices - an Interview with Jason Furnell. Gerry Gaffney asks Jason Furnell about designing for mobile devices. Along the way, Jason talks about the poetry of movement, the Agile development methodology, and how to navigate a career path in design. He tells us that without a vision, great minds can go to waste, and how high-fidelity wireframes can help communicate a simple vision. Jason's blog is 'the architecture of everything' (jasonfurnell.wordpress.com). The William Gibson book is 'Spook Country' (tinyurl.com/2a7mf9). Duration: 20:29 File size: 7.2MB
Rank #2: Explain yourself! An interview with Tom Greever. Articulating Design Decisions.
Rank #1: #176 The insane growth of UX. Episode 176 is a link show. James and Per discuss two articles that have grabbed their attention. Article one is A 100 Year View of User Experience by Jakob Nielsen. “For all practical purposes, the growth in UX is still to come. I’m predicting that we’ll be 100 million UX professionals in the world by 2050. This corresponds to 1% of the world’s population.” Our second article is Five user research rules of thumb by Leisa Reichelt. “Over years of experience you begin to collect ways of working and talking about how you work that accrete into rules of thumb.” (Listening time: 38 minutes) Episode #176: The insane growth of UX https://t.co/aWw4OGOryp #ux #podcast #uxresearch pic.twitter.com/63zx98gGQG — UX Podcast (@uxpodcast) January 19, 2018 References: Article 1: A 100-Year View of User Experience By Jakob Nielsen Episode 23: James & Per find their breakpoint UX Sverige (Facebook group, Swedish) Article 2: Five user research rules of thumb By Leisa Reichelt Episode 96: James & Per give microfeedback Cover art: Chain links by Howard Lake (CC BY-SA 2.0) Cropped to 1:1 The post #176 The insane growth of UX appeared first on UX Podcast.
Rank #2: #208 Your next UX job with Jessica Ivins. Preparing for your next job is like saving for retirement, says Jessica Ivins. Start putting yourself out there ahead of time, thinking about what you would like your next job to be, and do a little bit to prepare on a regular basis. We get lots of questions related to looking for your next, or even your first, UX job. Jessica helps us answer them: What should you do before applying for jobs? Should you have a portfolio? How should you present your portfolio? How can we use storytelling during our interviews? How do you make best use of social media? What design tools do you really need to know about? (Listening time: 36 minutes, transcript) Episode 208: Your next UX job with @jessicaivins – concrete tips from Jessica about preparing yourself for your next move #ux #podcast #careers https://t.co/QuxfdhZ5e9 pic.twitter.com/e4EsyIuADi — UX Podcast (@uxpodcast) April 12, 2019 References: Full transcript of Episode 208 Jessica’s website Don’t Decline a UX Job Offer You Don’t Have Yet How to Tell Compelling Stories During a UX Job Interview Maintain a Professional Network Throughout Your UX Career Build a collection of project work for your UX portfolio How to describe your design work in a portfolio (Twitter thread) Episode 118: Inclusive design Episode 190: Thinking in triplicate Send us your thoughts at: email@example.com Backstage mailing list: Sign up here The post #208 Your next UX job with Jessica Ivins appeared first on UX Podcast.
Rank #1: Lean UX & Organization Design with Jeff Gothelf (Author of Lean UX & Sense and Respond). What is Lean UX and how does it change the way that designers work? What are the most effective ways for design organizations to be structured? In this episode, Austin interviews Jeff Gothelf (author of Lean UX & Sense and Respond) to take a deep dive on UX in the modern workplace and the future of design. "The goal for me is to illustrate to the folks that I'm speaking with, what the new focus of digital product and service design is. And it's not cranking out more features. It's not getting more stuff out the door and in front of customers. It's having a meaningful impact on customer behavior." — Jeff at 7:54. Jeff’s website: www.JeffGothelf.com. Read Lean UX: www.LeanUXBook.com. Read Sense and Respond: http://SenseAndRespond.co. Jeff on Twitter: Twitter.com/jboogie. Email us: Hello@UXandGrowth.com. Austin on Twitter: Twitter.com/ustinKnight. Geoff on Twitter: Twitter.com/dailydaigle. Matt on Twitter: Twitter.com/mattrheault
Rank #2: Creating & Measuring Delight. What is delight, beyond the buzzword? How should it really be used, and can it be measured? In this episode, we examine how delight fits into the hierarchy of user needs and where it should sit in a product roadmap. "The best way to delight your users is to deliver on the core value that you promised them. Why are they using your product in the first place? If you can't deliver on that in a functional and reliable way, then you aren't delighting them. And you lose every opportunity to delight them." — Matt at 2:25. Thoughts on delight: http://uxmastery.com/formula-delight/. Email us: Hello@UXandGrowth.com. Austin on Twitter: Twitter.com/ustinKnight. Geoff on Twitter: Twitter.com/dailydaigle. Matt on Twitter: Twitter.com/mattrheault
Rank #1: Jared Spool - How Do We Design Designers? Live!. Transcript available. Why don’t design students coming out of school know about responsive design or creating mobile apps? Why are our self-taught hackers and C.S. grads having a tough time keeping up with the pace of technology innovation? It’s not that schools or professional development programs are slow to adapt; it’s more complicated than that. But our tendency to focus on skills alone just isn’t sustainable. Instead, we need to start investing in the ways we create designers and fuel their growth.
Rank #2: Noah Iliinsky - Telling the Right Story with Data Visualizations: A Virtual Seminar Follow-up. Transcript available. The right data can be more effective than words when it comes to telling a story. Even if you have the data, you have to present it in the correct manner. Choosing the right axes, colors, and placement is a big part of putting together a great visualization. Noah Iliinsky demonstrates what goes into creating an effective visualization.
Rank #1: 002: Don’t Let Anything Get in the Way of What You Want to Build with Josh Tucker. Josh Tucker shares with us the importance of determination and resolve when building whatever it is you desire to build. He also stresses the importance of knowing everything you possibly can about your users and designing experiences that delight them and improve their lives. Josh Tucker (The Interactioneer) is an Interactive Designer at Google and […]
Rank #2: 042: Practical Design Discovery with Dan Brown. Dan Brown inspires us to get great at asking questions because 'the question' is one of our most invincible resources. He reminds us to maintain a spirit of collaboration with the team members we’re struggling with, since they care as much about the project as we do. He encourages us to always peer beneath the surface at the underlying structure of whatever we’re working on. He also motivates us to never settle. Secret Identity (7:47) Origin Story (9:29) Biggest Failure (20:23) Awkward Testing Story (23:27) Design Superpower (28:04) Design Kryptonite (30:52) UX Superhero Name (33:50) Fights for Users (36:08) Habit of Success (40:54) Invincible Resource (43:27) Recommended Book (51:23) Practical Design Discovery (53:40) Overcoming Team Tension (57:00) Best Advice (69:04) Contact Info (71:48) Check out the detailed show notes and Eli Jorgensen’s astonishing superhero artwork at userdefenders.com/042 This episode is brought to you by Adobe, makers of XD. Try it free at userdefenders.com/xd Get your FREE audiobook from Audible at userdefenders.com/freebook. No commitment. Cancel in 30 days, and you won't be charged. The book is still yours to keep.
Rank #1: Episode 47 - How to plan your design career (from start to finish). How do you get started in design? What's next after your first job? What positions should you build toward? What are the possible paths for a designer to take? We discuss these important questions and more in this week's podcast!
Rank #2: Episode 25 - How to design a great portfolio. This week Chris and Jon discuss one of the most difficult design projects of them all, portfolios. We talk about what they are, how they can help move your career forward, and how we're faring on this journey to portfolio greatness!
Rank #1: Episode 137: Lean, Agile & Design Thinking with Jeff Gothelf. There are so many design methodologies available these days — lean, agile & design thinking being the most popular. Could you use them side-by-side? Our guest today is Jeff Gothelf, author of Lean UX and Sense & Respond, and co-founder of Sense & Respond Press. You'll learn how to make the most out of these frameworks, help teams talk to each other, and measure customer outcomes (instead of your effort) using the right behavior metrics. Podcast feed: subscribe to http://simplecast.fm/podcasts/1441/rss in your favorite podcast app, and follow us on iTunes, Stitcher, or Google Play Music. Show Notes Lean UX, Sense & Respond, Lean vs Agile vs Design Thinking — Jeff's books Sense & Respond Press — Jeff's publishing house together with Josh Seiden Making Progress, Hire Women — some of the latest books by Sense & Respond Press Agile vs Lean vs Design Thinking — Jeff's original article Episode 131: Design Sprint with Jonathan Courtney AARRR! Dave McClure’s “Pirate Metrics” And The Only Five Numbers That Matter — an article by Walter Chen Jeff's website Follow Jeff on Twitter: @jboogie Today's Sponsor This episode is brought to you by Abstract — design workflow management for product design teams using Sketch. No more searching for the right version of the file, exporting and importing between tools, or trying to consolidate feedback. Now everything is in one place! Sign your team up for a free 30-day trial today by heading over to abstract.com. Interested in sponsoring an episode? Learn more here. Leave a Review Reviews are hugely important because they help new people discover this podcast. If you enjoyed listening to this episode, please leave a review on iTunes. Here's how.
Rank #2: Episode 52: Data-Driven Design with Brent Palmer. How do you make sure your design efforts are effective? There's no bigger mistake than just guessing! Brent Palmer, a design lead at Zendesk, walks us through the tools and methods for data-driven design. You'll learn how to approach customer research the right way, why testing is important, and how to deal with the research results. Podcast feed: subscribe to http://simplecast.fm/podcasts/1441/rss in your favorite podcast app, and follow us on iTunes, Stitcher, or Google Play Music. Show Notes Zendesk — the company where Brent works Heap, Google Analytics, Mixpanel — common analytics tools Usability Hub — remote user testing for quick design validation InVision — a prototyping tool Zoom — a video conferencing tool for conducting 1-on-1 sessions Confluence — a documentation tool for storing results of UX research (also Google Drive, Slack) Inspectlet — a tool for heatmaps Pendo, Intercom — tools for customer research Zendesk Garden — a public library of user interface components for Zendesk products Follow Brent on Medium Brent's website Conferences where Brent is speaking this year: DIBI 2017, Design Matters 2017 Follow Brent on Twitter: @brentpalmer Today's Sponsor This episode is brought to you by Tiny Reminder — a simple productivity tool for freelancers and consultants. Need content or files on time? Tiny Reminder will play "bad cop" for you with friendly notifications! It works great for collecting content, feedback, files, bios, or anything else. What's best, this tool is forever free — sign up today! Interested in sponsoring an episode? Learn more here. Leave a Review Reviews are hugely important because they help new people discover this podcast. If you enjoyed listening to this episode, please leave a review on iTunes.
Rank #1: 17. Tomer Sharon of Goldman Sachs. In this episode of Dollars to Donuts, I talk with Tomer Sharon, the Head of User Research and Metrics at Goldman Sachs. We talk about how to assess potential hires for user research positions, infrastructure for capturing and searching a body of data, and developing a practice inside a willing, yet large, organization. Some parts of kind of pure research are creative. Probably the biggest one is translating a set of questions that a team has into okay, what are we going to do to get answers? If it was that easy to come up with an answer to that, then anybody could do that well. That’s not the case. A lot of people are having a lot of trouble with that part. So, I think that’s a creative part. You’re not going to see a beautiful painting coming out of that, but it is creative. – Tomer Sharon Show Links Tomer on LinkedIn Tomer on Twitter Goldman Sachs WeWork It’s OUR Research on Twitter It’s OUR Research on Amazon Validating Product Ideas Through Lean User Research Goldman Sachs Private Wealth Management Marcus by Goldman Sachs UserZoom UserTesting OKRs ResearchOps Democratizing UX (and Polaris) Masters In Human Factors at Bentley Adam Neumann, WeWork CEO Key Experience Indicators: How to decide what to measure? (Medium) Google’s HEART Framework for Measuring UX Face of Finance NYC 2019 User Research London 2019 Follow Dollars to Donuts on Twitter and help other listeners find the podcast by leaving a review on iTunes. Transcript Steve Portigal: Greetings, humans! Thanks for listening to Dollars to Donuts, the podcast where I talk to people who lead user research in their organization. Over the past while I’ve been putting together a household emergency kit. It’s primarily a shopping exercise, and I’ve ordered a hand crank and solar powered radio, a replacement for matches, latex gloves, bandages, and air filter masks (which we made use of during a period of dangerously poor air quality recently). 
The last step was getting some food that will last – cans of soup and stew, crackers, single-serve breakfast cereals. There’s something satisfying about acquiring a bunch of stuff and storing it away, somewhat organized. And that led to a stray thought that I noticed – “Oh, I can’t wait to use all this great stuff!” And then I realized how crazy that sounded. I don’t want to use it! I don’t want there to be some emergency that is bad enough that I’m drinking the emergency water stored in the garage and eating canned stew, also stored in the garage. I mean, yes, we’ll eat or donate the food before it expires and replace it, but it’s a whole set of preparations that I hope I’ll not use, which leaves me with the hope for no shopping gratification, kind of a confusing way to feel. But it did remind me of the recent workshop I led with researchers in Sydney, Australia. We looked at a lot of the user research war stories I’ve been collecting, like those published in Doorbells, Danger, and Dead Batteries, and we pulled out lessons and best practices. There was – as there always is – a lot of discussion about safety and preparation. It seemed to me that people who worked in organizations with established safety cultures already had a strong baseline of safety procedures for user research and fieldwork, like texting a contact before and after a session, not going out alone, and so on. Their work cultures were strong on processes, especially for safety, so thinking about this for research was obvious. But not everyone works in that kind of environment, and plenty of researchers work for themselves, without that corporate structure to support them in creating best practices for safety. Anyway it led to a lot of discussion beyond just safety about running through possibilities ahead of time so that when any situation comes up it’s not a surprise and there’s at least a starting point already established about how to respond. 
I think this is a great idea, but I think we have to acknowledge the limitations – you can’t plan for every possible situation, there are always going to be things that come up that you probably haven’t ever considered. I think that some planning for the unexpected will help you to adapt in the moment to surprises, but that’s different than the false comfort of assuming you have every contingency planned for. I hope I never have to make use of our large cache of sterile latex gloves, but maybe just having acquired them I’m in a slightly better situation for some other unexpected situation? You can help me continue to produce this podcast for you. I run my own small business, and you can hire me! I lead in-depth user research that helps drive product and strategy decisions, that helps drive internal change. I provide guidance to teams while they are learning about their customers, and I deliver training that teaches people how to be better at user research and analysis. You can buy either of my two books – the classic Interviewing Users and Doorbells, Danger, and Dead Batteries, a book of stories from other researchers about the kinds of things that happen when they go out into the field. You can also review this podcast on iTunes and review the books on Amazon. With your support, I can keep doing this podcast for you. All right! Time for my interview with Tomer. Tomer Sharon is the Head of User Research and Metrics at Goldman Sachs. He’s worked at Google and WeWork, and written two books – It’s Our Research, and Validating Product Ideas Through Lean User Research. Thanks for being on the podcast. It’s great to have you here – virtually here in audio space that we’re all sharing. Why don’t you start off – just do a little introduction. Who are you? Where do you work? What are you doing? Tomer Sharon: Okay. Thank you for having me, first. My name is Tomer Sharon. I am currently Head of User Research and Metrics at Goldman Sachs. I do have a second day job. 
I’m also heading a design group for a product called PWM (Private Wealth Management). Yeah, this is where I’m at in the past – well, almost a year. Steve: Alright. So, what is Goldman Sachs? Tomer: Goldman Sachs is, I would say, an investment bank. Probably one of the more important banks in the world. Big corporate. Definitely not one you would associate with design and research, at least not that type of research. But they are changing and they’re celebrating 150 years this year and they’re moving towards what’s called outside digital transformation and that includes learning more from their audiences and investing a lot more in design. Steve: What’s the relationship between the existence of your role and this larger shift that’s going on? Tomer: I think there’s a strong relationship. They have been realizing that they can’t just be living in their own box and they have to open up and try and understand audiences that they’re engaged with already and new audiences. I’ll give an example. Goldman has a commercial bank that’s called Marcus. It’s been around for a couple of years, but still these are consumers that Goldman is now trying to attract. So, it’s definitely not the kind of typical audience that they’re used to. So, they understand that they need to open up, learn from them, and design for them and with them, and that is a shift that has been happening in the past few years. And my role – I didn’t replace anyone. I’m the first one. – is a part of that shift. Steve: Is there any sort of one point or one incident, or key moment, I guess, which sort of marks that transition in a company like Goldman to say yeah, we’ve been doing it this way, we need to do it this way. Tomer: I think it happened – I don’t know if there was one event, but I think it happened in the past maybe two years. The user experience team there was very small and then suddenly they decided that it’s time to invest more in that. 
And from zero it went to several dozens, many dozens, within a year. And the more people do their work, show their work, share their work, and their work is very successful, the more teams and leaders talk about that and then it’s like a cycle that feeds itself and then it grows and grows. So, that’s been happening a lot in the past few years. Steve: And do you have a perspective on – this awareness that you’re describing at Goldman seems to align with what I’ve heard and observed in financial services in general. That it’s an industry that maybe wasn’t seen as paying attention to consumers, users, design, and over the past number of years. Tomer: Yeah. I will admit, this is my first financial services job. So, I’m not really familiar with that world other than Goldman. But, so I hear. I don’t really know from first experience. Steve: But I’m inferring – correct me if I’m wrong – I’m inferring that that also is not a conversation that you’re having inside Goldman? Tomer: No. It’s very, I would say, kind of practical and tactical. We’re not talking about the concept of having me and people like me there. We’re just focusing on doing the work that we know how to do, we’ve been doing for many years, and just bringing that insight and understanding of that world to an organization that wasn’t aware of that previously. Steve: That world is the process and tools of learning about people? Tomer: Um, yeah. I would say process, tools, people. They’re used to hiring different people. I saw that in Google at the time. I saw that at WeWork at the time where even formally you don’t have in the – I don’t know HR systems – you don’t have names for roles for what we do, for who we are. I can’t remember the names, but we were engineers or UI engineers, or things like that. Until you get recognized, and I experienced that at Google, and then you do have a job family for design and for research and so on. It’s a process. 
Steve: So, at Goldman you’re trying to then like – that’s part of the… Tomer: Well, I’m not – we’re too, especially in research, we’re too few people to start having a job family in the HR systems, but we’ll get there. What I’m trying to do is first a lot of evangelism. A lot of conversations with people to plant seeds in their minds that they might need somebody like that. That they might engage in a project, a one-off project, and learn more about it. And with several groups and divisions it works already. So, we have people there and they start doing their work. Steve: It’s interesting that you’re sort of highlighting evangelism vs. like conducting research. Tomer: Right. Right. Sometimes it’s integrated. Sometimes I’m asking people to – some kind of a leap of faith. And then we actually, we do the research and then the work talks for itself. I don’t need to evangelize anymore. They get it. They understand. They want more of it and from there the doors are open. I prefer it this way. Even in the past, I prefer to kind of do the work show then kind of wave my hands and talk about it. I found it always be more meaningful to people and more persuading. Steve: I’ve heard this. I don’t know if this is what’s playing out for you in these situations, but that you help someone take a leap of faith. You do this work. The results speak for themselves for them. Do those results – or how can you help those results spread elsewhere in the organization? Tomer: Um, I use that example when I talk with others and I invite people to connect. I mean in the same company you can hear from them. You can meet them. I sometimes facilitate those meetings. And then they hear from others. I know it will take time, that it’s not something that happens over a day or a week or a month. Sometimes a year. But these conversations happen, whether I’m aware of them, or not. Whether I facilitate them or not. 
I’m confident that the more work we do the more people we’ll have, and we’ll be more impactful. Steve: How does that work across – I don’t know if they’re called business units. Goldman as you said speaks to different kinds of users, different kinds of customers, with its products and services. Tomer: So, what I do – so, as soon as we have researchers joining the group, we assign them to a – let’s call it a business unit, or a product team. And then they’re theirs 100% of the time. I only support them with infrastructure, career growth and things like that. And then they do the work with the team. I do a lot of legwork kind of before that happens to make sure that they have a team that wants them, that needs them, and that knows what to expect. So, it’s not the first time that they hear about that research thing with the appearance of the person. So, that’s how I intend to kind of continue doing rather than hire to – like work in an agency model. I have a pool of people that kind of come and go kind of based on projects. Steve: Okay, so, just to reiterate, you talk about help someone to take a leap of faith, and I’m maybe changing your words a little bit. And a next step from that is to hire someone and put them in that team. Tomer: Yes. Steve: And then they do the work and then the work, as you said, the work speaks for itself, but it’s the work that you’ve staffed someone into that team? Tomer: Yeah. I’ll just add to that. In many cases it’s not really a leap of faith because there are people who are – I call myself an outsider – who didn’t grow up in Goldman, that know about this field, know about research, know about people like us. So, they’re very open to having them. So, that’s much easier. Steve: So, you said you’re not looking to do an agency model. Tomer: No. Steve: From your face, I think there’s a point of view. Tomer: Yes, definitely. Steve: What’s the point of view behind that? 
Tomer: The point of view is that I want people to feel they belong; the researchers feel they belong to a team. And I feel that if I just send them off to short term projects they can’t grow with the team. They can’t have any history with the product. They’re not familiar. They’re really like consultants that come and go. I want them to feel a part of the team, understand all the history, sit with them. I don’t look for them to sit next to me. I want them to sit with their teams. And then going to be a part of that team and then be more impactful. I strongly believe that this is the way to go. Steve: Yeah. Tomer: And I’ve seen that happen before with my own eyes. Not WeWork, because we didn’t do that at WeWork, but definitely at Google where we experienced, at some point, a decentralization. So, there was one UX group and they were decentralized into the different business units. Not much has changed because people were assigned to teams already, but it just felt like the right thing to do, to have people sit with their teams, work with the same team, same product, for long periods of time. Steve: So what implications, if any, does that have for what kind of skillset someone that you’re hiring needs to have? Tomer: Yeah. That’s a good one. So, I’m looking for more experienced people. More senior people. There will be a time where we will hire more junior people and that’s not the time when we’re just starting. And I had the same thing at WeWork when we were building a group from scratch. You need experienced people. So, I’m looking for people who are eager and passionate about building something from scratch, taking a team that maybe doesn’t know anything about user research and try and kind of build that relationship and build those results. We talked about that earlier. With that team and grow with them, grow the practice with them. And maybe if it’s extremely successful grow a team there at some point. 
Steve: There’s always a thing with questions and answers, at least for me. I ask a question, I have an idea what answer I’m going to get. And it’s interesting when the answer is different. I know it doesn’t always cleave this way, but there’s sort of the soft skills and the hard skills thing. I imagined I was going to get a hard skills answer because I was thinking about some of the pragmatic constraints of this, but your answer emphasized more, I think, softer skills. Growth, advocacy, leadership from a research point of view. Tomer: I think it kind of almost goes without saying that once you’re – I don’t know, maybe I’m wrong. But once you’re – I always like to – I tell that to my people when we hire. We look for resumes that scream researcher. So, if your resume screams researcher, then that’s covered. I’m more interested in the soft skills. Steve: Yeah. Can I push on that just a little bit more? Tomer: Sure. Steve: And I think – my bias here, or the perspective I come from is the consultant. So, I come in and out. So, I know what the limitations of that are and so just from where I sit, I see the challenges of what you’re describing because – I mean there’s a life cycle of when a team is developing – there’s probably a smarter way to say that. But something is being built and it goes through stages over weeks/months/years and so the research needs change and so the way the researcher has to respond to that, or lead that, or support that changes. So, that’s kind of the – I’m just revealing what I was probing on here, what I thought the answer was going to be. When you say screams researcher in a resume does that sort of imply that the ability to – are you inferring from that resume the ability to support that team and all these different areas of its evolution? Tomer: Yeah. A resume that screams researcher is a resume that you see all the methods, you see the flexibility in kind of not sticking to one method or one approach. 
Doing that in multiple cultures, multiple companies. Having a trajectory of growth. So, you're going from less meaningful things to more meaningful things, and so on. That, to me, screams researcher. Not, you know, you look at the job titles and you ask yourself why am I even reading this resume. And then you read a cover letter that says something along the lines of "I've always wanted to be." Maybe at some point, but not now. Steve: Yes. There is an archetype of a person, I think, that identifies strongly with researchers and sort of what they're about, but their experience doesn't match that. Tomer: And I'm smiling because there's always an exception. I remember one at WeWork where – and I was talking – speaking the same way – only senior people, I'm not looking for more junior people. And then I hired two junior people, because there's always an exception. Because sometimes I see somebody who makes me ignore everything I just said and hire them. Steve: What's an example of something that overrides that? Tomer: I don't even know how to explain it. It's just – sometimes it's the spark in the eyes that you see immediately with people, and you see that they are going to be like a sponge. That's a good metaphor. And you just know it's going to work. Steve: And you said passion early on. Tomer: Yeah, yeah, yeah. Steve: And I think evidence of passion is past performance. But that's not the only evidence of passion if you're seeing that with people that you're meeting. Tomer: Right, right. Although in most cases I'm very reluctant to hire based on passion if you don't have experience. Because you can have a lot of passion and motivation, but you know nothing. It doesn't mean you're going to be bad, but it means you're going to need a lot of support. So, I think my approach is that I'll try and do zero to very little of that because we're just starting. We're just a few people. We don't really have the time that we need to support our teams. 
We don't have the time to support another person. Not now. When we grow, yeah, but that's definitely – you know, if we were 10 or 20 senior people on the team I would say we have to have more junior people, because we can't have a team of only senior people. But right now, we're not even close to that. Steve: Can you say the size that you are now? Tomer: Yeah, sure. We are four. In a couple of weeks, we'll be five. Worldwide, both here in New York and in London. Steve: And is there like a roadmap for the number of people, to get to that 10 or 20? Tomer: Yeah. I don't want to mention numbers, but I think we're on a path of growth. Steve: Okay. So, you talk about sort of the time commitment required to support people at different levels. I want to go back and clarify one thing. It was almost an aside. You said that you don't want to do the agency model. You want people with these teams, but you're trying to provide infrastructure. And then you used a phrase like career growth. So, given the size you're at now, acknowledging that there are bandwidth challenges here, what does infrastructure look like? What does career growth look like now? Tomer: Um, infrastructure. So, that's easy. When I look around me there's a lot of motivation to do research, but there are needs – and gaps – in terms of tools, knowledge, guidance, process. So, research was happening before there were researchers, but it was very much based on sporadic motivation from different people who were really trying to do the best they could do. So, we need to support them with services that are out there, industry-standard services. We need to sign agreements and get those licenses on board, with different vendors. Steve: What's an example? Tomer: UserTesting, UserZoom. These would be two, but there are more. And, um – let's see what else. 
We are creating a process for: okay, what happens if you want to do research, but you don't have a researcher and you probably won't have a researcher in the next few years? But you still – you acknowledge the fact that you need research now. So, um – we have established some kind of a way to ask for that and then sign up for office hours or something like that, and get some help from us. We will advise you and help you get going without the researcher. What else? We started working with OKRs. To me that's part of the infrastructure. Or the cool kids now call it research ops. So, research ops. Steve: So, explain what OKRs are and explain what research ops is. Tomer: OKRs are Objectives and Key Results. It's a goal-setting approach. There are many; this is just the one that I'm used to, familiar with. And that's just a way to set goals and see how you're doing. So, we do that as a research group as well. Research ops – I would take the easy route and say this is everything that helps research happen without being the actual research. Steve: Are the OKRs an example of research ops? Is that what you were saying? Tomer: So, the tools that I mentioned – the OKRs, hiring, knowledge management. Steve: All of the infrastructure? Tomer: Yes. If somebody wants to do – I don't know, whatever – a usability test, and they don't know how, you want them to have the tool. We want them to have a knowledge base that they can access and see: okay, what do I need to do to run the usability test? And we want them to have guidance and support from a person, from a researcher who knows what they're doing. And I would say all of that is research ops. For researchers, a big part of research ops is participant recruitment. So, finding people to learn from. That's a big part. And something that I'm truly passionate about is insight repositories. Building some kind of a repository that we can pull from later on. 
I would add to that any infrastructure involving measuring the user experience. So, building that. I would also include that under research ops. Steve: So, you said you're passionate about insight repositories? Tomer: Yeah. Steve: Do I have to ask an actual question? Tomer: What do you want to know? Steve: What are you trying to get to? At Goldman, what are you working towards with that? Tomer: It's the same as everywhere. For many years, I was kind of bothered by how wasteful research is. Even I felt that I was "learning the same things over and over again." I knew that other people in the same company did research that I was about to do, and even if I got their insights I was going to do it again. And I know it's really, really messy and hard to retain all the knowledge that you gather from research. The second I had an opportunity to do something about it, I did. That was at WeWork. And we built a system that we called Polaris to solve these problems. It's going to sound funny, but we identified that the main root cause for these problems is the research report – what I called, back then, the atomic unit of a research insight – and we changed that unit into a smaller atom – that's why the metaphor breaks – that we called a nugget, a research nugget. And that's what we stored in this repository. So, a nugget was a combination of an observation, evidence, and tags. Steve: What was the last one? Tomer: Tags. And thanks to these tags you can then search through this database and find answers to questions that you didn't design studies around. So, it happened many times at WeWork where people came to us and said, "what do we know about…?" or, "can we do a study about blah blah?" And we said let's try Polaris first, and realized that we had all the answers without even needing to do more research. 
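The nugget structure Tomer describes – an observation, supporting evidence, and tags that make the repository searchable – can be sketched as a minimal data model. This is an illustrative guess at the shape of such a repository, not Polaris's actual schema; all names here are invented:

```python
from dataclasses import dataclass, field

@dataclass
class Nugget:
    """One atomic unit of research insight: observation + evidence + tags."""
    observation: str                            # what was seen or heard
    evidence: str                               # e.g. a link to a video clip
    tags: set[str] = field(default_factory=set) # what makes it findable later

def search(repository: list[Nugget], query_tags: set[str]) -> list[Nugget]:
    """Return nuggets whose tags overlap the query, so questions no study
    was explicitly designed around can still be answered from the archive."""
    return [n for n in repository if n.tags & query_tags]

# Hypothetical repository contents, echoing examples from the conversation.
repo = [
    Nugget("Half of private offices brought their own printer",
           "video:0142", {"printing", "private-office"}),
    Nugget("Members say the coffee got worse this month",
           "video:0217", {"coffee", "netherlands"}),
]
print([n.observation for n in search(repo, {"printing"})])
```

The point of the tag-based lookup is exactly the "what do we know about…?" scenario: an answer can surface even though no study was commissioned on that topic.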
This will only happen – you have to change your ways a little bit, not just work with a system like that – this will only happen if you do continuous and open-ended research. Back then at WeWork we did exit interviews. So, every WeWork member, every customer that decided to leave, we pinged them and talked with them, interviewed them, and asked them very open-ended questions such as: why are you leaving? What worked well at WeWork? What didn't work so well? If you had 15 minutes with the CEO, what would you tell him? Things like that. And that allowed us to have answers to many, many questions, because these research participants, these exiting members, decided what they wanted to talk about. Whether they wanted to talk about – I don't know, whatever – the price that was too high for them, or the coffee that wasn't good enough to keep them from leaving for another place, or whatever it is that they chose, it went into the system and we could pull it out later on. And combining that with a user experience measurement system would lead you to a situation where you can say: okay, I saw that satisfaction with coffee in our WeWork buildings in the Netherlands has gone down in the past month. Here is a playlist of three Dutch members bitching about coffee from the past month. And then you have the what happened – the numbers. You have the why it happened – from these videos. And if you're going to "serve that" to the person that buys or decides how to brew coffee in the Netherlands, then that's halfway to the solution. So, we imagined something like that at Goldman as well. Steve: So, it only works, you said, if you have kind of ongoing open-ended research? Tomer: Yeah. Steve: Why is that? Tomer: Because if you always – so, the other type of research, I had to give it a name. I would call it dedicated research. 
Dedicated research is research where you know what research questions you have beforehand, and you answer those questions. And then you can create nuggets and it's all good. But then you'll only have answers to those questions. When you do open-ended research, you have answers to questions you never imagined that you might have, or may have in the future. Steve: What if my research question is: what are the highs and lows of a WeWork member? Tomer: So, if you do that once you'll get a snapshot of that point in time. But if you do that continuously, all the time – and at WeWork, at some point, all the UX team members did that on a regular frequency – then you have thousands and thousands of data points. Steve: Okay. So, there's sort of a scale here – scope or scale, I don't even know which one it is – but there's an amount of data that covers a breadth of topics and that is also refreshed. Tomer: Yes. Steve: Okay. So, what's – I would hate it if anyone asked me this question, but I'm going to ask you. What's an insight? Because you were kind of saying a nugget is this sort of stuff, but you brought up insight. So, what's an insight? Tomer: I actually have a definition for that. I'm thinking about that these days. An insight to me is a deep understanding of the situation. So, you – I'm trying to think of an example. I'll go to WeWork again. So, imagine a researcher that walks into a WeWork building, looking around – WeWork has shared open spaces, but also private offices. Okay. So, let's say they notice that a lot of the private offices have printers that they've brought in, and that looks odd, because WeWork offers printing. So, we have printers and we offer that as a part of your WeWork membership, and if you go over you pay for more. It's a nice stream of revenue for WeWork. And let's say they counted and saw that half of the private offices brought in their own printers. 
They go to a second building and a third building and they count, and they see the same. Half of the members that have private offices brought in their own printer. So, let's say they stop here, go back to the office, and add an insight, nugget, whatever we call it, to the system saying half of WeWork members in those buildings brought in their own printer. That, to me, is not a deep understanding of the situation. It's very interesting. It may be indicative of something that's going on that we're not aware of, but that's not enough. We have to understand why. So, I would encourage that researcher to knock on the doors and ask why. And then we may hear things like, "oh, you know, you have a 15-page manual on how to install printers and I'm not going to waste my time on that." Or, "you have to log in each time you go to a printer and that's taking more time." And so on and so forth. "We do steal your paper, so we enjoy that." That's what I mean by deeper understanding: we can better understand the situation, know why that is happening, have some evidence, and then I would say that's an insight. That's a deep understanding of the situation. And to me the system that we built is going to enforce that, so you cannot submit nuggets or insights without that why part. Just facts are not enough. We don't need facts. We need facts plus why they are what they are. Steve: Right. There's a behavior and what's the reason for that behavior. I struggle when I hear people talk about insights, because sometimes they talk about why a single person is doing something, as opposed to why users of a system are experiencing something. Like, in your scenario, you knock on somebody's door and say, what's up with your own printer? "It's too hard to install." And you come back and say – like, there's a difference between we understand why this person was doing it and the generalized conclusion. I don't know, I'm putting my own language into your framework. 
It's probably not working, but… Tomer: That's alright. What we would do with Polaris is gather those individual insights. So, each one would be a nugget. And then if you have 100 of these, the only difference would be the video, the person in front of the camera explaining why they brought in a printer, or whatever it is. And then you can create a playlist and show it to IT, or whoever decided that we're going to go with this system for printing, and have them decide what they're going to do about it. Steve: Okay, so let me push on that a little bit. Because when you talk to, say, 5 people or something, about this behavior, bringing in your own printers, people are going to give related, but sort of seemingly individualized, explanations. And there's an act of interpretation, analysis, and synthesis – words that we often use – to sort of say, well, let's look at all of those. Like, what's the overarching reason? Talk about deep understanding. It's not that we lose track of the fact that the installation manual is messy. There are not enough plugs. There are a lot of individual reasons, but the larger issue is something like complexity, or not being seen as adding to my business, or something. There's a higher-order thing. To me, that's what the insight is. I think you were talking about doing that. Tomer: Yeah. Steve: The words are slippery here when we're talking about insights and nuggets and sort of explanations. Tomer: Nugget is more the technical way we referred to it. But, I agree. And the way it happened in Polaris is through those playlists. So, we encouraged people, both people who belong to the UX team and ones that are not, to collect these nuggets into playlists and then prove a point, or do this analysis, get to an insight, and share it. So, it depends on what you find and what you collect there, but let's say you search for printers and then you get 73 results. 
You sift through them and you see that 15 are not really related to what you're trying to communicate here. The rest are too many, so you'll pick maybe 7 where the videos are really good – like "good participants" that eloquently explain the point – and then you can add your analysis in writing and describe that higher insight, or deeper insight. Steve: Right. So, Polaris kind of sets you up to make that interpretation. Tomer: Yeah, yeah. Steve: It gives you some structure or some way to quickly… Tomer: Yes, yes. Steve: Does Polaris capture that thing it facilitates you to make? Tomer: What do you mean? Steve: So, you create this playlist. You watch the playlist and you come up with some new articulation. The biggest issue around printers for us is that we are doing this, and they are thinking that. Right? That's a new piece of knowledge that's created by reviewing what Polaris gives you. Tomer: Yeah. Polaris was not smart. It's just a tool. It allows you to do whatever you want to do with it. So, it allows you to add text to do that. If that's a yes, then it allows you to do that. Polaris, in its essence, is very simple. Just a tool that facilitates the storage of those nuggets and the creation of those playlists. Steve: Yeah. So, that really helps me understand what you're aiming at when you talk about an insight repository. To me it's something you can re-query to come up with new conclusions. Tomer: Yes. Exactly. Steve: As opposed to sort of, here are all the things that we've concluded. Tomer: Yes. Exactly. Exactly. Steve: Okay. Wow. Here's the GIF of the mind blowing up a little bit. Tomer: You want more? Steve: If you have more, yes. Tomer: Now it's just ideas, and that's not something I'm working towards at Goldman, but I'm thinking – imagine every company that should have a Polaris has it. That's also a waste, because then every company has its own repository, and I'm sure there's a lot of overlap. 
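The playlist workflow described here – search, sift out unrelated hits, keep the handful of most articulate clips, and attach the analyst's written higher-order insight – can be sketched in a few lines. The names and structure are illustrative assumptions, not Polaris's real design:

```python
# Sketch of playlist curation: filter search hits, cap the count, attach analysis.
def build_playlist(results, is_relevant, keep=7, analysis=""):
    """Drop unrelated hits, keep a handful of strong clips, add written analysis."""
    picked = [r for r in results if is_relevant(r)][:keep]
    return {"clips": picked, "analysis": analysis}

results = [f"clip-{i}" for i in range(73)]  # the 73 search hits from the example
playlist = build_playlist(
    results,
    is_relevant=lambda r: not r.endswith(("0", "5")),  # stand-in for manual sifting
    analysis="Printer workarounds point to setup complexity, not printing demand.",
)
print(len(playlist["clips"]))
```

The key design point survives even in this toy form: the tool stays "not smart" – relevance judgment and the higher-order insight are supplied by the person, and the system only stores and assembles.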
So, maybe in the future there should be a kind of open… Steve: Panopticon. Tomer: An open Polaris, or whatever we call it, that anybody can contribute to and anybody can pull from. I'm not doing anything about that. Steve: A friend of mine – so, this is like a third-hand quote that I'm sure I'm misquoting, but a friend of mine told me this and he was quoting the head of knowledge management at NASA. This guy says, "the best knowledge management tool is lunch." Tomer: I can see where – I can see why that was said. Yet, I wholeheartedly reject the idea. Steve: Yeah. Tomer: I mean, that's not scalable. What if I went to the bathroom when that happened? What if I started working at NASA the day after that important insight was shared? I can understand the anecdotal part of it, how it's useful. But to me there has to be something – I don't know what to call it – more solid. Steve: Yeah. Analogously, you started off our conversation by describing the lunching that you're doing at Goldman, connecting people, meeting people. That's different, I think, than transmitting nuggets, but you are using sort of lunch – and I mean very vaguely lunch, time with people, talking to them – as a way to – I don't know, as a way to do what? What's the difference between the NASA lunch thing and what you're doing with socializing and connecting people? Tomer: I think what I'm trying to do is socialize a discipline. And if I understand it correctly, the NASA lunch is to socialize an insight, maybe. So, yeah, I don't think we're there yet in terms of socializing insights. Steve: And who knows what the context of that quote was, which has been quoted, which I'm requoting. It may be exactly coherent with what you're talking about. Tomer: Maybe. Steve: I want to loop back to something that you talked about as part of this infrastructure. You said there are groups that don't have a researcher and may never have a researcher. 
So, what are the tools, the knowledge base, that can help them do things? I feel like that's – there's a thing in our field about who does research, and I'm not even sure what the label for that is. Tomer: Job security. Steve: Yeah. Is that what it is? Tomer: Should we let them do research? Steve: Right. Tomer: Yeah. It makes me laugh. Of course, we should let them. As if we're the authority. But, yeah, of course. I mean, why would anyone not be allowed to do research? Because they didn't go to school? I don't think so. If somebody wants to do it, to me that's huge. So, we should give them everything we can to let them do it. Are they going to be doing a bad job? Maybe. But to me bad research is better than no research. It's a first step, and as we get better about the tools, socializing what we do, socializing best practices, things will get better. Yeah, there will probably be crap in the first few times. Maybe the first 20 times. But if they really want to and they have this passion, then why kill it by saying that it's our job, or something like that? So, yes, I'm all for "letting them" do research. Definitely. Steve: I mean, I think you highlighted exactly – so, I think job security is a fear, but I think bad research is also a fear, as you said. Tomer: Yeah. I'm okay with that. Steve: And you said bad research is better than no research. Tomer: To me, yeah. 100%. Yes. Steve: I like how definitive you are. Tomer: I'm… Steve: Because that's a hot topic, I think. I've heard people go back and forth on it. Tomer: I know. I've heard that too. I'm definitely on that side. Steve: I also will say you're describing ways to limit or mitigate bad research. Tomer: Help make it better. Steve: Yeah. Tomer: Yeah, yeah. I mean, first they need to know about it. There are so many people who develop, let's say, software at Goldman. They're not even aware of our existence. 
It's not that they think about it and say, "uh, no, I'm not going to do it." They don't even know that this is happening, that we are there. So, I think there's a long way to go. We need to be more popular, be more known, and then provide all the tools, help, guidance, and knowledge that we can. Knowing that we can't support everybody. It's not going to happen. It happened even at Google, which had hundreds of researchers at the time, and today probably a lot more. There are teams that build stuff. Why not allow them to do research, to help them? Steve: You made the comment that research was happening before there were researchers. Tomer: Yeah. Steve: In the history of the world that's also true, right? Tomer: Yeah. True. We can't stop that or control that. Steve: So, maybe just a slight shift. We're sort of talking about who's allowed to do what, or what are researchers and what do we do. You mentioned early on that you also take on an additional role. Do you want to give some context to that? And what does that mean for you? Tomer: Yeah, I also lead a design group for a product, Private Wealth Management. It allows very rich people to manage their money. It's a part of a service that Goldman offers. And the digital aspect of it is not the primary aspect. It's more of a supporting role. It's mostly based on a relationship between an advisor, a Goldman Sachs advisor, and a client. And there are the usual suspects of websites, apps, and so on. I'm now leading a group of people that design that. And I mean design in the expanded way we define it. So, it's not just designers. It's also researchers and data people and a writer and prototyping and so on. Steve: Are there differences between managing researchers – we talked before about people coming in with resumes that scream researcher – vs. all the different kinds of functions you're working with on that team? Tomer: Honestly, no. I don't think so. I would be the first to admit it. 
I'm personally not the best designer. Definitely not the best in the world, and not the best that I could be. But I think there are things that are similar no matter what kind of group you're leading. It's good that you know something about what the group is doing, but I think it's mostly about empowering the right people, giving them what they need, relieving them of things that are just stupid, that they don't need to do and don't need to be involved in, and focusing them on what they are passionate about. This doesn't have any direct relationship with design or research or whatever it is. Steve: Yeah. There's an email list that I think you and I are both on that's about design and user research. And there was a thread, or maybe people were having a conference call – I can't even remember how it manifested – but the topic was researchers managing designers, which seems like it's a newer thing. If you look historically, research was sort of the accessory or adjacency to design, so design teams kind of managed researchers. But as research has grown, there are other people in situations like yours, where their label for themselves would lean more towards researcher, but they're managing designers. So, it's interesting that you sort of don't see a difference, because I feel like the thrust of this group needing to talk was: hey, there's something different here, so how are we going to deal with it? Tomer: I know I have my internal bias toward research. So, I'm probably more attentive to when that is not happening – maybe more than a designer who manages designers would be. But I'm just guessing. I don't know. I know that I'm definitely – I care about research and I notice and say something when it doesn't happen. I don't know – does it have to be a designer? Designers would need to know if that's their thing. Steve: I think I might switch gears entirely here. Tomer: Go for it. 
Steve: I'd love to just go way back, like as far back as you want to go, and maybe give the story of things you did in your life to get here, whether those are work or school or other things? Tomer: That got me here? Steve: Yeah. What's your sort of background, or your narrative arc, if you will? Tomer: So, I'll tell you something from a long time ago that probably really was the tipping point, though I wasn't even aware at the time that it was the tipping point. I mean, it's not a secret that I'm originally from Israel and I relocated; it was 12 years ago. And while in Israel I served in the Army. I signed up for a career in the Army. So, the whole shebang. I was going to be a career officer for the long term. But then when I was 24 – long story short – I was a paraglider. And I took a course and I got injured badly and I was out of the Army for a year. I was at home recovering. And that was a bad thing. A bad injury, but it opened my eyes. And during that year I came back to the Army and said I want to cancel the whole thing; I don't want to stay; I want out. I will do my next job, because we planned for one more job, but that's it. I'm out. And that's what happened. I think without that accident I would probably – I'd probably be retired by now, but I would be a career officer and not what I am today. And looking back, I'm happy that that's what happened. I would say that's the biggest thing that affected what I'm doing today. Steve: So, opening your eyes was realizing that you didn't want to go on the path that you were on. Were there any hints for you of what path you did want to pursue? Tomer: I knew it was creative. I was in a wheelchair for four months, and then crutches, and then learned how to walk again. And I made my way, once I was able to get up, to a local artist who gave very open-ended lessons in his basement, a couple of blocks from my house at the time. So, I started painting and tried all kinds of ways to paint. 
And I didn't know to say that I would be an artist, and honestly, I wasn't really good at it. But I knew it would be something creative. I didn't know exactly what. Steve: If you look at your work today, does it match that? Tomer: Um, not 100% overlap, but I feel some of it does, yeah. Steve: I mean, I have wrestled, mostly privately, with just the idea: is research a creative field? Or are we creative? Tomer: Some of it is. Steve: I found myself in a collaboration with people that I think more traditionally fit that job description, and I kind of had my hair blown back, just by the speed and breadth of making stuff. It was definitely intimidating. Tomer: So, I would call myself a researcher, but still, I was heavily involved in shaping Polaris. That's a product. It's not research work. I'm now involved in creating a system that – I also lead a small team of engineers that builds a system to measure the user experience. So, that is definitely more creative than maybe research. But some parts of pure research are creative. To me, probably the biggest one is translating one or a set of questions that a team has into: okay, what are we going to do to get answers? If it were that easy to come up with an answer to that, then anybody could do it well. That's not the case. A lot of people have a lot of trouble with that part. So, I think that's a creative part. You're not going to see a beautiful painting coming out of that, but it is creative. Steve: Right. I think, for me, it's creating the new story out of a bunch of experiences or nuggets, or whatever you're pulling from… Tomer: Realizing, getting to an insight. Yeah. Steve: So, what did you do after the art class? What did you end up doing? Tomer: I applied to what we would probably call today a visual communications program. Got accepted to one of the best ones, if not the best one, in Israel at the time, and at the last minute decided that it wasn't for me. 
And then I learned – I studied copywriting. So, I'm a certified copywriter, in Hebrew though. And then I took my first job – or, I worked. I didn't really know what I was going to do, so I did something that I knew how to do: I worked for 3 years in a very small consultancy for military-oriented industries. I was a project manager. I did that for, I think, 3 years. That was the time where I learned all these things and really set my mind that anything army-related was not for me. And then my first real job in that direction was – funny what the names were at that time – I was an internet copywriter, which we would probably call today a content strategist, for a website that I would compare to Monster or Indeed, or something like that today. And there I got exposed to – probably at the time it was Jakob Nielsen and people in that area. I started reading more. I did some, again, what we would call today product management work there. And then I asked them to switch me to what we would call today a researcher. They said, "no." And I was like, okay, and I looked for a company who would take me. And there was one company that took me as – again, it wasn't researcher. It was called a usability something. And that's it. That's how it started. They were very brave, I should admit, because I didn't know much. Steve: But they took you as a researcher? Tomer: Yeah. Steve: So, that was sort of your first time with the title. Tomer: Yeah. You want an even funnier story? The only person who actually knew what it was was the CEO. He was the one who interviewed me. And then he hired me, and then a month later he decided to leave. So, the only person who really knew what I was doing left about a month in. But that went well. That's how it started. Steve: What was the point at which you came to the U.S.? 
Tomer: Um, so, I had a couple more jobs and I realized very quickly that, at the time at least, there weren't enough – or any – opportunities to grow into managing a group or a team of people who do that. I was always the only person in the company that did that. And I got to the conclusion that the only place for me to work in a company that had a lot of people of my tribe would be here in the States. And I also realized the army there is not like the army here. I didn't have any academic degree. I also realized that I needed the right degree, because these companies that I was thinking about would not even read my resume. My resume didn't scream researcher. So, I went to school. I continued working and went to school. I completed my Bachelor's and then applied to Bentley University here and then moved. And while studying there, in Massachusetts, I contacted Google and things rolled from there. Steve: We talked about WeWork a little bit. Can you – like, what was your role at Google and maybe what was your role at WeWork? Tomer: At Google, I was a user researcher, a senior user researcher. First in advertising. They've changed all the names by now, but at the time it was DoubleClick for Publishers. That was the product I was involved in. Ironically, I was the only researcher there in that group of hundreds of people. And after, I think, 2½ years I transferred to Search. In Search, I was first on Voice Search and then what we called, at the time, Core Search – the page of 10 blue links and how they developed from that point to what you see today, with all the visual aspects of the results and so on. So, that started back then, vertical by vertical – TV, movies, music. I was doing a lot of research into search results for sports. I know a lot more than I should about all kinds of sports, cricket and so on. Yeah. Steve: And then what was the role you took at WeWork? 
Tomer: WeWork, I was head of user experience. So, started a group from scratch. The goal there – and this is kind of following several conversations with the CEO and co-founder that hired me. At the time – again, I’m sure things have changed since. But at the time WeWork had three big groups internally that kind of built or created the three aspects of the WeWork product. They were called digital, physical and community. And Adam, the CEO, felt, and he was very right, that while each group is doing a great job, sometimes if you look at how things are from the perspective of the customer, the member, there are gaps between those groups that we’re not even aware of. And the goal was to identify those gaps – to me that translates to research – and then solve them, solve the problems there. I’m thinking of an example. Think about conference rooms. So, conference rooms is something that WeWork offers. Members pay for it. So, somebody physically designed the conference room. An architect decided on the size and location. An interior designer decided on the mood and what will be in the room. Somebody from IT picked an AV system for that room. Somebody in digital developed a system to book this room. Somebody in community designed a policy of how to use this room. And community team members in the buildings enforce this policy. Everything is good. Everything is working well, but then situations happen. Such as members come to a meeting, they book the room and then another member is squatting in the room and refusing to leave. Or they walk in and realize – I’m a startup, I booked a room for a meeting with a potential investor, and then I see a room that is designed as a music room with bean bags and no projector, clearly inappropriate for my meeting. Or, that person who’s squatting – I go to the community manager, but they are dealing with a leak of water from the ceiling onto another member’s head. 
They’re all very nice, but they can’t solve my problem right now. So, this is what I’m talking about when I say the gaps between those groups. So, we’re trying to identify those gaps – because in many cases we didn’t even know about them – and try and solve them. That was the premise back then. Steve: You’ve written books. You give a lot of talks. Tomer: Less now. Steve: You have a good sort of history of creating material. You’ve interviewed a lot of people. People listening, what would you send them to, to buy, read, watch? Tomer: Well, our publisher would be happy if I say that they should buy my book – yours too. And that would be – the name of the book is Validating Product Ideas Through Lean User Research. That’s a book for – I would only mention that because my first book is for researchers, and a lot of researchers do listen, so maybe I’ll mention both. But the second book, the one I mentioned, is a step by step guide to answering different research questions that people have. Each chapter is a research question, step by step, on how to answer it through research that anybody can do quickly. The first book is called It’s Our Research. It’s to solve a problem that, at least at the time – I now have different thoughts about that – but, at least at the time, a lot of researchers had, maybe today as well, and that is that a lot of people don’t want to do research because they feel they have the answers already, or they have good intuition. And also, once they agree to do research, in some cases they don’t want to act on it. Steve: Wait, act on? Tomer: On their research results. Steve: What they’ve learned? Tomer: Yes. So, that’s a book that’s supposed to help with that problem. 
And I’m kind of having different thoughts now because my answer in that book was – and that’s why I called it It’s Our Research – make them feel that it’s their research as much as you feel it’s yours, and then they would want it and they would do something about it. That was my point. Today, I’m having kind of different thoughts about how to get to a point where research is wanted and acted on. And I will try it at some point. We just need to grow a little bit at Goldman. But, my thoughts are – and I posted something about that recently – when you plug your own charger into the wall, do you really care how electricity gets there? What’s happening in the power plant? Why is it working? How is it efficient or not efficient, and so on? You don’t really care. You want your phone charged. My thought about research is, why wouldn’t it be the same? People have questions. Research provides answers. Yes, a lot is going on to get to those answers, but if you have a question, why do we need to bother you with all the details? Just do the thing – trust us to do the thing – and we’ll give you an answer to your question. I’m going to try that at some point. I’m thinking – it’s not political, but – of building a wall between stakeholders and researchers. That wall could be Slack or something like that, through which stakeholders ask questions and researchers provide answers. If we have an answer immediately, if we have a system like Polaris or something like that, we can provide an answer. If we don’t, we will just ask a few follow-up questions, and then do the research and get the answer. Just thoughts. I haven’t tried it yet. Steve: Which makes me think of the research ops piece a little bit where – like building up participant recruitment infrastructure… Tomer: Yeah. Steve: …is an interesting one, because back in the old days where we had to do everything ourselves, you’re learning about your problem by figuring out how to – by recruiting. 
You also learn about your problem by dealing with your stakeholder and seeing what – it’s that art piece vs. this kind of process infrastructure piece, and it’s interesting to think about what is lost and what is gained. Or how does it change when you create infrastructure – like if you’re a researcher and you’re completely decoupled from participant recruiting, that may change how you deal with people that you meet, or how you deal with framing the problem. So, for everything where we build up a process, that efficiency, that kind of query system, how does that change what we do? And who is coming to this field? Tomer: Yeah. Steve: These are not necessarily my own thoughts, but just things I’m hearing from people as well. Tomer: One other thing that I would send people to is a series of Medium posts that I’ve published in the past year, maybe less, about measuring user experience. A lot of people like to talk about metrics these days. I took the HEART framework from Google and then we have a post per letter about happiness, engagement, adoption, retention and task success. And for each one: what it is, what’s important to measure, why, how, mistakes, and what actions you can take from each one. So, this is something that I’m interested in these days, measurements. And I’m trying to figure out the “H” part, the happiness part, specifically. There are a ton of challenges with that – how to measure satisfaction and happiness. I’m also posting – kind of tracking my – not tracking my own life, but paying more attention to when I’m exposed to requests to rate satisfaction and happiness, and I share them with people, with my thoughts about them. Steve: Okay. Great. Anything else that we should talk about in this conversation? Tomer: I said I am speaking kind of publicly a lot less, but I do do that from time to time. I’ll be speaking at two conferences: the Face of Finance in April in New York, and User Research London in June, in London. Steve: Alright. 
Well, thanks for taking the time to chat and sharing all the information and stories and everything. I really appreciate it. Tomer: That was fun. Thank you. Steve: Thanks. And so concludes another episode of Dollars to Donuts. Follow the podcast on Twitter, and subscribe to the podcast at portigal.com/podcast, or iTunes, or Spotify, or Stitcher, or anyplace you get your podcasts. Also online at portigal.com/podcast is the transcript and links for this episode (and of course all the previous episodes). At Amazon and rosenfeldmedia.com you can buy Tomer’s books and my books. Our rocking theme music was written and performed by Bruce Todd.
19. Leisa Reichelt of Atlassian (Part 1). This episode of Dollars to Donuts features part 1 of my two-part conversation with Leisa Reichelt of Atlassian. We talk about educating the organization to do user research better, the limitations of horizontal products, and the tension between “good” and “bad” research. If you’re working on a product that has got some more foundational issues that need to be addressed, but the vast majority of the work is happening at that very detailed feature level, how are you ever going to stop kind of circling the drain? You get stuck in this kind of local maxima. How are you ever going to take that big substantial step to really move your product forward if it’s nobody’s job, nobody’s priority, to do that? – Leisa Reichelt Show Links Fundamentals of Interviewing Users (SF) The Art of Noticing, by Rob Walker The Art of Noticing newsletter Objectified Did you see that? Tapping into your super-noticing power Leisa on LinkedIn Leisa on Twitter Atlassian Jira Confluence Trello Quantifying Qualitative Research – Mind the Product Gerry McGovern’s Top Tasks What is Jobs to be Done (JTBD)? Build Measure Learn Follow Dollars to Donuts on Twitter and help other people discover the podcast by leaving a review on iTunes. Transcript Steve Portigal: Howdy, and here we are with another episode of Dollars to Donuts, the podcast where I talk to the people who are leading user research in their organization. I just taught a public workshop in New York City about user research, organized by Rosenfeld Media. But don’t despair – this workshop, Fundamentals of Interviewing Users, is also happening September 13th in San Francisco. I’ll put the link in the show notes. Send your team! Recommend this workshop to your friends! If you aren’t in San Francisco, or you can’t make it September 13th, you can hire me to come into your organization and lead a training workshop. 
Recently I’ve taught classes for companies in New York City, and coming up will be San Diego, as well as the Midwest, and Texas. I’d love to talk with you about coming into your company and teaching people about research. As always, a reminder that supporting me in my business is a way to support this podcast and ensure that I can keep making new episodes. If you have feedback about the podcast, I’d love to hear from you at DONUTS AT PORTIGAL DOT COM or on Twitter at Dollars To Donuts, that’s d o l l a r s T O D o n u t s. I was pretty excited this week to receive a new book in the mail. It’s called The Art of Noticing, by Rob Walker, whose name you may recognize from his books, or New York Times columns, or his appearance in Gary Hustwit’s documentary “Objectified.” I’ve only just started the book but I am eager to read it, which is not something I say that often about a book of non-fiction. The book is structured around 131 different exercises to practice noticing. Each page has really great pull-quotes and the exercises seem to come from a bunch of interesting sources like people who focus on creativity or art or storytelling. Rob also publishes a great newsletter with lots of additional tips and examples around noticing, and I’ve even sent him a few references that he’s included. I’ll put all this info in the notes for this episode. This topic takes me back to a workshop I ran a few years ago at one of the first user research conferences I ever attended, called About, With and For. The workshop was about noticing and I wonder if it’s time to revisit that workshop, and I can look to Rob’s book as a new resource. Well, let’s get to the episode. I had a fascinating and in-depth conversation with Leisa Reichelt. She is the Head of Research and Insights at Atlassian in Sydney, Australia. Our interview went on for a long time and I’m going to break it up into two parts. So, let’s get to part one here. Thank you very much for being here. 
Leisa Reichelt: Thank you for inviting me. Steve: Let’s start with maybe some background introduction. Who are you? What do you do? Maybe a little bit of how we got here – by we, I mean you. Leisa: I am the Head of Research and Insights at Atlassian. Atlassian is probably best known for creating software such as Jira and Confluence. Basically, tools that people use to make software. And then we also have Trello in our stable as well. So, there are a bunch of tools that are used by people who don’t make software as well. A whole bunch of stuff. Steve: It seems like Jira and Confluence, if you’re any kind of software developer, those are just words you’re using, and terms from within those tools. It’s just part of the vocabulary. Leisa: Yeah. Steve: But if you’re outside, you maybe have never heard those words before? Leisa: Exactly. Atlassian is quite a famous company in Australia because it’s kind of big and successful. But I think if you did a poll across Australia to find out who knew what Atlassian actually does, the brand awareness is high. The knowledge of what the company does is pretty low, unless you’re a software developer or a sort of project manager of software teams, in which case you probably have heard of or used or have an opinion about Jira and probably Confluence as well. Steve: And then Trello is used by people that aren’t necessarily software makers. Leisa: Correct. A bunch of people do use it for making software as well, but it’s also used by people in businesses running non-technical projects, and then a huge number of people just use it kind of personally – planning holidays, or weddings. I plan my kids’ weekends on Trello sometimes and I know I’m not alone. So, yeah, it’s very much what we call a horizontal product. Steve: A horizontal product can go into a lot of different industries. Leisa: Exactly, exactly. I’m very skeptical about that term, by the way. Steve: Horizontal? Yeah. 
Leisa: Or about the fact that it won’t necessarily be a good thing, but that’s another topic probably. Steve: So, I can’t follow-up on that? Leisa: Well, yeah, you can. Well, the problem with horizontal products, I think, is that they only do a certain amount for everybody and then people reach a point where they really want to be able to do more. And if your product is too horizontal then they will graduate to other products. And that gives you interesting business model challenges, I think. So, you have to be continually kind of seeking new people who only want to use your product up to a certain point in order to maintain your marketplace really. Steve: When I think about my own small business and just any research I’ve done in small and just under medium sized businesses where everything is in Excel, sort of historically, where Excel is the – there may be a product, Cloud-based or otherwise, to do a thing, but someone has built kind of a custom Excel tool to do it. So, is Excel a horizontal product that way? Leisa: I think so, yeah. In fact, I was talking to someone about this yesterday. I think that for a lot of people the first port of call for everything is a spreadsheet. They try to do everything that they can possibly do in a spreadsheet. And then there are some people who are like the, “ooh, new shiny tool. Let’s always try to find an application for a new shiny tool.” But I think actually the vast majority of people take the first tool that they knew that had some kind of flexibility in it. So, if you can’t do it in Word or Excel – people will compromise a lot to be able to do things in tools that they have great familiarity with. Steve: Yeah. But from the maker point of view you’re saying that a risk in the horizontalness, the lack of specificity, creates kind of a marketplace challenge for the maker of the tool? Leisa: It can do. I think for it to be successful you just have to be at such a huge scale to be able to always meet the needs of enough people. 
But I think things like Excel and Word and Trello, for example, they’ll always do some things for some people. Like just ‘cuz you moved to a more kind of sophisticated tool doesn’t mean that you completely abandon the old tool. You probably still use it for a bunch of things. Steve: So, your title is Head of Research and Insights? Leisa: Correct. Steve: So, what’s the difference between research and insights? Leisa: Yeah, good question. I didn’t make up my own title. I kind of inherited it. If I remember correctly, the way that it came about was that when I came into my role it was a new combination of people in the team, in that we were bringing together the people who had been doing design research in the organization and the voice of the customer team, who were effectively running the NPS. And I think because we were putting the two of them together it sounded weird to say research and voice of the customer. So, they went with research and insights instead. And, honestly, I haven’t spent any time really thinking about whether that was a good title or not. I’ve got other problems on my mind that are probably more pressing, so I’ve just kind of left it. If you want to get into it, I think research is the act of going out and gathering the data and pulling it all together to make sense of it, and insights are hopefully the sense that you get from it. We do try to do both of those things in my team, so I think it’s a reasonably accurate description. Steve: How long have you been in this role? Leisa: It’s about 18 months now. Steve: If you look back on 18 months what are the notable things? For the experience of people inside the organization, what has changed? Leisa: Quite a few things. The shape and make up of the team has changed quite a lot. We’re bigger and differently composed to what we were previously. When I first came in it was literally made up of researchers who were working in products. 
Prior to me coming in they were reporting to design managers and we, for various reasons, pulled them out of products pretty quickly once I started. And then we had the other part of the business who were running NPS that was being gathered in product, and we don’t do that anymore either. So, that team is doing quite different things now. So, we’ve changed a lot of things. We’ve introduced a couple of big programs of work as well. One of them being a big piece of work around top tasks. So, taking Gerry McGovern’s approach to top tasks. Trying to build that kind of foundational knowledge in the organization. So, that’s kind of new. There have always been those people doing Jobs To Be Done type stuff, but very very close to the product level. So, we’ve tried to pull back a little bit to really build a bigger understanding of what are the larger problems that we’re trying to solve for people and how might we see the opportunities to address those more effectively? Steve: So, trying to create shared knowledge around what those top tasks are – I’m guessing for different products, different user scenarios. Leisa: One of the things that we really tried to do is to get away from product top tasks and get more into really understanding what problems the product, or combinations of products, is trying to address. So, we don’t do top tasks for Jira. We do top tasks for agile software teams. And then through that we can then sort of ladder down to what that means for Jira or Confluence or Bitbucket or Trello, or whichever individual or combination of products we have. But it means that we sort of pull away a little bit from that feature focus. I think it can be very seductive and also very limiting. Steve: What’s the size of the team now? Leisa: I have a general rule in life of never counting how many researchers you have in the organization because it’s always too many, according to whoever is asking you – not you, but like senior people. 
I think we’re probably around the mid-20s now. Steve: And around the world, or around different locations? Leisa: Mostly we’re in Sydney and California. So, we’re across three different offices there and we have a couple of people working remotely as well. So, I have somebody remote in Europe and another person in California who’s working out of a remote office too. Steve: Can we sort of talk about the make up of the team evolving – what kinds of – without sort of enumerating, what are sort of the background, skillsets? Any way that you want to segment researchers, what kinds of – what’s the mix that you’re putting together? Leisa: So, I think at the highest level we’ve got people who do design research, predominantly qualitative and they do a mixture of discovery work and evaluative work. We’ve got a team of what we call quantitative researchers and those are generally people who have got a marketing research type background and so they bring in a lot of those data and statistical skills that the other crew don’t necessarily have quite so much of. And then we have a research ops team as well who are doing some different things. And then we have a handful of research educators. Steve: And what are the research educators doing? Leisa: Educating people about how to do research. Steve: These are hard questions and hard answers! Leisa: Well, if you dig into it too much it gets complicated pretty quickly. So, I think the reality of research at Atlassian is the people in the research team probably do – best case – 20% of the research that happens in the organization and that’s probably an optimistic estimate as well. A huge amount of research is done by product managers and by designers and by other people in the organization, most of whom haven’t had any kind of training and so take a very intuitive approach to how they might do research. So, the research education team are really trying to help shape the way that this work is done so that it can be more effective. 
Steve: I’m thinking about a talk I saw you give at the Mind the Product Conference where you began the talk – I think you kind of bookended the talk with a question that you didn’t answer – I don’t think you answered it definitively – which is, is bad research better than no research, or words close to that. It was a great provocation to raise the question and then sort of talk about the different aspects of it. How might we answer that? What are the tradeoffs? When you talk about this research education effort I can’t help but think that it’s connected to that question. If 80% of the research is done by people acting intuitively then yeah, how do you level up the quality of that? Leisa: Absolutely. Steve: Which implies that – well, I don’t know if it answers – does that answer the question? I’m not trying to gotcha here, but if you are trying to level up the quality that suggests that at some point better research is – I don’t know, there’s some equation here about bad vs. better vs. no. I’m not sure what the math of it is. Leisa: So, this has been the real thing that’s occupied my mind a lot in the last couple of years really. And I’ve seen different organizations have really different kind of appetites for participating in research in different ways. I think at the time that I did that talk, which was probably, what? Steve: Almost a year ago. Leisa: Yeah. I think I was still trying to come to terms with exactly what I felt about all of this, because as somebody in my role, a lot of people in the organization are going to be watching to see if you’re trying to be overly precious and to become a gatekeeper or a bottleneck or all of these kinds of things. So, I’ve always felt like I had to be very careful about what you enabled and what you stopped, because everyone has work that they need to get done, right. 
And the fact that people want to involve their customers and their users in the design process is something that I want to be seen to be supporting. Like I don’t want to stop that and I certainly don’t want to message that we shouldn’t have a closeness to our users and customers when we’re doing the research. But, I’ve also seen a lot of practices that are done with the best of intentions that just get us to some crazy outcomes. And that really worries me. It worries me on two levels. It worries me in terms of the fact that we do that and we don’t move our products forward in the way that we could, and it worries me because I think it reflects really poorly on research as a profession. I think most of us have seen situations where people have said, well, I did the research and nothing got better, so I’m not going to do it anymore. Clearly it’s a waste of time, right. And almost always what’s happened there is that the way that people have executed it has not been in a way that has helped them to see the things that they need to see or make the decisions that they need to make. So, it’s this really hard line to walk, to try to understand how to enable, but how to enable in a way that is actually enabling in a positive way and is not actually facilitating really problematic outcomes. So, that’s, yeah – that’s my conundrum, balancing that. On the one hand I feel really uncomfortable saying no research is better than bad research. But on the other hand, I’ve seen plenty of evidence that makes me feel like actually maybe it’s true. Steve: So, there’s kind of a time horizon there, in that the bad research may lead to the wrong decisions that impact the product and then sort of harm the prospects of it. I’m just reflecting back. That harms the prospects of research kind of going forward. Right, every researcher has heard that “well, we already knew that” response, which to me is part of what you’re talking about. 
It’s that when research isn’t conducted in a way and sort of isn’t facilitated so that people have those learning moments where they – I think you said something about sort of helping them see the thing that’s going to help them make the right decision. And that’s – right, that’s different than what methodology you use, or are you asking leading questions? Maybe leading questions are part of it, because you confirm what you knew and so you can’t see something new because you’re not kind of surfacing it. Leisa: Loads of it comes around framing, right. Loads of it comes around where are you in the organization? What are you focused on right now? What’s your remit? What’s your scope? What are you allowed to or interested in asking questions about? In a lot of cases this high volume of research comes from very downstream, very feature focused areas, right. So, if you’re working on a product that has got some more foundational issues that need to be addressed, but the vast majority of the work is happening at that very detailed feature level, how are you ever going to stop kind of circling the drain? You get stuck in this kind of local maxima. How are you ever going to take that big substantial step to really move your product forward if it’s nobody’s job, nobody’s priority, to do that? So, a lot of this is kind of structural. That so many of our people who are conducting this research are so close to the machine of delivery, and shipping and shipping and shipping as quickly as possible, that they don’t have the opportunity to think – to look sideways and see what’s happening on either side of their feature. Even when there are teams that are working on really similar things. They’re so heads down and feature driven. So, doing the least possible that you can to validate and then ship to learn, which is a whole other area of bad practice, I think, in many cases. 
It’s really limiting and you can see organizations that spend a huge amount of time and effort doing this research, but it’s at such a micro level that they don’t see the problems that their customers are dying to tell them about, and they never ask about the problems that the customers are dying to tell them about, because the customers just answer the questions that they were asked, and that’s kind of what bothers me. And so, it’s not about – in a lot of cases, some of it is about practice. Like I think it’s amazing how few people can just resist the temptation to say, hey, I’ve got this idea for a feature, how much would you like it? They can get 7 people who said they loved my feature. Like that feels so definitive and so reliable, and that’s very desirable and hard to resist. So, yes, that happens. But the bigger thing for me I think is where research is situated and the fact that you don’t have both that kind of big picture view as well as that micro view. We just have all of these fragments of microness and a huge amount of effort expended to do it. And I don’t feel like it’s helping us take those big steps forward. Steve: But Leisa, all you need is a data repository so that you can surface those micro findings for the rest of the organization, right? Leisa: You’re a troll, Steve. Steve: I am trolling you. Leisa: You’re a troll, but again, I mean in some – kind of yes, but again, like all of those micro things don’t necessarily collectively give you the full view. And a lot of those micro things, because of the way they’re being asked, actually give you information that’s useless. If all of your questioning is around validating a particular feature that you’ve already decided that you think you want to do and you go in asking about that, you never really ask for the why. Like why would you use – like who is actually using this? Why would they use this? What actual real problem are they solving with this, right? 
So, the problem becomes the feature, and then all of that research becomes very disposable because the feature shifts, and then nobody uses it as much as everybody thought they would, or they’re not as satisfied with it as they thought they would be. So, we just keep iterating and iterating and iterating on these tiny things. We move buttons around. We change buttons. We like add more stuff in – surely that will fix it. But it’s because we haven’t taken that step back to actually understand the why. If you’re talking about those big user need level questions, the big whys, then you know what, you can put those things in a repository and you can build more and more detail into those because those tend to be long lasting. And then at the other end, just the basic interaction design type stuff that we learn through research – a lot of those things don’t change much either. But it’s that bit in the middle, that feature level stuff, that is the most disposable and often the least useful, and I feel like that’s where we spend a huge amount of our time. Steve: Do you have any theories as to why in large software enterprise technology companies there is a focus on leaning heavily on that kind of feature validation research? What causes that? Leisa: I think that there are probably at least two things that I can think of that contribute to that. One is around what’s driving people. What gets people their promotions? What makes people look good in organizations? Shipping stuff – shipping stuff gets you good feedback when you’re going for your promotion. They want a list of the stuff that you’ve done, the stuff that you’ve shipped. In a lot of organizations just the fact that you’ve shipped it is enough. Nobody is actually following up to see whether or not it actually made a substantive difference in customers’ lives or not. So, I think that drive to ship is incredibly important. 
And our org structures as well, like the way that we divide teams up now, especially in organizations where you’ve got this kind of microservice platform kind of environment. You can have teams who’ve got huge dependencies on each other and they really have to try to componentize their work quite a lot. So, you have all of these kind of little micro teams whose customer’s experience is all of their work combined, but they all have different bosses with different KPIs or different OKRs or whatever the case may be. And I think that’s a problem. And then the other thing is this like build, measure, build – what is it? I’ve forgotten how you say it. Steve: I’m the wrong person. Leisa: Build/measure/learn. Steve: Yeah, okay. Leisa: The lean thing, right, which is this kind of idea that you can just build it and ship it and then learn. And that means that if it had been learn/build/measure/learn we would be in a different situation, right, because we would be doing some discovery up front and then we would have a lot of stuff that we already knew before we had to ship it out to the customers. But it’s not. It’s build – you have an idea, build/measure/learn. And then people are often not particularly fussy about thinking about the learn bit. So, we ship it, put some analytics on it and we’ll put a feedback collector on it and then we’ll learn. What are you going to learn? Whatever. What does success look like? I don’t know. It’s very – it’s kind of lazy, and we treat our customers like lab rats when we do that. And I feel like they can tell. There’s lots of stuff that gets shipped that shouldn’t get shipped, that we know shouldn’t get shipped. I’m not talking about Atlassian in specific. I’m talking about in general. 
We all see it in our day to day digital lives that people ship stuff where we really should know better, but this build/measure/learn thing means that unless you can see it in the product analytics, unless you can see thousands of customers howling with anger and distress, it doesn’t count. Steve: It reminds me of a client I worked with a few years ago where we had some pretty deep understanding of certain points of the interactions that were happening and what the value proposition was and where there was meaning. Just a lot of sort of – some pretty rich stuff. And the client was great because they kept introducing us to different product teams to help apply what we had learned to really, really specific decisions that were going to be made. But what we started to see – the pattern was that they were interested in setting up experiments, not improving the product. And in fact, this is a product that has an annual use cycle. So, the time horizon for actually making changes was really far out and some of the stuff was not rocket science. Like it was clear for this decision – A versus B versus C – here’s what the research said very clearly: this is what people care about, or what’s going to have impact, or what’s going to make them feel secure in these decisions. They were like great, we can conduct an experiment. We have three weeks to set up the experiment and then we’ll get some data. And I was – I just hadn’t really encountered that mindset before. I don’t know if they were really build/measure/learn literally, if that was their philosophy, but I didn’t know how to sort of help – I wasn’t able to move them to the way that I wanted it done. I’d really just encountered it for the first time and it seemed like there was a missed opportunity there. Like we knew how to improve the product and I wasn’t against it – they are conducting experiments and measuring and learning – awesome. That’s research.
But acting on what you’ve learned seems like why you’re doing the research in the first place. Leisa: It feels as though we have this great toolset that we could be using, right? We’ve got going out and doing your kind of ethnography, contextual type stuff. And then right at the other end we’ve got the product analytics and running experiments and a whole bunch of stuff in between. And it really feels to me as though organizations tend to really just get excited about one thing and go hard on that one thing. And growth – doing the experiments is a big thing right now. I see loads of situations where we look at data and we go, oh look, the graph is going in this direction. Or, the graph is not going in that direction. Why? And I’ll kind of – like there’s so much guessing behind all of that, right? And if it doesn’t go quite right, well then let’s have another guess, let’s have another guess, let’s have another guess. And this – like you say, there’s so much stuff that we probably already know if we could connect two parts of the organization together to talk to each other more effectively. Or, there are methods that we could use to find out the why pretty quickly without having to just put another experiment, another experiment onto our customers, our users. But the knowledge of this toolset and the ability to choose the right tool and apply it, and to apply these approaches in combination, seems to be where the challenge is. Steve: What’s the role of design in this? In terms of here’s the thing that we know and here’s the thing we want to make. We haven’t talked about designers. Ideally for me, I sort of hope designers are the instruments of making that translation. Without design then you can sort of test implementations, but you can’t synthesize and make anything new necessarily. Leisa: Well, I mean yeah. Theoretically design has a really important role to play here because I think design hopefully understands the users in a design process better than anybody else.
Understands the opportunities for iteration and levels of fidelity for exploring problems in a way that nobody else on the team does. And lots of designers do know that. But the time pressure is enormous, especially in these kinds of larger tech companies, where a lot of the time designers are also concerned about being bottlenecks. They have to feed the engineering machine. And it can be really difficult for them to have the conversations to talk about all of the work that we should be doing before we ship something. So, I feel as though they have a lot of pressure on them, a lot of time pressure on them. They’re being pressured to really contract the amount of effort that they put in before we ship something. And there’s this huge desire amongst product teams and their bosses to just get something shipped, get something live and then we’ll learn, that I think design really struggles with. And I don’t know – it can be really difficult in those kinds of environments to be the person who stands up and says we need to slow down and we need to do less and do it better. So, I have empathy for designers in their inability to shift the system – these problems – necessarily, because of this big pressure that’s being put on them just to keep the engineers coding. Steve: Right. If you say you want more time to work on something – what are the risks to you for doing that? Leisa: Exactly, exactly. On a personal level everyone wants to look good in front of their boss. Everyone would like a pay raise and a promotion and the easiest way to get those things is to do what you’re told and ship the stuff. Keep everyone busy, produce the shiny stuff, let it get coded up, let it go live, learn, carry on. That’s thinking short term. That’s how to have a happy life. Is that how you’re going to fundamentally improve the product or service that you’re working on? Probably not.
But it takes a lot of bravery to try to change that, to try to stop this crazy train that’s out of control. Throw yourself in front of the bus. All that kind of stuff. Like you said, it’s hard, especially when there are loads of people all around you who are very happy to keep doing it the way that it’s being done right now. So, yeah. And I think that’s research’s role, that’s design’s role. I would like that to be PM’s role as well. And most engineers are – a lot of engineers are very, very, very interested in making sure that the work that they’re doing is actually solving real problems and delivering real value as well. Steve: So, as you point to the individuals there’s a lot of shared objectives. But if you point to the system – it’s the rewards system and the incentive system, but there’s also some – I think there’s just sort of what day to day looks like. Sort of the operations of the system of producing technology products. Leisa: I think there’s also – there’s something about like what do people like to see? What gets people excited, right? And graphs pointing upwards in the right direction is really exciting. Like certain outcomes are really exciting. Ambiguous outcomes, really not exciting at all. Having things that go fast, very exciting. Things that go slow, not exciting. So, I think there are all of these things that this collection of humans that form an organization have a strong bias towards, that we get really excited about, that don’t necessarily help us in the long run. You know you see lots of people who get really excited about the graph. Very few people dig in behind the graph to the data. Where did this data come from? How much can I actually believe it? Like people are so willing to just accept experiment findings without actually digging in behind them. And a lot of the time, when you do dig in behind it, you go huh, that doesn’t look very reliable at all.
So, there’s something about how we love to tell ourselves that we’re data informed or data driven, but a huge number of the people who are excited about that don’t have very much data literacy to be able to check in and make sure that the stuff they’re getting excited about is actually reliable. Steve: So, how could someone improve their data literacy? Leisa: I think that there’s a lot of work that we need to do to make sure that we actually understand how experiments should be structured. And understand more about – being more curious about where is this data coming from and what are the different ways that we could get tricked by this, right? And there are tons of books and papers and all kinds of things on the subject. But actually, when you go looking for it and you’re coming at it with a critical mind, rather than with a mind that gets excited by graphs, you know a lot of it is pretty logical. When you think about like – like surveys, right? Who did this survey actually – who actually answered these questions? Instead of just going hey, there’s an answer that supports my argument, I’m just going to grab that – to dig behind it and go, are the people who are being surveyed actually the same people that we’re talking about when we’re making this decision? Is there something about the nature of that group of people that is going to bias their response? It’s remarkable to me how few people actually kind of check the source to make sure that it’s worthy of relying on. A lot of people are really keen to just grab whatever data they can get that supports their argument. I think this is another one of those kind of human things, those human inclinations that we have that lead us towards kind of bad behaviors.
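Leisa’s point about digging in behind the graph can be made concrete. As an editorial illustration (not something discussed in the episode), here is a minimal Python sketch of one basic check a team could run before celebrating an upward-pointing graph: a two-proportion z-test on an A/B experiment’s conversion counts. The numbers in the example are hypothetical.

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates.

    conv_a / conv_b: number of conversions in each variant.
    n_a / n_b: number of users exposed to each variant.
    Returns (z, p_value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pool the two samples to estimate the shared rate under
    # the null hypothesis that the variants are identical.
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF via erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# The graph points up: 4.8% -> 6.2% conversion. But with 1,000
# users per variant, the lift is not distinguishable from noise
# at the conventional 0.05 significance level.
z, p = two_proportion_ztest(conv_a=48, n_a=1000, conv_b=62, n_b=1000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

This is the kind of five-minute sanity check that separates "the graph went up" from "we learned something" – exactly the gap between looking at the data and digging in behind it.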
Steve: I can imagine that with this research education effort that you’re doing, the people who participate in it are going to learn how to make better choices in the research that they then run – some practical skills, some planning, some framing, as you talked about. But it seems like that literacy is a likely side effect of that, or maybe it’s not the side effect. Maybe it’s the effect. How to be better consumers of research. Once you understand how the sausage is made a little bit then you understand, oh yeah, that’s a biased question, or that’s bad sampling, and there’s a choice to make in all of these and we have to question those choices to understand the research that’s presented to us. I hadn’t really thought of training people to do research as a way to help them in their consumption or critical thinking around research. Leisa: Something else that we’ve really come to realize recently as well is that because of all of the other pressures that the people who are making these decisions are having to deal with, we think we’ll do much better if we train the entire team together instead of just training the people who are tasked with doing the research. Because something that we’ve observed is that you can train people and they can go, this is great – I really see how what I was doing before was creating these kind of not great outcomes and what I need to do to do better. And then you’ll see them, not that long later, doing exactly the opposite of what we thought we had agreed we were going to do going forward. And we’re like, well, what’s happening? These are smart, good people who have been given the information and are still doing crazy stuff. Like what’s going on with that? And you realize they go back into their context where everyone is just trying to drive to get stuff done faster, faster, faster, faster, and you have to plan to do good research. The easiest research to do really, really quickly is the crappy research.
If you want to do good research you do have to do some planning. So, you need to get into a proactive mindset rather than a reactive mindset. And you need the entire team to be able to do that. So, one of the things that we’re looking to do moving forward is not just to train the designers and the PMs, but actually to train a ton of people all around them in the teams to help them understand that the way that you ask the questions and who you ask them of and where – all the different things that could impact the reliability of your research – require planning and thinking about in advance. So, we hope that means that the whole team will understand the importance of taking the time to do it and it won’t just be one or two people in a large team fighting to do the right thing and being pressured by everybody else. So, the education aspect, I suspect, is super important and it goes way beyond just helping people who are doing research to do better research. It goes to helping the whole organization to understand the impact and risk that goes with doing a lot of research activity in the wrong way or the wrong place. Steve: It’s just fascinating to me, and I think a big challenge for all of us, that lots of researchers are involved in education of one form or another – workshops – a big program like you’ve been building. But most of us are not trained educators. We don’t have a pedagogical, theoretical background about – it’s about communicating information. It’s about influence and changing minds. And it just seems like a lot of researchers, from an individual researcher on a product team to someone like you that’s looking at the whole organization, we’re sort of experimenting and trying to build tools and processes that help people learn. And learning is not just imparting the information, but reframing and empowering – these big words where – I wish I had – I wish you had a PhD in education. I wish I had that.
Or that I had been a college professor for a number of years, or whatever it would take – however I would get that level of insight. Personally, I have none of that. So, to hear us – you know we all talk about these kinds of things. I think research gives you some skill in prototyping and iterating and measuring in order to make the kinds of changes in the implementation that you’re making. I don’t know about you. I feel like I am amateurish, I guess, in terms of my educational theory. Leisa: Absolutely. And I think talking to the team who are doing this work, it’s really, really, really hard – really hard work to come up with the best way to share this knowledge with people in your organizational context, in a way that is always time constrained. At Atlassian we have a long history of doing kind of internal training. We have this thing called bootcamps. We’ve got almost like a voluntary system where people come in and they run these bootcamps and you can go and learn about the finer details of advanced Jira administration or you can come in and learn about how to do a customer interview. But the timeframe around that was like – a two-hour bootcamp was a long bootcamp. Most bootcamps are like an hour. And so, when we started thinking about this we were like, we’re going to need at least a day, maybe two days. And everyone was like, nobody will turn up. But fortunately we do day-long sessions and people have been turning up. So, that’s great. But yeah, it’s a huge effort to try to come up with something that works well. And every time we do these courses the trainers and educators go away and think about what worked and how we can do it better. So, we iterate every single time. So, yeah, it’s a huge amount of effort. I think in larger organizations too, there are other people in the organizations who are also tasked with learning type stuff.
So, we have a couple of different teams at Atlassian who are kind of involved in helping educate within the organization. So, we’re building a lot of relationships with different parts of the org than we have before in order to try to get some support and even just infrastructure. Like there’s just a whole lot of logistics behind setting this up if you want to do it at any kind of scale. It’s great to have that support internally and it’s really good to start to build these relationships across the organization in different ways. But yeah, I think that we certainly underestimated the challenge of just designing what education looks like and how to make it run well. It’s a massive, massive effort. Steve: It’s not the material – you guys probably have at hand a pretty good set of here’s how to go do this. If you brought an intern onto your team you could probably get them up to speed with material that you have, but you’re trying to change a culture. And I think the advantage that I have, the context that I have as an external educator, is that people are opting in, so I can go by the assumption that they want to be there, and I don’t have access to what the outcomes are. I might through someone that’s my gatekeeper, but it’s kind of on them. I have the responsibility for the training, but not the responsibility for the outcomes, which is what you all are kind of working with. So, I envy you and I don’t envy you. Leisa: Well, I like it because, you know, going back to the build/measure/learn thing, right – again, we did a learn before we did our build and measure because we’re researchers and that’s what we do, but it’s super interesting to see the behaviors of people who have come through the training and see whether they shift or not.
That gives us – we get feedback from people after the course who tell us whether they thought it was useful or not useful and what they liked and didn’t like, but then we actually get to observe, because through our ops team, they come through us to do their recruiting and get their incentives. So, we can keep a little bit more of an eye on what activity is coming out and see if there is any sort of shifting in that as well. And it’s not just courses as well. It’s thinking about what’s the overall ecosystem? What are other things that we can be doing where we are sort of reaching out to help support people on this journey as well? Before we did educating, we had advisors who kind of made themselves available to go and sort of support teams who recognized that they might have a need for some help with getting their research right. So, that was kind of our first attempt. But we had to pivot into educating because of the time factor. We would go in and give advice and everybody would go, it’s great advice, we’d totally love to do that, but I have to get this done by next Thursday. So, I’m going to ignore your advice for now and carry on with what I was going to do anyway. And that was pretty frustrating. So, we felt like we had to invest in trying to get ahead of the curve a little bit – try to get ahead. Try to not necessarily influence the stuff that has to happen by next Thursday, but try to encourage teams to start being proactive and planning to do research instead of doing this really reactive work. Or as well as the reactive work perhaps. I don’t know. Steve: Have the right mix. Leisa: Yeah. Steve: I wonder if – and this is probably not going to turn out to be true, but I wonder about being proactive and planning versus time. The time pressure to me is about oh, we only have so many hours to spend on this, or that calendar-wise we need to be there next Thursday.
But being proactive says, well, if we started thinking about it three weeks ago we’d be ready on Thursday to do it “the right way.” I’m wondering, can we tease apart the pressures? One – like no proactivity, the sort of very short-term kind of thinking – is different than hours required. Is that true? Leisa: I think so. Because I think that even when we do the short-term stuff we still spend quite a lot of time and effort on it. And the planning in advance, like the more proactive work, doesn’t necessarily, I don’t think, entail more actual work. It just might be that you put in your recruitment request a couple of weeks beforehand so that we can try to make sure that the people that you meet are the right kinds of people. Whereas if you have to have it done in 3 or 4 days’ time, then your ability to be careful and selective in terms of who you recruit to participate in the research is very much limited. So, when we have those kinds of time constraints you see everybody just going to unmoderated usability testing and their panel, and that introduces a whole lot of problems in terms of what you’re able to learn and how reliable that might be. Yeah. I was thinking for a second about – you know, theoretically when you do your unmoderated usability testing you should still be watching the videos, right? So, that should take as much time as watching a handful of carefully recruited people doing these sessions in a facilitated way. But the reality is, I think, that most people don’t watch the videos, which speaks to quality. Steve: Here we are back again. It seems like, for the profession overall, maybe one way to start framing the way around this time pressure thing is to decouple proactiveness versus sort of hours burned. That it’s going to be a similar number of hours burned, but if you start earlier the quality goes up. I had never really thought about those two things as being separate. Leisa: Yeah.
And I don’t think people do. I think when people think about – and this is – I mean I sort of said the word quality. I’m trying not to say quality anymore. I’m trying to talk about meaningfulness more now, I think, because whenever you talk about quality the pushback that you get is, well, it doesn’t need to be perfect, it doesn’t need to be academic. I just need enough data to be able to make a decision – and I understand that. But then I see the data upon which they’re making the decision and that makes me worry, right? And I think we want to get out of this discussion of quality levels and more into sort of reliability levels – like how reliable does it need to be? Because surely there has to be a bar of reliability that you have to meet before you feel like you’ve got the information that you need to make a decision. But I see loads of people making decisions off the back of really dreadful and misleading data and that’s what worries me. And they feel confident with that data – going back to the data literacy problem. They really haven’t dug into why they might be given really misleading answers as a result of the who and the how and the why – all those decisions that are made around how to do the research, most of which have been driven by time constraints. Steve: Okay, that’s the end of part one of my interview with Leisa. What a cliffhanger! There’s more to come from Leisa in the next episode of Dollars to Donuts. Meanwhile, subscribe to the podcast at portigal dot com slash podcast, or go to your favorite podcasting tool like Apple Podcasts, Stitcher or Spotify, among others. The website has transcripts, show notes, and the complete set of episodes. Follow the podcast on Twitter, and buy my books Interviewing Users and Doorbells, Danger, and Dead Batteries from Rosenfeld Media or Amazon. Thanks to Bruce Todd for the Dollars to Donuts theme music.