The Frontside Podcast

Business
Technology

Updated about 1 month ago
Latest release on Jul 09, 2020

It's like hanging out at our software studio in Austin, Texas with Charles Lowell and the Frontside Team. We talk to smart people about how to make the world of software better for the people who make and use it. Managed and produced by @therubyrep.

iTunes Ratings

7 ratings (7 five-star, 0 four-star, 0 three-star, 0 two-star, 0 one-star)

Great show for web developers
By Aaron Dowd - Mar 07 2014
I’m really enjoying listening to these guys talk about their experiences building web applications. Keep em coming, guys!

repkgs & reasonml

In this episode, Marcell Cutts and Shane Wilson explain ReasonML and repkgs to Charles: What are they? What are they for? How do you use them?

00:38 - Reason

03:25 - BuckleScript

06:01 - Reason + BuckleScript

16:07 - Reason: Interoperation & Adoption

30:00 - Operating at the Compiler Level vs the Run-Time Level

  • ppx (pre-processor extension)

34:29 - Last thoughts on Reason, and why use it?

44:43 - repkgs

Please join us in these conversations! If you or someone you know would be a perfect guest, please get in touch with us at contact@frontside.io. Our goal is to get people thinking on the platform level, which includes tooling, internationalization, state management, routing, upgrades, and the data layer.

This show was produced by Mandy Moore, aka @therubyrep of DevReps, LLC.

Jul 09 2020

59mins

Intro to Rush.js with Pete Gonzalez

In this episode, Pete Gonzalez joins the show to talk about everything Rush.js: What is it? What is it for? How do you use it? Find out here on The Frontside Podcast!

Special guest:

Pete Gonzalez | @octogonz

During the day, Pete works at HBO in Seattle on their streaming media apps. Prior to that, he was at Microsoft for 9 years, and before that, he worked at various consulting companies. A long time ago he was a cofounder of Ratloop, a small company that makes video games.

01:24 - Rush.js: What is it and what is it for?
  • Rush on GitHub

04:47 - Problems with Managing Large Codebases

  • Rush Stack: provides reusable tech for running large scale monorepos for the web

07:22 - How does Rush provide a solution for build orchestration?

13:34 - Rush Stack Opinion: How to Lint, Bundle, etc.

16:53 - Using Rush Stack: Getting Started

24:27 - Getting Technical About Versions

  • Phantom Dependencies
  • Doppelgangers
  • Pure Dependencies

32:47 - Thoughts on Monorepos

36:30 - Getting Started (Cont’d) + Efficient TypeScript Compilation

43:28 - Does Rush have a size limit? Is it for bigger or smaller projects? Both?

44:34 - Using pieces of Rush in non-Rush projects?

Please join us in these conversations! If you or someone you know would be a perfect guest, please get in touch with us at contact@frontside.io. Our goal is to get people thinking on the platform level, which includes tooling, internationalization, state management, routing, upgrades, and the data layer.

This show was produced by Mandy Moore, aka @therubyrep of DevReps, LLC.

Jun 25 2020

53mins

Big Ideas & The Future at The Frontside

In this episode, Charles and Taras discuss "big ideas" and all the things they hope to accomplish at The Frontside over the next decade.

Please join us in these conversations! If you or someone you know would be a perfect guest, please get in touch with us at contact@frontside.io. Our goal is to get people thinking on the platform level, which includes tooling, internationalization, state management, routing, upgrades, and the data layer.

This show was produced by Mandy Moore, aka @therubyrep of DevReps, LLC.

Transcript:

CHARLES: Hello and welcome to The Frontside Podcast, a place where we talk about user interfaces and everything that you need to know to build them right. Today, we're going to talk about big ideas and the future of Frontside.

TARAS: Yeah, starting with: is Frontside a good idea?

CHARLES: No, we're just going to talk about how you know that an idea is good. We've touched on it a couple of times before. Like how do you go about validating a big idea? How do you discover a big idea? What do you do?

TARAS: And then, even when you have big ideas like big tests, what does that mean for the world? How do you make a big idea an idea that a lot of people like and agree with and actually use day to day?

CHARLES: Yeah. It turns out that it's not easy. There's a lot of work involved with that. A lot of it crystallized around the conversations we're having about what exactly is big test and recognizing that big test isn't really a code base. It's not a toolkit. Even though it does have aspects of those things, it really is an idea. It's an approach. It's a way of going about your business, right?

TARAS: Yeah. Especially when you put big test functionality in place, when you start doing big testing and then you put together things like using Mocha and Karma, big tests in that kind of test suite is really just interactors and the idea of big testing. There's nothing else. All the interactors do is just give you an easy way to create composable like [inaudible] objects, so you don't have to write -- you have the components but you don't have to write selectors for each element in the component, especially if it gets composed. But that's like a very small functionality that does a very specific thing. But big test itself, it takes a lot of work to actually -- we had this firsthand experience on the project we're working on right now. We are essentially introducing Ember's acceptance testing but for React, in a React project, and having to explain to people what it is about this that actually makes it a really good idea, and having people in the React world see that this is actually a really good idea. It's kind of incredible. When you actually try to sell something to somebody and convince somebody that this is a good idea is when you realize how inadequate your understanding of the idea really is. You really have to start to break it down and understand what it is about this that is a really big idea.
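
[Editor's note: For readers who haven't seen an interactor, here is a minimal TypeScript sketch of the idea. The names below are illustrative, not the actual BigTest API; the point is that the page object composes the way the component composes, so selector logic is written once.]

```typescript
// Minimal sketch of the interactor idea (illustrative, not the BigTest API):
// a composable page object, so tests describe what the user does instead of
// hand-writing a selector for every element of every composed component.

interface Interactor {
  find(selector: string): HTMLElement;
}

function interactor(rootSelector: string): Interactor {
  return {
    find(selector: string): HTMLElement {
      // Every lookup is scoped to this interactor's root element.
      const el = document.querySelector(`${rootSelector} ${selector}`);
      if (!el) throw new Error(`Not found: ${rootSelector} ${selector}`);
      return el as HTMLElement;
    },
  };
}

// A text field interactor composes onto any parent scope, so a login form
// interactor can reuse it without restating selectors.
function textField(parent: Interactor, label: string) {
  return {
    fillIn(value: string): void {
      const input = parent.find(`input[aria-label="${label}"]`) as HTMLInputElement;
      input.value = value;
      input.dispatchEvent(new Event('input', { bubbles: true }));
    },
  };
}

// Usage in a test:
const loginForm = interactor('[data-test-login-form]');
textField(loginForm, 'Email').fillIn('user@example.com');
```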

CHARLES: Yeah, I completely agree 100% because to be clear, we've actually been doing this now for two years almost. So, this is not the first React project where we've put these ideas in place. But I think in prior examples, we just kind of moved in and it's like we're going to do this because this is what we do. And we have firsthand knowledge of this working because we've operated in this community where this is just taken on faith that this is the way you go about your business. You have a very robust acceptance test suite. And because of that, you can experience incredible things. When you and I were talking before the show, we were kind of commenting on inside the Ember community, you can do impossible things because of the testing framework. You can upgrade from Ember 1 to Ember 3 which is a completely and totally separate framework, basically. You're completely and totally changing the underlying architecture of your application. You can do it in a deterministic way and that's actually incredible.

TARAS: And what's interesting too is that the React core team kind of hinted at this also in their blog post about Fiber, or moving to Fiber, because one of the things that they talked about there is that knowing how the system is supposed to behave on the outside allowed them to change the internals of the framework. Specifically, the test suite for the React framework treated the system kind of as a black box, and that allowed them to change the internals while the test suite essentially stayed the same. So this idea of acceptance testing your thing is really fundamental to how the Ember community operates. But other communities have this as a big idea as well. It's just applied in different areas.

CHARLES: Right. And so applied to your actual application, this is something that's accepted in one place, but it's not an accepted practice in other places. But you can make your argument one of two ways and say, "I have experienced this and it's awesome. And you can do architectural upgrades if you follow the patterns laid out by this idea." And that works if you have a very, very high, high trust relationship, but you don't always have that, nor should you. Not everybody is going to be trusting you out of the box. So, you're going to have to lay out the arguments and really be able to illustrate conceptually practically how this is a good idea just so that people will actually give it a try.

TARAS: And it takes a lot. One of the reasons why it was easier for us to introduce big testing to the current project we're working on is because we've done the implementation for this in a previous project. So when we were convincing people this is a good idea, we're like, "Look, your test suite can be really good. It's going to be really fast. And look, we've done it before." So the actual process of convincing people that a big idea is a really good idea is kind of a complicated process that requires a lot of work and partially requires experimentation. You have to actually put an implementation in place, and you have to build up on your successes to be able to get to a point where you can convince people that this is actually a really good idea -- people who have not heard about this idea, and especially people who have people of authority in their community that hold counter views. For example, quite often when it comes to big tests, when you bring up big tests, people will reference a blog post by a Google engineer that talks about how functional testing or acceptance testing is terrible. And for a lot of people, a Google engineer means a lot, and the person makes really good points. But that's not the complete idea. The complete idea is not just about having an acceptance test suite. It's a certain kind of acceptance test suite. It's an acceptance test suite that mocks out at boundaries, so you don't make API requests to the server; you make an API request to a [inaudible] server using something like Mirage or whatever that might be. So, the big idea has nuance that makes it functional, but getting people who are completely unaware, who don't necessarily look up to you as an authority, to believe you -- "Yes, that actually sounds like a really good idea" -- is not a trivial task.
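
[Editor's note: To make "mocking out at boundaries" concrete, here is a minimal sketch using Mirage JS's createServer from the miragejs package. The application's real networking code runs unchanged; each test declares exactly the responses the simulated server should give.]

```typescript
import { createServer } from 'miragejs';

// Stand up a fake server at the HTTP boundary. The app still calls
// fetch('/api/users') exactly as it would against production.
const server = createServer({
  routes() {
    this.namespace = 'api';

    // Each test declares the world it needs the server to present.
    this.get('/users', () => [
      { id: '1', name: 'Charles' },
      { id: '2', name: 'Taras' },
    ]);
  },
});

// ...exercise the application through its UI...

// Tear the fake server down when the test finishes.
server.shutdown();
```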

CHARLES: No, it's not. Because the first time you try and explain it, you're arguing based on your own assumptions. So, you're coming to it safe in the knowledge that this is a really, really good idea based on your firsthand knowledge. But that means you're assuming a lot of things. You're assuming a lot of context that you have that someone else doesn't and they're going to be asking questions. Why this way, why this way, why this way? And so, you have to generate a framework for thinking about the entire problem in order to explain the value of the idea. And that's something that you don't get when it's just something that's accepted as a practice.

TARAS: I think simulation is actually a really good example of that. If you haven't had experience with Mirage, if you don't know what having a configurable server in your tests does for you, you will probably not realize that similar ideas apply to, for example, Bluetooth: when you're writing tests for your Bluetooth devices, or you're writing tests for an application that interacts with Bluetooth devices, you actually want to have a simulation there for Bluetooth so that you can configure it in a similar way to the way you would configure Mirage for a specific test. You want to be able to say, "This Bluetooth device is going to exist," or, "I have these kinds of Bluetooth devices around me, they have the following attributes. They might disconnect after a little while." There are all kinds of scenarios that you want to be able to set up so that you can see how your application is going to respond. But if you haven't seen a simulation with something like Mirage, you're going to be going, "I don't know why the hell this would be helpful."
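
[Editor's note: The following is a hypothetical illustration of the Bluetooth simulation being described; the API is invented for this sketch. Like Mirage for HTTP, each test configures the world of devices it needs.]

```typescript
// Hypothetical Bluetooth simulator (invented API, for illustration only).
interface SimulatedPeripheral {
  name: string;
  services: string[];
  disconnectAfterMs?: number; // simulate a device that drops its connection
}

class BluetoothSimulator {
  private peripherals: SimulatedPeripheral[] = [];

  addPeripheral(peripheral: SimulatedPeripheral): void {
    this.peripherals.push(peripheral);
  }

  // The application scans against the simulator instead of the real
  // adapter, just as it would point fetch at a Mirage server.
  scan(): SimulatedPeripheral[] {
    return [...this.peripherals];
  }
}

// A test might declare: one heart-rate monitor that disconnects after 5s.
const bluetooth = new BluetoothSimulator();
bluetooth.addPeripheral({
  name: 'HeartRateMonitor',
  services: ['heart_rate'],
  disconnectAfterMs: 5000,
});
```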

CHARLES: It seems like a lot of work. One of the things that we've been working on, as I said, is trying to come up with a framework for thinking about why this is a good idea, because we can't just assume it. It's not common knowledge. For example, one of the things that we've been developing over the last year, and more recently in the last few months, is trying to understand what makes a test valuable. At its essence, what are the measures that you can hold up to a test and say this test has X value? Obviously, something like that is very, very difficult to quantify. But if you can show that and you say, "This test has these quantities and this test has these quantities," then we can actually measure them. Then it's going to allow people to accept it a lot more readily and try it a lot more readily. So, the idea that we're playing with right now is that you kind of have to evaluate a test, first, on speed: tests that are fast have an intrinsic value, and the upside that you gain from a test is very quickly bled away or offset if the test is slow. So, I can have a very comprehensive test that tests a lot of high value stuff. But if it takes three days to run, it's going to be basically worthless. Another axis on which you can evaluate is coverage. I'm not talking about coverage of lines of code. I'm talking about use cases and units of assemblage. So, there's the module. Those modules are then stitched together into components. Those components are stitched together into applications. Those applications are downloaded onto browsers. And I would consider each a different unit of assemblage. Your application running on Firefox is a different assemblage than your application running on Chrome. You have this axis, which is the coverage of your unit of assemblage. That's another way that you can evaluate your tests. So if I have a test that runs only on Node in a simulated DOM, there's a cap, an absolute cap, on the value of that test, and it cannot rise above a certain point. And another axis that we've identified is isolatability, the ability to run a test in isolation. So if I have a test suite comprised of 1500 tests, but if one fails in the middle I have to restart the suite from the beginning, that's going to decrease the value. Maybe it's related to speed: being able to run the tests without having to install a bunch of different dependencies. So that's another axis. And so, trying to really understand the variables there, you have to be very systematic about thinking about the tests so that you can actually take your idea and explain it to someone who's not coming at it from first principles. Imagine you have to explain an if statement to somebody who's never programmed with an if statement before.

TARAS: That is going to be very difficult.

CHARLES: It's going to be difficult, right?

TARAS: Yeah. In general, it's very challenging to go from an experience that somebody had -- overriding somebody's experience by conveying your own personal experience is very difficult. And getting someone to experience something is very difficult. And so a lot of the work that we've been doing over the last little while has been breaking down these problems into a way of understanding them so that we can actually explain why these things are important. We had to do this recently with the Bluetooth work that we've been doing. We have a partner that's implementing a Bluetooth abstraction for mobile devices, and we're trying to convey to them the value of being able to simulate devices. That's something that we saw with Mirage. We knew that being able to simulate devices in tests in different environments is extremely valuable. We know this firsthand from our experience. But trying to justify to them why we think this is important and why they should rejig all of their thinking about how they're going to architect this middle layer between the native APIs and the application APIs -- why they should rejig this layer to make it so that it will allow for simulation -- like convincing very technical, very knowledgeable and experienced people that these ideas are important, it requires a kind of fundamental understanding of the value that they provide.

And I think this touches on what Frontside is really going to be doing going forward: creating conditions for this kind of thought to be cultivated, to be able to create a business environment where our clients gain benefit from this kind of very deep insight, insight that transcends the source of those ideas. Because there are certain ideas that are really fundamental, and they're beautiful ideas. For example, in the React world, a functional component is a really beautiful idea. It's a really simple concept. And going along with what you're saying about these kinds of fundamental ideas, they will drive you to the next point you couldn't even imagine. I think the work that the React core team has done -- they set out this functional component as a primitive, but it wasn't possible for a long time to really make the functional component a reality until they'd gone through the process of actually understanding what connects all the dots, so that you could eventually get to a point where it's like, "Oh, actually this [inaudible] API is a way for us to enable the fundamental concept of having a component." Not only is it simple, but it actually works. You can write a performant application using this fundamental concept. So, the work that we're going to be doing is, first of all, making it possible to have conversations at this level. The people that are going to be working with Frontside -- clients who we work with, companies that work with us, who partner with us -- it's all going to support creating this environment where we can have these kinds of ideas. Then being able to extract these great ideas from the frameworks and actually making them available across frameworks. You don't have to be locked into a specific community to be able to benefit from that.
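
[Editor's note: For context, a React functional component really is just a function from props to UI, and the hooks API (plausibly the API alluded to above) is what made stateful functional components practical. A minimal example:]

```tsx
import React, { useState } from 'react';

// A functional component: just a function from props to UI.
function Greeting({ name }: { name: string }) {
  return <h1>Hello, {name}!</h1>;
}

// Hooks let that same simple primitive hold state.
function Counter() {
  const [count, setCount] = useState(0);
  return (
    <button onClick={() => setCount(count + 1)}>
      Clicked {count} times
    </button>
  );
}
```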

One of the first steps we're working on now is, now that we have an understanding about big tests -- we believe this is a good idea, we have ways to justify why it's a good idea, we have clients who have benefited from this good idea -- we're in a position to fill in the gaps that keep big tests from being something that could be used in any framework. We want it to be really easy for people to be able to say, "I really love acceptance testing, but I have to work in a React project, or I have to work in a Vue project, or I have to work in an Angular project." It's like, "It's no problem. I have big tests. Big tests is going to give me what I want." Big tests the idea, that is, but it's going to come with all the little pieces that you need to be able to assemble the functional test suite that is going to give you the benefits that Ember's acceptance test suite provides for Ember projects.

CHARLES: I mean, why should you have to compromise on these things? If it is a good idea and it is a fundamental idea, whether it's how you manage concurrent processes, whether it's how you manage testing, whether it's how you manage routing, why should you have to compromise? Why should you have to say, "You know what? I love..." I don't know, what's a comparable system? "I love air conditioning." Why should I have to go into a car that doesn't have air conditioning? Because every single car has a different air conditioning system, or every single house has a different air conditioning system. I'm showing the fact that I live in Texas here by using this example. But we've developed air conditioning systems that are modular so that they can be snapped on to pretty much any house, and you don't have to build a custom thing for the circulation and refrigeration of air every single time, or have it just be like it's part of the house -- like we have this house that ships with ducts. It's like, no, we want to separate out that system.

TARAS: You don't have to commit to living in a specific area to get the benefit of being comfortable. You don't have to give up that comfort for specific areas. The good idea of air conditioning is not restricted to just one area. You can actually experience it wherever you go and have it be available wherever you would go. If you're moving to a new house, you can install an air conditioner and now you're going to have cool air. That's the kind of theme, I think, of what we're going to be doing. But there's so much work to do, because we're kind of, I would say, probably 30 or 40% in. If we were to look at how much big tests as a big idea is used and is well known in our industry, we're probably maybe at 5% on that. If we take into consideration what it takes for us to be able to convey the idea and convince people it's a good idea, add probably another 15%. So the next step is actually creating some tooling around it that would make it really easy for people to consume. Because right now, what's really unfortunate is that you have big tests available, but if we set it up in a React project, doing it with create-react-app right now is kind of difficult. So we're going to make that easier. But then companies still need to make a choice between the ergonomics [inaudible] Cypress or the speed and the comfort and the control that you have with big tests. We have to basically eliminate the need to make that choice by taking the tooling that you get in Cypress and making that tooling available for big test projects.

So that is going to kind of bring us up to maybe 60%. And then the remaining part is taking the software that enables the big idea and making that software work across all the different frameworks so that it's really easy to install. On Angular, it's just going to be: add a plugin and it gives you big testing. You don't have to rely on their unit testing, and you don't have to rely on end-to-end testing using Selenium. You just have big tests, and it works just as well in React and just as well in Ember. So you can get the benefits of that. Going through the whole process and bringing it into the world this way, we have a lot of work to do, because every part of the architecture of building modern frontend applications requires this level of effort. If you look at routing, routing is extremely inconsistent across frameworks. There are different opinions for every framework. There are different opinions on how to implement it. And that's problematic both on an individual level and a specific project level, because it's so hard to know what you should use. And in many cases, it's not complete. What constitutes a routing system in React looks different than what it would be in Ember, and it looks different than what it is in Angular. And each one of them has its own trade-offs. So if you find yourself in a situation where you've been using React and now you have to use Angular, you have a steep learning curve. But this also has an interesting effect of, "Well, we have a world now where micro frontends are a thing." There are really good reasons why companies might want to use multiple frameworks for a single platform, because different frameworks have different benefits, and different teams based on their location might prefer a different framework. There are lots of different reasons why companies choose, and we want developers to be able to choose, a specific framework. But how do you do that? When you have basically a micro frontend architecture, you need to provide an SDK that provides a consistent user experience -- an SDK that provides a consistent developer experience, but then that developer experience, because it's developer experience, needs to enable a consistent user experience regardless of what framework the team decides to use. So now, routing needs to be more robust. Routing without strong concurrency primitives is going to be very limited, and it's not going to be as portable across frameworks as routing that actually has very robust concurrency primitives.

So now we'll start looking at concurrency. We've got concurrency primitives that are different for each framework. We've got Ember concurrency, we have Redux-sagas. We have observables. React is introducing Suspense and their hooks. And each one of those things is different. So now, we need a concurrency primitive that is framework agnostic, that is externalized, that we can consume to build stuff. That's a piece of the routing system. There are other things, like state machines. State machines are an important part. If you look at an authentication system, the core of an authentication system is a state machine. How do you express a state machine in a way that is framework agnostic, in a way that works in every framework? That's an area that we need to explore. So, there's a lot of work for us to do to take these big ideas and externalize them from the frameworks so that they can be consumed by frameworks and by the applications that use those frameworks. That is a lot of work. And that's essentially what Frontside is setting out to do: hold these good ideas, extract the pieces that are actually core to these ideas, make them available independently, and then see what that will provide. In a world where we have all of these pieces in place -- we have the granular little pieces that we need to have a really strong concurrency primitive, we have a great routing system that's using those concurrency primitives, we have a state machine mechanism that can work with any framework and has very comfortable APIs -- we have all these pieces. What kind of collaboration will that allow? What is the world that that will create? I think for people who are in the Ember community, we saw firsthand what it means for us to stand on the shoulders of giants. We have wonderful, brilliant people creating really awesome tools. Now, if that same level of collaboration was happening in a way that was framework agnostic, but we were all standing on the shoulders of giants and we were helping each other create something like this, what kind of world would that create? That's what Frontside is setting out to find out.
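
[Editor's note: Here is a minimal sketch of what a framework-agnostic state machine could look like, using the authentication example; this is illustrative TypeScript, not an existing library. Nothing in it depends on React, Ember, or Angular, so any framework adapter could subscribe to it.]

```typescript
// Hypothetical framework-agnostic authentication state machine.
type AuthState = 'loggedOut' | 'authenticating' | 'loggedIn' | 'error';
type AuthEvent = 'LOGIN' | 'SUCCESS' | 'FAILURE' | 'LOGOUT';

// The legal transitions: any event not listed for a state is ignored.
const transitions: Record<AuthState, Partial<Record<AuthEvent, AuthState>>> = {
  loggedOut: { LOGIN: 'authenticating' },
  authenticating: { SUCCESS: 'loggedIn', FAILURE: 'error' },
  loggedIn: { LOGOUT: 'loggedOut' },
  error: { LOGIN: 'authenticating' },
};

function createAuthMachine(initial: AuthState = 'loggedOut') {
  let state = initial;
  const listeners = new Set<(state: AuthState) => void>();

  return {
    get state() {
      return state;
    },
    send(event: AuthEvent): void {
      const next = transitions[state][event];
      if (next) {
        state = next;
        listeners.forEach((listener) => listener(state));
      }
    },
    // A React hook, an Ember service, or an Angular injectable could all
    // wrap this subscription without the machine knowing about them.
    subscribe(listener: (state: AuthState) => void): () => void {
      listeners.add(listener);
      return () => listeners.delete(listener);
    },
  };
}

const auth = createAuthMachine();
auth.subscribe((s) => console.log('auth state:', s));
auth.send('LOGIN');   // -> authenticating
auth.send('SUCCESS'); // -> loggedIn
```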

CHARLES: Yeah, and it's going to be fantastic. I should say it's going to be really, really fun. It's going to be challenging. Like you said, there is a lot of work to do.

TARAS: A lot of conversations to have. I mean, I think it's difficult to have these conversations because we all have good reasons for thinking certain things and a lot of times it requires understanding each other to find the place. Core teams do this. It's getting everybody to align on where we want to go, provide leadership like doing all of that work and doing it in a way that is actually possible. Because to create an environment with this kind of research that is happening, it requires a lot of money. Charles and I cannot do this. A small group of people probably cannot do this either because we need real use cases. We need real projects to make sure that these ideas are not dream implementations or just like vaporware. We need the pressure of real projects, real clients' relationships to create software that is going to make a huge difference for their developers. So we need those relationships with clients so that we can actually create the conditions for this kind of thing to emerge. That needs to also happen for us to make this kind of a situation possible. So, there is a lot of work to do but it's a really worthwhile cause.

CHARLES: And I think that it's one of the reasons that we start with testing as the most important use case, because it's something where you don't want to be just saying, "Hey, you want to come help us fund research to make everything better?" There is real pressure, there's financial pressure, and there's a duty to make sure that if a project has a deadline, that deadline is met. You want to come in at cost and on time. That's the goal of every project. And you want to try and strive to do your best to achieve that. And testing is an area where the research almost always pays off. I mean, where it has, clearly. You can invest a lot of money and get a lot of reward in return, because you're losing so much money by not testing or by focusing your investment in testing on lower value targets.

TARAS: For people that are familiar with acceptance testing, I think that's just a no-brainer. Acceptance testing just enables that. So by putting big test as a first kind of objective, by putting acceptance testing as a first kind of milestone, it sets us up to benefit in the way that the Ember community has benefited, where acceptance testing became a mechanism that made it possible for the framework to change. And so, we're kind of setting up the same conditions here. When you start working with us or with any of the companies that work with us, the first benefit you get is acceptance testing. Now, developers are more productive because they have a certainty that if something broke, they will know. If they broke something, they will know. But what that also gives us is a way of introducing change in a systematic way. So, if we find that we have a new routing mechanism, we can start to refactor our React Router setup to use this new routing mechanism, and we can do it safely because we have our acceptance testing in place. So in many ways, it's kind of like the first step. And it's actually a great first step for business and a great first step for technology. That's kind of a really awesome overlap.

CHARLES: In the podcast with Dylan Collins, we talked about this: step one is to make experimentation cheap, because experimentation can be wildly expensive and you can throw away a lot of money if you're just experimenting without constraints. And there is nothing like a great application/acceptance test suite that puts those constraints in a completely and totally unambiguous way. So I think that's actually a great summary of our next decade.

TARAS: Yeah, I think so too. I think it's a good way to show you the perfect next step. And I think this is going to show us the perfect next future, though we can't really imagine it right now. But I do know one thing: this future is going to include a lot of people. Fundamentally, what we're setting out to do is a big idea. And big ideas are brought to the world by a lot of people. And I think one of the things that's important is that from day one, we include everyone in the process. For anyone who's interested in these big ideas, we want to make it possible for you to participate. So that's the reason why Frontside is going to have an RFC process. On our Frontside website, we're actually going to start writing and publishing RFCs. The first RFC that we're going to have is going to be for big tests, which is to lay out why big test is a big, good idea for frontend and for our industry. And as we start talking to people, there are actually lots of applications beyond frontend. So, we're going to focus on writing these RFCs and sharing them with the world so that we can include as many people as possible, so everyone can participate. And ultimately, hopefully, that will have an influence on our industry and actually allow us to leverage these big ideas in a way that is not tied to any particular source of these brilliant ideas.

CHARLES: Yeah, hat tip to the Ember community for the whole RFC thing, which I guess they didn't invent either.

TARAS: Yeah, they borrowed it also, sorry. But I mean, it's been adopted everywhere. And I think it is a really interesting way of making people part of your process and making it our process -- "our" meaning everyone who works with us. Whether you work for Frontside, or you're a client of Frontside, or you work with a company that is partnered with Frontside, whatever that might look like, we want to start to create places where we can be having these conversations and together surfacing these great ideas and making them part of our tool sets.

CHARLES: I think that's a perfect way to wrap up. Like I said, you pretty much described our next decade. Maybe if we get more people, it's going to take significantly less time and we'll move onto the next thing even more rapidly. I am fully pumped. I'm ready to see this. There's a lot of work to do and I feel great about getting to it.

TARAS: I'm very excited about all the work that we're going to be doing, so let's get to it.

CHARLES: Yup. All right. We'll see everybody next time.

Thank you for listening. If you or someone you know has something to say about building user interfaces that simply must be heard, please get in touch with us. We can be found on Twitter at @TheFrontside or over just plain old email at contact@frontside.io. Thanks. See you next time.

Oct 03 2019

30mins

Transparent Development

In this episode, Charles and Taras discuss "transparent development" and why it's not only beneficial to development teams, but to their clients as well.

Please join us in these conversations! If you or someone you know would be a perfect guest, please get in touch with us at contact@frontside.io. Our goal is to get people thinking on the platform level, which includes tooling, internationalization, state management, routing, upgrades, and the data layer.

This show was produced by Mandy Moore, aka @therubyrep of DevReps, LLC.

Transcript:

CHARLES: Hello and welcome to The Frontside Podcast, the place where we talk about user interfaces and everything that you need to know to build them right.

It's been a long summer and we're back. I'm actually back living in Austin, Texas again. I think there wasn't too much margin in terms of time to record anything or do much else besides kind of hang on for survival. We've been really, really busy over the last couple of months, especially on the professional side. Frontside has been doing some pretty extraordinary things, some pretty interesting things. So, we've got a lot to chew on, a lot to talk about.

TARAS: There's so much stuff to talk about and it's hard to know where to start.

CHARLES: So, we'll be a little bit rambly, a little bit focused. We'll call it fambly. But I think one of the key points that has crystallized in our minds, I would say over this summer, is something that binds the way that we work together. Every once in a while, you do some work, you do some work, you do some work, and then all of a sudden you realize that there's something thematic to that work, and it bubbles up to the surface and you kind of organically perceive an abstraction over the way that you work. I think we've hit that. I hit one of those points at least, because one of the things that's very important for us is -- and if you know us, these are things that we talk about, things that we work on -- we will go into a project and set up the deployment on the very first day. Make sure that there is an entire pipeline, making sure that there is a test suite, making sure that there are preview applications. And this is kind of the mode that we've been working in, I mean, for years and years and years. Where you say, if what it takes is spending the first month of a project setting up your entire delivery and showcasing pipeline, then that's the most important work -- inverting the order and saying that has to really come before any code. And I don't know that we've ever had a kind of unifying theme for all of those practices. I mean, we've talked about it in terms of saving money, in terms of ensuring quality, in terms of making sure that something is good for five or 10 years, like this is the way to do it. And I think those are definitely the outcomes that we're looking for. But I think we've kind of identified what the actual mode is for all of that. Is that fair to say?

TARAS: Yeah, I think one of the things I've always thought about for a long time is the context within which decisions are made, because it's not always easy. And it's sometimes really difficult to give it a name, like getting to a point where you have a really clear understanding of what it is that is guiding all of your actions. What is it that's making you do things? Like why do we put a month of work in before we even start doing any work? Why do we put this in our contract? Why do we have a conversation with every client and say, "Look, before we start doing anything, we're going to put CI in place." Why are we blocking our business on doing this piece? It's actually kind of crazy from a business perspective, like, "Oh, so you're willing to lose a client because the client doesn't want you to set up a CI process?" Or in the case of many clients, it's like you're not willing to accept -- the client is going to say, "We want to use Jenkins." And what we've done in the past, in almost every engagement, we're like, "Actually, no. We're not going to use Jenkins because we know that it's going to take so long for you to put Jenkins in place. By the time that we finish the project, you're probably still not going to have it in place. That means that we're not going to be able to rely on our CI process and we're not going to be able to rely on testing until you're finished." We're not going to have any of those things while we're doing development. But why are we doing all this stuff? It was actually not really apparent until very recently, because we didn't really have a name to describe what it is about this tooling and all of these things that makes it so important to us. I think that's what kind of crystallized. And the way that I know it's crystallized is that now that we're talking to our clients about it, our clients are picking up the language. We don't have to convince people that this is of value. It just comes out of their mouths. It actually comes out of their mouths as a solution to completely unrelated problems; they recognize how this particular thing is actually a solution in that particular circumstance as well, even though it's not something Frontside sold in that particular situation. Do you want to announce what it actually is?

CHARLES: Sure. Drum roll please [makes drum roll sound]. Not to get too hokey, but it's something that we're calling Transparent Development. What it means is having radical transparency throughout the entire development process, from the planning to the design, to the actual coding and to the releases. Everything about your process. The measure by which you can evaluate it is: how transparent is this process, not just to developers but to other stakeholders -- designers or people who are very developer adjacent, engineering managers, all the way up to C level executives. How transparent is your development? And one of the ways that we sell this -- because I think as we talk about how we arrived at this concept, we can see how this practice actually is a mode of thinking that guides you towards each one of these individual practices. It guides you towards continuous integration. It guides you towards testing. It guides you towards continuous deployment. It guides you towards continuous release and preview. I think the most important thing is that by capturing this concept, it's guided us to adopt new practices which we did not have before. That's where the proof is in the pudding: when you can take an idea and it shows you things that you hadn't previously thought of before.

I think there's a fantastic example. I actually saw it at Clojure/conj in 2016; there was a talk on juggling. And one of the things that they talked about was that up until, I think it was the early 80's or maybe it was the early 60's, the state of juggling was you knew a bunch of tricks and you practiced the tricks and you got these hard tricks. And that was what juggling was: you practice these things. It was very satisfying and it had been like that for several millennia. But these guys in the Physics department were juggling enthusiasts, and I don't know how the conversation came about -- you'd have to watch the talk. It's really a great talk. But what they do is they make a writing system, a nomenclature system for systematizing juggling tricks, so they can describe any juggling trick with this abstract notation. And the surprising outcome, or not so surprising outcome, is that once you have it in the notation, you can manipulate the notation to reveal new tricks that nobody had thought of before. You're like, "Ah, by capturing the timing and the height and the hand, we can actually understand the fundamental concept, and so now we can recombine it in ways that weren't seen before." That actually opened up, I think, an order of magnitude of new tricks that people just had not conceived of before, because they did not know that they existed.

And so, I think that's really, as an abstract concept, is a great yardstick by which to measure any idea. Yes, this idea very neatly explains the phenomenon with which I'm already familiar, but does the idea guide me towards things with which I have no concept of their existence? But because the idea predicts their existence, I know it must be there and I know where to look for it. And aha, there it is. It's like shining a light. And so I think that that's kind of the proof in the pudding. So that's a little bit of a tangent, but I think that's why we're so excited about this. And I think it's why we think it's a good idea.

TARAS: Yes. So what's also been interesting for me is how universal it is. Because the question of "is this transparent enough?" could actually be asked in many cases. What's been interesting for me is that asking that question in different contexts that I didn't expect actually yielded better outcomes. At the end of the day, I think that a test for any idea is: is it something that can help you more frequently than not? Is it actually leading you? Does applying this pattern increase the chances of success? And that's one of the things that we've seen, thinking about the practices that we're putting into place and asking: are they transparent enough? Is this transparent enough? It's actually been really effective. Do you want to talk about some of the things that we've put in place in regards to transparency? Like what it actually looks like?

CHARLES: Yeah. I think this originally started when we were setting up a CI pipeline for a native application which is not something that we've typically done in the past. Over the last, I would say, 10 years, most of our work has been on the web. And so, when we're asked to essentially take responsibility for how a native application is going to be delivered, one of the first things that we asked kind of out of habit and out of just the way that we operate is how are we going to deliver this? How are we going to test it? How are we going to integrate it? All the things that we've just talked about is something that we have to do naturally. But because this is not very -- like continuous integration and build is very prevalent on the web. I think that testing still has a lot of progress on the web, but it's far more prevalent than it is in other communities, certainly the native community. So when we started spending a month setting up continuous integration an integration test suite, spending time working on simulators so that we could simulate Bluetooth, having an automated process with which we could ship to the App Store, all of these things kind of existed as one-offs in the native development community. There are a lot of native developers who do these things. But because it's not as prevalent and because it was new to us, it caused a lot of self reflection both on why is it that we feel compelled to do this. And also we had to express this, we had to really justify this work to existing native development teams and existing stakeholders who were responsible for the outcomes of these native development teams. So, there was this period of self reflection, like we had to write down and be transparent about why we were doing this.

TARAS: Yeah. We had to describe that in SoWs. We actually had really long write-ups about what it is that we're setting up. And for a while, people would read these SoWs and I think they would get the "what" of what we were actually going to be putting into place. But it wasn't until we actually put it into place that we've seen a few really big wins with the setup. One of the first ones was setting up preview apps. Preview apps on the web are pretty straightforward because we've got Netlify that just kind of gives it to you easily.

CHARLES: Netlify and Heroku. It's very common.

TARAS: Yeah, you activate it, it's there. But on the mobile side, it's quite a different story, because you can't just spin up a mobile device that is available through the web. It's something kind of very special. And so we did find a service called Appetize that does this. And so we hooked up the CI pipeline to show preview apps in pull requests. So for every pull request, you could see specifically what was introduced in that pull request without having to pull down the source code and compile it. You could just click a link and you'd see a video stream of a mobile device and that application running on the mobile device. So the setup took a little bit of time. But once we actually put it in place and we showed it to our clients, one of the things that we noticed is that it became a topic of conversation. Like, "Oh, preview apps are amazing." "This is so cool." "Preview apps are really great." And I think in some ways, it actually surprised us, because we knew that they were great, but I think it was one of the first times that we encountered a situation where we would show something to a client and they just loved it. And it wasn't an app feature. It was a CI feature. It was part of a development process.

CHARLES: Right. So, the question then is why was this so revelatory? Why was it so inspiring to them? And I think that the reason is that even if we have an agile process and we're on two week iterations, one week iterations, whatever, there's still a macroscopic waterfall process going on, because essentially your business people, your design people, maybe some of your engineering people are involved at the very front of the conversation. And there's a lot of talking and everybody's on the same page. And then we start introducing coding cycles. And like I said, even if we're working on two week iterations and we're "agile", the only feedback that you actually have on whether something is working is if the coder says it's done. "I'm done with this feature. I'm on to the next feature for the next two weeks." And after that two weeks, it's like, "I'm done with this feature. I'm on to the next feature." From the initial design, you have the expectation about what's going on in the non-technical stakeholders' minds. They have this expectation. And then they hope that through the process of these agile, iterative development cycles, they will get the outcome that satisfies that hope. But they're not able to actually perceive it and put their hands on it. It's only the engineers and maybe some really tech savvy engineering managers who can actually perceive it. And so they're getting information secondhand. "Hey, we've got authentication working and we can see this screen and that screen." And, "Hey, it works on iOS now." "I have some fix ups that I need to do on Android." So, maybe they're consuming all of their information through standups or something like that, which is better than nothing. That is a level of transparency. But the problem is then you get to actually releasing the app -- whether it's on the web or on native, but this is really a problem on native -- and then everybody gets to perceive the app as it actually is. So you have this expectation and this hope that was set maybe months prior, and it just comes absolutely careening into reality in an instant, the very first moment that you open the app when it's been released. And if it doesn't meet that expectation, that's when you get disappointment. When expectations are out of sync with reality -- grossly out of sync, or even a little bit out of sync -- you get disappointment. That's a fundamental explanation of just the phenomenon of disappointment, but it's also an explanation of why disappointment happens so often on development projects. The expectations and hopes of what a system can be in the minds of the stakeholders are kind of this probability cloud that collapses to a single point in an instant.

TARAS: And that's when things really hit the proverbial fan. Now, you have the opposite. Everything that was not transparent about your development process, everything that was hidden in its opaqueness, all of those problems surface. On the product side, maybe something didn't quite get implemented the way it was supposed to. You actually found out two or three weeks before you're supposed to release that that feature wasn't actually quite implemented right. It went through testing, but it was tested against Jira stories that were maybe not quite written correctly. So the product people are going, "What the hell is this? It's actually not what I signed up for. This is not what I was asking for." So, there's that part.

And then on the development side, you've got all of the little problems that you didn't really account for because you haven't been shipping to production from day one. You actually have the application not quite working right. You were supposed to integrate with some system that is using CORS or something you didn't account for. You have a third party dependency you didn't really fully understand, and it wasn't until you actually turned it on that you started to talk to the thing properly, and you realize there's some mismatch that is not quite working. But now, everything that was not transparent about the development process, everything that was hiding in the opaque corners of your development process, is your problem for the next three weeks, because you've got to fix all of these problems before you release. And that's where I think a lot of organizations find themselves: this position where they've been operating for six months and like, "Everything is going great!" And then three months or three weeks before, it's like, "Actually, this is not really what we were supposed to do. Why did this happen?" That time is really tough.

CHARLES: Yeah. That's what we call crunch time. And it's actually something that is lot of times we think of it as inevitable, but in fact it is actually an artifact of an opaque process.

TARAS: Yeah.

CHARLES: That's the time when we have to go, everybody's like, "We're ordering pizza and Dr. Pepper and nobody's leaving for a month."

TARAS: Yeah. I think there are people that do that practice like functional testing as part of development process or acceptance testing, I think they could relate to this in some cases where if you had to set up a test suite on an application that was written without a test suite, first thing you deal with are all the problems that you didn't even know were there. And it's not until you actually start testing, like doing functional testing, not integration or unit testing where you're testing everything in isolation, but when you're perceiving the entire system as one big system and you're testing each one of those things as the user would, it's not until that point you start to notice all the weird problems that you have. Like your views are re-rendering more than you expected. You have things that are being rendered you didn't even notice because it's hard to see, because it happens so quickly. But in test, it happens at a different pace. And so, there's all these problems that you start to observe the moment that you start doing acceptance testing, but you don't see them otherwise. And so, it's the process of making something transparent that actually highlights all these problems. But for the most part, if you don't know that there are transparent options available, you actually never realize that you are having these problems until you are in crunch time.

CHARLES: Right. And what's interesting is viewed through that lens, your test suite is a tool for perception. And to provide that transparency, not necessarily something that ensures quality, but ensuring the quality is a side effect of being able to perceive bugs as they happen or perceive integration issues at the soonest possible juncture. To shine the light, so to speak, rather than to act as a filter. It's a subtle distinction, but I think it's an important one.

TARAS: About functional testing and acceptance testing: I think one of the things that I know personally from experience working with comprehensive acceptance test suites is that there is certainty that you get by expressing the behavior of the application in your tests. And what that certainty does is it replaces hope, as opposed to having hope baked into your system. I think for many people, they don't even perceive it as hope. They perceive it as reality. They see it as, "My application works this way." But really what's happening is there's a lot of trust built into that, where you have to say, "Yeah, I believe the system should work because I wrote it and I'm good. And it should not be broken." But unless you have a mechanism that actually verifies this and actually ensures this is the case, you are operating in the area of dreams and hopes and wishes, and not necessarily reality. And I think that's one of the things that's different about a lot of these processes: they highlight, or shine light on, the opaque areas of the development process. And it's actually not even just the development process; it's the business process of running a development organization. Shining light in those areas is what gives you the opportunity to replace hope with real, verifiable truth about your circumstances.

CHARLES: And making it so that anyone can answer that question and discover that truth and discover that reality for themselves. So, generating the artifacts, putting them out there, and then letting anybody be the primary perceiver of what that artifact means in the context of the business -- not just developers. And so, that kind of really explains preview apps quite neatly, doesn't it? Here we've done some work. We are proposing a change. What are the artifacts that explain the ramifications of this change? So we run the test suite. That's one of the artifacts that explains and radiates the information so that people can be their own primary source. And while the test suite is a developer centric artifact, any old person can tell that if the test suite's failing, it's not a change that we should go with. But the preview app is something different: we take this hypothetical change, we build it, we put it out there, and now everyone can perceive it. And so, it calibrates the perception of reality and it eliminates hope. Because if your development process is based on hope, you are signing yourself up for disaster. I like what you said, that it implies a huge amount of trust in the development team. And you know what? If you have a crack development team, that trust is earned and people will continually invest based on that trust. But the fundamentals are still fragile, because a crack can still open up between the expectation and the reality. And the problem is when that happens, the trust is destroyed. And it's during that crunch time, if it does happen, that you lose credibility. And it's not because you became a worse developer. It's not because your team is lower performing. It's just that there was this divergence allowed to open. But then the problem is that really lowers the trust, and that means that unfortunately there's going to be a negative knock-on effect. And reasonably so. Because if you're an engineering manager or a product manager, or something like this, and you're losing trust in your development team and their ability to deliver what you talked about, then you're going to want to micromanage them more. The natural inclination is to be very defensive and interventionist, and you might actually introduce a set of practices that inhibit the development cycle even further and lower the team's ability to perform right when they need it the most. Then you end up destroying more trust.

TARAS: Yeah, it's a spiraling effect, because it happens in the process of trying to make things better. You start to introduce practices -- maybe you're going to have meetings every day reviewing outstanding stories to try to get everybody on the same page -- but now you're micromanaging the development team. The development team starts to resent that, and now you've got people hating their jobs. It gets messier and dirtier and more complicated. And the root cause is that from day one, there was a lot of just [inaudible] about getting into it and starting to write some code, but what you didn't actually do is put in place the fundamentals of making sure that you can all observe a reality that is honest. And it's interesting how, when you take this fundamental principle and start to think about it in different use cases, it actually tells you a lot about what's going on, and you can use it to design new solutions.

One of the things that Frontside does -- those who've worked with us before might know this or might not -- is that we don't do blended rates anymore. One of the challenges with blended rates is that they hide the nuance that gives you the power to choose how to solve a problem.

CHARLES: Yeah. There's a whole blog post that needs to be written on why blended rates are absolute poison for a consultancy. But this is the principle of why.

TARAS: Yeah. I think it's poison for a transparent consultancy, because if you want to get the benefits of transparency, you have to be transparent about your people. What happens otherwise is that you start off relying on your company's reputation, and then there is a kind of inherent lie in the way the price points are set up. Everybody knows there are going to be a few senior people, a few intermediate people, a few junior people. But the exact ratios, who is doing what, how much people are available -- all of those things are hidden inside the consulting company so that it can manage its resources internally. What that does is simplify your communication with the client. But it also disempowers you from having certain difficult conversations when you need them most. Instead, you could say, "Look, for this kind of work, we don't need the more senior people on it. We can have someone who is junior, at $100 an hour or $75 an hour as opposed to $200 or $250 an hour, and we can very clearly define how that work gets done." It requires more work. But what it does is create a really strong bond of honesty and transparency between you and your clients. Now the client starts to think about you as a resource that allows them to fulfill their obligations in a very actionable way. They can think about how to use you as a resource to solve their problems. They don't need a filter that will process that and try to make it work within the organization. You essentially become one unit. And I think that sense of unity is the fundamental piece that keeps consulting companies and clients glued together. It's the sense of, "We can rely on this team to give us exactly what we want when we need it, and sometimes give us what we need that we don't know we need." That bond is there. And that bond is strong because there is no lie in that relationship. You're very transparent about who the people working on it are, what they're actually going to be doing, and how much it is costing.

CHARLES: It's worth calling out explicitly what the flip side of that is. If you have a blended rate, which is the way Frontside operated for, gosh, pretty much forever, people will naturally calibrate towards your most senior people. If I'm going to be paying $200 an hour across the board, or $150, or $300, whatever the price point is, I'm going to want to extract the most value for that one price point. So I'm going to expect the most senior people. For the same price, five senior people is a better deal for me than two senior people, two mid-level people, and one junior person. And so, it has two terrible effects. One is that the senior people go unappreciated. It should be, "Hey, these are people with extraordinary experience, extraordinary knowledge, extraordinary capability that will kick-start your project." So the senior people are underappreciated, and then the client is extremely resentful of the junior people. It's like, "I'm paying the same rate for this very senior person as I am for this junior person? Get this person off my project." But if you say, "You know what, we're going to charge a fifth of the cost for this junior person and we're going to utilize them," then you're providing real value and they appreciate it. They're like, "Oh, thank you for saving me so much money. We've got this task that does not require your most senior person. That would be a misallocation of funds; I'd be wasting money. But if you can charge me less and give me this junior person, and they're going to do just as competent a job but it's going to cost me a fifth of the money, then that's great. Thank you." So, it flips the conversation from 'get this god-damn junior person off my project' to 'thank you so much for bringing this person on'. That's what transparency can provide. It can turn a feeling of resentment into gratitude.

TARAS: What's interesting is that from a business perspective, you make the same amount of money. In some cases, as a consulting company, you actually make more money. But that's not the important part, because the amount of value generated from having more granular visibility into what's happening is so much greater. It's like with testing, or any of those situations where you start to shine light on opaque areas and flush out the gremlins hiding there: what you discover is an opportunity to have relationships with clients that are honest. For example, one of the things we've done recently is put in place a 10-tier price point model, which allows us to be really flexible about the kind of people that we introduce. There's a lot of detail that goes into the actual contract negotiation, but it allows us to be very honest about the costs and to work together with our clients to find a solution that's going to work really well for them. And this is just a starting point. When you start thinking about transparency in this diverse way, you start to realize there are additional benefits that you might never have experienced before.

One of the initiatives that we launched recently with one of our clients touches a general problem that exists in large organizations. If you have a really big company with, let's say, 20 or 30 interconnected services, then your data domain -- all the kinds of data you work with -- is spread over a whole bunch of microservices, potentially over a bunch of different development teams, spread over a bunch of different locations. What usually happened in the past is that the data domain got siloed into specific applications. We worked with a bank in the past, and that bank operated in 80 countries. In each country they had 12 different industries -- insurance, mortgages, different areas of service they offered. And for each country and each service, they had a different application that provided that functionality. Then the next step is: let's not do that anymore, because we now have something like 100 or 150 apps. Let's bring it all together under a single umbrella and create a single shared domain that we can use. GraphQL becomes a great solution for that. But the problem is that making that change is crazy complicated. On one side are the people on the business side who understand how all the pieces fit together. On another side, you have the developers who know where the data can come from and how to make all that real. And on the third side are the frontend implementers who actually build the UIs that consume all these services.

On the project we're working on right now, we're building a federated GraphQL gateway layer that connects all these things and brings them together. But without very specific tooling to enable that kind of coming together of the frontend people, the backend people, and the business people -- creating a single point of conversation and a single point of reference for all the different data we have to work with and all the data available to the gateway -- it is really difficult to make progress. You don't have shared terminology, you don't have a shared understanding of the scope of the problem. There are a lot of dots and context that need to be connected. And anyone who has worked with enterprise knows how big these problems get. So what we've done on this project is aim to bring transparency to the process. We started to build an application that brings together all of the federated services into a visualization that different parties can be involved in. I think one of the common patterns we see with transparency in general is that we are including people in the development process who were previously not included. In the past, they would be involved either very early on or very late, but they wouldn't be involved along the way. What this kind of transparency practice does is allow us to democratize and flatten the process of creating foundations for pieces that touch many different parts of the organization. This tool we created allows everyone to be involved in the process of creating the data model that spans the entire organization, to have a single point of reference that everybody can go to, and to have a process for contributing to it. They don't have to be developers. There are developers who consume it, there are business people who consume it, there are data modeling people who consume it -- different parties are involved. But the end result is that everyone is on the same page about what it is that they're creating. And we're seeing the same kind of response as we saw with preview apps, where people who previously didn't really have an opinion on development practices or how something gets built are all of a sudden participating in the conversation and making really valuable suggestions -- insights developers couldn't have had exposure to previously, because developers often don't have the context necessary to understand why something gets implemented in a particular way.

CHARLES: It's something beautiful to behold, really. And like I said, it's wonderful when a simple concept reveals things that had lain hidden before.

TARAS: Yeah. It's a very interesting lens to look at things through: how transparent is this, and how can we make it more transparent? Asking that question and answering it has been very helpful in understanding our challenges in the work that we do on a daily basis, and also in understanding how we could actually make it better.

CHARLES: I apply this concept in action on my pull requests. I've really been focusing on trying to make sure that if you look at my pull request, you can pretty much understand what I've done before you even look at the diff. The hallmark of a good pull request is that by reading the conversation, you understand what the implementation is going to be. There aren't really any surprises there. It's actually hard to achieve that. Same thing with git history. I've been spending a lot of time thinking about how to provide the most transparent git history. That doesn't necessarily mean an exact log of what happened moment to moment, day to day, but making sure that your history presents a clear story of how the application has evolved. And sometimes that involves a lot of rebasing and merging and branch management.

The things I just described are areas where we're re-evaluating already accepted principles against a new measure. But another area that has been new for us is introducing an RFC process to a client project where we're making architectural decisions with our developers, the client's developers, and external consultants. You've got a lot of different parties, all of whom need to be on the same page about the architectural decisions that you've made. Why are we doing things this way? Why are we doing modals this way? Why are we using this style system? Why are we using routing in this way? Why are we doing testing like this? These are decisions that are usually made on an ad hoc basis to satisfy an immediate need. It's like, "Hey, we need to do state management. Let's bring in Redux, or let's bring in MobX, or let's bring in whatever." And do you want to hire experts just to help you make the best ad hoc decision? Well, not really. You want to lean on their experience to make the best decision, but you also want a way of recording it: this is the rationale for a decision that we are about to make to fulfill a need. And then you have a record of it, put down in the book, for anybody who comes later. First of all, while the discussion is happening, everybody can understand the process going on in the development team's head. And then afterwards -- and this is particularly important -- when someone asks, "Why is this thing this way?", you can point directly to an RFC. This is something we picked up from the Ember community. Open source projects, by their very nature, have to operate in a highly transparent manner, so it's no surprise that this process came from the internet and open source. But it's been remarkably effective, I would say, in achieving consensus and making sure that people are satisfied with decisions, especially people who come on after the decisions have been made.

TARAS: We actually got to experience that particular benefit today. One of the other ways that the RFC process, and transparency about architecture, benefits a development organization is that when you are knee-deep in an implementation, that is not the time you want to be having architectural conversations. It's like a big football team: they huddle up before they go on the field. You can't be talking strategy and architecture and plans while you're on the football field. You have to be ready to play. That's one of the things the RFC process does. It allows us to say, "Look, right now we have a process for managing architecture. With the RFC process, you can go review our accepted RFCs and you can make proposals there." And that is a different process from the one we're involved in on a daily basis, which is writing the application using the architecture we have in place. That in itself can be really helpful, because well-intentioned people bring up these conversations because they really are trying to solve a problem, but it might not be the best time. Having that kind of process in place, and being transparent about how architecture decisions are made, allows everyone to participate, and it also allows you to prioritize conversations.

CHARLES: Yeah. And that wasn't a practice we had adopted previous to this, but it's something that seemed obvious we should be doing. It's like: how can we make our architecture more transparent? Well, let's do this practice. I keep harping on this, but I think it's the hallmark of a good idea if it leads you to new practices and new tools. And we're actually thinking about adopting the RFC process for all of our internal development, for maintaining our open source libraries.

TARAS: There is something else we've been working on that we're really excited about. There's a lot of stuff happening at Frontside, but one of the things we've been doing is working on something we call the transparent node publishing process, which I think originally drew inspiration from the way NativeScript has their repo set up. One thing that's really cool about how they have things set up is that every pull request automatically becomes available, very quickly, as a published version on npm, and you can put it into your application and see exactly whether that pull request is going to work for you. You don't have to jump through hoops. You don't have to clone the repo, build it locally, link it -- you don't have to do any of that, because if you see a pull request that has something you want that is not yet in master, there's an instruction on the pull request that tells you, "Here's how you can install this particular version from npm." So essentially, every pull request automatically gets published to npm, and you can just install that specific version in your project. That in itself is one of those things I suspect is going to be talked about. It can alleviate a lot of problems we have in our development processes, because there's a built-in barrier around the availability of the work of people participating in a project, and this essentially breaks that barrier down. So, that's something we're very close to having on all of our repos, and we're going to try it out and then hopefully share it with everyone on the internet.

CHARLES: I didn't know that NativeScript did this. The idea that came from it is: how can we apply these transparency principles to the way we maintain npm packages? The entire release process should be completely transparent, so that when I make a pull request, it's available immediately, with instructions in a comment. And then furthermore, even when a pull request is merged, there's no separate step of getting someone to publish it. The moment it's on master, it is available as a production release. You close the latency, you close the gap, and people perceive things as they are. There's none of this, "Oh, that merged. When do I get it?" This is something I can't stand about using public packages: you have some issue, you find out that someone else has had this issue, they've submitted a pull request for it, and then it's impossible to find out whether there's a release that includes it, and which version actually contains the fix. And it's even more complex with projects that backport fixes to other versions. I might be on version two of a project. Version three is the most recent one, but I can't upgrade to version three because I might depend on some version two APIs. But I really need this fix. Well, has it been backported? I don't know. Maybe upgrading is what I have to do, but maybe downgrading. Or if I'm on the same major release version, maybe there have been 10 pull requests merged but no release to npm. It can be shockingly difficult to find out if something is even publicly available. The transparency principle comes in as: if I see it on GitHub, then there's something there that I can touch and perceive for myself, to see if my issue has been resolved or if things work as I expect.
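For illustration, a continuous integration step along these lines might look something like the following sketch. None of this comes from the episode: the PR_NUMBER and BUILD_ID environment variables are assumptions about what a CI system would provide, and real setups (NativeScript's included) may work quite differently.

    // publish-preview.js -- hypothetical CI step that publishes every pull request.
    const { execSync } = require('child_process');
    const pkg = require('./package.json');

    const pr = process.env.PR_NUMBER;   // assumed: provided by the CI environment
    const build = process.env.BUILD_ID; // assumed: a unique build identifier

    // Derive a prerelease version that can never collide with a real release.
    const version = `${pkg.version}-pr${pr}.${build}`;

    // Publish under a dist-tag so it never shadows the "latest" release.
    execSync(`npm version ${version} --no-git-tag-version`);
    execSync(`npm publish --tag pr-${pr}`);

    // This install command is what would be posted back to the pull request.
    console.log(`npm install ${pkg.name}@${version}`);

The point of the dist-tag is that npm only moves "latest" on a normal publish, so preview versions stay invisible unless you ask for them explicitly.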

TARAS: I'm really excited about this kind of clarification, this crystallization of transparency. And seeing our clients starting to apply it to solving problems within their own organizations is very inspiring as well.

CHARLES: Yeah, it is. It is really exciting. And honestly, I feel like we've got one of those little triangular sticks that people use to find water. I feel like we have a divination stick. And I'm really excited to see what new tools and practices it actually predicts and leads us to.

TARAS: Me too. I'm really excited to hear if anyone likes this idea. Send us a tweet and let us know what you're seeing for yourself about this because I think it's a really interesting topic and I'm sure there's going to be a lot that people can do with this idea in general.

CHARLES: Thank you for listening. If you or someone you know has something to say about building user interfaces that simply must be heard, please get in touch with us. We can be found on Twitter at @TheFrontside, or over just plain old email at contact@frontside.io. Thanks and see you next time.

Sep 26 2019

47mins

Play

Svelte and Reactivity with Rich Harris

Podcast cover
Read more

Rich Harris talks about Svelte and Reactivity.

Rich Harris: Graphics Editor on The New York Times investigations team.

Please join us in these conversations! If you or someone you know would be a perfect guest, please get in touch with us at contact@frontside.io. Our goal is to get people thinking on the platform level which includes tooling, internalization, state management, routing, upgrade, and the data layer.

This show was produced by Mandy Moore, aka @therubyrep of DevReps, LLC.

Transcript:

CHARLES: Hello and welcome to The Frontside Podcast, a place where we talk about user interfaces and everything that you need to know to build them right.

TARAS: It's actually really nice, Rich, and I'm really, really happy to have a chance to chat with you about this, because Svelte is a really fun piece of technology. In many ways, it's interesting to see our technology and our industry evolve through real innovation. I think Svelte 3 has really been that next thought-provoking technology that makes you think about different ways we can approach problems in our space. So, really excited to chat with you about this stuff.

RICH: Well, thank you. Excited to be here.

TARAS: I think quite a lot of people know, Rich, about your history, like how you got into what you're doing now. But I'm not sure if Charles is aware, so if you could kind of give us a little bit of a lowdown on where you kind of come from in terms of your technical background and such.

RICH: Sure. I'll give you the 30-second life history. I started out as a reporter at a financial news organization. I had a Philosophy Degree and didn't know what else to do with it. So, I went into journalism. This was around the time of the great recession. And within a few weeks of me joining this company, I watched half of my colleagues get laid off and it's like, "Shit, I need to make myself more employable." And so gradually, sort of took on more and more technical responsibilities until I was writing JavaScript as part of my day job. Then from there, all these opportunities kind of opened up. And the big thing that I had in mind was building interactive pieces of journalism, data-driven, personalized, all of that sort of thing, which were being built at places like the New York Times, and The Guardian, and the BBC. That was the reason that I really wanted to get into JavaScript. And that's guided my career path ever since.

CHARLES: It's interesting that D3 and all of that came out of journalism.

RICH: It's not a coincidence because when you're working under extreme time pressure and you're not building things with a view to maintain them over a long period of time, you just need to build something and get it shipped immediately. But it needs to be built in a way that is going to work across a whole range of devices. We've got native apps, we've got [inaudible], we've got our own website. And in order to do all that, you need to have tools that really guide you into the pit of success. And D3 is a perfect example of that. And a lot of people have come into JavaScript through D3.

CHARLES: And so, are you still working for the same company?

RICH: No. That's ancient history at this point.

CHARLES: Because I'm wondering, are you actually getting to use these tools that you've been building to actually do the types of visualizations and stuff that we've been talking about?

RICH: Very much so. I moved to The Guardian some years ago, and then from there moved to Guardian US, which has an office in New York. And it was there that I started working on Svelte. I then moved to the New York Times, and I'm still working on Svelte. I've used it a number of times to build things at the New York Times, and other people have built things with it too. So yeah, it's very much informed by the demands of building high performance interactive applications on a very tight deadline.

CHARLES: Okay, cool. So I've probably used, I mean, I'm an avid reader of both Guardian and the New York Times, so I've probably used a bunch of these visualizations. I had no idea what was driving them. I just assumed it was all D3.

RICH: There is a lot of D3. Mike Bostock, the creator of D3, was a linchpin of the graphics department for many years. Unfortunately we didn't overlap -- he left the Times before I joined -- but his presence is still very much felt in the department. And a lot of people entering the industry are still becoming data vis practitioners by learning from D3 examples. It's been a hugely influential thing in our industry.

TARAS: How long is a typical project? How long would it take to put together a visualization for an article that we typically see?

RICH: It varies wildly. The graphics desk is about 50 strong, and they will turn things around within a day. When Notre Dame burnt down a couple of months ago, my colleagues turned around an interactive, scroll-driven, WebGL 3D reconstruction of how the fire spread through the cathedral in less than 24 hours, which was absolutely mind-blowing. But at the same time, there are projects that will take months. I work on the investigations team at the Times, so I'm working with people who are investigating stories for the best part of a year, or sometimes more, and I'm building graphics for those. So those are two very different timescales, but you need to be able to accommodate all of those different possibilities.

CHARLES: So, what does the software development practice look like? Because it sounds like some of this stuff you're just throwing together. What I mean by that is: on the projects we typically work on, three months is kind of the minimum that you would expect. So you go into it knowing you need good collaboration practices around source control and continuous integration and testing and all this stuff. But you're talking about compressing that entire process into a matter of hours. So do you just throw all of that right out the window and say, "We're just doing a live version of this"?

RICH: Our collaboration process consists of sitting near each other. And when the time calls for it, getting in the same room as each other and just hammering stuff out on the laptop together. There's no time for messing around with continuous integration and writing tests. No one writes tests in news graphics; it's just not a thing.

CHARLES: Right. But then for those projects that stretch to three months -- I imagine there are some -- do you run into quality concerns or things like that where you do have to take some of those practices into account? I'm just curious, because the difference between two hours and two months is several orders of magnitude in the complexity of what you're developing.

RICH: It is. Although I haven't worked on a news project yet that has involved tests. And I know that's a shocking admission to a lot of people who have a development background, but it's just not part of the culture. I guess the main difference between the codebase for a two-hour project and a two-month project is that the two-month project will strive to have some reusable components. And that's, I think, the main thing I've been able to get out of working on the kinds of projects that I do: instead of just throwing code at the page until it works, we actually have a bit of time to extract out common functionality and make components that can be used in subsequent interactives. So, things like scroll-driven storytelling are much easier for me now than when I first built a scroll-driven storytelling component a couple of years ago.

CHARLES: Yeah. That was literally my next question: how do you bridge that, given that you've got this frothy experimentation, but you are, it sounds like, very deliberate about extracting those tools and those common components? How do you find the time to even do that?

RICH: Well, this is where the component-driven mindset comes in really handy, I think. Five or ten years ago, when people thought in terms of libraries and scripts, there wasn't a good unit of reusability that wasn't all-encompassing. A component is just the right level of atomicity, or whatever the word is. It makes sense to have things that are reusable but also very easy to tweak and manipulate and adapt to your current situation. And so, I think the advent of component-oriented development is actually quite big for those of us working in this space. And it hasn't really caught on to a huge degree yet because, like I say, a lot of people are still coming at this with a D3, script-based mindset, because the news industry, for some interesting historical reasons, is slightly out of step with mainstream web development in some ways. We don't use things like Babel a lot, for example.

CHARLES: That makes sense, right? I mean, an online article is not a React application; the application isn't all-encompassing, so you really need to have a light footprint, I would imagine. Because it really is a script. What you're doing is scripting in the truest sense of the word, where you essentially have a whole bunch of content and then you just need to kind of --

RICH: Yeah. And the light footprint that you mentioned is key, because like most news sites, we have analytics on the page, and we have ads and comments and all of these things that involve JavaScript. By the time our code loads, all of this other stuff is already fighting for the main thread. So we need to get in there as fast as we can and do our work with a minimum of fuss. We don't have the capacity to be loading big frameworks and messing about on the page. That again is one of these downward pressures that forces a certain type of tool to come out of the news business.

TARAS: A lot of the tooling that's available, especially with the really fat, bigger frameworks -- the tools you get with those frameworks pay off over the long term. If you have a long-running project, the weight of the abstractions is justified because the benefit adds up significantly over time. But if you're working to ship something in a day, you want something that is just like a chisel. It does exactly what you want it to do, you apply it in exactly the right place, and you want the outcome to be precise.

RICH: That's true. And I think a lot of people who have built large React apps, for example, or large Ember apps, look at Svelte and think, "Well, maybe this isn't going to be applicable to my situation," because it has this bias towards being able to very quickly produce something. And I'm not convinced that's true. I think that if you make something easier to get started with, then you're just making it easier. If you build something that is simple for beginners to use, then you're also building something simple for experts to use. And so, I don't necessarily see it as a tradeoff. I don't think we're trading long-term maintainability for short-term production. But it is certainly a suspicion that I've encountered from people.

TARAS: This is something that we've also encountered recently. It's been kind of a brewing discussion inside Frontside: the fact that certain problems seem to be better to rewrite than to maintain or refactor towards an end goal. And we've found this especially as the tools we create have gotten more precise, more refined, simplified, and lighter: it is actually easier to rewrite those things five times than it is to refactor them once to the particular place we want them to be. And it's interesting -- this idea has been blossoming in my mind only very recently. I didn't observe this in the past.

CHARLES: Do you mean in the sense that like if a tool is focused enough and a tool is simple enough, then refactoring is tantamount to a rewrite if you're talking about 200 or 300 lines of code? Is that what you mean?

TARAS: Yeah. If you're sitting down to make a change, or you have something in mind, it is actually easy to say, "Let's just start from scratch, and we're going to get to exactly the same place in the same amount of time." The mantra of never rewriting makes me question whether that's actually always the right answer.

RICH: I definitely question that conventional wisdom at all levels as well. I started a bundler called Rollup, as well as Svelte more recently. And Rollup was the second JavaScript bundler that I wrote, because the first one I wrote wasn't quite capable of doing the things that I wanted, and it was easier to just start from scratch than to try and shift the existing user base of its predecessor over to this new way of doing things. Svelte 3 is a more or less complete rewrite. Svelte has had multiple more or less complete rewrites; some of them weren't even breaking changes. And Svelte itself was a rewrite of an earlier project that I'd started in 2013. So in my career, I've benefited massively from learning from having built something, but then when the time comes and you realize that you can't change it in the ways that you need to change it, just rewrite it.

And I think that at the other end of the spectrum, the recent debate about micro frontends has largely missed this point. People think that the benefit of micro frontends is that people don't need to talk to each other, which is absolute nonsense. I think the benefit of this way of thinking about building applications is that it optimizes for a fact of life that we all agree is inevitable, which is that at some point, you're going to have to rewrite your code. We spend so much energy trying to optimize for the stability of a codebase over the long term, and in the process, lock ourselves into architectural and technical decisions that don't necessarily make sense three or four years down the line. I think as an industry, we would be a lot better placed if we all started thinking about how to optimize for rewrites.

CHARLES: So for those of us who aren't familiar, what is the debate surrounding micro frontends? This is something I've heard a lot about, but I've never actually heard what micro frontends are.

RICH: Yeah. I mean, to be clear, I don't really have a dog in this fight because I'm not building products, but the nub of it is that typically, if you're building a website, maybe it has an admin page, maybe it has a settings page, maybe it has product pages, whatever. Traditionally, these would all be parts of a single monolithic application. The micro frontend approach is to say, "Well, this team is going to own the settings page. This team is going to own the product page." And they can use whatever technologies they want to bring that about. The detractors attack a straw man version of this: "You're going to have different styles on every page. You're going to have to load Vue on one page. You're going to have to load React on another page. It's going to be a terrible user experience." When actually, its proponents aren't suggesting that at all. They're suggesting that people from these different teams coordinate a lot more, but are free to deviate from some kind of grand master architectural plan when it's not suitable for a given task. And darn right. I think it means that you have a lot more agility as an engineering organization than you would if you were building a monolithic app, where someone can't say, "Oh, we should use this new tool for this thing. We should use microstates when the rest of the organization is using Google docs." It's not possible. And so, you get locked into the decisions of a previous generation.

CHARLES: Right. No, it makes sense. It's funny, because my first reaction is, "Oh my goodness, that's a potential for disaster." The klaxons go off in your head. But then you think: really, the work is figuring out how to manage it so it doesn't become a disaster. And if you can figure that out, then yeah, there is a lot of potential.

RICH: Yeah. People always try and solve social problems with technology. You solve social problems with social solutions.

CHARLES: Right. And you have to imagine, too, that it depends on the application. I think the Amazon website is developed that way, where they have different teams that are responsible even down to the little content boxes up on the toolbar. And it shows, right? It looks kind of slapped together. But that's fine, because a coherent look is not what they need. They don't mind slight variation in the ways that things look and behave. What they need, for their business to work, is to be showing the right thing at the right time. That's the overriding concern; having it look very beautiful and very coherent isn't necessarily the thing. Spotify is used as another example of this. I don't know if they call it micro frontends, but I know they've got a similar type of thing, and for them, the experience and having it look coherent clearly is more important. So they make it work somehow. And like you're saying, it probably involves groups of people talking to other groups of people about the priorities.

So yeah, it doesn't sound to me like adopting micro frontends guarantees one particular set of outcomes. It really is context dependent, and it depends on what you make of it.

RICH: Totally.

TARAS: I'm curious though, so with Svelte, essentially for your reactivity engine, you have to compile to get that reactive behavior.

RICH: Yeah.

TARAS: How does that play with other tools when you actually integrate them together? I've never worked with Svelte on a large project, so I can't imagine what it looks like at scale. I was wondering if you've seen those kinds of use cases, and whether there are any side effects from that.

RICH: As you say, the reactivity within a component only applies to the local state within that component, or to state that is passed in as a prop from a parent component. But we also have this concept called a store. A store is just an object that represents a specific value, and you import it from svelte/store. There are three types of store that you get out of the box: writable, readable, and derived. A writable is just const count = writable(0), and then you can update that value and you can set it using methods on the store. Inside your markup, or in fact inside the script block of the component, you can reference the value of that store just by prefacing it with a dollar sign. The compiler sees that and says, "Okay, we need to subscribe to this store's value and then assign it and apply the reactivity." And that is the primary way of having state that exists outside the component hierarchy. Now, I mentioned that writable, readable, and derived are the built-in stores that you get, but you can actually implement your own stores. You just need to implement a very simple contract. And so, it's entirely possible to use that API to wrap any state management solution you have. You can wrap Redux, you can wrap microstates, you can wrap whatever your preferred state management solution is, and adapt it for use with Svelte. And it's very idiomatic and streamlined. It takes care of unsubscribing when the component is unmounted. All of that stuff is just done for you.
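For readers who haven't seen the store contract, here is a minimal sketch of both halves of what Rich describes: a built-in writable store, and a hand-rolled store that satisfies the same contract. The clock store is invented for illustration; only the writable import and the subscribe contract come from the conversation.

    // stores.js -- a sketch of the store API described above.
    import { writable } from 'svelte/store';

    // A built-in writable store: set and update live on the store itself.
    export const count = writable(0);
    count.update(n => n + 1);
    count.set(5);

    // A custom store only has to honor the contract: a subscribe method
    // that immediately calls the subscriber with the current value and
    // returns an unsubscribe function.
    export function clock() {
      const subscribers = new Set();
      let now = new Date();
      const interval = setInterval(() => {
        now = new Date();
        subscribers.forEach(fn => fn(now));
      }, 1000);
      return {
        subscribe(fn) {
          fn(now); // synchronously push the current value
          subscribers.add(fn);
          return () => {
            subscribers.delete(fn);
            if (subscribers.size === 0) clearInterval(interval);
          };
        }
      };
    }

Inside a component, referencing $count in the script or the markup is the cue for the compiler to generate the subscription, along with the matching unsubscribe when the component is destroyed.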

CHARLES: Digging a little bit deeper into the question of integration, how difficult would it be to take wholesale components that were implemented in Svelte and kind of integrate them with some other component framework like React?

RICH: If the component is a leaf node, then it's fairly straightforward. There is a project called react-svelte which is -- I say project, it's like 20 lines of code -- and I don't think it's [inaudible] for Svelte 3, which I should probably do. But that allows you to use a Svelte component in the context of a React application, just using the component API the same way that you would [inaudible] or whatever. You can do that inside a React component. Or you could compile the Svelte component to a web component. And this is one of the great benefits of being a compiler: you can target different things. You can generate a regular JavaScript class and you've got an interactive application. Or you can target a server-side rendered component, which will just generate some HTML for some given state, which can then later be hydrated on the client. Or you can target a web component, which you can use like any other element in the context of any framework at all. And because it's a compiler, because it's discarding all of the bits of the framework that you're not using, it's not like you're bundling an entire framework to go along with your component. And I should mention, while I'm talking about being able to target different outputs: with the NativeScript project, you can target iOS and Android the same way. Where it gets a little bit more complicated is if it's not a leaf node. If you want to have a React app that contains a Svelte component that has React [inaudible], then things start to get a little more unwieldy, I think. It's probably technically possible, but I don't know that I would recommend it. But the point is that it is definitely possible to incrementally adopt Svelte inside an existing application, should that be what you need to do.
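As a rough sketch of the leaf-node case, this is what driving a compiled Svelte 3 component from plain JavaScript looks like. The Counter component and its count prop are invented for the example; a wrapper like react-svelte essentially makes these calls from React lifecycle hooks.

    // main.js -- mounting a compiled Svelte 3 component from plain JavaScript.
    import Counter from './Counter.svelte'; // hypothetical component with a `count` prop

    const counter = new Counter({
      target: document.querySelector('#root'), // where to render
      props: { count: 0 }
    });

    // The host application can push new props in...
    counter.$set({ count: 10 });

    // ...and tear the component down when its container unmounts.
    counter.$destroy();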

CHARLES: You said there's a NativeScript project, but it sounds to me like you shouldn't necessarily need NativeScript, right? If you're a compiler, you can actually target Android and you could target iOS directly instead of having NativeScript as an intermediary, right?

RICH: Yes. If we had the time to do the work, then yes. I think the big thing there would be getting styles to work, because Svelte components have styles, and a regular style tag is just CSS. You can't just throw CSS into a native app.

CHARLES: Right. Sometimes, I feel like it'd be a lot cooler if you could.

[Laughter]

RICH: NativeScript really is doing a lot of heavy lifting. Basically, what it's doing is providing a fake DOM. Svelte targets that DOM instead of the real DOM, and then NativeScript turns that into the native instructions.

CHARLES: Okay. And you can do that because you're a compiler.

TARAS: Compilers have been on our radar for some time, but I'm curious: what is your process for figuring out what the compiler should emit? How do you arrive at the final compiled output? Did you write that code manually and then say, "I'm now going to change this to be dynamically generated"? How do you figure out what the output should be?

RICH: That's pretty much it. Certainly when the project started, it was a case of: I'm going to think like a compiler. I'm going to hand-convert this declarative component code into some framework-less JavaScript. And then once that's done, work backwards and figure out how a compiler would generate that code. In the process, you learn certain things about what the points of reusability are, which things should be abstracted out into a shared internal helper library, and which things should be generated inline. The whole process is designed to produce output that is easy for a human to understand and reason about. It's not what you would imagine compile [inaudible] to be like; it's not completely inscrutable. It's designed, even down to the level of being well formatted, to be something that someone can look at and understand what the compiler was thinking at that moment. And there are definitely ways that we could change and improve it. There are some places where there's more duplication than we need to have. There are some places where we should be using classes instead of closures for performance and memory benefits. But these are all things that, once you've got that base, having gone through that process, you can begin to iterate on.

CHARLES: I'm always curious about when the proper time is to move to a compiler, because when you're doing everything at runtime, there's more flexibility. At what point do you decide, "You know what? These pathways are so well worn that I'm going to lay down pavement and write a compiler"? What was the decision process in your mind about, "Okay, now it's time"? Because I think that's maybe not a thought that occurs to most of us: "I should write a compiler for this." Is this something people should do more often?

RICH: The [inaudible] of 'this should be a compiler' is one that is worth having at the back of your head. I think there are a lot of opportunities, not just in the UI framework space but in general: is there some way that we can take this work that is currently happening at runtime and shift it into a step that only happens once? That obviously benefits users, and very often we find it benefits developers as well. I don't think there was a point at which I said, "Oh, this stuff that's happening at runtime should be happening at compile time." It was more -- I mean, the actual origin is that it was a brain worm that someone else infected me with. Judgment is a very well known figure in the JavaScript world. He had been working on this exact idea but hadn't taken it to the point where he was ready to open source it. But he had shared his findings and the general idea, and I was immediately smitten with this concept of getting rid of the framework runtime. At the time, the big conversation happening in the JavaScript community was about the fact that we're shipping too much JavaScript and it's affecting startup performance. And so the initial thought was, "Well, maybe we can solve that problem by just not having the runtime." That was the starting point with Svelte. Over time, I've come to realize that that is maybe not the main benefit; it's just one of the benefits you get from this approach. You also get much faster update performance, because you don't have to do the fairly expensive virtual DOM diffing process. Lately, I've come to think that the biggest win is that you can write a lot less code. If you're a compiler, then you're not hemmed in by the constraints of the language, so you can almost invent your own language. And if you can do that, then you can do the things that you had been doing with an API in the language itself. That's the basis of our system of reactivity, for example. We can build apps that are smaller and, by extension, less bug-prone and more maintainable.
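As a small illustration of the 'invent your own language' point (a sketch, not code from the episode): Svelte 3 repurposes JavaScript's rarely used label syntax, so a $: declaration is valid JavaScript that the compiler reinterprets as 'recompute this whenever its inputs change', emitting plain imperative updates with no virtual DOM diffing.

    <!-- Counter.svelte -- reactive declarations via the `$:` label -->
    <script>
      let count = 0;

      // A valid JavaScript labeled statement; the compiler re-runs it
      // whenever `count` changes, with no runtime diffing involved.
      $: doubled = count * 2;
    </script>

    <button on:click={() => count += 1}>
      Clicked {count} times; doubled is {doubled}
    </button>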

I just wanted to quickly address the point you made about flexibility. This is a theoretical downside of being a compiler. We're throwing away the constraints about the code needing to be something that runs in the browser, but we're adding a constraint, which is that the code needs to be statically analyzable. And in theory, that results in a loss of flexibility. In practice, we haven't found that to affect the things that we can build. And I think that a lot of times when people have this conversation, they're focusing on the sort of academic concepts of flexibility. But what matters is what can you build? How easy is it to build a certain thing? And so if empirically you find that you're not restricted in the things that you can build and you can build the same things much faster, then that academic notion of flexibility doesn't, to my mind, have any real value.

CHARLES: Hearing you talk reminded me of kind of a quote that I heard that always stuck with me back from early in my career. I came into programming through Perl. Perl was my first language and Perl is a very weird language. But among other things, you can actually just change the way that Perl parses code. You can write Perl that makes Perl not throw, if that makes any sense. And when asked about this feature, the guy, Larry Wall, who came up with Perl, he's like, "You program Perl, but really what you're doing is you're programming Perl with a set of semantics that you've negotiated with the compiler." And that was kind of a funny way of saying like, "You get to extend the compiler yourself." Here's like the default set of things that you can do with our compiler, but if you want to tweak it or add or modify, you can do that. And so, you can utilize the same functionality that makes it powerful in the first place. You can kind of inject that whole mode of operation into the entire workflow. Does that make sense? That's like a long way of saying, have you thought about, and is it possible to kind of extend the Svelte compiler as part of a customization or as part of the Svelte programming experience?

RICH: We have a very rudimentary version of that, which is preprocessing. There's an API that comes with Svelte called preprocess. The idea there is that you pass in some code, and it will do something very basic: it will extract your styles, it will extract your script, and it will extract your markup. Then it will give you the opportunity to replace those things with something else. So for example, you could write some futuristic JavaScript and compile it with Babel before it gets passed to the Svelte compiler, which uses acorn and therefore needs to be handed JavaScript it can parse, so that it can construct an abstract syntax tree. At a more extreme level, people can use [inaudible] to write their markup instead of HTML. You can use Sass and Less and things like that. Generally, I don't recommend that people do, because it adds moving parts, and it leads to a lot of bug reports from people just trying to figure out how to get these different moving parts to operate together. It means that your editor plugins can't understand what's inside your style tag all of a sudden, and stuff like that. So it definitely adds some complexity, but it is possible.
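Here is a minimal sketch of that hook, assuming a Sass-style transform. The compileSassSomehow helper is a stand-in for a real Sass compiler call; the shape of the API itself, with hooks that receive an extracted block and return replacement code, is as Rich describes.

    // preprocess-example.js -- replacing a component's style block before compilation.
    import { preprocess } from 'svelte/compiler';

    const source = `
      <style lang="scss">p { strong { color: red; } }</style>
      <p>Hello <strong>world</strong></p>
    `;

    preprocess(source, {
      // Each hook receives the extracted block and returns replacement code.
      style: ({ content, attributes }) => {
        if (attributes.lang !== 'scss') return { code: content };
        return { code: compileSassSomehow(content) }; // hypothetical helper
      }
    }).then(({ code }) => {
      // `code` is now plain HTML/CSS/JS, ready for the Svelte compiler proper.
      console.log(code);
    });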

At the other end, at a slightly more extreme level, we have talked about making the code generation part pluggable, so that, for example, the default renderer and the SSR renderer are just two examples of something that plugs into the compiler. The compiler says, "Here is the component, here's the abstract syntax tree, here's some metadata about which values are in scope," all of this stuff, and the plugin goes away and generates some code from it. We haven't done that so far, partly because there hasn't been a great demand for it, but also because it's really complicated. As soon as you turn something into a plugin platform, you magnify the number of connection points and the number of ways that things can go wrong by an order of magnitude. So we've been a little bit wary of doing that, but it is something we've talked about, primarily in the context of being able to do new and interesting things like targeting WebGL directly, or targeting the command line. There are renderers for React that let you build command line apps using React components, and we've talked about whether we should be able to do that too. Native is another example. The NativeScript integration, as you say, could be replaced with the compiler doing that work directly. But for that to work presently, all of that logic would need to sit in core, and it would be nice if it could be just another extension to the compiler. We're talking about a lot of engineering effort, and there are higher priority items on our to-do list at the moment. So, it's filed under 'one day'.

CHARLES: Right. What are those high priority items?

RICH: The biggest thing at the moment, I think, is TypeScript integration. Surprisingly, this is probably the number one feature request: people want to be able to write TypeScript inside their Svelte components, and they want to get TypeScript support when they import a Svelte component into something else. They want completion [inaudible] and type checking and all the rest of it. A couple of years ago, that would've been more or less unthinkable, but now it's table stakes: you have to have first-class TypeScript support.

CHARLES: Yeah, TypeScript is as popular as Babel these days, right?

RICH: Yeah, I think so. I don't need to be sold on the benefits. I've been using TypeScript a lot myself. Svelte is written in TypeScript. But actually being able to write it inside your components is something that would involve hacking around in the TypeScript compiler API in a way that I don't know if anyone on the team actually knows how to do. So, we just need to spend some time and do that. But obviously, when you've got an open source project, you need to deal with the bugs that arise and such first, so it's difficult to find the time for a big project like that.

CHARLES: So, devil's advocate here: if the compiler was open for extension, couldn't TypeScript support be just another plugin?

RICH: It could, but then you could end up with a situation where there are multiple competing TypeScript plugins, no one's sure which one to use, and they all have slightly different characteristics. I always think it's better if these common feature requests that a lot of people would benefit from are built into the project itself. I really lean towards the batteries-included way of developing, and I think this is something that we've drifted away from in the frontend world over the last few years. We've drifted away from batteries-included towards do-it-yourself.

CHARLES: Assemble the entire thing. Step one, open the box and pour the thousand Lego pieces onto the floor.

RICH: Yeah, but it's worse than that because at least, with a Lego set, you get the Lego pieces. It's like if you had the Lego manual showing you how to build something, but you were then responsible for going out and getting the Lego pieces, that's frontend development and I don't like it.

CHARLES: Right. Yeah. I don't like that either. But still, there's a lot of people advocating exactly that: you really ought to be doing everything completely and totally yourself.

RICH: Yes.

CHARLES: And a lot of software development shops still operate that way.

RICH: Yeah. I find that the people advocating for that position most loudly tend to be the maintainers of the projects in question. The whole small-modules philosophy exists primarily for the benefit of library authors and framework authors, not for the benefit of developers, much less users. And the fact that the people who are building libraries and frameworks tend to have the loudest megaphones means that that mindset, that philosophy, gets taken as a best practice for the industry as a whole. And I think it's a mistake to think that way.

TARAS: There is also, I think, a sliding scale here that depends on experience. As you gain experience, you slide towards wanting granular control. Then, once you've got a lot of experience, you slide back up to, "Okay, I don't want this control anymore." And then you get into, "I'm now responsible for the tools that my team uses," and you're back to wanting that control, because you want things to be able to click together. So your interest in that might change over time depending on your experience level and your position in the organization. There are definitely different motivating factors. One of the things that we've been thinking a lot about is designing tools that are composable and granular at the individual module level, but that combine together into a system for consumption by regular people. So: finding those primitives that will just click together when you know how to click them together, but that, when you're consuming them, feel like a holistic whole, while at the same time not being monolithic. That's a lot of things to figure out and a lot of things to manage over time, but that's certainly the kind of thing we've been thinking about a lot.

RICH: I think that's what distinguishes the good projects that are going to have a long lifespan from the projects that are maybe interesting but don't have a long shelf life is whether they're designed in such a way that permits that kind of cohesion and innovation tradeoff, if you think of it as a trade off. Anyone can build the fastest thing or the smallest thing or the whatever it is thing. But building these things in a way that feels like it was designed holistically but is also flexible enough to be used with everything else that you use, that's the real design challenge.

CHARLES: It's hard to know where to draw that line. Maybe one good example of this, and these are actually two projects that I'm not particularly a fan of, but I think they do a good job of operating this way, so I guess in that sense I can be even more honest about it. I don't particularly care for Redux or observables, but in one of our last React projects, we had to choose between using Redux-Saga and Redux-Observable. Redux-Observable worked very well for us. And I think one of the reasons is because they both had to exist as their own projects. Redux exists as its own entity, and Observables exist as their own whole ecosystem. And so, they put a lot of thought into what is the natural way in which these two primitives compose together. As opposed to Saga, which, and I don't want to disparage the project because I think it actually is a really good project with a lot of really good ideas, is more just bolted onto Redux. It doesn't exist outside of the Redux ecosystem, so its ideas can't flourish outside and figure out how it interfaces with other things. The true primitive is still unrevealed there. Whereas with Redux-Observable, you actually have two really, really true primitives. Now, they're not necessarily my favorite primitives, but they are very refined: they do exactly what they are meant to do. And so when you find how they connect together, that experience is also really good, and the primitive that arises there, I think, ends up being better. Is that an example of what you guys are talking about?
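As a rough illustration of that composition (a sketch, not production code; the ping/pong actions are the canonical redux-observable example), an epic is just an RxJS pipeline over the stream of Redux actions:

```ts
// Redux supplies the action stream, RxJS supplies the operators, and
// redux-observable is the thin seam where the two primitives meet.
import { createEpicMiddleware, combineEpics, ofType, Epic } from "redux-observable";
import { delay, mapTo } from "rxjs/operators";

type Action = { type: "PING" } | { type: "PONG" };

// The epic itself is plain RxJS: actions in, actions out.
const pingEpic: Epic<Action> = (action$) =>
  action$.pipe(
    ofType("PING"),
    delay(1000),                        // any Rx operator composes here
    mapTo({ type: "PONG" } as Action)
  );

export const rootEpic = combineEpics(pingEpic);
export const epicMiddleware = createEpicMiddleware<Action>();
// Store wiring (not shown): applyMiddleware(epicMiddleware), then
// epicMiddleware.run(rootEpic).
```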

RICH: Maybe. [Laughs]

TARAS: No, I think so. I mean, it's distilling what you're trying to do to its essence, its core, and then being able to combine things together. That's been the kind of thing that we've been working on at the Frontside. But within this context, it makes me think: how does a compiler fit into that? How does that work with the compiler? When you add the compiler element, my mind just goes poof!

[Laughter]

CHARLES: Yeah, exactly. That's why I keep coming back to, how do you, and maybe you just have to go through the experience, but it feels like there's this cycle where you build up the framework, and then once it's well understood, you throw the framework away in favor of just wiring it straight in there with the compiler, and then you iterate on that process. Is that fair to say?

RICH: Kind of, yeah. At the moment, I'm working on this project, so I referred a moment ago to being able to target WebGL directly. At the moment, the approach that I'm taking to building WebGL apps is to have WebGL components inside Svelte in this project called SvelteGL. And we've used it a couple of times at the Times. It's not really production ready yet, but I think it has some promise. But it's also slightly inefficient: it needs to have all of the shader code available for whichever path you're going to take. Whatever characteristics your materials have, you need to have all of the shader code. And if we're smart about it, then the compiler could know ahead of time which bits of shader code it needed to include. At the moment, it just doesn't have a way of figuring that out. So that would be an example of paving those cow paths. If you do try and do everything within the compiler universe, it does restrict your freedom of movement. It's true. And to qualify my earlier statements about how the small modules philosophy is to the benefit of authors over developers: it has actually enabled this huge flourishing of innovation, particularly in the React world. We've got this plethora of different state management solutions and CSS-in-JS solutions. And while I, as a developer, probably don't want to deal with that, I just want there to be a single correct answer, it's definitely been to the advantage of the ecosystem as a whole to have all of this experimentation. Then projects like Svelte can take advantage of it in the wild. We can say, "Oh well, having observed all of this, this is the right way to solve this problem," and we can bake that in and take advantage of the research that other people have done. And I think we have made contributions of our own, but there is a lot of stuff in Svelte, like the fact that data generally flows one way instead of having [inaudible] everywhere, that is the result of having seen everyone make mistakes in the past and learning from them. So, there are tradeoffs all around.

TARAS: One thing on the topic of data flow: something I've been struggling to compute is the impact of a two-way binding system as opposed to one-directional data flow. Conceptually, two-way binding is really simple: you set a property and it just propagates through. But you don't have any way of assessing the true impact of that computation, the cost of that propagation. It's almost easier to see the cost with one-directional data flow, because you know that everything between the moment you invoke the transition and computing the next state, that is the cost of your computation. You don't have that way of measuring it in a two-way binding system. Something like the Ember run loop, or MobX, or zones, or Vue's reactive system: all of these systems make it really difficult to understand what the real cost of setting state is. And that's something that I personally find difficult, because with one-directional data flow you have this clarity about what it takes to compute the next state; that cost is tangible. Whereas when you're thinking about mutation of objects and tracking their changes, that cost is almost immeasurable. It just seems like a blob of changes that has to propagate. That's something I've been thinking about a lot, especially with the work we've been doing with microstates, because as you're figuring out what the next state is, you know exactly what operations are performed, in a way that might not be the case with a system that tracks changes, like you'd have with zones, or with the Ember run loop, or Vue.
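A minimal sketch of the contrast being drawn here (the state shape and action are hypothetical):

```ts
// One-way data flow: the entire cost of a state transition is visible
// inside this one pure function, between dispatch and the next state.
type State = { count: number; doubled: number };
type Action = { type: "increment" };

function reducer(state: State, action: Action): State {
  switch (action.type) {
    case "increment": {
      const count = state.count + 1;
      // Everything this transition does is spelled out right here.
      return { count, doubled: count * 2 };
    }
  }
}

// Contrast: in a change-tracking system (MobX, the Ember run loop, Vue's
// reactivity), `state.count++` kicks off propagation whose cost lives in
// the framework, spread across observers you can't see from this line.
```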

RICH: I would agree with that. The times that I found it to be beneficial to deviate from the top-down ideology is when you have things like form elements and you want to bind to the values of those form elements. You want to use them in some other computation. And when you do all that by having props going in and then events going out and then you intercept the event and then you set the prop, you're basically articulating what the compiler can articulate for you more effectively anyway. And so conceptually, we have two way bindings within Svelte, but mechanically everything is top down, if that makes sense.

CHARLES: Is it because you can analyze the tree top-down and basically understand when you can cheat? This might be really over-simplistic, but if you're going with the events, you're collecting the water and then you have to carry it way up on top of the thing and let it flow down. But if you can see the entire apparatus, you can say, "Actually, I've got this water and it's going to end up here, so I'm just going to cheat and put it right over there." Is that the type of thing you're talking about, where you're effectively getting a two-way binding but you're skipping the ceremony?

RICH: It's kind of like writing the exact same code that you would write if you were doing it using events, except that if you're writing it yourself, maybe you would do something in a slightly inefficient way. For example, with some kinds of bindings, you have to be careful to avoid an infinite loop. If you have an event that triggers a state change, the state change could trigger the event again and you get this infinite loop. A compiler can guard against that. It can say: this is a binding that could have that problem, so we're going to keep track of whether the state changes as a result of the binding. And so, the compiler can solve all of these really hairy problems that you faced as a developer, while also giving you the benefit of being able to write much less code, code that expresses the relationship between these two things in a more semantic and declarative way, without the danger.
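In Svelte terms, the hand-written version and the shorthand look something like this (a sketch; `name` is just a hypothetical piece of local state):

```svelte
<script>
  let name = '';
</script>

<!-- Spelled out by hand: prop in, event out, intercept, set. -->
<input value={name} on:input={(event) => (name = event.target.value)} />

<!-- The two-way shorthand: conceptually a binding, mechanically still
     top-down, with the compiler guarding against update loops. -->
<input bind:value={name} />
```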

TARAS: This is one of the reasons why I was so excited to talk to you about this stuff, Rich, because this stuff is really interesting. Since we have a little bit more time, I just want to mention, because I think that you might find this interesting, the [inaudible], the stuff that I mentioned to you before. I want to let Charles talk about it briefly, because it essentially comes down to managing asynchrony as it ties to the lifecycle of objects. Lifecycles of objects and components are something we deal with on a regular basis, so it's been an interesting exercise experimenting with that. Charles, do you want to give kind of a lowdown?

CHARLES: Sure. It's definitely something that I'm very excited about, so Taras gets to hear an earful pretty much every day. The idea behind structured concurrency, I don't know if you're familiar with it. People have been using this for a while in the Ember community. Alex Matchneer, who's a friend and oftentimes guest on the podcast, created a library called ember-concurrency where he brought these ideas of structured concurrency to the Ember world. But it's actually very prevalent. There are C libraries and Python libraries. There's not a generic one for JavaScript yet, but the idea is really taking the same concepts of scope that you have with variables and with components, whether they be Ember components, Svelte components, React components or whatever, where you have a tree of parents and children, and modeling every single asynchronous process as a tree, rather than what we have now, which is kind of parallel linear stacks. Some tick happens in the event loop, you drill down, and you either hit an exception or you go straight back up. The next tick of the event loop comes, you drill down some stack, and then you go back up. A promise resolves, you do that stack. With structured concurrency, essentially every stack can have multiple children, so you can fork off multiple children. But if you have an error in any of these children, it's going to propagate up the entire tree. So it's essentially the same idea as components, except applied to concurrent processes. And you can do some really, really amazing things, because you don't ever have to worry about some process going rogue, and you don't have to worry about coordinating all these different event loops. One of the things that I'm discovering is that I don't need event loops. I don't really use promises anymore. Actually, I think it was while I was watching your talk about Svelte 3. Or maybe, did you write a blog post about how we've got to stop saying that virtual DOMs are fast?

RICH: Yes, I did.

CHARLES: So I think it was that one. I was reading that one and it jibed with me, because it's just like: why can't we just go and do the work? We've got the event, we can just do the work. And one of the things that I'm discovering with using structured concurrency with generators is a very similar phenomenon. If there's an error, the stack trace is like three lines long, because you're basically doing the work, executing all these stacks, and pausing them with a generator. And then when an event happens, you just resume right where you left off. There's no "we've got this event, let's push it into this event queue that's waiting behind these three event loops, and then we're draining these queues one at a time." Nope: the event happens, you just resume right where you were. You're in the middle of a function call, in the middle of a [inaudible] block. You just go, without any ceremony, without any fuss. You go straight to where you were, and the stack and the context and all the variables and everything are preserved exactly where you left them. It's like taking the book right off the shelf and going right to your bookmark and continuing along, rather than, as with things like the run loop in Ember or the zones in Angular, having all these mechanics to reconstruct the context of where you were. An event listener is created inside of a context, and you have to make sure that that context is either reconstructed or the event listener doesn't fire. All these problems just cease to exist when you take this approach. And so, if it's pertinent to this conversation, that was a surprising result for me: if you're using essentially coroutines to manage your concurrency, you don't need event loops, you don't need buffers, you don't need any of this other stuff. You just use the JavaScript call stack. And that's enough.
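A minimal sketch of the pause-and-resume mechanics being described, using a generator as a coroutine. This is not the ember-concurrency API, just a single-task driver; the tree of child tasks and the sibling error propagation Charles describes are omitted for brevity:

```ts
// Each `yield` pauses the task with its whole call stack intact; when the
// awaited value settles, we resume exactly where we left off. A failure
// is thrown back into the generator, so it propagates up through the task.
type Task<T> = Generator<Promise<unknown>, T, unknown>;

async function run<T>(task: Task<T>): Promise<T> {
  let step = task.next();
  while (!step.done) {
    try {
      const value = await step.value; // pause here until the promise settles
      step = task.next(value);        // resume with the result
    } catch (error) {
      step = task.throw(error);       // propagate the failure into the task
    }
  }
  return step.value;
}

// A task written as a plain generator: no promise chains, no callbacks.
function* fetchUserName(id: string): Task<string> {
  const response = (yield fetch(`/users/${id}`)) as Response;
  const body = (yield response.json()) as { name: string };
  return body.name;
}

run(fetchUserName("1")).then(
  (name) => console.log("fetched:", name),
  (error) => console.error("task failed:", error)
);
```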

RICH: I'm not going to pretend to have fully understood everything you just said, but it does sound interesting. It does sound not that dissimilar to Ember's run loop, because if you have two state changes right next to each other, X += 1, Y += 1, you want a single update resulting from those. So instead of instrumenting the code such that your components are updated immediately after X += 1, it waits until the end of the event loop and then flushes all of the pending changes simultaneously. So, what you're describing sounds quite wonderful and I hope to understand it better. You have also reminded me that Alex Matchneer implemented this idea in Svelte; it's called svelte-concurrency. And when he sent it to me, I was out in the woods somewhere and I couldn't take a look at it, and it went on my mental to-do list, and you just brought it to the top of that to-do list. So yeah, we have some common ground here, I think.

CHARLES: All right.

TARAS: This is a really, really fascinating conversation. Thank you, Rich, so much for joining us.

CHARLES: Thank you for listening. If you or someone you know has something to say about building user interfaces that simply must be heard, please get in touch with us. We can be found on Twitter at @thefrontside or over just plain old email at contact@frontside.io. Thanks and see you next time.

Sep 04 2019

52mins

Play

Security with Philippe De Ryck

Podcast cover
Read more

Philippe De Ryck joins the show to talk all things security: the importance and why you should be taking active steps, how to do it in your codebase effectively, and what can happen during a breach.

Philippe De Ryck: Pragmatic Web Security

Please join us in these conversations! If you or someone you know would be a perfect guest, please get in touch with us at contact@frontside.io. Our goal is to get people thinking on the platform level which includes tooling, internalization, state management, routing, upgrade, and the data layer.

This show was produced by Mandy Moore, aka @therubyrep of DevReps, LLC.

Transcript:

CHARLES: Hello and welcome to The Frontside Podcast, a place where we talk about user interfaces and everything that you need to know to build them right.

My name is Charles Lowell, a developer here at The Frontside. Joining me, also hosting today is Taras Mankovsky. Hello, Taras.

TARAS: Hello, hello.

CHARLES: And as always, we're going to be talking about web platforms, UI platforms, and the practices that go into them. Here to talk with us today about a pillar of the platform that I certainly don't know that much about, and so I'm actually really happy to have this guest on to talk about it, is Philippe De Ryck, who owns his own company called Pragmatic Web Security. I understand you do trainings and are just generally involved in this space. So, welcome, Philippe.

PHILIPPE: Hi. Nice to meet you.

CHARLES: Wow! I almost don't even know where to start with this subject, because I kind of have the hippie developer mindset where it's like, "La, la, la, la, la, we're in this open land and nothing bad is ever going to happen and we're just going to put code out there," and nobody would ever take advantage of any holes or anything like that. And I think that a lot of developers share that mentality, and that's how we end up with major, major security breaches. And so, like I said, this is something that I'm actually very eager to learn, but I almost don't even know where to start. I need training, man.

[Laughter]

PHILIPPE: Well, that's good to hear. No, you're totally right about that. If you're not into security, it seems like this vast space where a lot is happening and you don't really know how or why, and what really matters to you, and should I even be worried about this. And let me start by addressing the very first thing: yes, you should be worried. Maybe you're not building something that somebody cares about, but you always have something that somebody wants; even the simplest of attacks always targets a valuable resource. To give you a very simple example: today, cryptocurrency is all the hype, and you have attackers just aiming to misuse your users' computers to mine crypto coins, because it essentially saves them a bunch on electricity costs. So, there's always something to grab. Usually, it's data or services or worse. But even in the most minimal cases, you have hardware, you have devices, you have network capacity that somebody might want to abuse. So yes, security, I would say, always matters.

CHARLES: What's the best way to get started? You said that in everything we do, we're holding onto resources that might be valuable, that someone might want to seize. But say I'm just getting started with my application and I don't know anything about security. Where do I get started on just understanding the space, even before I look at tools that I want…

PHILIPPE: You want the honest answer for that?

[Laughter]

PHILIPPE: The honest answer is probably: hire someone who has security knowledge. I don't mean this in a bad way. I've come a very long way in my career doing what I do now, and if I look at that, if you are aiming, as a developer with no knowledge about security, to build a secure application, it's going to be very hard. There's a lot of things you need to know, intrinsic knowledge. These are not things where you can simply read a small book in a week and know all of these security things and what to do. So, if you have no previous experience at all, I suggest finding some help.

CHARLES: Right. It's like saying, "Hey, you've never written a data layer before, but you want to go out and write a massively distributed system where you have all these nodes talking to each other." You're not going to read the O'Reilly book 'How to Build Distributed Systems' in a week and go out and do the same thing. It's the same thing with security. You need to understand the entire context. And there's no substitute for experience.

PHILIPPE: Sorry, I actually like that comparison, because in a sense, you're right: it's like these other very complex topics where you don't expect to learn it all in a week or a month and write a functioning data layer. But the difference is, if you fail at writing that data layer, your application is probably not going to work. Whereas if you fail at securing the application or seeing potential vulnerabilities, it's still going to work, just doing a bit more than you anticipated. It's going to leak all your data to an attacker, or open all the doors so that they can gain access to your server, stuff like that. So, I would say that the consequences of not getting it right are, at least in the beginning, very invisible. It's only after things have happened that it's like, "Oh, crap!" And you should pay attention to that.

CHARLES: Yeah. And then you have these back doors and these leaks that are set in stone and may be very hard to change.

PHILIPPE: Yeah, absolutely. And honestly, the worst part of a breach for a company might be reputation damage. But what you really should be worried about is all that personal information that's being leaked, in most cases publicly leaked to anyone, but in other cases sold on [inaudible] markets and actually abused by people looking for someone else's identity or credit card information or stuff like that. The companies usually get away with some bad press, maybe a small dip in their stock price, but eventually they'll bounce back. It's the users that suffer from these breaches for a very, very long time.

TARAS: Where do you see the hot zones in the concerns that companies have around security? I imagine it's hard to be concerned about everything, so they're probably thinking about specific things, like, "this thing worries us." What kinds of things do you see companies and teams needing to worry about?

PHILIPPE: That's an interesting question. You have all different kinds of companies and all different levels of security awareness and of what they're worrying about. I would say the companies that don't have very much security knowledge are probably not worrying about much, because otherwise they would have started investing in improving their practices. If you look at the companies that are at least very aware of the landscape, and I'm not saying that anybody here is perfect, but some of the companies are actually doing quite a good job, one of the most interesting challenges today is dealing with dependencies. All of the packages you depend on from npm, Maven, Python packages, Ruby gems, and so on: all of them form a huge attack surface in most applications today. That's definitely a problem that a lot of companies struggle with, and it's very hard to find good solutions.

TARAS: Speaking of GitHub, I've been getting a lot of notifications from their vulnerability alert service on some of the open source libraries that we use. They have a lot of dependencies, and a lot of projects share the same dependencies. So, the moment one notification goes out, it lights up on all of the GitHub repos that I have, and I have been going through and updating dependencies in all those libraries.

PHILIPPE: Yeah, that's a very good example. Absolutely. A lot of the projects we build today usually start out by installing a bunch of dependencies. Even before you've written the first line of code, you already have a massive code base that you are relying upon. And a single vulnerability in that code base might be enough, it's not always the case, but it might be enough, to compromise your application. And that leaves you, as a developer, in a very hard place, because you haven't written any lines of code yet and you have already built a vulnerable application. That starting point can be very terrifying. There are a lot of reports on this, and if you want some numbers: 78% of the vulnerabilities discovered in existing applications are like the ones you mentioned. If GitHub alerts you like, "Hey, there's a problem in one of your dependencies," it's often even an indirect dependency, meaning that you include a framework, React or Express or whatever you're building with, and one of the dependencies of one of those projects actually has the vulnerability. If you look at the trees of these packages, they get quite big, so it's not dozens but thousands of packages that we're talking about.

CHARLES: Yeah, that's the other thing: how do you know how to interpret these security vulnerabilities? We get a lot of security vulnerability alerts for Node packages, but we only use those packages in our development tools to build the frontend. So, if we're building a React application and there's some security vulnerability in some Node package that we're using in our build tool, that doesn't actually get deployed to the frontend. So, maybe it's not a concern. But if we were actually using it to build a server, then it would be absolutely critical. So, how do you evaluate that? Because the same security vulnerability is not a vulnerability in one context, but might be in another. Or maybe I'm thinking about it wrong. You see what I mean?

PHILIPPE: Yeah, sure. I totally get what you mean. Actually, I have observed the same things. I also get these security alerts on my projects, and sometimes it's a devDependency, so it seems like we don't need to care about it. You're right in the sense that you have to assess the criticality of such a report. They will have a severity rating saying, "This is a minor issue," or, "This is a major issue." That should be a first indication. And then a second thing to look at is, of course, how these things are used in practice. It's not because it's a devDependency that it's not exploitable, because it all depends on what the vulnerability is. If there's an intentional malicious backdoor in the library and you're building with that on your build server, it might give an attacker access to your build server. So, that might not be something you actually want to ignore. In that case, it does matter. Of course, if it's only stuff you run locally, you can say, "OK, this is less important." But usually, updating or fixing these vulnerabilities also requires less effort in that case, because there's no building and deploying to production servers either. So, it's a matter of staying up-to-date with these.

And one of the things that people struggle with is handling this in a lot of different applications. You mentioned you had a lot of GitHub repos and the vulnerability starts popping up in all of them and you have to fix and update all of them. You can imagine that major companies struggle with that, as well, especially if you have quite a few different technologies. Managing all of that is insanely hard.

CHARLES: Right, because usually you look at it and you're like, "Oh, I've got to download this. I haven't used this repo for a while. I've got to clone it, I've got to update the dependency. I've got to run all my tests locally, then run all the tests in CI and make sure I didn't break anything by upgrading. I might have closed the security hole but broken my functionality." So you make sure that's all intact and then push it out to production. Even on the small end, you're looking at, "OK, maybe this is going to take me 30 to 45 minutes." But if you have four or five of those things, you're looking at half your day, or maybe even the whole day, being gone. And that's if you have the processes in place to do that automated verification, if you have very high confidence in your deployment pipeline, which I don't think a lot of places have. So it sounds like these are complementary: in order to keep an application secure, you have to keep it up-to-date, and what I'm hearing is you should evaluate all the threats and fix what you can. But the first part of my question is: am I kidding myself when I say, "Oh, I can ignore this one because it's just local, or it's just a devDependency"?

PHILIPPE: The answer to that question is briefly, I would say they are less critical.

CHARLES: That's cool.

PHILIPPE: In general, the rule is update if you can. And actually some of the tools out there that monitor vulnerabilities, they will automatically create a pull request in your repo saying to upgrade to this version and then you can automatically run your tests if you have them, and you can very quickly see whether some conflicts are generated by updating that dependency - yes or no. And in most cases, if it's a minor version bump, it's going to work as expected and you can easily push out the new version without that vulnerability. So, I would say fix if you can. If it goes quickly, then definitely fix them. But I would focus on non-devDependencies first instead of devDependencies.
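One way to wire that advice into a pipeline is a small script that runs the audit and blocks the deploy on severe findings. A hedged sketch: the JSON shape shown matches npm 6, and later npm versions structure the report differently:

```ts
// Fail the build when `npm audit` reports high or critical vulnerabilities.
import { execSync } from "child_process";

function auditGate(): void {
  let report: string;
  try {
    report = execSync("npm audit --json", { encoding: "utf8" });
  } catch (error: any) {
    // npm audit exits non-zero when vulnerabilities exist; the JSON report
    // is still on stdout.
    report = error.stdout;
  }
  const metadata = JSON.parse(report).metadata ?? {};
  const { high = 0, critical = 0 } = metadata.vulnerabilities ?? {};
  if (high + critical > 0) {
    console.error(`Blocking deploy: ${high} high, ${critical} critical.`);
    process.exit(1);
  }
  console.log("Dependency audit passed.");
}

auditGate();
```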

CHARLES: Yeah.

PHILIPPE: Second thing I wanted to add is you paint a very grim picture saying you have to spend a lot of time updating these issues and I can totally understand that happening the very first time you look into this. There's going to be some stuff in there, I can guarantee that. But if you do this regularly, the effort becomes less and less because once you have up-to-date libraries, the problem is bad but it's not like we have 50 new vulnerabilities every day, fortunately.

CHARLES: Right.

PHILIPPE: So, once you have done that, it's going to be a bit less intensive than you might anticipate at first glance. Of course, if you're using these projects, if you're reusing the same library, then you'll have to update them everywhere. That's the downside, of course.

CHARLES: It's probably a little bit dangerous to be assessing the criticality of these security threats yourself if you're not an expert. In the same way, it's dangerous to be assessing an architecture if you don't have expertise in architecture, I guess, because you might not understand the threat.

PHILIPPE: Yeah, that's, again, absolutely true. It depends very much on how it's deployed and what it's used for. That's going to be one important aspect. Another thing that might be very useful is: how deep is the dependency that has the vulnerability? Because, for example, in your tree of dependencies, if the dependency is five or six levels deep, the chances of malicious data reaching that specific vulnerability in that specific library are going to be fairly small. Usually, libraries have a lot of features and you only use part of them in your application. The rest of the features are just sitting there, and if they're never used, they're also not exploitable. So, that might play a role as well.

I saw a presentation a month or two ago about how Uber manages these things, and they struggled with a lot of those things as well. They eventually decided that they really care about vulnerabilities going three levels deep, and anything that goes deeper is considered less relevant, or less urgent to update, because the chances of exploitability are going to be very small.

CHARLES: That's actually really interesting.

TARAS: One thing got me thinking about something that is actually happening right now: a friend of mine has a WordPress site that was hacked. The fact that a WordPress site was hacked is not really a surprise, but what's interesting is that the frequency and the sophistication of these attacks have increased. The tooling has improved in the WordPress ecosystem, but at the same time, there are more people who are aware of the kinds of exploits that can be done, and there are a lot of people going after WordPress sites. It makes me think that there's probably going to be a time when the vectors of attack for web applications become pretty well known as well. Because of the architecture, there are fewer of them, but as awareness of the actual architecture becomes more common, I think the angles of attack are going to become more interesting. One of the things I was reading about a couple of days ago is that some researchers found a way to track users based on a combination of the JavaScript APIs available in the browser. They are actually able to fingerprint users based on the kinds of things they're using the application for, [inaudible] extensions they have installed. I think people are going to get more creative, and that's kind of scary, because we've seen this happen already in WordPress, and people are greedy. So, there are going to be more people looking at how to exploit these vulnerabilities.

PHILIPPE: Yeah. That's actually a couple of very good examples that illustrate the underlying issue. So, this browser-based tracking of users, it's called browser fingerprinting and it's been going on for a while. Back when I did my PhD, I had colleagues working on those things and you have other people at universities doing research on this. And yes, you can use things like JavaScript APIs in the browser to identify a particular user with a very high probability. It's not perfect but it's usually enough to identify a user for ad tracking or those purposes.

By the way, these things also have a legitimate purpose. They are also used to keep track of a particular user to prevent things like session hijacking, or to detect problematic logins, stuff like that. So they can have a legitimate use case next to tracking. But they very clearly show how security will always be a cat-and-mouse game. Tracking used to be easy: you just set a cookie in the browser, the cookie was there next time, and you knew who the user was. Then users became a bit more savvy; you had browser extensions trying to block these things because, let's be honest, they're kind of shady, so users probably don't want that. And then the trackers started moving towards other things and getting more advanced. You see that in other areas of security as well. I consider that a good thing, because as we make things harder for attackers, they will have to get more creative, and it will become more difficult to exploit or take advantage of applications. That's the good side. The bad side, or the dark side, of that equation is that, unfortunately, the vulnerabilities are not going away. It's not because we now have these somewhat more advanced attacks using advanced features, or even CPU-based vulnerabilities, that the old things like SQL injection and [inaudible] have disappeared from applications. That's also not true, and that means it just gets a bit harder for everyone on the defensive side to build more secure applications. You're going to have to know about the old stuff and learn about the new stuff.

CHARLES: Again, we come back to that idea: it's all a bit overwhelming. Aside from the solution of, "Hey, let's hire Philippe, let's hire some other security expert," say we were actually in your training, and obviously, I don't want you to divulge all the secrets or whatever, but if we were to attend your training, what do you see as the most important thing for people to know?

PHILIPPE: There's no secrets there. [Chuckles] What I teach is web security. I kind of like to think I teach that in a very structured and methodical way. But in the end, there's no secrets and I don't mind talking about this here on the podcast because I honestly believe that everyone should know as much as they can about security.

What do I teach? I can talk about specifics but I can also talk about generic things. One of the general takeaways is that one of the best things in my opinion that a developer can do is realize when they don't know something and actually admit that they don't know something, instead of just doing something. Maybe having like a brief thought like, "Hmm, is this secure? Well, it's probably good. I'm going to deploy it anyway. We'll see what happens." That is not the right way of doing things. If you do something and you recognize like, "Hey, this might be security sensitive. We're dealing with customer information here. We're dealing with healthcare information. We might want to look at what plays a role here," and then you can go ask someone who does. You probably have a colleague with a bit more security knowledge, so you can ask him like, "Hey Jim, or whatever your name is, do you think that this is OK or should we do something special here?" Very much like you are doing, asking me questions right here. That's one important takeaway that I hope everyone leaves with after a training class because not knowing something and realizing that you don't know it allows you to find someone who actually does. That still leaves us with that point which you wanted to sidestep.

CHARLES: [Chuckles]

PHILIPPE: A second thing is to realize that security is not a target. It's not something you're going to hit. It's not a holy goal that after working really hard for two years, you're going to hit this security milestone and you're done. It's always going to be a cat and mouse game. It's always going to be a moving target but that's OK. That's how things are. And that's the same with all other things in the world essentially. It's an evolving topic and you'll need to be ready to evolve with that as well.

TARAS: One of the challenges that I run into quite often with teams is that at the individual level, people really try to do their best, to the best of their abilities, but when it comes to being part of a group, they do their best within the kind of cultural environment that exists. I'm curious if you've seen environments or cultures for engineering teams that are conducive to good security. Are there systems or processes that companies put in place that you've seen to be very effective in preventing problems? Have you encountered anything like this?

PHILIPPE: Ideally, you have developers that are very well educated about security but honestly, it's going to be insanely hard to find these people because a developer not only has to be educated about security, they also need to know about UI design and JavaScript frameworks and other frameworks and all of these things. And it's virtually impossible to find someone up-to-date on all of these things. So, what most companies do today that seems to work quite well, even though it's very hard to judge whether it's working or not, is they work with security champions. So, you typically have a dev team and within a dev team, you would have a security champion, one or two or five, depends on how large your teams are, of course, that is knowledgeable about security. So, that developer has some knowledge. He's not an expert but he knows about common attacks and common dangers and how to potentially address them in the application. So, having that person embedded in the team allows the team to be security aware because when you have a team meeting like, "Hey, how are we going to solve this particular problem?" That person will be able to inject security knowledge like, "Hey, that seems like a good idea but if we're using SQL in the backend, we need to ensure that we don't suffer from SQL injection." Or if you're using a NoSQL database, it's going to be NoSQL injection and so on. And that already elevates the level of security in the team.

And then, of course, security champions themselves are not going to be security experts. They're mainly developers, just with a security focus. So, they should be able to escalate problems up to people with more knowledge, like a security team, which can be a small team within the organization that people can easily reach out to, to ask, "Hey, we're doing something here, and I know that this is security relevant, and I'm not entirely sure what's happening. Can we get a review of this part of the application?" Or, "Can you sit in on the meeting to see what's going on there?" And I think that structure also makes sense. It's still going to be hard to build secure applications, because there are still a lot of things to address, but at least your teams get some awareness. And then, of course, you can help your security champions get better, and they will get better over time. You can augment them with security architects. You can train your security champions separately, with more in-depth knowledge, and so on. And that pattern, that setup, seems to work quite well in many large organizations today.

CHARLES: Yeah. I like that. It gets me to thinking: so we have the security champions, people who have this as part of, not their specialization, but at least part of their focus, being in the room, being part of the conversation. We try to do that and provide that service when it comes to UI, but we also have a bunch of processes that automate the awareness of quality. The classic one is your CI pipeline, your deployment pipeline: you're automating your advancement to production, you're automating your QA. It's still no substitute for having someone who's thinking about how to have a quality outcome, but you still have some way of verifying that the outcome is quality. Are there tools out there that you can use to kind of keep your project on the security rails? Something we've done recently is preview apps, so that we get a tight feedback loop of being able to deploy a preview version of your application from a branch, but talking to a real backend. There's a lot more software and services supporting this, and it's become an integral part of our workflow. So, testing, automated deployment, preview apps: there's this kind of suite of tools to make sure that the feedback loops are tight and that the quality is verified, even though you also have people guiding that quality. It's just making sure that the standards are met. Is there a similar set of tools and processes in the security space? We've got these champions out there, they're part of the conversations, they're making suggestions, but they can't be everywhere at once. Is there a way to verify that the application is going in the direction they're guiding it, or to sound an alarm bell when it isn't? We mentioned one, which is the automated pull request: "Hey, you've got this dependency and there was a vulnerability." Are there more things like that, I guess, is what I'm saying.

PHILIPPE: Yes, there are. But I would dare to say: not enough. So yes, you have some security tools you can integrate in your pipeline that do automated scanning, try to find certain issues, and alert you of those issues. These things do exist, but they have their limitations. A tool can scan an application, and some of the findings are going to be easy and fairly trivial, but it's good to have the check in place nonetheless. Some of the more advanced issues, though, are very likely to be undetectable by those automated tools, because they require a large amount of skill and expertise to actually craft an exploit and abuse that particular feature in an application. So, we do have some limitations, but I like this question, because I do believe that we need to leverage these mechanisms to improve the security quality of our applications. A very simple thing you can do is run an automated dependency check when you build the application, and use that to decide to halt deployment when there's a severe vulnerability, or go ahead anyway when you consider it acceptable, because if you automate all of those things, things can go wrong as well. We can talk about that in a second.

So yeah, these things can be done. But what I strongly encourage people to do, to kind of improve the code quality, is to flag certain known bad code patterns. If you're building an Angular or a React application and you're using functions whose output goes directly into the template, that's going to be very dangerous. We know these functions: in Angular, they're called things like bypassSecurityTrustHtml, and "bypass security" should be kind of a trigger that this is security relevant. In React, that property is called dangerouslySetInnerHTML, also indicating, "developer, watch out what you're doing." So, what you can do is set up code scanning tools that flag these things whenever they appear in the application, because sometimes people make mistakes. You hire an intern, they don't really know the impact of using that property, and they use it anyway, which would cause a cross-site scripting vulnerability. If your code scanning flags these things and ensures that they don't get pushed to production unless it's a benign case that has actually been approved, then you can definitely stop some of these vulnerabilities from happening.
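In a React code base, for instance, the escape hatch looks like this. The react/no-danger rule from eslint-plugin-react can flag every use for review; DOMPurify is one common sanitizer, shown here as an assumption rather than a requirement. A sketch:

```tsx
import * as React from "react";
import DOMPurify from "dompurify"; // assumed sanitizer dependency

// Flagged and dangerous: user input flows straight into the DOM (XSS).
function UnsafeComment({ body }: { body: string }) {
  return <div dangerouslySetInnerHTML={{ __html: body }} />;
}

// Flagged but benign once reviewed: the input is sanitized first.
function SafeComment({ body }: { body: string }) {
  return (
    <div dangerouslySetInnerHTML={{ __html: DOMPurify.sanitize(body) }} />
  );
}
```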

TARAS: I think the hardest thing is that when someone doesn't understand what they're doing, what they create is so cryptic that any tool that tries to figure out what that person is doing will have a really hard time. If the person making the thing doesn't understand what they're doing, then the system is not going to understand it either. Which makes me think of one of the things that we think about a lot at Frontside: this idea of trying to understand the system from the outside, looking at the system as a black box. I wonder what kinds of tools are available for inspecting the application from the outside, somehow understanding what the application is doing based on what's actually going on inside of the runtime, and then notifying someone that there could be something off in the application through exercising the [inaudible]. For example, a memory leak is not something you can catch unless you have a test suite with like a thousand tests, and then you'll see over time that your application is leaking memory; if you run individual tests, you'll never see it. I wonder if there's anything like that for security, where at runtime there's actually a way to understand that there might be some kind of incorrect pattern in the application.

PHILIPPE: If only, if only. It depends on who you ask. There is such a concept; it's called Dynamic Application Security Testing. Essentially, what you do there is run the application, feed it all kinds of inputs, and monitor when something bad happens, which means that you have detected a vulnerability. So, these things do exist. But unfortunately, their effectiveness is not always that good. It very much depends on what kind of security problems you're trying to detect. They can, for example, detect some instances of things like cross-site scripting or SQL injection. But there will always be limitations. I've seen tools like that being run against an application where you actually know there's a vulnerability, because it has been exploited and there is a manually written exploit, and the tool still doesn't find any vulnerabilities. Which is not surprising, because these things are really hard to abstract in a way that lets a tool find them automatically. If you had such a tool, I think that [inaudible] would be a lot better. I think there are a lot of vendors working on that, but at the moment, those tools are not going to be our savior in building more secure applications.

CHARLES: Yes. I mean, it's kind of like linting, right? Or tests. We've been through this with kind of all the features or aspects that we want our application to have, whether it be accessibility. There's certainly a very comprehensive suite of lint-level checks that you can run to make sure that your application is accessible; you can run a suite of three thousand things, and if it triggers any of them, then yes, your application won't be accessible. But it's not a substitute for thinking through the accessibility architecture. The same thing goes with code linting. You're not going to solve bugs with a linter that makes sure your code is formatted, that you're declaring your variables right, and that you're not shadowing things. But you can definitely eliminate whole classes of things that might be put in there even when you know what you're doing and you're just forgetful.

PHILIPPE: Yes, those rules exist as well. They're not extensive, but there are linting rules for Angular for security, for example. But the problem with linting is that, while the rules are very useful for finding potential instances of security-relevant features or functionality, the linting rule alone cannot decide whether something is OK or not. To give you a very simple example: if you use the bypassSecurityTrustHtml function and you give that function a static snippet of HTML, that's going to be fine, unless you write your own attack, essentially. But if you feed that function user input, you're going to be in a lot of trouble. And making that distinction with a linter is going to be difficult. Unless you have a static string in the arguments, once the value comes from variables or from a dynamically decided code path, it's going to be very, very difficult to decide automatically. So yes, you can use linting to find the places in the application where you should be looking for, in this example, cross-site scripting in Angular, but the linting alone is not going to give you an answer as to whether this is OK or not. That still requires knowledge of how Angular handles these things, what happens, and how you can do things safely.
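A sketch of that distinction in Angular (the component is hypothetical; DomSanitizer and bypassSecurityTrustHtml are the real Angular APIs). A linter can find both calls, but only a human can judge the argument:

```ts
import { Component } from "@angular/core";
import { DomSanitizer, SafeHtml } from "@angular/platform-browser";

@Component({
  selector: "app-banner",
  template: `<div [innerHTML]="html"></div>`,
})
export class BannerComponent {
  html: SafeHtml;

  constructor(sanitizer: DomSanitizer) {
    // Fine: a static snippet, nothing attacker-controlled.
    this.html = sanitizer.bypassSecurityTrustHtml("<b>Welcome back!</b>");

    // Dangerous: the exact same call with user input is an XSS hole.
    // this.html = sanitizer.bypassSecurityTrustHtml(userInput);
  }
}
```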

TARAS: Sounds like we keep going back to nothing beats having knowledgeable developers.

PHILIPPE: Yes. Unfortunately, that is true. However, with that said, I want to highlight that frameworks like Angular, well, mainly Angular, make things a lot better for developers. Yes, you still need knowledgeable developers, but the ways to introduce a cross-site scripting vulnerability in an Angular application are actually very, very limited. It's not going to be one, but there are going to be maybe three or four things you need to be aware of, and then you should be set. Whereas if you had done the same in PHP, there would be 50,000 things you need to be aware of that are potentially dangerous. So yes, frameworks and libraries and all of these abstractions make it a lot better, and I really like that. That's why I always prefer abstracting things away in a library: you get the ability to look for these dangerous code patterns using linting rules in your code base, and you can at least inspect the code to see whether it's OK or not. Even though you might not be able to make an automatic decision, you at least know where to look and what to approve or change in the code base.

TARAS: I think that's one of the things that oftentimes is not taken into account: the frameworks are different, and there are big differences in how much they give you. Right now, the most popular framework is, I think, React. But it's such a thin layer, such a small part of a framework, that you can hardly call it a framework, even though it is something that companies rely on. When you consider how much code you need to write to make React into a complete framework for your company, the amount of code that your team has to write versus the amount when you use something like Angular or Ember, there are definitely a lot fewer parts of the framework that you need to write, or a lot fewer parts you need to choose from what's available in the ecosystem. In Angular and Ember, and I'm not sure what the story is with Vue, the pieces come from a trusted source, and they've been battle tested against a lot of applications. But I don't think that enters into consideration when companies are choosing between Angular or whatever it might be, because they're thinking, what is going to be easiest for us? What is going to be [inaudible] for developers? They're not thinking about how much of the framework they are going to need to put together to make this work.

CHARLES: I can say it sounds, Taras, like almost what you're saying is by using the frameworks that have been battle tested, you actually get to avail yourself of code that actually has security champions kind of baked into it, right? Is that what you were saying? You keep coming back to 'you need developers who are knowledgeable about security', and if you're using kind of a larger framework that covers more use cases, you're going to get that. Do you think that that is generally true, Philippe?

PHILIPPE: Yeah. I think it is and that's why I mentioned that I liked Angular before because Angular actually does offer a full framework. And because they do that, they made a lot of choices for developers and some of these choices have a very, very big and positive impact on security. On the other hand, if you make those decisions, you become an opinionated framework and some people don't like that. They actually want the freedom to follow their own paths and then a less full featured framework like React might be an easier way to go.

CHARLES: But I think what happens is folks don't enter into that decision with their eyes open to the fact that they now need to be their own security champion, because they just don't even see it. Like we said, the most dangerous things are the things that you don't know.

PHILIPPE: Yeah, absolutely. And I totally agree. That's something that at least a couple of years ago, and probably still today, many companies moving into this space struggle with: which framework do we choose, and why do we choose one or the other, and which one will still be there in three years, because we don't want to switch to another thing in three years, which is risky, and hard on our developers. I like that you said that Angular has this security champion knowledge built in, because in Angular 2 and every version after it, the new version of Angular essentially, they spent a lot of time on security. They learned from their mistakes in the first version, because there were some, and they built a more robust framework with security built in by design, out of the box. Angular offers, for example, very strong protection against cross-site scripting. It's just there, it's always on, and unless you actively sidestep it, it's going to protect you. That's one of the things I really like about Angular and how they did that.

CHARLES: Yeah, that's one of the things that I really like too, because I remember there was a blog post, this is probably, I don't know, almost 10 years ago now, maybe seven or eight, where someone whose servers were implemented in Ruby was comparing why it was better to use Rails than just Sinatra, which is a very, very lightweight HTTP framework. And one of the things he was pointing to was that when a new vulnerability was discovered, if you were using Rails, where the middleware stack is managed by the framework, you just upgraded a minor version of Rails, and now, by default, there's this middleware that prevents this entire class of attack.

PHILIPPE: Was that a cross-site request forgery?

CHARLES: I think it might have been.

PHILIPPE: I think Rails was one of the first to offer built-in, automatically-on support for that. So yeah, that was a very good early example of how that can work really well.

CHARLES: And the advantage from the developers' standpoint, to contrast that, is if you'd been writing your application in Sinatra, which is very, very low level, built right on top of Rack, and you're managing the middleware stack yourself and there are no opinions, then not only do you have to fix this security vulnerability, you have to understand it. You have to do a lot of research to really work out what's going on and how it's going to affect your application before you can deploy a fix. And that's a huge amount of time. Whereas with a framework, you have the freedom to not even understand the attack. I mean, it's always better to understand, but you can defer that understanding, knowing that you're invulnerable to it. And I think for people who enjoy kind of pretending, not pretending, but acting as if the security world doesn't exist, and saying, "Hey, I want to focus and specialize on these other areas and attain deep knowledge there," it's very reassuring to know that if a defense for a novel attack comes out, I can avail myself of it just by bumping a version number.
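(Not the Rails middleware itself, but a rough Node/Express analogue of the idea Charles is describing, assuming the cookie-parser and csurf packages: CSRF protection arrives as framework-managed middleware, so bumping the package hardens every route at once.)

```typescript
import express from "express";
import cookieParser from "cookie-parser";
import csurf from "csurf";

const app = express();
app.use(cookieParser());
// One line of framework-managed middleware; every mutating route now
// requires a valid CSRF token, and upgrading the package upgrades the defense.
app.use(csurf({ cookie: true }));

app.post("/transfer", (req, res) => {
  // Reaching this handler means the CSRF token has already been validated.
  res.send("ok");
});
```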

PHILIPPE: Yeah, absolutely. If you have everything in place to actually upgrade to that version that fixes those, that's a preferable solution. Towards the future, I believe it's going to be crucial to ensure that we can actually keep things up-to-date because everything that's being built today is going to require continuous updates for the lifetime of the application. I definitely hope that the frameworks get better and more secure and start following these patterns of naming the potentially insecure functions with something that indicates that they are insecure. I think that's definitely a good way forward.

CHARLES: Yeah. Can I ask one more question? Because this is something that I always wonder about with any aspect of a system. Part of it is that folks will not appreciate good architecture until they've experienced some sort of pain associated with not having that architecture in place. Their project fails because they couldn't implement a set of features without it taking months and years, and they just ran out of runway, ran out of deadline. People who've been on those projects appreciate having a nimble system internally, and good tooling. Folks who have experienced good tooling understand how much time they could save, and so have a very low tolerance for bad tooling. If a tool takes too long or is misbehaved or is not well put together, they just can't stand it because they know how much time they're losing. With security, is there a way to get people to care about it without having some sort of breach, without having gotten smacked in the face? When you do your trainings, is it generally, "Hey, someone has experienced a breach here, and so they want to bring you in"? Or is there some way to raise awareness of the problems, so people don't have to experience that pain but can experience only the benefit?

PHILIPPE: That's, again, a very good question and that's also a very good illustration of why security is so hard. Because if you get everything right, nothing happens.

[Laughter]

PHILIPPE: Or it might be that if nothing happens, nobody cares enough to actually try something against your application. So, there's no positive confirmation that you've done a good job. You can keep putting things off, but eventually there's going to be a vulnerability, and it's a matter of how you respond to it. We recently had a cross-site scripting vulnerability on Google's homepage, one of the most visited pages on the web. Somebody figured out that there was some weird browser behavior that could be abused, and that resulted in a vulnerability on, let's say, such a simple page. So even there, things can go wrong. A good way to raise some awareness about this would be to follow some simple news resources or some Twitter feeds. I share some security-relevant articles there, but plenty of other people in the industry do as well. And when you read such an article about a security incident, just think about whether this could happen to you or not. And that should probably scare the shit out of you. A simple example: the Equifax breach, one of the biggest, most impactful breaches of the past few years, happened because of an Apache library that was not updated. The Apache library had a known vulnerability. We knew about it. We had a patch, yet it took too long to install that patch, and the attackers abused that vulnerability. This is something that can probably happen to each and every one of us, because the attacks started, I think, 72 hours after the vulnerability and the patch had been published. So, ask yourself, "Would I have updated my servers within three days after I got that vulnerability report on GitHub?" Yes or no. And if the answer is no, then the same thing can happen to you.

Other cases: Magecart is a very big problem, people injecting credit-card-skimming malware into JavaScript libraries. Are you including third-party JavaScript libraries? If yes, then chances are that this can happen to you. And there's nothing preventing someone from exploiting that. It's probably just that you got lucky and nobody tried to do it. And you see the same thing now with all these attacks on npm packages, where people actively try to get you to install a malicious package as one of your dependencies. Again, everybody can fall victim to these things. So, if you read the articles with that mindset, I would guess that your security awareness will grow rapidly and you will start caring about it very fast.

CHARLES: Yeah.

TARAS: Lots to think about.

CHARLES: Yeah, there's lots to think about because the next thing that occurs to me is how do you even know if you've been targeted. Because a good attacker is not even going to let you know.

PHILIPPE: Yeah.

CHARLES: It's just better to siphon off your blood, like you said, than to kill the cow. You want to be the vampire bat that comes to the same cow every night and just takes a little bit of blood, rather than the lion that kills the cow, and then the cow's gone.

PHILIPPE: I would say constant monitoring is going to be crucial, and you need that data for all kinds of different purposes. You need to monitor everything that happens, first of all, for post-mortem analysis. If something happens, you want to be able to see how bad it was. If a user apparently got full admin access and you have decent monitoring, you will be able to retrace their steps and see what they did or did not get. So, that is one very good use case. A second use case is that you can use that data to detect attacks. Usually, when the attacks are noisy, it's an automated scanning tool, but it might be an attacker trying to do things. Again, that may be something very useful for you to act on, to see if there is a problem, to prevent that user from connecting, and so on. And then another very good use case is actually inspecting the logs manually, as an ops engineer or whoever is responsible for doing that, because that might again yield new insights. I've been talking to someone who said that they discovered an abuse of one of their APIs just by looking at the logs manually, detecting a strange pattern, and digging deeper into it. The automated monitoring tools that they had installed, which trigger on certain events like a mass amount of requests to the authentication endpoint and things like that, did not catch this particular abuse. So, I would say monitoring is absolutely crucial, for sure.

TARAS: So, the takeaway is: hire attentive, knowledgeable developers who will learn about security.

PHILIPPE: I would say the takeaway is that security knowledge is essential for every developer. So, I encourage every developer to at least have a little bit of interest in security. I'm not saying that everyone should be a security expert, but we should at least know about the most common vulnerabilities in web applications: what they mean, what they might result in, and what to be on the lookout for. So yes, I think that's one of the crucial things to start with. And then within an organization, you should have someone to fall back on in case there are security-relevant things, so that you can actually talk to someone who sees a bigger picture, or maybe the full security picture, to decide whether these things are a problem or not.

I think we're closing or nearing the end here, but one of the things we haven't talked about is how to actually get started in security. What if you are interested in security after hearing this podcast and you want to get started? I want to give you just a few pointers so that you actually know where to look.

One of the first things to look at is OWASP, the Open Web Application Security Project. It's essentially a nonprofit with the mission to improve the security posture and knowledge of developers, and they have a lot of resources on various topics, a lot of tools available, and things like that. What you want to start with as a developer is the OWASP Top 10, which is a list of the 10 most common vulnerabilities in web applications, just to open your eyes to the fact that these things exist in applications today and are definitely a problem. And then there's a complementary Top 10 called the Proactive Controls, which is about how you, as a developer, can actually prevent these things: what you should know about implementing security, and which guidelines you should follow. These two documents are a very good place to start. And then there is a huge community that's mostly very eager to help people figure out the right way of doing things and solving the problems we have in our ecosystems.

TARAS: Awesome. That's great. Thank you very much.

CHARLES: Yeah, I'll take that in. That is really, really helpful. Well, thank you very much, Philippe, for coming on and talking about security. I actually feel a lot better; usually, thinking about security kind of stresses me out. [Laughs]

PHILIPPE: You can bury the problem, but it's going to return in the future anyway, so you might as well get on board and start learning. It's not that scary once you get into it; it's actually a lot of fun. So, you should learn about security.

CHARLES: Well, I am looking forward to diving in. If anyone wants to get in touch with you, how would they do that on Twitter or Email?

PHILIPPE: Yeah, sure. I'm on Twitter. I'm always happy to chat about security. You can reach me by Email as well. I'm very easy to reach. And I'll be happy to help out people with questions. Sure.

CHARLES: All right. Thank you so much, Philippe.

Thank you for listening. If you or someone you know has something to say about building user interfaces that simply must be heard, please get in touch with us. We can be found on Twitter at @TheFrontside or over just plain old Email at contact@frontside.io. Thanks and see you next time.

Jun 13 2019

49mins

Play

An Analysis of NativeScript Mobile Platform

Podcast cover
Read more

In this internal Frontside Podcast episode, Charles, Taras, and Jeffrey analyze the NativeScript Mobile Platform.

Please join us in these conversations! If you or someone you know would be a perfect guest, please get in touch with us at contact@frontside.io. Our goal is to get people thinking on the platform level which includes tooling, internalization, state management, routing, upgrade, and the data layer.

This show was produced by Mandy Moore, aka @therubyrep of DevReps, LLC.

Transcript:

CHARLES: Hello and welcome to The Frontside Podcast, a place where we talk about user interfaces and everything that you need to know to build them right. My name is Charles, a developer here at Frontside. With me today are Taras and Jeffrey.

TARAS: Hello everyone.

CHARLES: Today, we're going to be talking about NativeScript in particular, and about evaluating technologies and frameworks at the meta level. I'm kind of excited about it because we've been pretty heavily involved with NativeScript for the past three months or so. We've gotten to look at it both through beginners' eyes, being totally fresh to the platform, and then actually starting to bump up against some of the edge cases, which is what always ends up happening when you actually use a framework for real. Let's get started.

TARAS: All right. I think there's a lot that we could talk about, because the lens that we were looking at NativeScript through is: is this a platform that our client can use for development of large applications? What does NativeScript need to have to be able to support potentially hundreds of developers building apps? We started looking at it, and one thing that made us consider NativeScript early on was that it provides a platform that allows you to code in JavaScript and run it on mobile. And we saw this emergence of Angular and Vue.js running on top of NativeScript. So, those things together were kind of exciting.

CHARLES: There was also an implementation in progress of React, and there were a couple of spikes of Ember also running on top of NativeScript. So, my first impression was initially very favorable. The onboarding experience is actually pretty nice: because it was JavaScript and the application was interpreted, there's the ability to completely and totally dynamically change the application at runtime. They have an application called the NativeScript Playground which lets you flash a QR code at it, and then it will go to the URL associated with that QR code and download all of the assets for a NativeScript application running at that URL. All the JavaScript, all the templates, all the whatever, it'll pull it down and actually start running within that app. So, the Playground app then becomes your actual app that you want to use. There's no App Store, no TestFlight, no Google Play. There's no gatekeeping to delivering your application into a running app. And I thought that was really, really cool and really, really compelling.

TARAS: We should clarify that this is specifically for preview purposes because if you're going to be shipping the application to production, you still need to go through all those things before...

CHARLES: Yes.

TARAS: But the onboarding process, you could just install the preview app and then point it at a QR code and it will open that app. Whether it's in Angular or in Vue, that app will open up in the preview app, and you have a native app that you can play around with.

CHARLES: Right.

JEFFREY: And that's key both for the engineers who are playing around with this and building this and also really key for the non-engineers who are part of the team to be able to really easily spin up and see what the engineers on the team are working on.

CHARLES: That's exactly why we thought, "Hey, we want to be able to use this mechanism for preview apps." In the same way on the server side, you have preview apps associated with a pull request. When we saw this, what we immediately wanted to do was have a bot post a comment onto a pull request with a QR code, so that anybody could just, boom, test out this app on their phone.

TARAS: We ultimately ended up setting that up, but not quite that way, because the original idea of having something like danger bot post the QR code in the comments, so you can point your phone at it and open the preview app, didn't actually pan out. Charles tried to implement that. What happened there?

CHARLES: It turned out that the preview functionality was dependent on a central NativeScript server. So rather than statically bundling the assets and just saying 'these assets are at this URL; just pull them in and bootstrap your NativeScript application that way', it required a lot of extra stuff. It required you to be running a Webpack dev server that was building your assets, and then basically registering and doing some port forwarding from that dev server to a central NativeScript service provided by the company that underpins NativeScript. And that connection needed to be hot and live the whole time for it to work.

So, while it was really cool that you could get the QR codes up and running, unfortunately that functionality could not be decoupled from the hot update and the central service. Those central services were kind of hard coded into the tools.

TARAS: Yeah. So we eventually implemented the preview apps that we wanted, but we ended up using Appetize.io. The process there is: you build the app, you upload it to Appetize, and then danger bot embeds a link to a URL where you can open that app. It runs somewhere in a simulator for iOS or an emulator for Android, streams a video of it, and you can interact with it, kind of like a VNC setup.
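(A rough sketch of the danger bot step Taras describes, using Danger JS's real markdown helper; the environment variable and the exact Appetize URL shape are assumptions for illustration, not the team's actual setup.)

```typescript
// dangerfile.ts
import { markdown } from "danger";

// Hypothetical: a prior CI step uploads the build to Appetize and exports its key.
const appetizeKey = process.env.APPETIZE_PUBLIC_KEY;

if (appetizeKey) {
  // Embed a link in the pull request so anyone can stream the app
  // from a hosted simulator/emulator, VNC-style.
  markdown(`Preview this build on a device: https://appetize.io/app/${appetizeKey}`);
}
```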

CHARLES: Yeah.

TARAS: And that actually accomplished the goal. It's just that we weren't able to do it the way we originally hoped, straight off with the preview app mechanism.

CHARLES: It accomplished the goal. And Appetize is an incredible service that lets you preview the apps on pretty much any type of Android device or iOS device, right there inside of a pull request. But what it didn't allow us to do was pull out your actual device, your actual phone, scan a QR code off of the pull request, and pull down the assets. That would have been amazing. But it doesn't always work out that way. And I don't know if that would work long term anyhow, because you can't pull down native libraries over the wire and link them in. That's a big, big no-no. So, the process does have limitations. But nevertheless, that part was really cool.

TARAS: Yeah. That was kind of the entry point, the onboarding. I remember at the time when we were talking about the NativeScript architecture, as we were starting to understand more about how it works. The idea itself is really kind of amazing, actually, because you have this V8 where you can run your JavaScript code, and it's wired to the native implementations on iOS and Android. I think the thing that's really great about NativeScript is that the runtime environment for JavaScript essentially gives you API access. In JavaScript, you could say, "I want to create a Java view," and there will be a Java view rendered on the actual native device. You're using the same APIs that you find in the Android docs or iOS docs; all of those APIs are available to you in JavaScript. So, you [crosstalk] as JavaScript. And it's seamless, right?

CHARLES: Yeah, and it makes it very, very handy. The language is different, but the APIs are exactly the same. There is an attempt to make cross-platform components and cross-platform classes that serve the needs of both platforms and then delegate to the platform on which you happen to be running. But those are not mandatory, and the low-level APIs are always available to you. An example of this: on iOS, the core foundational object is NSObject. All the controllers, the views, all of them descend from this object. And I can go in from JavaScript and just say {let object = new NSObject} and boom! I've got a reference to the actual object, and I can pass it around to any other iOS API.

That is really, really powerful, that there's nothing off limits, nothing at arm's length. There's really not much you can't do, because all of those things are available to you. That means that they can build cross-platform components on top of those APIs. Whereas in a system like React Native, which does have cross-platform components, that's where the base layer is, and you can't crack open the hatch and go down to the next level and start mucking around, unless you want to actually start meddling with the React Native source code or recompiling Swift and Java code.
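(A minimal sketch of the direct native access Charles describes. This only runs inside a NativeScript app on iOS, where the runtime exposes Objective-C classes like NSObject and UILabel as globals; TypeScript typings come from the platform declarations package.)

```typescript
// The core Foundation object, straight from the Objective-C runtime:
const obj = NSObject.alloc().init();

// The instance can be handed to any other iOS API expecting an NSObject.
console.log(obj.description); // e.g. "<NSObject: 0x...>"

// The same goes for UIKit, with no wrapper layer in between:
const label = UILabel.alloc().init();
label.text = "Hello from JavaScript";
```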

TARAS: For me, I think this architecture is probably my favorite part of NativeScript.

JEFFREY: Mine too.

CHARLES: Yeah, me too.

TARAS: I really like this part. I kind of hope that everything else is as clever as that was.

CHARLES: Because among other things, it allowed us to write a Bluetooth integration. We were able to implement Bluetooth using nothing but JavaScript. We didn't actually have to go down and write any Swift, or do any asynchronous message passing between the iOS libraries and the JavaScript libraries. We've just got a very simple cross-platform interface that instantiates an implementation for Android and an implementation for iOS, but both of them are JavaScript. And so, it really is native development, but it's JavaScript all the way down.

TARAS: Yeah. And when you're writing plugins, your plugin is actually a JavaScript plugin that assumes the iOS APIs and the Android APIs.

CHARLES: Yeah. And if you have to have a native dependency like a CocoaPod or an Android package, you just install it and you can instantiate it from JavaScript. There's no fuss, no muss, no ceremony. What was the one we liked to use? The Material-UI floating button, which is a CocoaPod. You download it, you link it into your application, and then you just instantiate it from JavaScript.
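(A simplified sketch of the plugin pattern being described here: a single JavaScript facade that talks to each platform's APIs directly. Module paths reflect the tns-core-modules layout of that era, and the Bluetooth calls are illustrative, not a complete implementation.)

```typescript
import { isIOS } from "tns-core-modules/platform";

export function startScanning(): void {
  if (isIOS) {
    // CoreBluetooth, through the iOS runtime bindings:
    const manager = CBCentralManager.alloc().init();
    console.log("iOS central manager state:", manager.state);
  } else {
    // android.bluetooth, through the Android runtime bindings:
    const adapter = android.bluetooth.BluetoothAdapter.getDefaultAdapter();
    console.log("Android adapter enabled:", adapter.isEnabled());
  }
}
```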

TARAS: That was really cool. The challenging part was that everything around that kind of awesomeness wasn't quite as polished. One of the big things is tooling. Having grown up, in a sense, in the Ember community, we have a certain expectation of the level of polish we expect from tooling. And when you look at React or React Native tooling, even Angular tooling, it's very polished: you see what you need to see when you're looking at CLI output and you don't see anything else, that level of polish. Maybe it's partly because of the changes they're going through, but that same level of polish isn't there around the NativeScript tooling.

CHARLES: There are these fantastic qualities about the platform, and it is amazing. We were using Angular, and a lot of people are using Vue and things like that, and that actually is pretty incredible. And there is nice tooling, there is command line stuff, but we started to run into issues. For example, it was very clear that we were, as far as I could tell, one of the very, very few people running a NativeScript project on CircleCI, or in a CI environment at all. It had capability for testing, both acceptance testing and unit testing, but it required changes to the core framework and the core tools to get those tests to work in a CI environment.

JEFFREY: Before we get into the testing story, some of the issues were around determinism: reliably reproducing your whole NativeScript environment and stack every time, because that's such a key feature of CI. On a CI server, it's like, "Hey, we need this to load in the same exact packages every time." And so, we ran into challenges there.

TARAS: I think we spent almost two days on that. There are example projects in different combinations, but one thing that was off was a pattern that's applied in a lot of plugins in the NativeScript ecosystem: running npm install generates files. So, when we were trying to move over to CI, there were files, like TypeScript hooks, that were excluded, that you can ignore, but they were necessary to compile the TypeScript. What was happening is that when we ran these in CI, we would build the app, but the app would crash the moment you started it. The reason was that the JavaScript files transpiled from TypeScript were never included, because they were never transpiled in CI, because we weren't preserving the hooks directory between our tasks, and so...

CHARLES: Right. We weren't caching it. The directory was an artifact of the install, and we were caching the install keyed on the yarn.lock, which wasn't changing. So the directory was not getting regenerated unless the cache key changed.

TARAS: And we spent quite a lot of time...

CHARLES: Two or three days of it.

TARAS: Yeah.

CHARLES: What that said to us was, "Oh, nobody's really running this in CI." Nobody's actually building an app from scratch every time.

TARAS: There are people on the NativeScript team who actually do a great job of documenting, and they did have example projects. But sometimes an example project doesn't fit the exact combination of what you're looking for. There was an example project showing how to run on CI, but it didn't use TypeScript. And so, that's where we lost a lot of time.

CHARLES: Right.

JEFFREY: So, let's talk about testing since that's kind of the core, the most important part of why you even want continuous integration capabilities to begin with. What did we run into there? What did it look like?

TARAS: Well, I think it's safe to say that we were really on the bleeding edge of testing capabilities in the NativeScript ecosystem, with Angular at least. But I think it was still an interesting project. We were using the latest builds. And I have to say, and I think this is one of those things that's going to be consistent throughout this, the people on the NativeScript team are amazing. They're so easy to work with. They're so accommodating. When we ask for stuff, they're on it. But there were a lot of things we were trying to figure out, like how do we run unit tests, what can we do. First and foremost, we started with how to run functional testing. So we spent quite a lot of time trying to get Appium set up. I spent a good two to three weeks on that, and it was not time productively spent.

CHARLES: I think ultimately, we had to pull back from it, and there were a number of reasons. Part of it is that there are multiple paradigms for how you can build your NativeScript application. As we speak, there's a move towards using Webpack to build all of your JavaScript and your style sheet assets, because it's very much like a React Native application: you've got style sheets, you've got JavaScript assets, some of them might be in TypeScript, some of them might be using Babel, and you need to transpile them down to include them in a way that your underlying JavaScript runtime is going to understand. But that wasn't always so. They had their own build system and packaging system, and they kind of used the TypeScript compiler ad hoc, if you were using TypeScript, which we were. And so, this was an orthogonal complexity, I guess, where your unit testing has to play nice with either that packaging system or Webpack.

There were multiple ways to package your app. And so, we ran into problems where TypeScript kept coming up as a problem, along with the way in which we were bundling our assets. In order to get TypeScript to work, we kind of had to get Webpack running. But the problem is, it felt like three quarters of the tooling wasn't Webpack-compatible yet, and so other pieces of the build were breaking because of this. We had to be on the bleeding edge of several different aspects of the runtime, and the problem is, when you're on the bleeding edge, that can break other stuff.

TARAS: But there's complexity in running on native platforms, and I think a lot of this complexity is leaking into the development experience, because one of the challenges is that your tests need to run on the native device, in the application. So, you have to build the app. You have to push the app onto the actual device. There's all the setup of installing the app on the device.

CHARLES: You have to launch the simulator.

TARAS: Yeah, right.

CHARLES: To make sure the device is connected.

TARAS: And you run your tests in there. So, that created this situation where we said, let's just set Appium aside and use unit testing, which is a very small fraction of the kind of testing that we actually want to do. It will test very little, but let's just do that, because getting functional testing to work was really not going anywhere. Once we started doing unit testing, one of the challenges is that it takes like 30 seconds to start your tests. And then if you, for whatever reason, made a mistake, the moment you cancel the build, it doesn't clean up after itself well. It leaves processes running in the background. And so now, you spend another 10 to 15 minutes Googling around for "how do you find these processes and stop them?" We eventually settled on having a script that does that, but this is the kind of thing you end up doing, because there's a bunch of things that are wired together, but they're not wired together in a way that is seamless. And so, you end up debugging a lot of stuff when you just want to run some tests.

CHARLES: Right.

TARAS: And you spend a couple of minutes just doing something that you'd expect to happen in like 20 seconds.

CHARLES: Right. There is a feeling that every aspect of the system is coupled to every other aspect of the system in kind of varying ways of interconnectedness. And that's not what you want for a very, very complex system. You want it to be extremely modular.

So, I think the command line tool is probably a separate discussion. But to close the book on Appium and the unit testing: I think the other problem was that you have to run these things on simulators. On macOS, that's not a problem, because the simulators ship with Xcode, so you don't require an external service. Whereas in CI on Android, it's very unlikely that you're going to have Android emulators on hand, because they require a separate virtual machine. Android emulation is actually quite heavy. If you're running through Android Studio or something locally, you essentially need VirtualBox or some equivalent to run your Android emulator, because you actually need that simulated hardware. If I understand correctly, it had not really been accounted for that you might want to run simulators on a different machine than the one you were developing on, or the one you were building on.

TARAS: Yeah, a lot of the tooling seems to be designed around the idea that you're going to be building and running everything on your own machine, where you can spin up a virtual machine easily. But in CircleCI, for example, they don't support running a virtual machine inside of a Docker container, because for that you need a virtualization feature that is not supported on many CI platforms. You have to run a separate server if you want to have Appium running, for example. You need a separate server running on Azure or Google Cloud somewhere that is able to run virtualized servers: a host machine with guest systems running the actual Android emulators of different versions. When I started doing research on this, I found companies that are doing this really well, but it's not unusual to be using hardware from Amazon that costs thousands and thousands of dollars per month.

I think for anyone who's getting into mobile development, I would say the hidden gem of the Android world is Genymotion. Those who do a lot of Android development know about it. Genymotion has both a desktop product and a SaaS offering that they're in the process of releasing. When you run it locally on your machine, it allows you to create a virtual machine running in VirtualBox, an optimized environment for running Android. And when you do that, it's really fast. It's very smooth. It makes running Android devices locally as easy as it is to run iOS devices on macOS.

CHARLES: I remember starting out and trying to actually just get any Android emulator running on my Mac and I couldn't even do it.

JEFFREY: It was such a huge time saver.

CHARLES: Yeah.

TARAS: And having this SaaS offering is really great, because you can basically create your virtual machines on demand, install the app into a virtual machine from your CI server, and then run your tests there. That's the key that I found to being able to run automated tests against emulated Android devices. Genymotion is really great.

CHARLES: Yeah. Again, that's the kind of thing that you need when you're in CI. And one of our discoveries is that, when we started working on this, we just didn't see a culture of running these tools in the cloud, or of accounting for the fact that you might not have all of the tools running on the same machine.

From the beginning, I remember the diagnostics command didn't work when we were running it on a CI server. There's a diagnostics command that you run to see: do you have this, do you have that. I wanted it to give meaningful results when I was debugging my CI server, because when we were initially getting set up, something wasn't building right, there was some dependency missing, and I just wanted a diagnosis. But it was trying to install all those tools for me. And I was like, "No, no, no. I don't want you to do anything. I don't want you to install them. I'm going to be doing all of that as part of the setup of the CI environment. It's going to be installed, it's going to be cached. I don't want you to massage my system into a suitable state for NativeScript development. I just want you to diagnose what is wrong. Tell me, am I missing this compiler? Maybe I've got the wrong version of the Android SDK. Tell me what's going on." And I couldn't get that to work. That was very frustrating. I think it was because the bulk of the assumptions was that it would be individual developers working on their own laptops or desktop computers to build, test, and distribute these applications. I think that's becoming less and less the case. I mean, at this point, that's not a way that we're willing to operate.

TARAS: And we eventually figured out how to do all this stuff, right?

CHARLES: Yeah, we have.

JEFFREY: We have.

TARAS: We have the entire process working, but it took a lot longer than one would imagine. It took all the time that we had allocated to it, which we thought was a very generous amount of time; it took almost a month to get everything set up. The great part is that we do now have everything working, and there's a repo where people can take a look if they want to get all of this stuff working on CI. But it took quite a bit of work to figure out.

CHARLES: Yeah. Actually, it's probably worth a screencast to show some of those capabilities, because it is really exciting when you think about the pipeline in its entirety. But we never were able to get functional testing working.

TARAS: And the challenge here is that we were essentially looking at NativeScript through this question: what do we need to be able to have hundreds of developers potentially working on this platform? There are a lot of considerations, and the tooling is just one of them. I think the other big one is the capabilities of the view layer, because that's where most developers spend most of their time. We got stuck a little bit on that, because I spent a lot of time working in the view layer. The thing that was really great, and that I really liked, is that you have a collection of components that you can use in Angular. You render a component, and that component is going to look correct on iOS and on Android. From a single code base, it's building appropriate components for iOS and Android. What's really confusing, though, is that the Android and iOS components don't have parity, in a sense. They don't behave exactly the same. And there is also a kind of reputation in the NativeScript documentation that Android tends to be much slower than iOS. So when you start to run into performance problems, and you run into those pretty fast because it's not really clear what is necessary to avoid deoptimizing NativeScript, it's not really clear where the slowness is coming from. Right now, the profiling that they have for the UI is very limited. They're in the process of migrating over to chrome.debugger, but profiling in chrome.debugger is not implemented. You can do performance optimization using the Android tooling, but that's only going to tell you the performance of the Java side; it's not going to tell you the performance of the code that's running inside of JavaScript. So it's not really clear what is causing the problem. If you don't know what's happening, you kind of write it off as, "I think it's just Android being slow." In reality, when you dig deeper, you realize there are things about the Android implementations of the components, or the views, that are different from iOS, and it's those differences that add up to weird performance problems. That's probably the thing that gave me the most hesitation, because it made me think: if we want to give this to a team of 50 people, we need to have our own view layer, because we cannot rely on these components. An example of this: they have a list picker. On iOS, it doesn't emit change events while you scroll; while the list is moving, change events are not emitted. But on Android, every time a different item shows up on screen, it changes the selection. So now you've got this view that, on Android, is emitting change events while scrolling. I made an issue about this, and the response was that there's a workaround you can use. But that's hard; a workaround is not a solution.

CHARLES: Right. When you have a leaky abstraction like that.

TARAS: Part of the problem is that people use the leaky abstraction. We actually got on a call with the NativeScript core team, and they're excellent: really very helpful, understanding what the problems are, and providing paths to making things better. But what's happened as a result of having this leaky abstraction is that people are relying on the leak. And so now, the leak is the API, and they can't change it.

JEFFREY: Right.

CHARLES: And the answer that you really need there is, "We can't change that without breaking stuff. Here's our migration path for deprecating this and introducing a new API." That gets more into the process stuff, and it seems like the process for making changes to the underlying API could use a little love, in the sense that it's opaque as to where the platform is going. There's not a concept of an [RFC], there's no roadmap about what to expect. What is this API going to look like in the future? Is this stable? If I were writing the software and someone said, "Hey, there's this leaky abstraction," I think my reaction would be, "We've got to fix this." But we also have to acknowledge that there are users who may depend on it, and so we have to be very deliberate about it.

TARAS: The challenge with this, too, is that NativeScript has kind of outgrown its original design, because originally it wasn't meant to be hosting Angular and Vue. Vue didn't exist. Angular didn't exist when NativeScript started. So I think what's happened is that these views that were available, and I wouldn't call them components because they don't act like components, are exposed in Angular like components, but the APIs feel like view objects. And these view objects that you consume, that you render in Angular, for example, or in Vue.js, have the same APIs that NativeScript had before Angular and Vue.js.

CHARLES: Right. You know what it feels like? There's an MVC framework, like a circa 2010, 2012 MVC framework, that has now become the foundational layer for view frameworks that have had significant advances in the way we conceive of model and view: how data is generated and passed around, how views are rendered off of the data, and how reactivity has changed. But the underlying platform has not evolved. And in fact, these were originally user-facing APIs, and now they've become foundational for other user-facing APIs, but they haven't had the iteration and evolution to make them robust.

TARAS: And flexible enough. As a result, you have a situation where it's really easy to deoptimize the views, simply because the requirements for keeping performance are not obvious. One of the things that I found has to do with lists, and lists are like 50% of most applications. Before I go into the problem with lists: the nice thing about lists in NativeScript is that, because they interact directly with native APIs, you have really fast lists when they're optimized. They're really easy to work with. But they easily get deoptimized, because to keep the list fast, you have to use these APIs in NativeScript called ObservableArray and Observable. And these are not to be confused with...

CHARLES: [Inaudible] observables?

TARAS: Yeah.

CHARLES: It's not to be confused, but in fact, every conversation involves a lot of confusion. Because we were using observables, right?

TARAS: And we were actually using observables. So, we're using observable [inaudible] and we're using these ObservableArrays and Observable objects. Essentially, what NativeScript expects for a list to be fast is to receive an ObservableArray, which is an object that wraps an array, because it needs to know when the order or length of the data array changes. So what happens when you pass a NativeScript ObservableArray into a list? It will listen for change events on that object. But if you want to change the value of an item, like if you want to change a property on an object and have your view remain optimized, the ObservableArray has to contain Observable objects, which allow the NativeScript ListView to listen for property changes on each object. So you pass this ObservableArray, which contains Observables that the ListView listens for changes on, so that it knows how to correctly apply each change to the list. If you haven't figured out this recipe for ListView performance success, you're going to have a really hard time, because it's really not clear at what point and how this thing got deoptimized, why it has just gotten slower.
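(A minimal sketch of the recipe Taras is describing, using the ObservableArray and Observable classes from the tns-core-modules packages of that era; the item shape is just illustrative.)

```typescript
import { Observable, fromObject } from "tns-core-modules/data/observable";
import { ObservableArray } from "tns-core-modules/data/observable-array";

// Wrap each item in an Observable so the ListView can watch property changes.
const items = new ObservableArray<Observable>(
  fromObject({ title: "First" }),
  fromObject({ title: "Second" })
);

// Order/length changes are observed through the ObservableArray itself...
items.push(fromObject({ title: "Third" }));

// ...while per-item property changes stay fast, because the ListView
// subscribes to each Observable rather than re-rendering the whole list.
items.getItem(0).set("title", "Updated");
```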

CHARLES: There's a lot of iteration that needs to happen there, and it's not clear what the plan is, what the priority is, or even how you would begin to go about this. Because the internal workings seem basically to be controlled by one company. I don't recall seeing any contributions from anybody except for Progress; Progress Incorporated is the company that's kind of the controlling interest, the original company that developed it.

TARAS: The way this showed itself very practically is in making changes. They have a ListView, which comes with the public NativeScript package, and there's RadListView, which is the component that has a lot of stuff on it: pull-to-refresh, lazy loading of data, filtering, and so on. So most people use RadListView. You can install RadListView, so there's no limitation on installing it when you build, and your node_modules has the source code for it. But the original, untranspiled TypeScript code is not publicly available. They have a process for this, and it's very nice that everybody's very kind and very accommodating: you send an email, they'll give you access to the repo, and then you have the ability to contribute. The NativeScript core team is very helpful and they're open to contributions. There are changes that need to be made to the Angular implementation to make it faster without putting the observable requirements on users, and they can give you a path to make that happen, but it's not open source in the traditional sense that we would expect. There are all kinds of hoops that you need to jump through, and the source code is very difficult to read because it's been transpiled from TypeScript to JavaScript.

CHARLES: And there was a certain level of opacity in terms of process. For example, I filed an issue which was actually a blocker for us: it was causing our Android build not to work. I didn't hear anything about it. And then, all of a sudden, like four days later, a fix came through referencing another repository on which this thing depended. There was not a lot of context with it. It was obviously referencing a bunch of context that probably happened between two people in a face-to-face conversation. But I couldn't really tell what was going on or why it was an issue, because there was no comment, just a pull request referencing this issue. I never got a notification. I was actually about to post, "Hey, is there any progress on this? Is there any way that I can help? What can I do to get this looked at?" And then I saw that there was another pull request that had referenced my issue. It was merged, and I looked at it, but there was no indication of when this would be available in a public release, or how I might work around it. So, the loop that never got closed was: you've got a user who files an issue, you actually use this as the impetus to fix the issue and make a release, but then that whole process is completely invisible to the user.

TARAS: You know what? It sounds like you wanted for it to work [inaudible] but you got a polling mechanism.

CHARLES: Yeah, exactly. I wanted someone to say, "Hey, here's what's going on, and we're looking right into it." Or, "We're going to look into it in two months," or, "We can't address this now, but here's a workaround," or, "I don't have a workaround." That's just the expectation that you have when you're playing with open source. In many ways, it does not feel like an open source project.

TARAS: Let's just do a quick note about Sass. Jeffrey, what did you find about the styling of NativeScript views?

JEFFREY: All the components that come shipped as part of the NativeScript core set have styles attached to them; they have CSS attached to them. And as part of the standard NativeScript workflow, with the build tooling, you have Sass available, which is very nice. But actually, on a recent project, we're not using Sass at all. We're simply using PostCSS, and we were able to kick out some CSS variables that turned out to be really nice for theming. As a future-friendly experiment, we were trying to have a light theme and a dark theme, since that is now a core part of Android and very likely will be part of iOS this year, where there's a light theme and a dark theme for everything. The simplest way to do that with standard web tools is with CSS variables. You get the flexibility, you get the theming: "Hey, my primary color is this color in one scenario and this color in another." We just didn't have the flexibility to do that with Sass by itself. So that's a limitation of the tooling right now, and I hope that in the future we'll have some more sophisticated CSS tools. NativeScript's move toward Webpack, and having that as a primary part of the workflow, really opens up that possibility, and I hope somebody runs with it in the near future.

TARAS: Yes, let's bring it all back together.

CHARLES: Can we pause for a moment? Because I actually do think it's important that we at least touch on the command line. I can give a little bit of a rant here, but I think it's something really important that we have to talk about specifically.

The other thing that I wanted to touch on very briefly, as we draw to a close, is the command line tooling in particular. I think this is probably one of the weakest points of the platform. And again, I don't want to disparage anybody working on NativeScript; it's an extraordinarily complex problem. This is a command line tool that needs to manage launching simulators, installing things into simulators, and pushing code to those simulators. It needs to handle hot updates to things that are running on devices and simulators. It needs to be building JavaScript assets, either with Babel or with TypeScript. It needs to be building those Sass assets that you were just talking about, and image assets. And it needs to do all of this for two platforms: everything I just described needs to be managed on iOS, and on Android as well. It needs to work on a single developer's desktop, and it also needs to work with all of those components distributed out in the cloud. So, we're talking about an extraordinarily complex piece of software. And unfortunately, even though it can do all of those tasks, the NativeScript CLI does not inspire confidence.

But Taras, you also mentioned that often, if you stop the process midway, it will leave a thousand things open, just spewing output to your console. The console output, unfortunately, has a bad signal-to-noise ratio, because it puts out all of the content from Webpack, every little thing that it's doing with any of the devices, all logged to the console. So, it doesn't give you a sense of control. What you're really looking for in a command line tool is, "Hey, I've got this incredible sprawl of complexity and I want to feel like I'm on top of it." And by leaving these things open, having so much console output, and having the console output not be formatted well, there are all kinds of colors. For every single tool that you're using, whether it's Webpack or Karma or just the console output that you emit inside of your NativeScript application, the brand of those tools comes through. Webpack is a great example: its console output feels very Webpack. So when you've got Webpack content randomly interleaved with your Mocha content and your Karma content, all of these competing brands, it doesn't feel like a cohesive developer experience. It got to the point where I felt like I could not live with that command line tool without rewriting it myself. If we want to use this platform long term, we'd have to either build an alternative command line tool or really, really help the NativeScript team completely and totally rewrite the command line experience.

TARAS: I would love to work on fixing a lot of these parts of NativeScript if there was a way to actually do it, like if they wanted to pay us to help them bring some of these things to a state that would match, for example, what's available in Ember or in React CLI. I would love to do that.

CHARLES: React Native, yeah.

TARAS: Yeah, let's do that work. But who knows what's in store? There's a lot of awesome in the platform. The idea behind the NativeScript architecture is fascinating, really, really powerful, and there are really wonderful people trying to tackle really challenging problems. But it's all glued together in a way that doesn't instill confidence. It makes everything feel wobbly. It makes it feel like you never know: is there a problem? Where is the problem coming from? What is causing this?

CHARLES: Yeah. And if I fix this thing, is it going to break something else?

TARAS: Yeah, we've actually seen that happen with one of the fixes that was introduced for the bug you were referring to earlier.

CHARLES: Yeah. So that was our three months experience working with NativeScript.

TARAS: We are considering other things now, very seriously looking at Flutter as an alternative for the same client, same scenario. Flutter is looking pretty exciting. There's a lot of things that are really good there. So in three months, we'll do another report and talk about Flutter and what we found. So, that's it.

CHARLES: And I will say I'm actually not, like, super excited about Dart, but I'm in a Dart spot.

JEFFREY: That's a whole other conversation for yet another episode.

CHARLES: I think that, to continue the conversation, maybe next time we have an internal podcast, I would like to really talk about platform evaluation, because you really need three months, at least, to get a good idea of this: is this going to work for the next five years? And most of the time, we give it a week or two, or someone comes on who's really excited about one particular technology and you go off on that tangent. I think there's an interesting meta discussion about how you select technologies. We don't have time for that now, obviously, but it's definitely something that I want to have in the future.

TARAS: Sounds good. I think that will be a good conversation for sure.

CHARLES: I guess that is kind of the executive summary on NativeScript from our perspective. With us being three months in, I think, like you said, there's a lot there.

Thank you for listening. If you or someone you know has something to say about building user interfaces that simply must be heard, please get in touch with us. We can be found on Twitter at @TheFrontside or over just plain old email at contact@frontside.io. Thanks and see you next time.

May 24 2019

45mins

Play

Deployment with Luke Melia, Aaron Chambers, and Mattia Gheda

Podcast cover
Read more

Luke Melia, Aaron Chambers, and Mattia Gheda join Taras and Charles to discuss all things deployment!

Luke Melia: Luke has been working with Ember since it was under early development as Sproutcore 2.0. Ember.js powers a SaaS company he co-founded, Yapp, and they funded their business for a couple of years doing Ember consulting under the Yapp Labs moniker. They’re full-time on product now, and his engineering team at Yapp (currently 3 people) maintains around 6 Ember apps. Luke helps to maintain a bunch of popular addons, including ember-cli-deploy, ember-modal-dialog, ember-wormhole, ember-tether, and more. He started the Ember NYC meetup in 2012 and continues to co-organize it today.

Aaron Chambers: Aaron is the co-author of EmberCLI Deploy and is currently an Engineer at Phorest Salon Software, helping them move their desktop product to the web platform. He's been using Ember for 5 years and maintains a number of plugins in the EmberCLI Deploy ecosystem. Aaron loves trying to work out how we can ship JS apps faster, more reliably, and with more confidence.

Mattia Gheda: Mattia is a Software Engineer, Ember hacker, Ruby lover and Elixir aficionado. Currently he works as Director of Development for Precision Nutrition where Ember, Ruby and Elixir power several applications. He loves meetups, organizes Ember.js Toronto and co-organizes Elixir Toronto.


Please join us in these conversations! If you or someone you know would be a perfect guest, please get in touch with us at contact@frontside.io. Our goal is to get people thinking on the platform level which includes tooling, internalization, state management, routing, upgrade, and the data layer.

This show was produced by Mandy Moore, aka @therubyrep of DevReps, LLC.

Transcript:

CHARLES: Hello and welcome to The Frontside Podcast, a place where we talk about user interfaces and everything that you need to know to build them right. My name is Charles Lowell, a developer here at the Frontside. With me also co-hosting today is Taras Mankovski. Hey, Taras.

TARAS: Hello, everyone.

CHARLES: Today, we have three special guests that we're going to be talking to. We have Aaron Chambers, Luke Melia, and Mattia Gheda, who originally met collaborating on a fantastic open source library that we at the Frontside have used many, many times, one that has saved us countless hours and saved our clients hundreds of thousands of dollars, if not more: ember-cli-deploy. We're going to be talking not about that library in particular but about the operations that happen around UI. So, welcome, you all.

LUKE: Thanks, it's great to be here.

CHARLES: Like I said, I actually am really excited to have you all on, because when we talk about the platform that you develop your UI on, something that often gets short shrift in communities outside of the Ember community is how I actually deliver that application into users' hands. Because obviously, we don't want it to be working just on our laptops. We want it delivered to our users, and there are myriad ways that that can happen, and it's only gotten more complex since the last time we talked, which must have been three or four years ago. So I kind of just have to ask: I think that what you all were talking about then as cutting edge is still cutting edge now, but there must have been some pretty incredible developments in the last three or four years. What have been the new insights that you all have?

LUKE: The project came together as a combination of a few different open source efforts: something that Aaron was working on, something that our collaborator Mike was working on. We decided to come together under one umbrella and join forces. And what we realized pretty soon as we got started with ember-cli-deploy is that deployment needs vary a ton between companies. We were coming from this background in the Ember community where we had this attitude that nobody is a special snowflake; we all kind of have the same needs for 90% of what we do. And that's true. I really believe in a lot of that Ember [ethos]. But when it comes to deployment, you know what? A lot of companies are special snowflakes, or at least it's much more fragmented than our needs are on the JavaScript side.

And so, what we decided to do was to try to evolve ember-cli-deploy into a platform, essentially an ecosystem that could let people mix and match plugins to do what they need in their organization without locking them into an opinion that might simply be a non-starter in their org.

CHARLES: It's hard enough to have opinions just around the way that your JavaScript code is structured, but when it comes to rolling out your app, it really does encompass the entire scope of your application. It has to take account of your server. It has to take account of your user base. It has to take account of all the different processes that might be running, distributed around the Internet, maybe somewhere on AWS, maybe somewhere on legacy servers. It has to consider all of that in its entirety. So, having opinions that span that scope is particularly difficult.

LUKE: Yes. And you mentioned a bunch of technical details, which are absolutely forcing factors for a lot of orgs in how they do their deployments, but what we found in talking to people is that there are also personal and political aspects to deployment in many cases. Engineers kind of own the JavaScript code that's running within their app, more or less. But when it comes to pushing the app into the world, at a lot of companies that means they're interacting with sysadmins, ops folks, people who have very strong opinions about what is an allowable and supportable way to get those deployments done and to have that stuff exist in production. And so, we needed to come up with an architecture that was going to support all these varied use cases. So we came up with this system of essentially a deployment pipeline and plugins that can work at various stages of that pipeline. And that ecosystem has now grown quite a bit. I don't know if Aaron and Mattia would agree, but I think it's probably the best decision that we made in this project, because that ecosystem has grown and evolved without us needing to do a ton of work in maintenance. And it's been really great. Mattia, I think you pulled some of the current numbers there.

MATTIA: Yeah. I pulled some numbers just yesterday and we currently have 150 different plugins that attach themselves to different parts of the pipeline. Some of them are about how to build the assets, some of them are about how to compress them, some of them are about shipping. They allow people to ship in different ways, like we are seeing [inaudible] with simply Amazon APIs or Azure APIs. And some of them are even about how to visualize data about your deployments, or how to give feedback to the user about what was deployed and present that information.

And then, this is a bit more detail and probably specific to us, but we also added this idea of plugin packs. In order to help people define their deployment story, we created plugin packs, which are simply groups of plugins. As a user, if you want to implement what we call a deploy strategy, you can simply install a plugin pack, and that will give you all the plugins that allow you to deploy in a specific way. It's kind of an optimization that we added just to make it easier for people to share deployment strategies, share ways of deploying applications.

CHARLES: Right. It's almost like an application within the framework.

MATTIA: Yeah, exactly. But to stay on the community side, I think the interesting part about what Luke was saying, and it was a great success for us, is that all we maintain as the core team for this project is the core infrastructure: the pipeline and a handful of plugins. Everything else is community-based. And often, even in my day-to-day work, I end up using plugins that I didn't write and that I don't even maintain. But because the underpinnings, designed especially by Luke and me, are flexible enough, it has been very, very stable and very, very reliable for many years. So I would say, in the spirit of what Ember is, creating a shared ecosystem where people can add what they want and extend what was provided has been the single biggest win of this project.

CHARLES: One of the things that I'm curious about is, we've talked about how you're allowing for and kind of embracing the fragmentation that happens in the topology of people's infrastructure. What do you see as the common threads that bind every single good deployment strategy together?

MATTIA: My biggest thing here, and we actually have some shared notes about this, is the idea that building and deploying an application is divided, for me, into three parts. There's the building part, where you have to decide how to compile your JavaScript application and produce some sort of artifact. There's the shipping part, which is about deciding where you're going to put that artifact. And then there's the serving part, which is how you show it and deliver it to your users. I think these three are the underpinnings of any deploy strategy. What we did with this project is just acknowledge that and give each one of these a place. And so, the entirety of what we do in what Luke defined as a pipeline is simply give you a way to customize how you build, customize how you ship, and then customize how you serve.

So yeah, I think that's the root of it. The question that everybody who wants to deploy a modern JavaScript application has to ask themselves is: how do I want to build it, how do I want to ship it, and how will I serve it to my users? And these things are completely independent of one another, in the sense that you can have one thing build it, another ship it, and another serve it. That's what we end up doing in most of our deployments, I find.
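To make that build/ship/serve split concrete, here is a minimal sketch of a config/deploy.js for ember-cli-deploy, assuming the ember-cli-deploy-build, ember-cli-deploy-s3, and ember-cli-deploy-redis plugins are installed. The bucket, region, and host values are hypothetical, and the exact option keys vary by plugin version, so treat this as illustrative rather than as the canonical setup the guests describe.

// config/deploy.js
module.exports = function (deployTarget) {
  var ENV = {
    // Build: compile the app into a deployable artifact.
    build: {
      environment: deployTarget === 'production' ? 'production' : 'development'
    },
    // Ship: push fingerprinted assets to an S3 bucket fronted by a CDN.
    s3: {
      bucket: 'my-app-assets',  // hypothetical bucket name
      region: 'us-east-1'
    },
    // Ship/serve: store index.html revisions in Redis so a small web
    // server can decide which revision to hand out.
    redis: {
      host: 'redis.internal.example',  // hypothetical host
      port: 6379
    }
  };
  return ENV;
};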

CHARLES: It's good to think about those things as soon as you possibly can and make sure that you have all three of those bases covered really before you start adding a whole ton of features.

TARAS: Sprint 0, right?

LUKE: In Agile, Sprint 0 is what we call the phase right at the beginning: you've got a skeleton of an app, and then you get the deployment infrastructure going and the test infrastructure going, so that there is no task within your actual feature development where you have to do those things. And I think that can be a valuable concept to embrace.

I would just add to Mattia's three points that, for me, one very simple quality of a good deployment is repeatability. You need to be able to reliably and consistently run your deployment process. Sounds simple, but there are plenty of operations that have run way too long on manual deployments. Then you want rollback capabilities, for when you have a deployment that you realize was a mistake right after it gets into production. I'm sure none of us have ever experienced that.

CHARLES: That never happens to me.

LUKE: You want to have a method to roll that code back. That's something that can be remarkably complex to do, so having some guardrails and some support mechanisms for it, like ember-cli-deploy provides, can be really useful. But whatever your approach is, I think that's a necessary quality. And then I think we start to step into the more advanced capabilities that a good deployment architecture can provide when we think about things like personalization, A/B testing, feature flags, these kinds of things. That requires more sophistication, but you cannot build it on a deployment foundation that's not solid.

AARON: For me, one of the things I've been thinking about a lot lately, and it's a bit of a mindset shift, is getting to what Mattia was talking about: separating those different parts of deployment. The traditional mindset around deployment is: I build some stuff, I ship it to the server, and then the users get it. But if we can stop and split our understanding of deployment into two separate phases, one being the building and physical shipping of the files, and the other being actually making them available to people, you open up this whole world of features that you wouldn't normally have. You can physically put stuff in production but not yet have it active, as in users don't see it yet, but you can preview those versions in production against production databases. And then at some point after the fact, you decide, "Okay, I'm now going to route all my users to this new thing." Being able to do that really easily is massively, massively powerful. So the thing I've been thinking about lately is that small mindset shift away from packaging everything up, pushing, and overwriting what's currently there, toward, again like Luke said, immutable deployments, where everything we build and ship sits next to all the other versions and we just decide which one we want users to see at any time. Which leads into A/B testing, feature flags, and things like that. So deployment really is not so much about the physical shipping; that's only one part of it. To me, deployment now is the shipping of stuff, as in physical deployment, and then the releasing or enabling or activating of it for users.

CHARLES: Or routing it. Sounds like what you're describing is an extraordinarily lightweight process.

AARON: It is, yeah.

CHARLES: To actually route traffic to those files.

AARON: It is. It's incredibly lightweight. That's the amazing thing about it. When you think about it, you're building a few JavaScript files and CSS files and images and putting them on a CDN, and then you just need a tiny web server that basically decides which version of the app you want to serve to people. There's not much to it at all, really.
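What Aaron describes can be sketched in a few lines. The following is a hypothetical version of that tiny web server, not the one from any of the guests' production systems: it assumes index.html revisions have already been shipped into Redis under made-up keys (my-app:index:*), uses Express and ioredis, and lets a ?version= query parameter preview any shipped revision while everyone else gets the currently activated one.

const express = require('express');
const Redis = require('ioredis');

const app = express();
const redis = new Redis(); // assumes Redis on localhost:6379

app.get('/', async (req, res) => {
  // Preview any shipped revision via ?version=<rev>; otherwise serve
  // whichever revision was last "activated".
  const revision =
    req.query.version || (await redis.get('my-app:index:current'));
  const html = await redis.get(`my-app:index:${revision}`);
  if (!html) {
    return res.status(404).send('Unknown revision');
  }
  // Only index.html is served here; the JS/CSS assets it references
  // live on the CDN.
  res.type('html').send(html);
});

app.listen(3000);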

CHARLES: I mean, that's absolutely fascinating, the capability that you have when these versions, the same versions or different versions of your application, sit next to each other and you're able to route traffic between them. But it also seems to me like it introduces a little bit of complexity around version matching, because only certain versions of your app are going to be compatible with certain versions of your API. The simplicity of having mutable deployments, so to speak, is that everything is in sync and you don't have to worry about those version mismatches. Is that a problem? This could just be me worrying about nothing, but that's the thing that immediately jumps out at me: are there any strategies to manage that complexity?

LUKE: To me, what you're describing, I kind of think of as a feature, not a bug. And what I mean by that is that it is very simple to have a mental model of, "Oh, I have a version of my JavaScript code that works with this version of my API. And as long as I deploy those changes together, I'm good to go." The reality is that that's impossible. The JavaScript apps that we write today, people are using for anywhere from two seconds at a time to two days at a time. It's not uncommon these days with some of these dashboard apps: people literally live in them for their job eight hours a day, nine hours a day, keep the browser tab open, and come in the next morning and continue. Obviously there are some mechanisms we could use to force them to reload, that kind of thing. But at some point in most apps, you're going to have a slightly older version of your JavaScript app talking to a slightly newer version of your API, for either the span of a minute or perhaps longer, depending on your strategies.

So to me, thinking about that, at least being aware of it as an engineer thinking about how your code is going to get from your laptop into the world, is an important step. We should not paper over that complexity; we should embrace it and say, "Hey, this is part of life." We need to think about it just like we need to think about how your database migrations get into production. That's not something that you can paper over and just have a process take care of for you. It requires thought. In the same token, how different versions of your JavaScript app interact with your API requires thought. An exact parallel: you also have different versions of a native mobile app that go into the app store. How do those interact with different versions of your API? So, I think you're right. There's complexity there. There are ways that we can try to mitigate...

CHARLES: Are we deceiving ourselves if we think that that [inaudible] actually doesn't exist even in the simple case?

MATTIA: Yeah. I think that that's to reiterate what Luke is saying. That's exactly the point. You can pretend it's not there but it is and you have no way to avoid it. Once you ship something to a browser, you have no control over it anymore. And so, you have to assume that somebody is going to be using it.

LUKE: Aaron, I don't know if you can share it, but you recently told us some stories about what you encountered in your work around this, about how long people were using old versions and so on.

AARON: Yeah. It was something that we hadn't put a lot of thought into. At the last place I worked, we had quite a long-lived app, and we were using feature flags through Launch[inaudible], which gives you a list of flags and when they were last requested. There were flags that we had removed from the code, and it was just a matter of waiting until all the users had the most current version of the app and weren't requesting those flags anymore. But this one flag just kept getting requested for months, and we could not work out why. It really opened my eyes to this exact problem: these long-lived apps sit in the browser, and if you have someone who just doesn't reload the browser or restart the machine or anything, your app can live a lot longer than you realize. So we're shipping bug fixes, we're shipping new features, and we're all patting ourselves on the back: we fixed this bug. But have we really? If your users haven't reloaded the app and gotten the latest version, then you haven't actually fixed the bug for some number of people. And until you really think about this and put things in place, it's really hard to tell what versions are out in the world and how many people are still using a buggy version.

CHARLES: Yeah, that's an excellent point. I hadn't even thought about that. I mean, what is the countermeasure?

AARON: We hadn't either, until we came across [crosstalk].

CHARLES: It's nothing quite like getting smacked in the face of the problem to make you aware of it.

AARON: That's right.

CHARLES: So, what's the strategy to deal with that?

AARON: I guess my learning from that would be to think, from very early on, about how we're going to encourage people to reload, to start with, and maybe even have the ability to force a reload, and what that means. That has gotchas as well: you don't want to just reload something when a user is in the middle of writing a big essay or something like that. But it's definitely one of the things you've got to think about from the start. Something that I'd like to implement, and I've kind of thought it through but not really explored it yet, is the ability to see what versions are out there in the world. I've been thinking that this little server that serves the different versions could track what versions are out there and who's using what. It would be great to see a live chart or a dashboard that shows which versions are out in the world, which ones we need to be aware of that are still in use, even which users are on which versions, and maybe move them on if we need to. There's definitely a bunch of things here that aren't immediately obvious, and I don't know how many people actually think about this early on, but it's critical to do so.

MATTIA: Yeah. I was going to share what we do, which is very similar to what Aaron said, just to give the listeners some context. The first thing you can do is basically what Gmail does, which is: every time the web app sends a request, an [inaudible], it sends the version of the app along with the request, and the backend can check whether you are sending the request from the most recently deployed version. If you're not, it sends back a header, and in the same way that Gmail does, the app displays a popup that says, "Hey, we have a new version, if you want to reload." And on top of that, [inaudible] is that we have a dead man's switch. If we accidentally deploy a broken version, as part of this process the frontend application checks the headers, and if a special header is sent, it force-reloads. That's not nice for the user, but it's better than the alternative, and sometimes it's critical to do so.
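Here is a minimal sketch of those two checks on the client side. The header names (X-App-Version, X-App-Outdated, X-App-Force-Reload) and the showReloadBanner helper are invented for illustration; the real names in Mattia's system weren't given in the episode.

// Baked in at build time, e.g. from the git tag.
const APP_VERSION = '1.4.2';

async function apiFetch(url, options = {}) {
  const response = await fetch(url, {
    ...options,
    headers: { ...options.headers, 'X-App-Version': APP_VERSION },
  });

  if (response.headers.get('X-App-Force-Reload') === 'true') {
    // Dead man's switch: a broken build got out, reload immediately.
    window.location.reload();
  } else if (response.headers.get('X-App-Outdated') === 'true') {
    // Gmail-style nudge: let the user reload when it suits them.
    showReloadBanner(); // assumed to exist in the app's UI layer
  }
  return response;
}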

CHARLES: Right. I remember there was that case where Gmail released something. They were doing something with broken service workers and the app got completely and totally borked. I remember my Twitter blew up, I don't know, about a year ago, I think. And one of the problems was, I don't think they had that capability.

MATTIA: I mean, you learn this the hard way, sadly. But I think these two things are definitely crucial. And the third one, [inaudible]. Luke, I thought you had the ability that Aaron was talking about, tracking which versions are out in the world. I think that's more useful for stats, so that you know how often your users update. Then you can make design decisions based on that, and based on how much of the past you want to support.

LUKE: Yeah. We haven't implemented that, but it reminds me: whenever there's a new iOS version, since we do a bunch of mobile work in the app, we're always looking at the adoption curve that gets published. A few different analytics services publish it and say, "OK. How fast is iOS 12 adoption? How fast are people leaving behind the old versions?" And that helps to inform how much time you spend doing bug fixes on the old version versus just telling people, "Hey, this is fixed in the new OS. Go get it." But if you were able to see that for your own JavaScript apps, I think that would be pretty hot.

CHARLES: Yeah. Crazy thought here, but it almost makes me wonder if there's something to learn from the Erlang community, because this is a similar problem they solved 20 years ago, where you have these very, very long-running processes. There are some telephone servers in Sweden that have been running for over a decade without the process ever coming down, and yet they're even upgrading the version of Erlang that the VM is running. They even have the capability to upgrade a function, like a recursive function, as it's running. I don't know what the specific lessons are, but I wonder if that's an area for study, because if there's any community that has locked in on hot upgrade, I feel like it's that one.

LUKE: That's a terrific analogy. I bet we could learn a ton. Just hearing that makes me think about how coarse our mental model is around updates to our JavaScript apps. Talking with Aaron, we're talking about this idea of immutable apps where you have different versions side by side. But the idea of being able to hot-upgrade a version with running code in a browser, now that's an ambitious idea. That's something where I'd say, "Wow! That kind of thing would be a game changer."

CHARLES: Yeah.

AARON: Makes me feel like we got a whole bunch of work to do.

CHARLES: We welcome it. I'm always happy to give people plenty more work to do. But no, think about how they manage even doing migrations on the memory of a running process. I don't know if it's something that's going to be achievable, but it sounds like that's the direction we're heading.

LUKE: It does, as these apps get more complex and continue to live longer. The workarounds that Mattia mentioned, showing a message and having a dead man's switch, are all certainly useful, and today I would even call them best practice. But they're not what you would do if you could magically design any system. In the magical approach, the app would just be upgraded seamlessly as the user was using it, and they would be none the wiser. The bugs would be fixed, end of story, no interruption to their workflow. At least for me personally, I don't really think about that as a possibility, but I love the Erlang story and analogy, and to say maybe it is a possibility, what would it take? It would obviously take collaboration across your JavaScript framework, perhaps even JavaScript language features and browser runtime features, as well as your backend and deployment mechanism. But I think it's a great avenue for some creative thinking.

CHARLES: I'm curious, because when we're talking about this, I'm imagining the perfect evergreen app, but there's also maybe a tension that arises, because one of the core principles of good UI is that you don't yank the rug out from underneath the user. We've all been there when the application does something of its own agency, and it feels bad. It feels like, "Nope, this is my workspace. I need to be in control of it. The only way that something should move from one place to another without me being involved is if it's part of some repeatable process that I kicked off." But obviously, things like changing the color of a button or fixing a layout bug are things that I'd want to happen automatically; I'm not going to worry about those. So there's a gradation of changes, and at what point do you say, "You know what? Upgrading needs to be something that the user explicitly requires," versus, "This is something that we're just going to push. We're going to make that decision for them."

LUKE: Yeah, it's a great question. One of the things I'm curious what you all think about is the mental model our users have of working in a browser app. Do you think there is a mental model of, "Oh, when I refresh, I might get a new version"? Do people even think about that? Or is it like your example of a button color changing, kind of a minor thing? I don't even know if I could [inaudible]. We've all been, I think, in situations where you do a minor redesign and all of a sudden all hell breaks loose and users are in revolt. Take the Slack icon. So, I think it's a fascinating question.

CHARLES: I don't know what the answer is. I don't have any data other than observing people around me who use web applications and don't understand how they actually work under the covers. I don't think the expectation is that this code, this application, is living and changing underneath their feet. I think the general perception is the analogy to the desktop application, where you've got the bundled binary and that's the one you're running. That's the perception.

AARON: I'd say the difference there is that, with all these new ways of deploying, we're shipping small things faster, multiple times a day or even an hour. So it's not the sort of thing where you really want to tell the user every time: there has been an update, you need to upgrade. And that's the difference from the desktop mentality. If that's the mentality they have, it's quite a bit of a shift, I guess.

MATTIA: It makes me think that if you look at the general public, there's probably one tool that everybody can relate to, which is Facebook. So if there's a way to ask what people generally expect: there's the business user, which I think we're most familiar with, but for the general public, what they're most familiar with is probably what happens in Facebook. I barely use Facebook, I haven't used it for a couple of years, but I wonder how much of what people experience in Facebook actually shapes their expectations of how applications should behave.

LUKE: I think that's a really good question. And I do think your underlying point, that you have to know your own users, is an important one. Obviously, some folks are going to be more technical than others, and some audiences will be more technical than others. But I would even question, Charles, your suggestion that people think of it like a binary that stays the same until you refresh. I think people have an idea that web apps improve over time, or sometimes they get bugs, but hopefully they improve and change over time, and that there's a tradeoff there: sometimes there's something new to learn, but at the same time, you get new features. But I don't know that people necessarily associate that with "it happens when I hit reload" or "it only happens when I open a new browser." I don't know that it's that clear in people's heads.

CHARLES: Right. I can see that. But the question is if the evolution is too stark, I think people tend to get annoyed. If they're in the middle of a workflow or in the middle of a use case and something changes, then it gives it a feeling of instability and non-determinism which I think can be unsettling.

LUKE: Definitely. We all value, as engineers, we value getting into that flow state so much of like, "Oh man, I'm being productive. I don't have any distractions." And you kind of owe that to your users also to be able to let them get into that state with your app and not be throwing up, "Hey, there's new stuff. Reload." "I'm in the middle of something. Sorry."

CHARLES: Yeah. I definitely do the same thing. Sometimes I let iOS bug me to upgrade for a month until I finally start to feel guilty about security and actually do the upgrade.

LUKE: Right.

CHARLES: Although once they started doing it at night, it actually made it a lot better.

LUKE: That's an interesting idea, too. I think there's a natural tension between the lower integration risk that we have as engineering teams when shipping very frequently... Aaron mentioned shipping a dozen times a day. We certainly have been there and done that as well; I would say on average, we ship a few times a day. And the reason that we do that is because we know the faster we get code into production, the faster we can trust it. It passes all your tests, it passes your [inaudible], but if you're being honest, you don't really know until there are thousands of people using it in production. And yet, this conversation makes me think that there is a tension between how frequently you do that and your users' comfort level and expectations.

CHARLES: And maybe there's a thing where you can analyze, on a per-user basis, how often they're active in the application and try to push updates at times that are customized to them.

LUKE: Like when a user has been idle for 30 minutes or something like that.

CHARLES: Yep. Or even like track trends over months and see when they're most likely to be idle and schedule it for them.

LUKE: Good point.

CHARLES: Something like that.

TARAS: I have an idea. We should introduce screen savers into web apps. And so when the user stops using the app, just turn on screen saver and do the upgrade.

LUKE: I can see the VC pitch. It's After Dark, but for the web.

CHARLES: ember install flying-toasters.
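Joking aside, the idle-upgrade idea that Charles and Luke are batting around sketches out simply. In the hypothetical snippet below, the listened-to events and the 30-minute threshold are arbitrary choices, and a real app would also want to guard against reloading over unsaved work.

// Reload to pick up the newest deploy once the user has been idle
// for 30 minutes straight.
const IDLE_LIMIT_MS = 30 * 60 * 1000;
let idleTimer;

function resetIdleTimer() {
  clearTimeout(idleTimer);
  idleTimer = setTimeout(() => window.location.reload(), IDLE_LIMIT_MS);
}

// Any interaction counts as activity and pushes the reload back.
['mousemove', 'keydown', 'click', 'scroll'].forEach((eventName) =>
  document.addEventListener(eventName, resetIdleTimer)
);
resetIdleTimer();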

AARON: It does open up the idea as well of automated checking that things are okay well after the fact. It's all right to activate something and sit there, maybe even for an hour, making sure no bug reports are coming in. But if no one has actually received your app yet, then of course none are going to come in, and it's very easy to move on to the next thing and forget about that. It's not something I've ever put a lot of time into, and I haven't worked anywhere that had really great automated checking that everything is still okay. It's an interesting and, I guess, important thing to start thinking about as well.

CHARLES: Like actually bundling in diagnostics in with your application to get really fine grained information about kind of status and availability inside the actual app.

AARON: I'm not exactly sure, really. I guess I wasn't necessarily thinking about inside the app, and I don't know what it looks like exactly. But there is that element of shipping fast and getting stuff out there, and then: are we really making sure it all works later on, when everyone's actually using it?

LUKE: Yeah. By the way, when we were talking earlier about the essential qualities of deploying an app, this reminds me of one thing that we didn't mention but that is very simple: your app should have a version. It should be unique and traceable back to the git commit that was the origin of that code. It's a very simple idea, but if you're going to be analyzing errors in production when you have multiple different versions of your JavaScript app running, you're going to need to know which version caused an error, and then how to trace it back and make sure that the code you're trying to debug is actually the code that was running when the error happened.
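As a sketch of what that traceability can look like on the client, the snippet below tags error reports with the running revision. The /errors endpoint and the window.__APP_REVISION__ global (stamped in at build time, for example with the git tag or commit SHA) are assumptions for illustration, not part of ember-cli-deploy itself.

window.addEventListener('error', (event) => {
  // sendBeacon survives page unloads and doesn't block the UI.
  navigator.sendBeacon('/errors', JSON.stringify({
    message: event.message,
    source: event.filename,
    line: event.lineno,
    // The piece Luke is describing: which deployed version was running.
    revision: window.__APP_REVISION__,
  }));
});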

CHARLES: Do you just use [inaudible], or do you assign a build number, or use SemVer? What's a good strategy?

LUKE: In our case, we use git tags. The CI deployment process for our Ember apps basically looks like this: we work on a PR and merge it to master. If master builds, it gets deployed into our QA environment automatically, using ember-cli-deploy from our CI server. Then, once we're happy with how things are on QA, we do a git tag. Actually, I use an Ember addon called ember release that does the tagging for me, and I'll tag it as either patch, minor, or major, roughly [inaudible], although it doesn't matter that much in the case of apps. When there is a new tag that builds green on CI, it gets deployed automatically into production by ember-cli-deploy. So, that's the basic flow. That tagging, just to be clear, the SemVer tag, is just going to be number.number.number. You can get more sophisticated than that, and I think both Aaron and Mattia have a system where automatic deployments happen even at the PR stage. Maybe one of you wants to mention that.

AARON: We're slightly different. Every time we push to a pull request, that gets deployed to production. We're able to preview pull requests in production before we even merge them into master, which we find super useful for sending out links to stakeholders and to people who have raised bugs, just to get them to verify things are fixed. Then, at the point that it's all good, we merge the pull request into master, which automatically does another deploy, and that's the one we ultimately activate. We activate it manually after the fact; we just do a little bit of sanity checking first, though we could automatically activate on merge to master as well. But yeah, being able to preview a pull request in production is super powerful for us.

CHARLES: That is definitely a nice capability. There are certain workflows or patterns or tools where you remember life before them and after them, and it's very hard to go back to life before. I would definitely say the whole concept of preview applications is one of those.

AARON: Absolutely. It's a daunting concept if you're not there, previewing something that's essentially a work in progress and production. And there are some things you want to be careful with, obviously. But for the most part, it's a super valuable thing. As you say, it's a world where once you're there, it's very hard to step back and not be there again.

CHARLES: So I had a question about, we talked about I think it was 162 plugins around the ember-cli-deploy community. What for you all has been the most surprising and delightful plugin to arrive that you never imagined?

MATTIA: That is a good one. I'm pulling up the list. What I can tell you is that, for me, it's not about a specific plugin. The surprising part was the sheer number of different strategies that people use for the shipping part. The build part, at least, I found is similar for most people; most people want to do the things that you're supposed to do. You build your application, then you minify it in some way, and there's a bunch of options there, from gzip to more recent technologies. But the ways people deliver it to servers, and the differences in the solutions, that has been the biggest thing for me. There are people that ship [inaudible], people that ship directly to Fastly, people that use SFTP, people that use plain old FTP, people that use rsync, people that do it over SSH. We have people that ship stuff directly to a database, because some databases actually have great support for large files, so they even use one as storage. We have people that do it in MySQL, people that do it in [inaudible].

CHARLES: You're actually storing the build artifacts inside of a database?

MATTIA: Yes, I've seen that. It's kind of interesting, the solutions that people end up using. And so for me, that's been the most fascinating part. Because, as we were saying at the beginning... I'm just seeing now that we even have one for ZooKeeper. I have no idea what it does, but it's probably related to some sort of registration around the index that gets served. That, to me, has been the biggest surprise. Everybody ends up working in a different environment, and so that flexibility that users need has been by far the most surprising thing.

AARON: I think that's also been one of the challenging things, and one of the enlightening things for me. In the Ember ecosystem, with addons and even just Ember itself, it's all about convention over configuration and doing a lot of the stuff for you. I think people expect ember-cli-deploy to do the same thing. But the key thing here is that it really just automates the things you would otherwise do manually, and you need to understand exactly what you want your deployment strategy to be before you can see what ember-cli-deploy can do for you. You need to decide: do I want my assets on a CDN? Do I want my index in Redis, or in Consul, or in an S3 bucket? You need to know and have decided all these things, and then ember-cli-deploy will make them really easy for you. And this is one of the educational things I think we still haven't nailed, because there are always people who want to know why it doesn't work. Deployment is a complex thing, and as Mattia was saying, there are so many different variables and variations that there's no sensible default configuration ember-cli-deploy could really provide out of the box. That's why we ended up with a pipeline that gives you the tools to be flexible enough to support your strategy.

LUKE: I think the closest we come to a convention is that if an app is using ember-cli-deploy, you can run ember deploy with different targets, ember deploy prod or ember deploy qa, and you can expect that that's going to work. What you don't know is how it happens to be configured in that project.

Charles, your question about the most surprising thing that's come out of the ecosystem: Mattia mentioned plugin packs earlier, which are groups of plugins that work well together. We've seen some plugin packs you might expect, like an AWS pack for deploying to AWS. But the more interesting thing to me is that we've seen a lot of companies open source their plugin packs. What you naturally fall into as a company adopting ember-cli-deploy with multiple Ember apps is developing your own plugin pack for internal use, because generally speaking, companies follow the same deployment pattern for each of their apps; there's usually no reason to vary it. The new thing that happened on top of that is people said, "Why don't I make this open source so other people can see how we do it?" And that's been a really delightful part of the process, getting a peek into how other organizations orchestrate their deployments. If people are curious to look at that themselves, you can go on npm, search for the keyword ember-cli-deploy-plugin-pack, and pull up all of those. You can poke around and see what different companies have open sourced there.

CHARLES: I actually love all three of those answers, because when you have a constellation of people around a particular problem, the surprising solutions that emerge are some of the most exciting, the ones that would otherwise have lain hidden. They would have stayed buried inside the source of company A or company B or company C. But having it all out in the open, where you can inspect it and say, "Wow, where has this solution been all my life?", you find things you would never have imagined yourself.

LUKE: It's so funny that you mention that, because that actually is the origin story, way back, I'm talking 2013 probably, when we were very early Ember adopters and we were trying to figure out how to deploy this thing. We were deploying it with our Rails app, literally deploying the Ruby code and the JavaScript code together, which took forever and was a disaster. And I heard through the grapevine, exactly as you're saying, Charles, where the good ideas are hidden inside of a company, that Square had this approach they were using where they would deploy their JavaScript assets, and then deploy the contents of their index HTML file into a database, Redis in their case, and serve it out of there. And it enabled all of these interesting capabilities, like having multiple versions, being able to preview a release, et cetera. So we set out to copy that idea, because there was nothing open source. We had to create it ourselves, which we did in Ruby, which made it inaccessible to many JavaScript shops in the first place. The evolution of that over time, with Mattia and Aaron and Mike and the rest of the community talking together, has now moved this into the open source sphere, where these ideas are more accessible, and we've created an ecosystem that encourages these ideas to stay out in the open. It's so true that there are gems of ideas, created by really brilliant engineers inside of companies, that could be benefiting so many people; they just haven't seen the light of day yet.

CHARLES: Yeah, that leads me to my next question. Most of the ideas that we've been talking about today, except for the build part, how Ember-specific are they? Obviously, Ember CLI is a great resource and has a lot of great opinions for actually building the JavaScript assets themselves. But the second two phases of the pipeline can really vary freely, if I understand correctly. So have you all ever thought about abstracting these processes and these plugins, so that these same ideas don't remain inside one company, or even inside a community that spans a set of companies, but become available to a wider audience? How integrated is it with Ember, and what kind of effort would that be?

LUKE: That's a great question. Aaron or Mattia, one of you guys wanna talk a little bit about the history here?

MATTIA: Yeah, sure. We've been thinking about this as well. In the past, a guy on the ember-cli-deploy team, Pepin, actually started this effort and prototyped this very idea of separating the ember-cli-deploy part from Ember CLI itself and making it a bit more generic. He started a project called Deploy JS, which I can give you a link to for the show notes later. I don't think the project is currently maintained, but the start of the effort is definitely there. And the funny thing is that it was surprisingly easy. I think we didn't get there mostly because we all just use Ember at work. As you know, open source is mostly motivated by the needs that an individual or a group of people have. But if any of the listeners are very interested in this, they should definitely get in touch, and we will be happy to talk to them and see what can be done here.

AARON: Also, if you look, there's actually a plugin called ember-cli-deploy-create-react-app, and there's also ember-cli-build-view. So it can be, and is being, used to deploy non-Ember apps, which I think is super interesting, because the only really Ember-specific part of it is using the CLI to discover the addons and plugins. From then on, it's really out of Ember's hands. That leads a little bit into the concept Luke mentioned of immutable web apps. I've been thinking a lot about this lately, because there's a deployment strategy that ember-cli-deploy uses as an example a lot, and that has kind of become [inaudible] ember-cli-deploy in a way: the lightning approach, this whole idea of putting your assets on a CDN and your index HTML somewhere separate, maybe in Redis, and serving that. I've been trying to work out how to talk about this to the wider JavaScript community in a non-Ember way, knowing full well that the term "lightning deployment" means nothing to anyone outside of Ember. Just by chance, I was talking to some people, and this terminology of immutable deployments came up. I started searching around and came across a website called ImmutableWebApps.org, which was just scarily the same as what we've been talking about for the last three or four years with ember-cli-deploy: a way to boil down, at a framework-agnostic level, the key points you need to consider when building a JavaScript web app to make it immutable. It was really amazing seeing it. The website had been put up three weeks before I did my Google search, coincidentally, and it's basically word for word what we have been talking about for the last three or four years. Someone on the other side of the world had been coming up with the same ideas in their company, like you were talking about earlier, and we've reached out to him. That, to me, is the way forward I want to pursue: getting these ideas out to the rest of the community in a framework-agnostic way and saying, "Hey, we thought about deployment in this way. Have you thought about building your app in this way, to give you these sorts of capabilities?"

CHARLES: I think the wider community could definitely benefit from that, because most of the blog posts and talks I've seen that concern themselves with deployment of single-page applications are still much more at the tutorial level. Like, "This is how I achieved this deployment strategy," not, "This is how I repeatedly encode it as a program" and leverage it that way. So, getting that message out to a wider audience is, what's the word, an underserved market.

LUKE: Yeah, I really like this idea also. ImmutableWebApps.org, if you look at it, is sort of a manifesto: a conceptual description of the qualities your app has to have to qualify as an immutable web app, which I think is kind of a funny idea, but one that people can start to get their heads around, comparing that description to their own apps and seeing where they hit it or fall short. It reminds me a lot of the idea of the Twelve-Factor App, which I think came out of Heroku originally. That's the idea of a backend app that is portable, able to move easily across different hosts, and able to scale easily to different numbers of instances, and [inaudible] if your app obeys all of these things, then it's going to do well under those circumstances; it will satisfy those needs. So I think it's a great way of thinking, and probably a better entree into the conversation with the wider community than a library, certainly a library called ember-cli-deploy.

CHARLES: This has been a fantastic discussion. It's definitely reminded me of some best practices that I haven't thought about in a long time, and it's opened my eyes to some new ones and some new developments. So often, we can be so focused on how our apps work internally, on how the JavaScript code works, that we, what's the saying, lose sight of the forest for the trees? Or: too busy looking down the end of your nose to see past your face. I've probably mangled both of those adages, but maybe 60% of two mangled adages is at least equal to one. This is something that we need to be thinking about more, that everybody needs to be thinking about more. It's actually a very exciting, very useful problem space, and I'm just really grateful that you guys came on to talk to us about it. So, thank you, Mattia. Thank you, Luke. Thank you, Aaron.

MATTIA: Thank you so much. It was a great time.

AARON: It was a pleasure. Thanks for having us.

LUKE: Thanks so much.

CHARLES: Thank you for listening. If you or someone you know has something to say about building user interfaces that simply must be heard, please get in touch with us. We can be found on Twitter at @TheFrontside or over just plain old email at contact@frontside.io. Thanks and see you next time.

May 02 2019

48mins


Pull Requests with Joe LeBlanc


Joe joins the panelists to talk about pull request etiquette.

Joe LeBlanc: Joe first learned to code on a Zenith computer his dad brought home from work. It had a built-in blue LCD monitor and ran on 5 1/4" floppy disks. His dad used spreadsheets for work, and Joe was interested. They spent about an hour going over macros together, and he took off from there. Long after the Zenith died, the open-source content management system Joomla! landed in the center of his attention. Joe found himself writing a book about Joomla programming, authoring video tutorials about Joomla for lynda.com, giving Joomla talks, and helping organize Joomla conferences.

Since his time in the Joomla community, he's picked up Node, Rails, React, and other frameworks. He's currently coding at True Link Financial and working on a few hobby-projects as well.

Please join us in these conversations! If you or someone you know would be a perfect guest, please get in touch with us at contact@frontside.io. Our goal is to get people thinking on the platform level which includes tooling, internalization, state management, routing, upgrade, and the data layer.

This show was produced by Mandy Moore, aka @therubyrep of DevReps, LLC.

Transcript:

CHARLES: Hello and welcome to The Frontside Podcast, the place where we talk about user interfaces and everything that you need to know to build them right. My name is Charles Lowell, a developer here at The Frontside. And with me also is Taras. Hello, Taras.

TARAS: Hello, hello.

CHARLES: Today, we're going to be continuing our conversation about platforms, as always, but in particular the pillar of your platform that has to do with how you collaborate on code. It's an important one. And so, we're spending some time on it. And with us to talk about this today is Joe LeBlanc who is a senior software engineer at True Link Financial in our very own beloved Austin, Texas. Hey, Joe.

JOE: Hi.

CHARLES: Thanks for coming on the show. We're going to be talking about collaboration and I was thinking we could kick off the discussion talking about pull requests because that's typically one of the ways that we collaborate on code. That's really where the rubber tends to hit the road. You have particular interest in the dynamics of a pull request. What experience kind of led you to that interest?

JOE: My background involves a lot of work as a freelancer. When you work as a freelancer or do agency work, a lot of times you are really the only software developer around, or you're working with maybe one or two other people. So I didn't get a lot of opportunity for pull request reviews while I was in that space. Then, when I moved to a full-time job at a place where I was one among maybe half a dozen or a dozen engineers, that's when I really got interested in how I could be better at giving pull request reviews and also at submitting pull requests that people want to review.

CHARLES: What made you notice the need for this?

JOE: What would happen is someone would submit a pull request and there would suddenly be dozens and dozens of comments coming in that were difficult to keep track of, and that often were about things that didn't necessarily need to be reviewed or addressed as part of that pull request, things that could be caught by a linter.

CHARLES: Right.

JOE: And other tools that are a little bit easier to receive feedback from. Like it's easier to have a tool tell you your spacing is off than have me tell that to you.

CHARLES: Right. And if you need to deliver feedback like "the spacing is off," that's not what you want to lead with. If you don't have linting in place, it should come after all the other issues are sorted out.

JOE: Yeah.

CHARLES: But it sounds like what you're describing is people who have kind of swarmed over a pull request each with their own kind of pet peeve issue.

JOE: Yeah. And then you're just left with this long list of comments to go back and address and you're pushing up more and more commits. And by the end of it, you could have more than 100 comments on this PR. And you thought that you were going to get this done in a day or two, and then suddenly it's the end of the sprint and it's like, "Oh fine, then I get to merge this."

CHARLES: And typically, in my opinion, it's actually an anti-pattern when you have like 10 merges happening on the second-to-last day of the sprint or the last day of the sprint. There are always issues with that, right? Ideally, you are merging code throughout the course of the sprint. Otherwise it almost defeats the purpose. I guess you're still doing your integration in shorter periods, but even so, tons of stuff is bound to break when everybody's pushing and everyone's rushing.

JOE: Yup.

TARAS: So, what would happen in that situation, in your experience, when you have this pull request that a lot of people are commenting on and some of the comments could be addressed by linters? Did you go back and actually implement linting? Did you put linting in place so that you'd have consistent formatting on the project and could reduce that kind of feedback? What was the resolution, or the way to improve the process so you could actually get better feedback?

JOE: Yes, we did install a linter, and that's something that you can run either locally or in your continuous integration environment. And it's really good to run it locally first, because then you can catch issues before you even submit the PR. That definitely helped cut down on these sorts of comments, and even on debates over style. You just have this one thing that maintains that style, or rather tells you when you're wrong, and then you don't have all those comments coming in when you submit your PR.
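For example, a minimal ESLint setup like the hypothetical sketch below catches the whitespace nitpicks before a human reviewer ever sees the PR; the specific rules here are illustrative, not necessarily what Joe's team used.

// .eslintrc.js
module.exports = {
  root: true,
  extends: ['eslint:recommended'],
  env: { browser: true, es6: true },
  parserOptions: { ecmaVersion: 2018, sourceType: 'module' },
  rules: {
    indent: ['error', 2],           // flags spacing issues automatically
    'no-trailing-spaces': 'error',  // so reviewers never have to
    'eol-last': 'error'
  }
};

Running the same command locally (for example, npx eslint .) and in CI means the feedback arrives before review rather than during it.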

And another thing that we would do, beyond just whitespace and trailing newlines and those sorts of issues: sometimes comments would come in that weren't about whitespace or a trailing newline or anything like that, but were purely someone's sense of style. In those situations, we came up with a bit of a standard where, if you wanted to make a comment that was strictly about style, something you didn't necessarily want to block the pull request over, we would use emojis that had a specific [inaudible] meaning. In our case, we would use either the beer emoji or the cocktail emoji to say, "Here, take this with a drink."

CHARLES: [Laughs]

JOE: This is not something that's super serious. It's a suggestion, "Here, let's discuss this over beers."

[Laughter]

TARAS: It's a really nice approach to friendlify the actual pull request, because people can get really... I think another way people would put this is, it's when you state an opinion that you don't actually hold that strongly. How does that saying go, Charles?

CHARLES: It's strong opinions, weakly held.

JOE: Yes.

CHARLES: Right. You're making an assertion about the way things ought to be, and what's often missing from that is how important it is to you. You've said this thing should be like this. That's a very black and white statement, very firmly cut between right and wrong. But how important it is to you, if it goes the other way, is often missing from the conversation.

TARAS: So Joe, what would happen once you made this change? Did you see a shift in the kind of feedback you were getting? What did it look like after this?

JOE: After we started implementing the idea of putting emojis on things that were not super serious, it became a lot easier to filter all those comments and find the things that really needed to be addressed. And it was a lot faster. I was able to address those very specific problems and do that first. And then if I had time later, come back and address the style issues.

CHARLES: Right. Was the good feedback there the whole time, just obscured by the noise of these exterior issues or side issues? Or did you find that because people's thoughts were more given over to the heart of the matter, you actually got more comments directly relating to the meat of the implementation?

JOE: I think it was really more that the good comments really were buried in there. A lot of times when you see critical comments that seem to be petty, that can really trigger your emotions. And so, I feel like a lot of the times the emotional state that we would get into would prevent us from seeing the good comments that people were leaving, and we would get a little too angry about things that really weren't that big of a deal.

CHARLES: Would the tension kind of noticeably rise?

JOE: Yeah. I actually remember several instances at my job where -- this is when I was working in office and in the same room as all my co-workers. And one of my co-workers would know that I was reading his review by the number of sighs that were coming out of me.

[Laughter]

JOE: Yes, I definitely had this reaction that people could tell. Implementing things like these emojis definitely helped. They didn't always fix everything, though. One thing that would happen too is even after filtering out kind of the noise comments and getting into the meat of the matter, we would still wind up in situations where I would want to use one design pattern and another engineer would come along and say, "No, you should do something different," and we would have these arguments.

CHARLES: Right. And those are harder to resolve too because those are strong opinions strongly held.

JOE: Yes.

CHARLES: But it's still a step forward, right? You've eliminated the passionate arguments about the strong opinions weakly held. And so you're able to now grapple with 'how do you resolve these arguments' of these stronger opinions that are more strongly held.

JOE: Yeah. Part of the way that we handled this was we came up with a way of being able to have a debate about something on a pull request, yet also keep in mind that we're working under deadlines and that ultimately, it might not be that big of a deal.

TARAS: I don't know if you experienced this but there's this kind of feeling where you know you have a deadline that you want to reach and this feedback is kind of not really helpful to that end goal. But at the same time, it feels like a trade-off, like trading off quality by accepting something that people don't agree is fully ready.

JOE: Yeah.

TARAS: But then there's, I guess, the tests: you have some tests and the tests are passing. So, thumbs up. [Laughs]

CHARLES: Yeah. Well, here's the thing. This thought just occurred to me when I heard that: when we are reviewing code, probably the most important code to review is actually not the implementation but the tests. Because why do people care so much about the code being right? It's because they know how expensive it is to get it wrong. And we all have our different opinions of what's right and wrong and what's going to cost us more. But at the end of the day, we're all trying to save ourselves and our team pain in the future. The tool that we have to do that is the code review. We're trying to make sure the code is "good". But usually what ends up happening, and I know I'm certainly guilty of this, is I'm focusing on the implementation and I look in and say, "Oh yes, there are tests." I don't really look at the test and ask, "Is this a good test?"

Because a good test is going to lower that cost that we're so afraid of in the future, and that fear is what drives us to want to make sure the code is as close to what we conceive of as perfect as it's likely to get and still hit our deadline. It sounds to me like maybe what we really ought to be doing is reviewing the quality of the test, because if you have an excellent test, then the shape of the code almost is unimportant or, more to the point, changing the shape of the code is very cheap. So if you do find that the implementation is suboptimal, the cost of rejiggering it into a better configuration is going to be very low.

JOE: I've discovered so many times where you are running a test that was written previously either by somebody else or by yourself, and you realize that the test is testing the wrong thing entirely.

CHARLES: Yeah, coverage is spotty. It covers 10% of your cases.

JOE: It might also have a stub or a mock in there somewhere that is hiding something that you weren't counting on. Or it could just be that you pull up this screen in the test and it saves, but you're not really testing what it's saving.

CHARLES: Right. Can I actually proffer a suggestion, one that at least I'm going to try to hold myself to going forward? When I review a pull request or some proposed change to a code base, I'm going to review the test first.

JOE: Ooh, that's a great idea.

CHARLES: I'm going to try and start with the test and avert my eyes from the implementation, no matter how curious I am how this person accomplished the solution. I'm going to hide from the solution and focus on the verification. And if I have faith, first and foremost, in the verification, then my faith in the implementation is secondary.

TARAS: You know, Charles, I really like this. I think it'd be really interesting to try this out and see what kind of impact it creates. It makes me think that there is actually something really useful in here in terms of providing feedback, because the challenge is that when you're giving feedback to someone on a pull request, ideally that feedback is going to be useful and it's going to give the person something to think about in regards to the work that they did. And I think a good way of pushing the work back onto the person, in a way that allows them space to internalize what they need to do, is identifying cases that might be missing, because a lot of times, we test happy paths. We don't test the things that we didn't think about. And that's where the devil is in the details. It's in the things that we don't test. Those are the kinds of problems that we're trying to prevent with having good code. But really the best way to flush those things out is to make sure we have the right test coverage.

So yeah, I think a thing to add when you're looking at a pull request is to ask, is the test coverage missing the following cases, and actually surface that. I'm actually thinking of something like a Danger task. Danger is a tool for adding information to pull requests, kind of an automated mechanism for commenting on pull requests. And I'm thinking of having something that does what Jest does. Jest has this thing where it will check based on the last git commit: it will figure out what tests have been affected since the last commit and then it will actually show you. So when you run tests, it only runs the tests that you impacted, which is something that could be used to surface the tests that were written in this commit. I mean, we also have that information in the actual commit itself, but being able to see 'these are the tests that were written or added', you could use that to figure out what else is missing. This could be a way to add something a little more automated onto this process, so you could actually highlight this information very easily on the pull request itself.
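A Dangerfile that does roughly what Taras describes can be quite small. This is a minimal sketch using danger-js; the `src/` layout and the `.test.` file-naming convention are assumptions for illustration, not details from the episode:

```js
// dangerfile.js -- a minimal sketch using danger-js (https://danger.systems/js/).
// The src/ layout and ".test." naming here are illustrative assumptions.
import { danger, warn, message } from "danger";

// All files touched by this pull request.
const touched = [...danger.git.modified_files, ...danger.git.created_files];

const appChanges = touched.filter(
  (f) => f.startsWith("src/") && !f.includes(".test.")
);
const testChanges = touched.filter((f) => f.includes(".test."));

// Nudge, rather than block, when source changes arrive without tests.
if (appChanges.length > 0 && testChanges.length === 0) {
  warn("This PR changes source files but doesn't add or modify any tests.");
}

// Surface which tests this PR touches, so reviewers can start there.
if (testChanges.length > 0) {
  message(`Tests touched in this PR:\n${testChanges.map((f) => `- ${f}`).join("\n")}`);
}
```

The Jest feature Taras seems to be referring to is the `--onlyChanged` flag, which runs only the tests related to files changed since the last commit; a Dangerfile like the one above is one way to surface that same information on the pull request.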

CHARLES: It's funny, my mind was wandering off towards GitHub. When do you use Danger and when do you use GitHub Actions? GitHub Actions has been kind of brewing and fermenting in my mind.

TARAS: One thing is that Danger is a little awkward in that it creates one single comment at the top. It creates the comment the first time you push and then it keeps pushing information into that comment, which shows up at the top of your comment thread.

CHARLES: Right.

TARAS: But that's not how you read comments. You read top to bottom. You don't read bottom to top. And so, I think the nice thing about Actions is that because they use GitHub checks, the output actually shows up in a different area. It's part of the validation area as opposed to part of the actual comment conversation. That seems like a more appropriate place to put that information, rather than the first comment.

CHARLES: Yeah. So, how do we decide what the ultimate resolution is when you have these higher order conflicts?

JOE: Yeah. So that's one thing that we came up with that I'm really proud of. What we did in those situations, at least at that job I was at, was we said, "OK, if you've gone back and forth on this two or three times and there's still not an agreement as to what should be done or there is no compromise, what you should do is let the author of the pull request go ahead and merge that code," because you want to merge that code, get it out to production, make sure it's serving our users and our clients. And then what you want to do is take that conflict and record it somewhere, and to be specific, this is an interpersonal conflict, not a code conflict or a merge conflict or anything like that. So you take this interpersonal conflict, the topic that was being discussed, and you put it -- we were just putting it in a Google doc and leaving it there. And then every couple of weeks or maybe every couple of sprints, we would go through that Google doc and just talk about the topics that had come up. And more often than not, what we would find is that we didn't care about the topic anymore.

CHARLES: [Laughs] It's amazing how time has a way of cooling passion over things that really don't matter.

JOE: Yeah. And for the things that did matter, we usually had a really good discussion. Sometimes, we even came up with something different entirely, based on just having more people in the conversation thinking about the problem.

TARAS: I like this because there's a bit of process around it. It's kind of like a retrospective on pull request passions.

JOE: Yes.

TARAS: It sounds like a healthy thing for a team to do. And I think just allocating some breathing room to go through these kinds of things could be really important cumulatively over time.

CHARLES: Yeah, definitely. It sounds almost like the pull request is then ongoing. You have this collaboration that happens at the front of the change but then ripples outwards and onwards and hopefully has an impact on future changes. So, would you say that the types of issues that end up living in this Google document are architectural issues, or more 'hey, this is the way we need to talk to each other and this was off and we need to fix it', or is it both?

JOE: It was mainly architectural decisions that were coming up in this, sometimes code style issues as well. Not so much the interpersonal issues; the interpersonal issues would come up during retros.

CHARLES: OK. It sounds like then you have the added benefit that your architecture does get to improve, because clearly, if there's some disagreement about something, then there's some sort of tension there. Someone is perceiving that some problem is not being resolved by a particular implementation. And so it's good to at least surface this issue again and again. What was your experience with revisiting the architectural issues? Was it mostly 'well, this wasn't really an issue' or was it 'wow, I'm really glad we took note of this because now we have these three ways of thinking about things, and it turns out that given three or four months more experience, this is something that we should be doing'?

JOE: Yeah, it is definitely a mix of both, but I'm kind of leaning more towards the latter. Like, we're just glad that we brought things up. Occasionally, there would be one or two topics that came up that we still couldn't resolve and we would have to agree to disagree on something. Or in some cases, I think we would allow one of the lead engineers to just make the final decision on something. But that was pretty rare.

CHARLES: So, apart from appealing to the passage of time, or taking a brain dump of all the different architectural options, or deferring to a more senior engineer and asking them to step in and make the call, are there any processes that you can put in place to mechanically come up with a decision? Like any set of values that you can use as a ruler to line up a set of options and attach weight to one particular solution? For example, one of the things that I always think of is when I have internal conflict about a decision, there's kind of a part of me that feels like this might be the right way and maybe another way is the right way. And so, a tool that I will use is what's internally consistent with the rest of the system. So, even if I've had an insight that one solution should be framed in a certain way, if there's another solution that's more internally consistent with the way things already exist, then I'll use that as a discriminator to say, "OK, that's one thing that I can use to measure a solution against, that I'm not emotionally tied to." It feels more objective. And if you have those tools, you can kind of pile them up and see which pile ends up being higher for each solution. Do you see what I mean?

JOE: I definitely lean more towards keeping things consistent with what's already there. Personally, I probably do that to a fault. Sometimes I need to branch out a little more and get comfortable with possibly breaking something. But you always need to weigh whether it's something that is acceptable to break. Like, a lot of the systems I've worked with involve money and it's really important that people either have money when they need it or are charged the correct amount when you charge them for things.

CHARLES: They get really mad when you charge them too much. [Laughs]

JOE: Isn't it amazing? [Laughs]

CHARLES: Yeah.

JOE: It could be that you're OK with breaking this other end of it, where you charge them the correct amount but in the fulfillment of the product or the service or whatever you're selling, you might have a little more wiggle room in that part of the code base, where if you break something, somebody calls in and says it's broken, and then you're just able to fix it.

CHARLES: Is there anything else? If folks are struggling with the way their pull requests work or their code reviews work and they're experiencing friction and tension with their teammates, is there anything else they can do?

JOE: What I would suggest is definitely look at the pull request template feature in GitHub and make use of that. Because what you can do with that is come up with several sections for the pull request template and say, "These are the questions that we want to ask before we submit a PR." And if you have those questions and those sections right there in the template, and people read through them as they're making their pull request, they can often catch problems before they even get to a reviewer. I can't tell you how many times I have submitted a pull request, started to write the description and [inaudible] these questions and realized, "Oh, this is wrong," or it's broken or it's totally the wrong thing.

CHARLES: Right. It's funny and I feel like you want to be asking those questions not when you're submitting the pull request but before you're even writing the code.

JOE: Yes.

CHARLES: But a lot of times, we don't do that until afterwards and you're like, "Oh man!" Just this process of forcing myself to think about the problem in this way or think about it holistically totally recasts my implementation.

JOE: Yeah, things like, "How can this break after we merge it?"

[Laughter]

CHARLES: Yeah, that's actually a great question, the kind of one you want to be asking on a pull request.

JOE: Oh yes, definitely.

CHARLES: Like imagining the [inaudible] scenarios.

JOE: Yep.

CHARLES: Man, it's almost like you want to read the -- what are the other questions that you have?

JOE: So the ones that we have are, first of all, what is this pull request? How does it fulfill the requirements for the ticket? And then, how can this break after we merge it? Are there any post-merge tasks that need to be run? If you need to get on a server and run a task or do something after it has gone to production, that's the place to document it. And then the final question is, how is this tested?
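GitHub reads that template from a file checked into the repository, conventionally `.github/pull_request_template.md`. A sketch built from Joe's questions might look like the following; the headings paraphrase his list rather than quoting his team's actual template:

```markdown
<!-- .github/pull_request_template.md -- headings paraphrase Joe's questions -->
## What is this pull request?

## How does it fulfill the requirements for the ticket?

## How can this break after we merge it?

## Are there any post-merge tasks that need to be run?

## How is this tested?
```

Because the template pre-populates the PR description, the author has to answer these questions before anyone else ever sees the code.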

CHARLES: Right. That's a good one. And like I said, I think that's the one where I'm going to be starting when I'm both thinking about how my changes will be reviewed and how I review changes.

JOE: Maybe I should consider moving that to the top position.

CHARLES: [Laughs] I mean, I'm just thinking about it and it does seem like we go straight to the implementation because that's what's fascinating to us. And so we might have way too much bias for the importance of the implementation over the importance of the test, because I have to say, I am so guilty, when I'm reviewing, of just looking for the fact that it has a test and not looking at what the test does.

JOE: Yeah.

CHARLES: And only referring back to the test to try to understand how the code accomplishes its task. If I don't understand the implementation, I'm like, "Oh, let me look at the test and see how this code is used in the test." That might be problematic, but I know that's definitely a little personal goal that's coming out of this podcast for me.

TARAS: Excellent.

CHARLES: All right. Joe, do you have anything else that's coming up? Any talks? Any meet-ups that you're going to? Any announcements? Anything that you want to plug?

JOE: I have a website at JLLeBlanc.com and that's where I am.

CHARLES: All right. Well, fantastic. Thank you so much for coming on.

JOE: All right. Thank you.

CHARLES: Thank you for listening. If you or someone you know has something to say about building user interfaces that simply must be heard, please get in touch with us. We can be found on Twitter at @TheFrontside or over just plain old email at Contact@Frontside.io. Thanks and see you next time.

Apr 11 2019

30mins


Frontend/Backend Team Collaboration with Sam Joseph


Sam joins the panelists to talk about frontend and backend team collaboration.

Resources:

Sam Joseph is a cofounder of AgileVentures, a charity that helps groups of volunteers gather online to develop open source solutions for other charities all around the world. Sam's been mucking about with computers since the early 80s and followed the traditional education system through to a PhD in Neural Nets. Next he went into industry, researching mobile agents at Toshiba in Japan, then went freelance, and then swung back to academia to research peer-to-peer and collaborative systems. He now spends the majority of his time trying to make AgileVentures a sustainable charity enterprise, with occasional moonlighting as a contract programmer. Check out his blog at nonprofits.agileventures.org.

Please join us in these conversations! If you or someone you know would be a perfect guest, please get in touch with us at contact@frontside.io. Our goal is to get people thinking on the platform level which includes tooling, internalization, state management, routing, upgrade, and the data layer.

This show was produced by Mandy Moore, aka @therubyrep of DevReps, LLC.

TRANSCRIPT:

CHARLES: Welcome to The Frontside Podcast, the place where we talk about user interfaces and everything that you need to know to build it right. My name is Charles Lowell, a developer here at the Frontside. With me today also is Taras Mankovski.

TARAS: Hello, hello.

CHARLES: Hey, Taras. Today we're going to be continuing our theme, when we think about UI platforms and web platforms, of collaboration, and with us to talk about this is Sam Joseph. Welcome, Sam.

SAM: Hi, thanks for having me.

CHARLES: We've already talked a great deal about how the way in which your team collaborates, and the communication that happens between your team and between the different pieces of software in your system, forms one of the pillars of the platform that you can't just take lightly. You need to actually be intentional about that. I was thinking we could start today's discussion by talking about some of those collaborations.

One that we've probably all encountered is that teams will usually be split into people who are focused on the frontend and people who are focused on backend systems, kind of the services behind all of the apps running on our laptops and our desktops, making sure everything is running smoothly and error-free. Obviously, those two groups of people can sometimes arrive at different sets of priorities, so how do we resolve those priorities to make sure that communication flows freely?

TARAS: What's interesting about these frontend and backend teams is that our users don't see that separation. They only see one thing. They only touch one thing. They actually see us as one group, but there tends to be this kind of split between the frontend and the backend. It's kind of interesting to think about where the user fits into this.

SAM: Yeah. Obviously in some teams, there's a very clear-cut distinction between the people on the backend working with the components that are serving JSON over the API, and people who are very, very focused on the frontend, drilling into CSS and a number of bits and pieces, or even focused explicitly on the design or UX design. And there's the mythical full stack developer who is up and down the platform. It doesn't run exactly in parallel, but there is this key thing, which is almost how much sympathy or empathy you can have for another person who is not you, trying to use something that you set up.

If there was a direct parallel, you'd say, "Obviously, all the people who are working on the frontend are more of that sort of person and perhaps, the people on the backend are not so much that sort of person," but actually, I think you can have people who are doing backend stuff and designing APIs very, very thoughtfully for the kind of people that consume those APIs. And sometimes, you can have people who are very, very focused on the design and the aesthetics who are not necessarily so plugged into how someone else will use this, how it fits into their lifestyle, which might be very different from their own. So that's maybe another axis, if you know what I mean, apart from this sort of pure technical [inaudible]. Does that make any sense?

TARAS: Yeah. What's interesting is that everyone is trying to do a great job. Everyone is setting out to do something really good. But people's ways of expressing 'good' might be different. Someone could be really focused on the quality of their code; their version of doing a really, really good job is writing the best code they could write. Sometimes that doesn't necessarily equate to the best user experience. Almost everyone I've met, engineers who are writing code, is trying to do a really good job. But everyone doing a good job doesn't necessarily equate to the best user experience.

CHARLES: I think we had a reference earlier that everyone wants to make sure that their system is of the highest quality possible, but quality in and of itself is not an absolute value. It's relative. In other words, a good definition of quality is how well a system fits a purpose. If it fits very well to that purpose, then we say it's of high quality, but if it fits very poorly to a particular purpose, then we say, "This is a system of low quality." What constitutes something of high quality is relative to the purpose. The question is, what is the purpose of writing the frontend system? What is the purpose of writing the backend system? So it seems to me you're going to have a lot of dissonance if the purposes are divergent, but if they both share the same purpose, then that sets a kind of standard of quality on both sides. Does that make sense?

SAM: That makes sense, and what makes it even trickier is that purposes can evolve over time, so the thing that the frontend needs to do for the business today is different tomorrow and the day after. The trick then, as the frontend and the backend people try to do a great job, is this question of how far ahead we are looking. You can talk about important things like being closed to modification but open for extension, and there's a [inaudible] of different coding heuristics and best practices that say you should kind of go in this direction.

Sometimes, that will be the hill they want to die on. It's like, "This code needs to be this way because of this thing," and the idea is that it's future-proofing them. But I think the messy reality is that sometimes, the thing that you put in place to future-proof the system gets canned. In two weeks, the business changes and what have you, so the extra effort that you pushed in one day to protect yourself against changes in the future actually gets lost, and perhaps, if only you'd been able to deliver a shortcut hack, that one feature would have got the next round of funding in. Not that one advocates cutting corners but... you know what I mean?

CHARLES: Yeah, absolutely. I just wanted to chime in and vehemently agree with you that the purpose can change radically, especially if you're in an evolving environment. What constitutes the good code or the good quality solution can vary just as radically because it needs to fit that purpose.

TARAS: When we talk about the frontend and the backend, there's this relationship which is kind of positional: there's the frontend and then there's the backend, one in sequence from front to back, with the frontend closer, maybe, to the user and the backend not. But I think in practice, it's probably more like the left and the right hand, where when you have to do two things together, you actually need the two to coordinate.

What you described, and this is I think where taking shortcuts can sometimes actually be a good thing, is that if you say, "I'm going to spend the next few weeks on the backend doing this specific thing," and then you find out 10 days into the iteration that it doesn't fit, the better approach to synchronize would be to stop and realign. But if you persevere, then you might overstep and actually be out of sync. It makes me think that the art of creating cohesive organizations, from a development perspective, is to create a context where the two work in synergy together. I think that's where good tooling that allows this synergy to exist comes in, and I think that's a lot of the work of actually creating systems where the user experience is cohesive and integrated.

CHARLES: Actually, there was something you just mentioned in there, a little nugget that I'm curious to explore, and that is: if you're 10 days in to a 14-day piece of work and you realize that it isn't right and it doesn't fit with the whole, the right action is to throw it away. I feel like software organizations, and this might be a little bit off topic but it's something that really is interesting to me, so please indulge me. I feel like we see the product as being the code output, even in Agile environments, and there is an aversion to throwing away code that has been written.

The incentive, I would say, is to go ahead and persevere because developer time is so expensive. These are days out of people's lives, so they've already invested 10 days. Why not just spend four more days so you have it? When in fact, you actually have more if you just throw that work in the trash, because it's an abstraction that's not needed. It's dead weight. It's something that you now are going to own, and it can actually be cheaper in the long run and more beneficial to your organization to not own it.

Do you all see that play out, where people become attached to software that they've invested in and will make the decision to hold on to it? Say, we'll spend two more days to complete it, even though it's the wrong thing. Identifying which things need to be thrown away -- I feel like we don't actually do that very aggressively -- to say, "You know what? We need to not complete this work."

SAM: I think that is a really tricky one, because whatever people might say, when you spend time working on something, you become emotionally attached to it. However logical or rational or whatever sort of person you are, my experience has been that people working on things want to see them used.

I think in a situation with version control where you can keep things on a branch, they can be useful explorations; things don't have to be thrown away in their entirety. Our charity is doing work for the National Health Service in the UK, which is one of the largest employers in Europe, and I was looking at the user interface that we're supplying them and suggesting some minor tweaks, and one of the developers sort of ran with an entire, big, advanced search feature that partially solves the problem but brings in a lot of other bits and pieces. He did some good work there. It's great work, but I think he ran further than I was expecting down that line, and I think we have now found a much simpler solution where, rather than bringing in an entire cabinet, it's a little shelf off the side of the existing one.

But I think that doesn't have to be a loss, and I was reassuring him that the work was valuable and that there are interesting learnings there and potentially, we might use it in a future version of the project. And of course, with the way that stuff moves on, it's just sitting on a branch, so it doesn't mean you sort of magically lose the work.

In that case of being 10 days into 14 and so on, the real question is: can you get any value from switching somebody that fast? If they're 10 days in and they realize they don't need that thing, can you switch them quickly onto something else where they'll do four days, or might they as well just finish up there and leave it on a branch? That's a pull request that gets closed and it's there to go back to. It kind of depends team by team on how quickly you can repurpose people mid-sprint, if you know what I mean.

CHARLES: I absolutely agree, but the key thing is leaving it on a branch and not integrating it into production, like not actually deploying it. Because I feel like when we see a feature, it's like we've got to get this thing done and it's done and we're just going to get it in, versus saying now that we've learned how to do this, let's put that on a branch, rewrite it, and take a different approach, rather than being married to the idea that we're going to get it right the first time, or that now is the time for this feature, because we understand the complexity of what it actually takes. It is a tricky question. I'm wondering if our processes don't include enough implicit experimentation. We talked a little bit about this on a prior podcast: whether they include some incentive to not deploy features just because you have them.

TARAS: I think from a business perspective, there's not really a model to evaluate this kind of thing. We don't have any effective way of evaluating learning. You can measure code by the number of lines somebody wrote and how much code was shipped, but you can't really evaluate how much was learned and how much was persisted.

I think a lot of this work around supporting effective collaboration is in building cumulative systems of knowledge. Why is a good Git history useful? Because it has a built-in mechanism for understanding history. It gives you a way to return back in time in your project and understand the context of that specific change, and I think this is something that really good teams do really well. They respect this because when it's necessary, it is valuable to have. When you don't have it, and you are a year into the project and something happened, then you are using the only thing you have, which is your troubleshooting skills, as opposed to going back and relying on that history, on that knowledge that's built into the system. What I'm trying to get at is that there is an element inherent to our development process that we don't have a way to quantify. We have no way to really evaluate it. I think this is the problem that makes it difficult to throw away work: you can measure the amount of time people spend, but you can't measure the amount of learning that was acquired.

CHARLES: Right. That's true.

SAM: I think if you have a positive team environment, if you have a team that is stable, the contracts are coming in, the money is there, everybody is there and you've got the team, you don't necessarily need to be able to measure the learning because the output of the team will be good. They're kind of doing that in the background. But I think individual clients are not going to want to pay for a learning experience, certainly not at software developer rates.

Although, on throwing away: if you're familiar with the [inaudible] book, which I think [inaudible] is the author of, you know: build one to throw away; you will, anyway. We have a frontend mob that we run weekly doing frontend stuff and CSS and so on, and we've been doing a mockup for the client, and there's this question about [inaudible], and so I'm saying to the other developers there, "Let's not get too attached to this. We may need to throw it away and it will be a great learning to sort of restart on that." The [inaudible] has come to look quite nice and the client might get attached to it. They want to ship it and I'm like, "Well, actually there's a lot of other stuff that needs to happen," and so, it's a minefield. It really, really is.

TARAS: I think some of the business relationships that we have, like this client and consulting company relationship, might not always create the fit-for-purpose team for a specific challenge. You can have a company that has a lot of employees, but their team and their team dynamics might not be fit for purpose for the problem they're trying to solve. If you're building a platform over a long time, you want to be creating that space where learning gets accumulated over time. It doesn't matter what the relationship is between the companies participating in this process. I think what matters is what you're producing as a result.

If you're working together with people from different companies and you're building this kind of environment where you can collaboratively build things together, where people learn from the output and that knowledge is carried over and people are able to stack their knowledge continuously over time -- if you're creating that environment and you're building a large platform, for example, then you are creating a built-for-purpose technology team that fits what you're looking to accomplish, and that can include consulting companies. The individual teams are kind of secondary; I think it's just [inaudible] to that. It's about building a quality development organization that fits the purpose of building whatever it is that you're trying to build. If you're building something small, you might have a small team that's able to build that effectively. If you have something big, you could have a big team that builds it effectively. But building that team in a way that is appropriate to what you're trying to accomplish, I think that's the real challenge that companies struggle with.

CHARLES: Yeah. In the context of these real teams, what can you put in place, given that you have backend teams and frontend teams and each one of these needs to be specialized? This is kind of the power of the way that people work: we're allowed to specialize and there's power in specialization. You can have someone who is super focused on making sure that you have these high-throughput backend systems that are resilient and fault tolerant and all these wonderful things, so that they don't necessarily have to have the entire context of the system they're working on inside their head at a given time, and the same for someone working on CSS or on a frontend system architecture, which has grown to be a very complex problem domain in its own right.

But these teams can be working in a context where the purpose is whiplashing around and changing quite drastically, so how do you then keep that in sync? Because we've identified purpose as being something that's quite a dynamic value. How do you keep these teams keyed in and focused, so that they're locked in on that shared purpose and adapting the systems and specialties they're responsible for to match it?

SAM: It's a good question and I have my own biased view there --

CHARLES: I would love to hear it.

SAM: I can't bear working on things where I don't understand in great detail how the end user experiences them or what overall effect they're trying to achieve. You know, I've been moving between industry and charity for like 30 years, and I know people comment like, "You're the person who wants to understand everything."

I think there are some people who are maybe quite satisfied to take a ticket off the general board or whatever and work on that thing, and if the API specs are filled in, that's it. Go home, and they won't lose sleep over it. But I think if you're going to be dealing with these changes, and be sympathetic to the changes coming down the path, you need to have some degree of empathy with the people who are driving the changes. Do you know what I mean?

CHARLES: Oh, absolutely yeah.

SAM: You know, as we've mentioned already, I think people get attached to what they build and I don't think you can do anything about that. They will get attached to what they build.

CHARLES: Yeah, we've all experienced it. It's so true. It's impossible to [inaudible].

SAM: Yeah. We can try to temper it, but it's sort of part of our nature. I think there are tools like the design sprint, the design jam, these sorts of things where you can build something to some level that obviously can't be shipped, but that can actually tease out the needs of the user. In the design sprint, the Google team has an example with this robot for hotels, where the whole idea is that they can have robots that deliver toothpaste to your door if you run out of toothpaste. Obviously, it would be a huge undertaking to build that as a production system, but they used a remote-controlled robot to simulate all of the key touch points where the client of that hotel would actually interact with that robot.

I think the same is true when you're building these systems. To touch back on throwing away code: an even better thing is to not build the code in the first place, and to build an almost non-code system. Maybe some of the backend folks are not going to be interested in the results of what people have learned through this kind of user experience trial before they [inaudible] forever, but I think it's seeing other people try to use the end system that puts you in a place where you can be empathetic.

I argue for the biased point of view that people on the backend of Netflix, optimizing the streams or whatever, need to spend some time watching Netflix, or at least watching somebody use Netflix, in order to empathize with how their work relates to the end experience. And then, when those end experience needs change, it's important for them to make changes on the backend. Do you know what I mean?

CHARLES: Right, exactly. They have to watch through that kind of one-way glass window where they can't actually help the person. They can't give them any information. The only levers of control they have are through their own work: making the changes that they need to make on the backend so that one of those users is going to have a good user experience.

This idea reminds me of something which I believe is the case at Heroku, where they rotate everybody in the company through pager duty, which I think is kind of a brilliant idea, where the entire team is responsible for providing technical support to the end users. When a problem arises, you get to understand what it's like to be someone who's trying to use the system. You get exposed to the satisfaction when an issue is resolved, or when it's working correctly and your pager is quiet for your entire shift, but you also get exposed to the frustration that you're engendering in the users if something doesn't go quite according to plan.

SAM: Yeah. I'm not [inaudible].

TARAS: There is movement in this direction, because there is actually a term that has been floating around: product engineer. It's someone who thinks about the product. These kinds of people tend to have the highest value in Silicon Valley, someone who thinks about the product as the outcome as opposed to their code as the outcome. Even in the Agile space, there seems to be movement in that direction. One of the challenges is that to do that, you have to understand how you fit into the bigger picture. I could see it being really difficult. Imagine if you're working at a bank or something; being on pager duty at a bank would be impossible, simply because the organization is either too big or too sensitive.

The way that companies are dealing with that is they have these groups of people where you have a product manager working closely with a frontend engineer and a backend engineer, so there's this exchange happening where people get to understand the consequences of their actions a little bit more. They understand how things fit together. There is definitely movement happening in that direction, and I think the more, the better, because one of the big differences now is that the things we make are very palpable to the users, and the quality of the user experience that users are now expecting is much higher than it was in the past. I think the world is changing.

CHARLES: Yeah. We mentioned this when we were talking before the show, but going to the University of Michigan as I did, I was in school with a bunch of mechanical engineers. Their goal was to go work for auto companies: Chevrolet and Ford and everything like that, but their mindset was very much, I think, the product engineering mindset. All of them loved cars and wanted to be part of building the coolest, most comfortable, most responsive cars. For whatever definition of a good car -- there are a lot of them -- they wanted to design good cars. They were into cars and they were into the experience of driving a car. So what's the equivalent of that product engineer in software?

In every conversation that I had with the mechanical engineers who were going into the auto industry, it wasn't about carburetors or whatever -- I'm sure there were some out there who were like that. They really wanted to just get into the building of cars. In some aspects, I'm sure they all ended up specializing in some way or another, but it was a group that was very focused on that outcome.

SAM: In the UK, I'm hearing this term 'service designer' coming up a lot. Are you saying, Charles, that you get the feeling that people getting into software aren't so excited about using the thing itself? Like with the car: I want this amazing car, and then I can drive the car and I'll experience the car?

CHARLES: Right and other people will experience the car that I've helped design.

SAM: Right, and in some way, in software, it's almost like the software itself has this kind of mathematical beauty of its own, and it's almost always been more important that the other software heads around me like the software that I wrote than that the user does. If I present it at a conference and all the software guys love it, that's the key objective, isn't it? Yeah.

I work with a lot of learning developers who are rightly focused on wanting to improve their skills and wanting to level up in the particular tech stacks that will lead to jobs in the future and financial stability and so on, but I kind of wish I could inculcate in them this desire for the user experience as a higher goal than which bit of code does this or the other.

Maybe I'm being unfair to them when I say that. Coding is hard. There are a lot of strange concepts to grasp, and they're grappling with those concepts -- how does this work, or why does this work, or should I use this function, or should I use this method, or what have you. I don't know. I think I [inaudible] hit a wall when I'm saying, "Let's think about it through the user's perspective, what does the user need here." It feels like they want the safer answer of what's the correct solution here, how should this be refactored, because you need a skinny controller or a fat model or whatever it happens to be.

CHARLES: Right, like what's going to make my life easier, in the sense that I'm going to be maintaining this code, and it's important. It's very important. You want to be happy in your job, you want to be happy in your work, and so you want to have the skinny controller or the fat model because it's going to lower the risk of pain in your future, supposedly. I definitely understand, but that can't be the only incentive, I think, is what I'm hearing.

The thought just occurred to me: I actually don't know that much about game development. I know the game industry certainly has its problems and I don't know much about the development culture inside the game industry. I do wonder what it's like, because it does seem to be somewhat analogous to something like the car industry, where if you're a developer in the game industry, you're probably extremely focused on the user experience. You just have to be. It's so incredibly saturated and competitive for eyeballs and thumbs on controllers. Like I said, I don't really know anything about the game industry when it comes to development, but I'm wondering if there's an analogy there.

SAM: Yeah. That sounds right to me, inasmuch as my cousin works in the game industry and I've sort of taught game programming and worked through a lot of it. The game industry has got this sort of user testing experience baked in, and it's like repeat, repeat, repeat, repeat. There's this constant cycle of doing that, while the somewhat wider software industry only, to a certain extent, pays lip service to that.

CHARLES: The game industry, they live and die by that. The code lives and dies by how it actually plays with real users.

SAM: Yes, and it feels like in the general world, there are a lot more user interfaces that just sort of struggle on because, whatever the market dynamics, people need to use these existing vested interests: these banks, these supermarkets and what have you. I guess the market is too big, almost. It's not competitive enough. But do we really want our industry to be so cutthroat? I don't know.

CHARLES: Yeah. We've kind of seen examples from a couple of industries, the car industry and the game industry, which are kind of adjacent to where most software development happens, but they have this concept of exhaustive user testing, kind of the pager duty, if you will, where you get to experience that. So how do we, on our team -- and when I say our team, I mean anyone who happens to be listening -- what's the equivalent of pager duty for the applications that we write? How can we plug ourselves into that cycle of user-ship so we can actually experience it in a real and repeatable way?

SAM: In the work that we're doing with the NHS, they have a similar program. There are a lot of people who are working purely desk jobs, but they do have a framework for us to go and observe in the hospitals and emergency rooms and so on. As for how we can achieve that outside of those clients who have those frameworks in place, it seems like maybe we need a version of the cycle where a different person gets to be the product owner each week and tries to represent what the users are experiencing.

I think almost, though, it seems like we need more of that thing you mentioned, Charles, the kind of being behind the one-way silvered mirror, as some sort of framework that closes the loop between what the individual developers are doing and their experience of how the users are seeing things. I think that's going to be difficult to introduce. Just to back up a little bit: I see this idea of Agile which says, let's be using the software, which is the thing that's changing and evolving, as if you're already deployed and in maintenance mode from the beginning, and you're getting that thing out there, so you have your one-week or two-week or three-week cycles where you keep on having touch points. And I think that's better than touching base once every two years.

As malleable as modern software potentially is, it's too sticky. It's like you were saying before, Charles, with the deploying or whatever -- these are all [inaudible] -- and so, you kind of need a really good mechanism for mocking out interfaces and having users experience them. There are a couple of tools, InVision and some other sophisticated systems in the UX space, where you can put together a simulation of your interface relatively cheaply, and you can run these things and then have video capture of the users interacting with the system.

I think the difficult part that we have in the software industry is that there are not enough people wanting to be software developers. With the cars and the games, I guess, so many people want to be game designers that the industry can kind of set the terms, but we're in the inverse situation with software developers. Software developers are so much in demand, they can kind of say, "Don't put too much pressure on me. I'll go off to a different company."

You know, we kind of have to herd the cats, as they say, allowing these high-powered software developers to go in the direction they want to go in, and so it may be difficult to impose something like that, where you're getting software developers to really experience the end user's pain.

I guess what one has to do is somehow create a narrative to sell the excitement of the end user experience, the beautiful end user experience, to software developers, such that they fully buy in themselves and say, "Yes, I want to see how the users are using my software." Do you know what I mean?

CHARLES: Yeah.

TARAS: Yeah. Because a lot of companies have this process where there's a product manager, there's a designer, they'll do the design through mockups and they'll just hand it over to developers to build. There could be some backend, some frontend. I think if you're doing that, there is no emotional engagement between the creators of the actual implementation and the people who are going to be using it.

One way to address that, however you do it, is to introduce more of the personal experience of the users to go along with the actual mockups, so that people understand there's actually going to be a person on the other end who is going to experience this, and to create some kind of emotional engagement between these two ends. How you do that, I think, is a big question, but if the organization is simply throwing designs over to development, that's where part of the problem is and trying to --

CHARLES: Well, actually, that's a fantastic point, because we've talked about emotional engagement and attachment to the software being a liability, but the reason it exists and the reason it's intrinsic to the way we do things is that that's how things get built. We get emotionally attached to it. It's the impetus. It's the driving force. It's literally the emotive force that causes us to go forth and do a thing. It's why we're going to do a good job of it.

Like you said, Taras, if you're just kind of throwing your tickets over the wall, there is no emotional engagement, and so people will look for it wherever they can find it. In the absence of an emotional engagement, they will create their own, which is good quality software, 'clean' code, because people need to find that meaning and purpose in their work. Maybe the answer is trying to really help them connect that emotional experience back to the purpose of the user experience, so that there is no vacuum to fill with kind of synthetic purpose, if you will.

SAM: That's a great point. At AgileVentures, the charity that I run, when we get things right, that's the thing that happens. We go for transparency and open source. We have these regular cycles where the charity client is using the software in a Hangout with us, with the developers who work on these things, and the developers, whether they're in the Hangout live or watching the video a week later, see the charity end user struggle with the feature that volunteer developers have been working on, and they make that attachment. It's more than just 'I want to learn and level up in this thing.' It's 'I want to make this feature work for this end user,' and we're very lucky to have these charities who allow us that level of transparency.

The difficulty, as I would often see it in our paid projects, is that I would love to be recording the key stakeholders using the system, but for whatever political reason, you can't always get that. I think that process of actually connecting the developer with the person who's using it, and doing that reliably so that they can have that empathy and get that emotional connection, is just tricky in the real world.

CHARLES: It's very hard. One thing that we've deployed on past projects, which I think has worked fairly well, is putting a moratorium on product owners writing stories or writing tickets, and actually having the developers collaborate with the product owners to write the tickets. In other words, instead of catching a ticket that's thrown over the wall, really making sure that they understand the context under which this thing is being developed, and then, almost in a code review sense, having the product owner say, "Actually, the reason we're doing it is this," in a review process where the developer is actually creating the story with support from the product owner.

Because ultimately, if they have that context, then coming up with the implementation is going to be much easier. It really is about facilitating that upfront learning before doing the actual work. A lot of organizations are resistant to that because the product owners really want to say, "No, I want it to go this way and I want to just hand this to a developer and I want them to do it."

It's kind of wrestling away control and saying, "You've got veto power over this. You're the editor, but we really want to make sure that you're on the same page with the developer." In cases where we have been able to have product owners make that shift, where the developers are the primary authors of the story and the product owners are more the reviewers of the story, if you will, implementation just seems to go so much more smoothly.

The questions and the key points come out at the front of the process, and by the time you start actually working on the thing, the manager or the product owner has the confidence that the developer understands what is involved in making it work.

SAM: Getting that done is, as one might say, tricky to do. I think in the ideal world, you'd have more of the developers involved in the design sprints or the design jams, but the logistics are that you can't really afford to have all those developers in all of those meetings when they could be coding away. There's a happy medium there. There are various organizations that do kick-offs where the stories are, if not co-designed, then at least subject to cooperative voting on their complexity, making sure that folks understand the stories they're working on. And different organizations are going to have different constraints on how much time they feel different people can be allowed to spend on different sorts of activities. It's sort of tricky, isn't it?

CHARLES: It is definitely tricky, because people's time is the most expensive resource that you have, so you want to be smart about how you allocate it.

SAM: Which comes back to that issue again of repurposing people. If you've got that feature that you've been on for 10 days, can you say to that person, "Can I switch you over onto this other thing for four days?" Maybe logically, that would be a better outcome for the sprint, but maybe emotionally, that person is so attached to the work that they won't want to leave that front.

The biggest trouble for me over the last 30 years, I think, is that I assumed the logical stuff was paramount. Things like the skinny controller and the fat model: we'd agreed that that was correct, and so that's what I should push for, that's what I should fight for, because it's the truth or whatever. Maybe I'll swing back around on this in another 30 years, but actually, you have to choose your battles, because if the level of emotional pressure on people gets too high, the whole thing just explodes and nothing gets done. You know what I mean?

TARAS: Sam, I have experienced something very similar and I think about this all the time: how many imperfect things are perfectly acceptable to people. It's a different lens for thinking about technology, because the more skilled you are, quite often, the more pedantic you become about how you approach things. But in practice, the more things you see in the world, the more you realize how really imperfect they are and how, in many ways, they still fit the world very well, and people use them. Software is being used by many people in its imperfect state, so there's this gap between the perfect implementation and the role an imperfect implementation plays in the world. That duality is really interesting.

CHARLES: Yeah. There's a fantastic paper written by, I think it was Richard Gabriel, back in the 90s, where he talks about this exact tension. He calls it the MIT school of software and the Berkeley school of software, and I think Gabriel came from the MIT school, but basically, the whole thing was saying the Berkeley school is actually right, and the name of the essay is 'Worse is Better.' It's a really interesting essay, talking about how working things that are in people's hands will beat the best design every single time, because those are the systems that get used and improved.

There's a lot more to it than that. He talks about this exact fundamental tension, and obviously, no system exists at either one of those perfect poles, the MIT school or the Berkeley school. I think what he was trying to point out is that exact fundamental tension that we're talking about: the quest for the 'correct solution' versus the quest for a solution. I actually have to go and re-read it. I remember it being an intriguing paper.

SAM: I guess the thing that makes me think of is related to a bit of philosophy I read recently about attachment to outcomes. I'm going to segue back, but we've got this discussion going in one of our teams about microservices versus monoliths, and I've been reading this book about microservices, which is actually pretty radical in some of its suggestions. They've been talking about it in the Greater Than Code Slack. It's sort of saying that actually, if you keep your microservices small enough, then a lot of the things like code quality become kind of irrelevant. It almost argues that a lot of the ceremonies we like about Agile are only necessary because we're trying to feed these monoliths.

I'm not saying it is true. I'm just saying it's a pretty radical position that it's taking, and it certainly captures the minds of some people in our organization. I've been trying to make some simple changes just to smooth off some of the edges of the platform that we're working with, and I got this response of, "No, we can't just make those changes. We need to move to microservices. They're great and it will be perfect and they will allow expansion," and so on, and my immediate reaction is sort of, "I'm glad we don't have to build microservices because [inaudible]." It's all about attachment to outcomes: am I attached to trying to automate part of my week because I think it will have some positive effect in the future? No one's got a crystal ball. That's only a guess on my part.

If I've got some folks who are excited about doing this thing with microservices, maybe I should empower them in that, so we've started on some microservices and we're playing with this set of frameworks and so on. It's interesting stuff as I work my way through the book. At the same time, while we've been doing that, I've delegated it to someone else to sort out, and I've actually addressed the problem we would have needed the microservices for through a different mechanism. But still, the microservices thing rolls on and I think, "Well, is that all a complete waste?" But then maybe, two years down the line, it will turn out to be a beautiful thing that enables things.

The further I go on, the more I say, "I just don't know," and actually, if I can detach myself from caring too much about the outcomes one way or the other, I think I'll last longer and enjoy myself more, but it's a tricky thing when you've got to pay the rent and pay the bills and so on. I don't see any resolution to that. I love people being able to use things and get stuff done. I'm so excited about that, and I guess I will keep pushing all of the developers that I'm with towards having better empathy for and understanding of their end users, because I think software is more fun when you're connected to the people who are using it.

CHARLES: Absolutely. Microservices remind me of the experience that we've had with the microstates library, because we've identified this one very small slice of a problem where a 'refactor' is basically a ground-up rewrite. If you've got a library that is a couple of hundred lines of code but presents a uniform API, then you can rewrite it internally. You can refactor it by doing a ground-up rewrite, even though the cardinal rule, the cardinal sin, is that you're never supposed to do a rewrite.
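To make that concrete, here is a minimal sketch of the idea, with a hypothetical API rather than microstates' actual one: a library small enough that its internals can be rewritten wholesale, as long as the public contract stays fixed.

```js
// Hypothetical sketch (not microstates' real API): a tiny library that
// exposes one uniform API so its internals can be rewritten from scratch
// without touching any calling code.
export function createCounter(initial = 0) {
  // v1 internals: a closure over a plain number. A later "ground-up
  // rewrite" could swap this for an immutable state tree, as long as
  // increment() and current() keep the same contract.
  let value = initial;
  return {
    increment: () => createCounter(value + 1), // returns a new container
    current: () => value,
  };
}

// Usage stays identical across internal rewrites:
const c = createCounter().increment().increment();
console.log(c.current()); // 2
```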

SAM: Yes, and that's what they're saying about microservices. It's like you should rewrite everything all the time, basically.

CHARLES: Exactly. You should always be rewriting it.

SAM: Yeah. You should be able to throw it away because it's a microservice and it can be rewritten in a week, and if you're throwing one away, it's only a week's worth of work and you can keep on moving. It is an amazing idea.

CHARLES: I have to read that book. Anyhow, we're going to have you on again to explore --

SAM: Well, let's do another one on microservices. This was great fun. It's definitely time to wrap up, but I really appreciate you having me on the show and being able to discuss all these topics.

CHARLES: Fantastic and I can't wait to talk about microservices. This might be the impetus that I need to finally actually go learn about microservices in depth.

SAM: I'll recommend Richard Rodger's book, 'The Tao of Microservices.'

CHARLES: All right. Fantastic. Anything we should mention, any upcoming engagements or podcast that you're going to be on?

SAM: I'm always on the lookout for charities, developers, and people who want to help. You can find us at AgileVentures.org. We're busy trying to help great causes and trying to help people learn about software development, get involved in Agile, and experience real software development in action, where you ideally interact with the end charity users and see how they're benefitting from the product. We'd love any support and help. You can get involved if you want to give a little bit to open source and open development. We go for transparency. We have mob programming sessions and scrums and meetings online/in-house every day, so just come and check out AgileVentures.org and maybe see you [inaudible].

CHARLES: Yeah and if they wanted to say, reach out to you over email or Twitter, how would they get in touch?

SAM: It's Sam@AgileVentures.org and I am at @tansakuu on Twitter. Hit me on Twitter or just at Sam@AgileVentures.org.

CHARLES: All right. Well, fantastic. Thank you, Sam. Also, if you need any help with your frontend platform, you know where to get in touch with us. We're at @TheFrontside on Twitter or Info@Frontside.io. Thank you, Taras. Thank you, Sam, and we will see everybody next time.

Thank you for listening. If you or someone you know has something to say about building user interfaces that simply must be heard, please get in touch with us. We can be found on Twitter at @TheFrontside or over just plain old email at Contact@Frontside.io. Thanks and see you next time.

Mar 29 2019

49mins


Team Collaboration with Jacob Stoebel


Jacob joins the panelists to talk about team collaboration based on his RubyConf 2017 talk, Code Reviews: Honesty, Kindness, Inspiration: Pick Three.

Jacob Stoebel is a software developer living in Berea, KY. He spends his days writing web applications in Ruby, JavaScript, and Python, working with data, and leveling up as a software engineer. He works and studies at Berea College. You can find out more about Jacob at jstoebel.com.

Please join us in these conversations! If you or someone you know would be a perfect guest, please get in touch with us at contact@frontside.io. Our goal is to get people thinking on the platform level which includes tooling, internalization, state management, routing, upgrade, and the data layer.

This show was produced by Mandy Moore, aka @therubyrep of DevReps, LLC.

TRANSCRIPT:

CHARLES: Hello and welcome to The Frontside Podcast, the place where we talk about user interfaces and everything that you need to know to build it right. My name is Charles Lowell, a developer at the Frontside. With me today from Frontside also, is Taras Mankovski.

TARAS: Hello, hello.

CHARLES: Hello, Taras. Today, we're going to be talking, like we do every time, about a piece of the platform that you use to develop user interfaces at your company or organization, or wherever it is that you build software. Today we're going to be talking about a piece of the platform that's very, very critical, that often gets short shrift or is excluded entirely from what people think of when they think about their tech stack, and that's how we as teams collaborate to build, maintain, and produce the best quality software that we can.

With us today is Jacob Stoebel. Welcome, Jacob.

JACOB: Hello.

CHARLES: Now, what is it that you do, in your day-to-day?

JACOB: I'm a full-stack developer for a little company called ePublishing and I mostly work in Rails and React.

CHARLES: Rails and React. And so, when we were searching for people to talk about how we collaborate as teams, Mandy suggested you because of a talk that you gave at RubyConf, specifically about code reviews, which I think are actually a huge piece of the collaboration process, because they're a major forum where team members get to interact with each other, and they're the gateway for making sure code quality is maintained. But more than that, I think they're a place where we learn. I've learned so much both as a reviewer and as someone submitting my work, so it's actually a very important part of the software development process. You have a lot of great examples of how to not do code reviews.

JACOB: Yeah, I think I may have been a little bit too indulgent in that talk. I had a lot of fun. I did some research, mainly from anecdotes, from talking to people about all the anti-patterns that come out of code reviews. It seems like every few weeks, I'll see a tweet that says something along the lines of how code reviews are broken.

I don't really know about that and I have to say, I think I'm kind of lucky at my job that I think they're done in a way that really leaves me feeling pretty positive and that's certainly a good thing but I think what it comes down to -- I'm going to sort of talk about where these ideas come from in a minute -- is that we often have code reviews that for one -- and you can tell me how this is for you too -- often the code review is happening at a point so late in the process, where the feedback that you get may not be actionable. Have you experienced that?

CHARLES: Before I answer that question, just to echo the sentiment, and maybe I'm being presumptuous, I feel like the code reviews that we do are actually very positive, so I haven't had to experience that firsthand. Although I have seen conversations on GitHub that look kind of like a celebrity chef show, where you have someone doing the code review like Gordon Ramsay, up there just screaming at the person who put this plate of food in front of them and picking it apart. That one is extreme, but this is actually something that I struggle with, what you were talking about: what is the appropriate point at which to get feedback?

I agree that you want to get feedback as soon as possible, because sometimes, when you've invested weeks and weeks into something, you're at Mile 100 and they're like, "You know, at Mile 2, you were supposed to turn right," and now you're off in the forest and you've been trekking 98 miles in the wrong direction.

JACOB: Yeah, and the work is due, right? This needs to get shipped tomorrow.

CHARLES: Right, so you've got massive pressure. This is something I struggle with myself: when is an appropriate time to really try and be public about what it is that you're doing?

JACOB: Yeah. I think that is a really good question, and what you're getting at, and I would agree, is that the sooner, the better: tightening the intervals between feedback is probably better.

I'll just take a step back and take a longer route to get to my point. I am a career changer, and before I was in this career, I was a high school theater teacher, so it was really different, and I won't go into why I changed other than this is the greatest [inaudible]. But one thing that I really struggled with is that I was working with teenagers and I really wanted to see them grow and improve, but at the same time, these were kids with fragile egos and I didn't want to tear them down. So I came across this really interesting framework for feedback. It's called Liz Lerman's Critical Response Process, and I give credit to her in the talk.

This comes out of the dance world. Liz Lerman is a pretty accomplished dancer and choreographer, and what she found in the dance world, and I think it's not too dissimilar from our industry, is that feedback was often given in a way that left people feeling really torn down and mostly not inspired to go back to their work and make it better. It really felt like feedback was about giving you a grade: "I'm going to tell you how good of a job you did," and there's certainly a time and place for that. But the inspiring question, I guess the rhetorical question, that she posed is: shouldn't the big point of feedback be that it makes you so excited about the work you're doing that you just can't wait to go back to your keyboard and keep working on it? So, back in the dance studio, she structured this framework for how people can give feedback to a creator, and that could be a creator of anything.

You mentioned cooking. This could be about food as well. It sets some guardrails for how we can give feedback that is useful, inspiring, and kind. I'm going to really distinguish between kind and nice. Nice would be, "I'm only going to say things that are pleasant about your work," but what I mean by kind is feedback that is really taking care of your team and making them feel like they are respected and cared for as human beings, so they don't go home every Friday afternoon and cry on their couch and dread coming back on Monday. That's the basic idea of why we need some kind of framework for giving feedback.

CHARLES: I agree totally. It's almost like you need to conceive of your feedback not as a gate to quality but as a gift to embolden somebody. They've just been doing battle with this code, with this problem, they've been grappling with it, and they need someone to wipe their brow and maybe give them a stiff drink, so that they can get back into the ring invigorated.

JACOB: Yeah.

TARAS: What I'm hearing in this conversation so far is about the tone or way of communicating to the person receiving the feedback, but sometimes, no matter what your tone is, depending on how your team is set up and the context of your actual code review, it can still land in the wrong place, right? Have you experienced that, where the team conditions impact your ability to actually provide feedback?

JACOB: I think I know what you mean. I think what you're talking about is that the organizational structures that are set up lend themselves to certain modes of feedback and discourage others. I'll give this example that I've heard from numerous people, and I think this is what you're getting at: feedback that's given at the end. Just like I said, feedback that is given the day before the work has to be shipped. Or if we work in a hair-on-fire environment where everything was due yesterday, that's not an environment that's going to be conducive to slowing down, taking a step back and saying, "Let's point out what we think this work is doing. What questions do we have about it? Who has opinions about the direction being taken on it?" and really zooming out a little bit more.

CHARLES: Do you have any concrete examples of how that early stage feedback has taken place at your work? Is it just over Slack? Maybe people need to step back from the idea of working in one-week or two-week iterations where, at the end of the iteration, everything is due and everything gets merged at that point. How do you break up that structure to say, "No, we're going to try and have checkpoints and milestones"? What are the deliverables that you can decompose the work into that fit inside the framework, that sit inside the big deliverable, which is ostensibly the feature that you're working on?

JACOB: I think first of all, it's the way this thing is carried out. I don't think this framework works over Slack. I think it has to be immediate. I think it has to be either in person or on Skype or Hangouts or something.

One of the points of this type of feedback is really about checking in with each other as a team and I think what's great about Slack is that it lets you leave a message and walk away and the unfortunate thing about Slack is it's not meant for sort of attaching how people feel about things or what people's reactions were to them. I think you all have to be in the same space, hopefully with your camera on, if possible and really, just sort of checking in with each other as a team.

I can point to times with my team where I think that's really worked out. I think you made a good point when it comes to getting started with a project, because there are times where I made the mistake of not trying to get feedback from my team early on when I was starting a project and, as you can imagine, that really cost me. It's about getting feedback from my team and my supervisor on the direction this ought to go. I'll say from experience, I work on legacy codebases, and as you can probably imagine, it's easy to paint yourself into a corner. The thing about legacy code is that you don't know what pitfalls you're walking into. You can get started and then find, partway into the process, that there is some reality about this code that you didn't know, and it really gets in the way of your work. Had you known about it, it would have saved you a lot of trouble because you could have planned your way around it.

The fortunate thing, then, is drawing on the wisdom of people who know about it because they've struggled with it already, if they've been around longer than I have. I think that's a really good point and a good thing to do as you're getting started: you can share, "This is just my initial idea of how I'm going to go about this. Here's my tech stack, my understanding of the problem space, all of the above," and really check your assumptions and see if they hold up.

CHARLES: I want to circle back a little bit to something you mentioned before, and that is that you want to be kind in a code review, and I would say you probably want to be kind anytime you're giving feedback or interacting with your teammates. But what's the process that you can go through if you find yourself struggling? How can I deliver this message that I want to deliver and have it come across as kind? What's a process or checklist that I can follow so that I don't open my mouth and say something that I can't take back or that is going to land wrong?

JACOB: I'm glad you asked that and I think that this is a great way to introduce the guidelines for the critical response process. There's basically four steps to it and the way it's structured is you have the person who has created the thing. It could be the person or the team that has created the feature. You have other people who are responders to it. They could be people within your team or they could be others in the organization who are invested in your success. They're not here to tear you down or give you a grade. They're here because they want to see the project do well and then, there's going to be someone who is a facilitator.

The way it works, and I think this directly addresses your question, is that there are guardrails that help you not steer into territory that will leave someone feeling torn down. The first step of the process is that all the responders make statements of meaning about the work. What I mean by that is you name things that stand out to you, without attaching an opinion. You could say, for example, "This project is using React hooks. That's something I noticed." You're not going to say, "I think that's a good idea or a bad idea," but you're going to say, "This is what I'm noticing," and that gets jotted down because it's going to be fodder for discussion later on, like, "Let's talk about this new feature in React later. We can talk about whether we think it's a good thing or not, or what the implications are."

The next step after that is that the person or team who made the work gets a chance to ask questions of everybody else about what they thought. You've probably noticed that in a lot of pull requests, someone puts their work out there and then the next thing that happens is everybody starts commenting on the work and saying what they think about it. This flips it. First, you as the creator start asking questions. You can say, "This thing over here, I think it's maybe a little wonky or a little hacky, but I couldn't think of a better way to do it. What do you think? Do you think it's right? Do you think it's a bad idea?" Or, "This bit over here, I'm pulling in a third party library. I think maybe it's worth it, but what do you think? Is it not worth it?"

CHARLES: I like that, because ultimately, the people who have created the thing are the most familiar with the problem space. They've spent a lot of time thinking about it, so they can actually direct the conversation to the parts of the implementation that really are the most iffy and the most unknown. Everybody is going to feel this need to comment, but it's the classic case of bikeshedding: people will comment up to their greatest point of familiarity with the problem, which could actually be not that great, while the true experts are the people who've just spent all this time implementing. So being able to say, "Can you really focus your mental energy on this?" I like that a lot.

JACOB: Yeah, and really, it sets a good tone. The assumption behind all of this is that the creator, like you said, knows the work best and really ought to be in the driver's seat when it comes to feedback. It really sets the tone there and says, "The creator knows what's most important. Let's give them the first opportunity to frame that." I know plenty of times, if I'm asked to review a PR and I'm not given any kind of prompt or direction, I will probably scroll through the files. Maybe I'll find the file in that PR that I'm actually familiar with, or I'll look for some pattern where I can say, "I understand what's going on here. I can give feedback," and if it's all that other stuff over there that I'm completely confused by, I'm going to ignore it.

But if the creator illuminates me a little bit, then I understand a little bit more, and then I'm in a position to comment on that thing I otherwise wouldn't have understood. That's the second step. The third step is that the responders now get a chance to ask their questions, but the important thing is that these are neutral questions. The assumption here is that the responders need to better understand the context in which this code was written in order to be able to give their opinion.

Here's an example coming from the Rails world. I could say, "Tell me your thought process for using Factory Bot." If anyone doesn't know, that's somewhat of a contentious choice in the Rails community. Rather than just coming out guns blazing and saying, "I think this was a bad idea," it's, "Tell me your process, because I want to understand the context you came from, and then together we can evaluate whether coming from that context makes sense or not." So, neutral questions. They have to be neutral questions.

By the way, and we've probably all heard this before, this is not a neutral question: "What were you thinking when you decided to do blah-blah-blah-blah-blah?" Everyone knows what that means.

CHARLES: Right. I was going to say, you can be very, very passive-aggressive with questions.

JACOB: Yeah, exactly, and this is a place where having a facilitator can really help -- a facilitator who is not one of the creators. Someone who can just say, "Let's back up. Can you rephrase that without the opinion embedded? Can you just say that again?" Again, those are some of the guardrails that keep us on track.

After the responders have asked their questions and hopefully everyone has a better understanding of the context in which the code was written, then it's time for opinions, with the consent of the creators. The way it works is the responders can say, "I have an opinion about using React hooks in this codebase. Would you like to hear it?" The creators can say, "Yes, please. Let me know," or they can say, "No, thank you," because the creators, having the best knowledge of the context, might know that that feedback is not useful. Maybe they have a really good reason for using React hooks, or maybe they know what's coming in the future, or maybe they know there's no time to fix it now or it's not worth fixing now. They know the tradeoffs best, and so again, those are guardrails, and it puts the creators in the position of saying, "That's actually not useful feedback right now. Let's just not discuss it." They can say, "Yes, please tell me," or, "No, thank you. Let's not talk about that."

CHARLES: In the context of a pull request -- the process you're describing could be played out in any number of media, but in the context of a pull request, where the creator is the person actually submitting the code -- how do you handle the issue of who pushes the merge button if there are still some opinions that haven't been voiced? For example, if a creator says, "No, I don't think that's helpful feedback," is the assumption then that the creator can go ahead and just push the merge button? Or is saying, "I don't want to hear your opinion," relatively rare?

JACOB: That is a great question. I think I tried to get at this in the talk. For one thing, I, probably like most people, don't have the option to just say to my manager, "No, I don't want to hear your opinion." I get that. There is a contrast between the pure version of the feedback process and reality, and they have to be balanced. But teams can work out the way they give feedback.

I have an example, an anecdote that someone shared with me once when I was doing research for this talk. There was an agreement that the manager wouldn't give feedback on Friday afternoons. They just wouldn't. Everyone preferred that. Everyone wanted to say, on Friday afternoons, "I'm really just going to focus on winding down for the week, and I don't want a bunch of feedback dumped on me." The point being, even though you can't just blanket-ignore feedback, you can work out circumstances with your team for the best way feedback can be given, and circumstances under which it can be politely declined.

CHARLES: I see.

TARAS: I'm curious about a different part of this, because a lot of this is about how to give feedback, but I'm really curious about the why. I think many of us take it for granted that good code reviews are very valuable, yet a lot of the teams I've encountered are taking on really big challenges without a code review process. So one of the things I'm curious about is: for teams that don't have one in place, what symptoms could they observe in their daily operations that would suggest that code review is something they need?

CHARLES: Taras, you're talking about the places we've seen where the culture is: just push everything to a branch, there's a thousand commits there, open up a pull request with no description, no name. It's like, "Here's this thing. I'm taking comments for the next three hours and if everything goes well, let's just merge." Are you talking about those kinds of situations, where a culture of pull requests, or just feedback around change, is very, very nascent?

TARAS: Yeah, a lot of times, it's the 'ship it' culture, where this is seen as getting in the way of shipping it.

CHARLES: So you're asking how you sell the entire idea?

TARAS: Yeah. If someone is listening who is noticing that -- a lot of people would know, "We're not really doing code reviews" -- what are the symptoms they could be observing that would say, "Really, this is something we need to change. This can't continue"?

JACOB: Yeah. One thing that occurs to me is if there's a high level of surprise when pull requests are read. It's like, "Oh, you went in that direction." I think that could be an indication that maybe we could have checked in at the halfway point or even sooner, because there seem to be differences of perspective on where this is going or where it should end up, that's why [inaudible].

TARAS: At what point do you think people would see this? Is this something that would happen kind of way down the road? Like actually, "How did this end up in the codebase?"

CHARLES: That's surprise, except deferred even further, right?

JACOB: Yeah, "Who wrote this?" And then having that kind of feedback process: let's take a look at where this project has come in the last few months and see if we can learn from what went well and what didn't.

CHARLES: This is related to the concept of surprise. If you see a proliferation of many different patterns to accomplish the same thing, that means the communication is not there. People are not learning from each other and not creating their own code culture together. If that's missing, it manifests itself in all kinds of ways: in bugs, in a weird development setup where it takes me two hours to get my local environment running, and things like that. If you're seeing those things, chances are you need some way to come together, and a code review culture is really, really good for that.

JACOB: Yeah, and I think in the ideal code review culture, everyone that's brought on board is saying, "I am going to share responsibility for this patch doing what it's supposed to do and not breaking anything or burning down the world." You mentioned the 'ship it' culture; I can imagine toxic cultures where, if it broke everything, it's on the person who shipped it to fix it and they have to get woken up or whatever. The idea about feedback is that now we're forming a community around what was done, so the person who pushed the merge button isn't the only person involved. If we have a culture where we say everyone who participated in the PR collectively shares the consequences -- everyone who's on this thread, it's on all of us to fix it if something goes wrong -- I think that is certainly something one would hope to see.

TARAS: I'm curious where these kinds of observations usually come from, because I can imagine a developer coming onto a team and asking, "Why aren't we doing code reviews?" and hearing, "Well, we have just never done code reviews." But it could also be someone like a product manager saying, "Why aren't we doing code reviews? We should start." Have you seen ways of introducing these ideas to teams that have worked out well?

JACOB: Yeah, and I should probably say that I'm not a manager and I'm certainly not an expert in how this works, so I don't have a really great example personally. I can share an example from working on a previous team, where everyone secretly wanted to be more collaborative but didn't know how to do it, because if I'm the first person to put myself out there and no one else knows how to collaborate with me, how to reciprocate, then what was the point? The top 5% of teams that just have really great energy together don't need a framework to do this sort of thing. They're just doing it naturally.

I think for everybody else, we need some kind of guidelines to do this because, for better or worse, this is the industry that we're in right now. It isn't built around us. We don't know how to function this way, and it's no one's fault. It's just the way we've all learned to work in this industry, and I suspect we're not the only industry like this, but we need some kind of guidelines to do it.

TARAS: Maybe it's a difficult question to answer. It varies probably from team-to-team.

JACOB: Yeah.

CHARLES: Yeah, and maybe we can shift the question a little bit, because I have one that sits alongside it: not quite the same question, but related, and maybe we can find the answer there. We've talked a little bit, and there's definitely more to unpack there, about how to give feedback that's kind and honest and I think... What was the third?

JACOB: Inspiring.

CHARLES: Inspiring, that's right. What about from the flip side? This is something that comes up for us as consultants, but I think it's something that other people encounter in their jobs too: what do you do when you are the creator and you're trying to present, share, and ultimately solicit feedback from a stakeholder -- the CEO of your company, one of your clients, one of your customers -- and you see them engaging in a mode of feedback that's less than constructive? They're nitpicking on things that aren't really important, and this could be completely inadvertent. Their intention could be to help you out, but it's really not helping at all. It's either tearing you down or just not being productive, and it brings an air of contention to the conversation that's not helpful to producing the best result for everybody involved. How do you, as the creator, engage with those people in a positive way and help establish the guardrails, and be that agent of change, when those guardrails don't exist in their minds yet?

JACOB: Yeah, how do you do it, right? It's probably rare that there's going to be an environment where someone's going to say, "We're going to do this framework for feedback," and everyone is on board from Day 1.

I think one of the things that you're getting at is the frustration that happens when people start giving feedback and you have the perception that the feedback isn't useful, that they're only giving it because it's the only thing they could think of, or --

CHARLES: Exactly. They got to say something, they need to contribute their two cents to the conversation and some people are trying to be helpful and just aren't. Some people are just trying to appear smart.

JACOB: Yeah and it's like, "Oh, boy. They're just really off."

CHARLES: But that can derail the main narrative that you're trying to establish and get you off in the weeds, kind of skirmishing with these people, and after that happens, you're like, "Wait, no. I don't want to end up over there. I was trying to tell a story."

JACOB: When I gave this talk once, one idea that someone threw out was: when you make a PR, mark up your own code first with everything that you want to draw people's attention to. I think you were getting at this: one frustrating thing is when the people giving feedback seem to have less invested than you do. They sort of just fly by and dump on you, and they probably couldn't care less, and that can be frustrating.

When you're engaging with people who maybe don't know how to get deeply involved in it yet, or maybe don't have the time to, you can gently nudge them: "I want to talk about this part of it." It's almost like you're giving out the answers to the test: "If you want to be part of the smart conversation, comment over here." And from the responder's perspective, speaking personally, I would appreciate that. It gives me an opportunity to feel like I'm actually being useful. We can all probably point to times where the code review we have to do feels like homework that isn't the best use of our time. The creator can make us feel like our opinions matter and will be put to good use by telling us, "If you comment here, it will probably be put to good use."

I think the way to get started is the creator can just say, "I would really like feedback over here," and I can't speak for everybody, but I suspect that more people than you'd think would be more than happy to be gently guided in what feedback they should give.

CHARLES: Right. I'm thinking how you would do this, for example, in the context of a demo that you're giving to stakeholders, because there's maybe the inkling of the idea to do that, but it's often presented as an apology: "This screen doesn't work," or, "The error handling isn't quite right. Sorry, we're going to fix that. Look at the stuff that's over here." Maybe the way to frame it is: "This is a really hard problem based on the legacy architecture that we have for displaying errors, and I'm not quite sure how to do it in a robust way, so I could really use some feedback there. It's an area of exploration, something that we really need to focus on." You put that out there as you're demoing the main functionality.

JACOB: Yes. You sort of say like, "Here's something to watch out for and please, let us know what you think," and then after you could say like, "As a team, we thought up these three possible solutions but we didn't want to move on them because we wanted to get your feedback first, so please impart on us what is the right way to do it." You know, flattery goes a long way.

TARAS: I have this imagery coming up in my mind as I hear you guys talk about this. I keep seeing this difference between a line and a triangle. A line is a tug and pull between "Did I do this right? No, I didn't do this right." I think that makes for a bad code review because it's very personal. It's not really where you want to go. The other way is like a triangle, where the end goal that you're trying to get to is someplace beyond both people, the recipient of the code review and the giver of the feedback. The end goal you want to get to is actually someplace else.

When we deliver the code and we say, "My code is done," then it invites this kind of linear feedback where it's like, "No, you didn't do it right," but in the other way, "This is where I got to so far. Tell me where you think we could take it next. If you don't think we need to change anything, then we're done. We can merge this."

JACOB: Yes. The very intelligent Jessica Kerr said recently on another podcast: there's no such thing as done, but you can always make it better. I think you're getting at that point. That's a great question, by the way, to ask of your responders: where should we go next? Where do you see this going next? What would make it better? What would make it more robust? What would make us happier six months down the line when we're looking back on this? I think of the alternative as the firing range. That's the analogy I gave, where it's like, "You put a pull request out and you assert that it is done, and if no one can find fault with it, then it's done," and I just don't think that makes sense. I don't think most people actually want to work that way. It's maybe not even healthy. But I think everyone can find, at least I hope, a process that invites them in and lets them share the way they see things. It can hopefully be enriching and make everyone feel better along the way.

TARAS: This sounds to me like the inspiration part of this conversation, because one thing I really like about working at Frontside is -- I'm an experienced developer, but I know that Frontside is dedicated to doing something much greater than I can do as an individual. When I ask, "What do you think about this?" I know the feedback is coming because there is always something just beyond reach that could be a little bit better than what we have right now, and it's not until I actually get the feedback from Charles, from Jeffrey -- "What do you guys think? Is this it?" and they're like, "What about this?" -- that the next step becomes a little bit, or maybe even a lot, better than what I have right now. I think that's the part that inspires me. I get to go a little bit further beyond my personal ability, toward that vision of something much better than we could do as individuals.

JACOB: Yes, as opposed to, "Oh, I screwed it up and now I have to fix it," the framework of find-all-the-errors-I-made. There may be errors, let's be honest; there could be things that would be really bad, that would be a security problem. But with a growth mindset, the way to think about it is, "This is a way to make it better."

CHARLES: One of the things that you can do to help with that is to try and see, inside every change, the potential that hasn't yet been realized but is enabled by it. What exciting paths does this change unlock in your mind? And then you can share that. Inspiration is infectious, so if you can find inspiration in the change, then you might be able to share that with the creators. If something is personally exciting to you when you see it, maybe spend a little time searching for what you find exciting about it.

JACOB: Yeah, nice. That's great.

CHARLES: Yeah, we might have touched on that. I just bring that up because so often, I'll see the changes that people on my team are submitting and sometimes, it can feel like a cloud burst of like, "We could do this or we could do this or we could do this," and it's great to get excited.

JACOB: Yeah. I can share a recent example. My coworker opened a pull request and it unlocked something for me, where I said, "You know what? This is making me think about this other thing that I've always hated doing with our legacy codebase. We should do that thing you're doing more, because boy, would it make life easier," and I wondered whether it would be worth the time refactoring X, Y, Z, because then I would never have to do A, B, C again and I would be so much happier.

TARAS: It makes me think that we need to revisit our pull request template. What would that look like? What would be the sections on a pull request template that would facilitate this kind of exchange?

CHARLES: I don't know. I know we're almost at time, but before we even get into that, here's a question I have. When we talk about pull request templates, how do you match the amount of process to the scope of the change? It's probably a little bit heavyweight, if I want to fix a typo, to say, "Now we're going to have the creators lay out their case, then they're going to ask questions of the reviewers, then the reviewers are going to ask questions, and once everybody's been able to write down the things they observe and the questions they have, completely divorced from opinion, now we can talk about opinions, which are welcomed." If you're capitalizing one letter, that's probably a little bit heavyweight, but on the flip side, it's probably absolutely warranted if it's a major feature that's going to affect a huge portion of your revenue stream. This is one of the problems I have with pull request templates: they're one-size-fits-all.

Sometimes, a pull request template feels like it's supporting you, and sometimes it feels like the epitome of busywork and I have to spend all this time deleting sections of the template. I wish there were a way you could choose between different pull request templates. Maybe there is.

JACOB: So do I.

CHARLES: That's a little bit of a quibble I have: the amount of ritual and ceremony that you have to go through is fixed with a pull request template, but it seems like you want to match the amount of process to the scope of the change.

JACOB: Yeah, you do, and the flip side is also true. You don't want to give someone too little homework when you really need the full treatment. Maybe there's a way to say, when a pull request is opened, the person who opened it, or the manager, or maybe someone else, has the option to tag the pull request as saying, "This is going to be a bigger conversation. We cannot merge it until we have a face-to-face conversation first with these various stakeholders." Maybe one of the questions on the first form that everyone gets for every PR is: what level of PR is this? Is this a quick change where you just need some quick feedback from a couple of people? Or is this the long form, where there needs to be a sit-down with these stakeholders?

TARAS: Actually, I was thinking the adjustment that could be made for pull requests is to say, "Here's what a complete pull request looks like. You can opt into whichever sections you want," so you decide, based on your pull request, which sections of the template are appropriate. But then someone can, on the flip side, say, "Look, I would really like to learn about the motivations behind this pull request. Can you add a motivation section?" and understand that from your pull request.

JACOB: Absolutely. "You filled out Section A; please also answer Sections B and C." Yeah, cool.

TARAS: Yeah. Because I think part of the challenge is that people look to us -- especially people who have a lot of experience -- developers look to us to know what the right thing to do is, sometimes or often, and I think it's helpful to be a little more gentle and say, "You can decide what the right amount of detail is to set the right conditions for this code review, but I will give you some direction on what things you can consider including."

JACOB: Absolutely.
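As it happens, GitHub does support keeping more than one pull request template in a repository: extra templates live in a .github/PULL_REQUEST_TEMPLATE/ directory and can be selected with the template query parameter when opening a PR. A rough sketch of what a "long form" template built around the ideas in this episode might look like (the file name and section headings here are hypothetical, not anything GitHub or Liz Lerman prescribes):

```markdown
<!-- .github/PULL_REQUEST_TEMPLATE/long_form.md (hypothetical file name) -->
## What does this change do?

## Where I most want feedback (creator's questions)

## Statements of meaning (reviewers: observations only, no opinions yet)

## Neutral questions (reviewers: ask about context before judging)

## Opinions (offered only with the creator's consent)
```

A one-line quick_fix.md template could sit alongside it for typo-sized changes, selected with ?template=quick_fix.md when opening the pull request.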

CHARLES: All right. We're a little bit over time, so we should probably go ahead and wrap it up. You are absolutely right, Jacob. This is a topic that keeps on giving. We didn't really move much beyond the pull request and feedback and stuff but we moved a lot around within that topic because it's a big one. Jacob, is there anything that we should mention? Any upcoming talks, meetups?

JACOB: No, I have a one-year-old and it's all about work and family in this part of life, so I have nothing to speak of at this point. You can come find me on Twitter as JStoebel.

CHARLES: Okay, awesome. Well, thank you so much, Jacob for coming and talking to us. This is a very rich topic. We only really scratched the surface. That's it for our episode today and we'll see you next time.

Thank you for listening. If you or someone you know has something to say about building user interfaces that simply must be heard, please get in touch with us. We can be found on Twitter at @TheFrontside or over just plain old email at Contact@Frontside.io. Thanks and see you next time.

Mar 14 2019

44mins


What’s in a UI platform?


Here it is, folks! The first episode of our newly rebranded "Frontside Platform Podcast". In this episode, we talk about why platform? What is going to come out of these conversations over time? Our goal is to get people thinking on the platform level which includes tooling, internalization, state management, routing, upgrade, and the data layer.

We also want to get people thinking categorically, rather than in the moment:

  • Planning strategically
  • Recognizing and knowing obstacles - matching
  • Don't want the framework, but still have the problems
  • Virtue is a weakness in different contexts
  • Laziness is a virtue but also a weakness
  • Not a question of can
  • Shared problem

Please join us in these conversations! If you or someone you know would be a perfect guest, please get in touch with us at contact@frontside.io.

This show was produced by Mandy Moore, aka @therubyrep of DevReps, LLC.

Jan 24 2019

24mins


117: The Frontside Podcast 2019 Reboot Preview


We're rebooting The Frontside podcast with a focus on JavaScript platforms. We'll zero in on what it takes to build, maintain and grow web applications using popular JavaScript frameworks and tools. Join us for deep architectural conversations, interviews with fascinating speakers and stories from the trenches of building large platforms.

Dec 27 2018

2mins


116: Styled Components and Functional CSS with Kris Van Houten


Special Guest:

Kris Van Houten: @krivaten | krivaten.com

In this episode, we are joined by Kris Van Houten to chat about Functional CSS and Styled Components: pros and cons, the problems that they are trying to solve, and how to choose between one or the other.

This show was produced by Mandy Moore, aka @therubyrep of DevReps, LLC.

Transcript:

DAVID: Hello, everyone. Welcome to Episode 116 of The Frontside Podcast. I'm David Keathley, a software developer here at Frontside and I'll be your host for today's episode. Joining me as a co-host is Jeffrey Cherewaty.

JEFFREY: Hey, there.

DAVID: And we've got an amazing guest with us today, Kris Van Houten. Kris is a good friend of mine and the author of a very interesting functional CSS library called Elassus, and today we're going to be talking about how that compares with another new pattern I've been seeing everywhere recently: styled-components. Hi Kris.

KRIS: Hey, how's it going?

DAVID: Doing great. Let's just go ahead and jump into things. Kris, you want to give us a little introduction to your functional CSS library?

KRIS: Yeah, for sure. I guess, first of all, about me: I've been primarily a frontend developer for about seven or eight years now. I don't know, at some point you start losing count. Over the years, I've worked largely within the Ember framework, but I've also worked within React and Vue. Like most developers, over the years you tend to acquire an affinity towards certain areas of software development, and for me, those areas lean towards accessibility, but also learning how to write CSS with, what I call, the future in mind.

To keep the story as short as possible: a couple of years ago, I was working on a project and started to realize that our CSS is one of those things that just continues to grow over time along with the rest of our application. I also noticed that we tend to have a lot of repetition within our CSS. How many times are we setting text-align to center? How many times are we setting the text color to the primary color, or to gray, or something of that nature? So maybe we could create a utility library that just does all this stuff for us, similar to Bootstrap, if you've ever worked with that: they have utility classes that allow you to do things like set the text alignment or the text color or font size and things of that nature.

I started thinking about it, but eventually I changed jobs and that idea went to the side. Then, I'd say about a year and a half or two years ago, I came across a blog post called '15kb of CSS is all you'll ever need.' Immediately, I was intrigued. It was a blog post on Medium talking about the benefits of using something like Tachyons or Basscss, which are two different functional CSS libraries. To explain what I mean by functional CSS: it's a whole library, a whole arsenal of utility classes that each do one or two things, which allow you to take basically any design and implement it by composing all these classes together throughout your HTML.

I was looking at Tachyons, looking at Basscss, and thinking there were a couple of things I'd like to customize, but at the time, I couldn't figure out how to customize those values very easily. So I did what every developer does and decided to make my own, and that's where Elassus came into play. It's a CSS library that is entirely made up of functions that generate your CSS based off of the values of variables in Sass. Everything is customizable, all the way down to how the class names look and the syntax that you use. If you hate what I'm doing, you can customize it very easily.

A lot of people might wonder why you would want to use functional CSS and what benefits you get out of it. One of the first things that attracted me to it is that your CSS files start small and they stay small. For the most part, depending on the configuration and the particular library that you're using, your entire CSS payload can be anywhere from 10 to 20 kilobytes, minified and gzipped, which is in stark contrast to some other projects I've worked on, where just one of the many CSS files you're downloading is 745 kilobytes. Dramatic improvement there, so that's why it was instantly appealing to me.

But one of the other nice things you get for free by default is a consistent design pattern, right out of the box. If you're working on a large team that has maybe multiple designers and many developers, one designer might implement something with a spacing system based on five pixels -- your padding, your margins, your widths might be five, 10, 15, 30, 45 pixels -- but then maybe another designer is implementing something based more on material design, which is base-4 pixels, so it's four pixels, eight pixels, 16 and so on.

Over time, little differences like this can really show up as you navigate between pages of your application, and with functional CSS, you're given a limited number of options to choose from. As a result, it actually makes your application feel like one cohesive, consistent piece of software between the pages and features that you're navigating through. Those are the two areas that really attracted me to functional CSS, and I guess that's a tl;dr -- if that was in fact a tl;dr -- of what functional CSS is and why I like it.

DAVID: One of the things we wanted to get into here was the problem that functional CSS and styled-components are both trying to solve.

KRIS: Like I said earlier, a lot of the applications that many of us might be working on tend to grow over time, by either adding features or rewriting and improving existing features. In JavaScript, over the years, we have made tremendous improvements in our tooling, with things like code splitting and tree shaking, to really minimize the payload that we're sending to our users. But it's only recently that we've really been talking about how we do that for CSS. Typically, as developers, we have deadlines, we're working in sprints, we have really strict time constraints on what we can work on and for how long, so we tend to focus on making sure the features look right, work right, and that our tests pass.

CSS cleanup is just one of those things that often doesn't get done as often as you would like. Sometimes, that's because we assume that a particular set of styles is probably getting used by another team working on a completely different part of the application. As a result of just never being able to work on this problem, we likely have many, many kilobytes of orphaned styles being shipped to our users at the end of the day, and our style sheets tend to grow over time. If there's one thing that's become a mantra within software development over the last few years, it's that every kilobyte matters.

One of the main reasons I wanted to come and talk to you guys today was about some things that I've been tinkering with on the side, just to figure out how we solve this problem. The approach of using functional CSS is one of those things, but using something like styled-components -- which is mostly a React library for writing your CSS in your JavaScript -- is another approach to tackling this problem.

DAVID: That's an interesting thing you said there. I remember, whenever I started off in software development, one of the big no-nos for styling was: don't put your styles in your JavaScript. It's interesting to see that that has kind of changed.

KRIS: Yeah, indeed. I remember listening to a podcast about two, maybe three years ago, and they had a panel on talking about this new thing called CSS-in-JS, and I was screaming in my car like, "This is the most horrible thing in the world you could possibly do. Why would someone want to do this?" Obviously, over the last couple of years, the technology has definitely matured and they've definitely hardened it, to where, when I was working on updating my personal blog and moving it over to Gatsby, I didn't want to have to pull in an entire functional CSS library because I thought that would be overkill for just a couple of small pages. So I was like, "Let me try this whole styled-components thing. That way, if I'm going to hate it, I can hate it from an experienced point of view."

I actually found that the developer experience of working with styled-components was really nice, and so I kind of became converted over to that way of thinking. I'm not a hater. It's actually a really nice way to write your CSS for your blog or your applications.
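
For readers who haven't seen it, here's a minimal styled-components sketch of what Kris is describing: the CSS lives in the JavaScript file, scoped to one component. The component and its styles here are illustrative, not from any project discussed in the episode.

```jsx
import styled from 'styled-components';

// The CSS is written in a tagged template literal; props can
// drive style values, which is one of the "perks" of CSS-in-JS.
const Button = styled.button`
  padding: 8px 16px;
  border-radius: 4px;
  background: ${props => (props.primary ? 'royalblue' : 'white')};
  color: ${props => (props.primary ? 'white' : 'royalblue')};

  &:hover {
    opacity: 0.9;
  }
`;

// Used like any other React component:
// <Button primary>Save</Button>
```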

JEFFREY: I count myself still as a skeptic about this whole idea, so I'm excited that we're going to talk about it now and you'll have the opportunity to sell me on it, which should be fun. In some of the work I've been doing lately, we've been doubling down on the idea of CSS modules, which at least fixes some of the encapsulation problems around CSS where everything ends up in a global scope. How do these ideas of functional CSS take that even farther? What advantages are there over just simply using, say, CSS modules, where we're fixing the global namespace?

KRIS: I'd say with functional CSS, again, the things that come to mind for me are that you keep the file size small and you get design patterns for free right out of the box. Your styles are pretty low specificity, meaning that if you ever need to really customize something because you just can't do it with functional CSS -- which I've found to be very few and far between in the instances I've come across -- it's very easy to overwrite if you need to go back to the, I guess, old-fashioned way of actually writing some CSS yourself.

But then you also have the benefit of not having to come up with clever class names to describe the different states or use cases of the component or feature that you're trying to implement. I'd say those are definitely some of the pros of why I like using functional CSS at the end of the day. I will say that there are some cons -- I'll be completely honest -- and one of the areas where functional CSS tends to have a weak spot is browser-specific issues.

In my experience, I tend to have a lot of issues with iOS Safari; it's like the bane of my existence sometimes. Sometimes worse than IE11. If you have to really target something exclusively for a specific browser, that's really not doable from what I've seen in functional CSS. That's where you'd have to go back in and handwrite some CSS for that kind of situation.

But also, one of the cons is that sometimes, depending on the element or component you're trying to style -- a good example is if you have a nav bar that can toggle between being a sidebar and a top nav bar; those are completely different sets of styles -- to achieve those two layouts, you have to have completely different collections of class names that you toggle between based off of the state of that component or feature. While that's doable, especially with a frontend framework like Vue, React or Ember, it's a little pain point that you sometimes have to work around. But I'd say those are the pros and, admittedly, some of the cons of using something like functional CSS to tackle your styling needs.

JEFFREY: My favorite thing that you mentioned there was how hard naming is. I think everyone can agree very strongly with that -- naming things is the hardest thing. What has been your experience in learning these? They're almost new domain languages for saying things like "we want padding of this amount." Have you found it pretty easy to get up to speed and fluent, or have you run into some blocks in learning these new languages that some of these CSS tools have introduced?

KRIS: That's a really good question. When I first started digging into Tachyons -- that was the first functional CSS library I started working with -- there was definitely, I'd say, a small learning curve. I was trying to implement it just to see how it worked, and it took me maybe a couple of hours, if that, to get a feel for how they write their classes and how the classes are put together to do certain things. But once you get through that little initial pain point, it's pretty easy.

One of the things I did when working on my own implementation was to make sure that no matter what you're doing, all the class names are consistent, that the pattern you use for your class naming is consistent. I guess, because I wrote it, it wasn't so hard for me, but I also understand that that's not the case for everybody. When I wrote Elassus, one of the things I did was create a compiled JSON version of all the CSS and use that to build a little React search tool. If you're looking for padding or margin, you can just enter that in a search bar and it'll tell you all the options and the classes associated with it. That's one thing I did to help with that barrier. It definitely can be a bit of a learning curve but, like I said, it's maybe an hour or two of your time to really get familiar with it.

JEFFREY: That is not bad.

KRIS: No, not at all. I was actually really surprised at how fast I got up and running with it.

DAVID: That actually ties into something that I was curious about myself. When I went through the coding bootcamp that I did a couple of years ago, after we finished going over raw CSS, how it works and the basics, they introduced us to Bootstrap as our first real CSS library to play with. That was fairly approachable for someone brand new to frontend work in general. What I was curious about is: would you see these functional CSS libraries as equally accessible, or maybe more advanced than something like Bootstrap or Foundation?

KRIS: I never thought about that before. I think I see it as the next step. I've worked on several products over the years where Bootstrap was what they started with. As we went on to build out the feature set of the product, we started to find ourselves battling the various styles that we get from Bootstrap for free. One of the things about working on a website that's actually using Bootstrap is that sometimes you can just kind of tell that it's a Bootstrap-based website. I don't know if I'm making any sense to you guys --

DAVID: Oh, yeah. Definitely.

KRIS: That was another reason why I wanted to have something that basically forced me to decide how my navigation bar looked. It forced me to think more about how my sidebar looked, or what my spacing scale -- that's the term I like to use -- should be: what you're using for your margins, your paddings, your widths, your heights, things like that. It made me think about what those values should be, and that's how you break out of the box of having everything look like it was built with the same framework.

I would say it's more of a next step in terms of your experience as a developer. Yeah, learn CSS -- I think it's still incredibly valuable to know CSS. For me, like you, David, I got started and learned about this thing called Bootstrap, and you can quote me: back in the day, I said, "This is like the jQuery for CSS. It's amazing." But I quickly ran into some of those issues that I mentioned earlier, which forced me to think, "Maybe I don't want to use Bootstrap. Maybe I want to handle my own thing." So I'd say functional CSS, as far as your education goes, is more or less that next step of thinking about how you want to solve certain problems in your design.

DAVID: Okay. We've gone over functional CSS some, and styled-components. How would you choose between the two?

KRIS: This is actually really hard because I actually like both approaches. I'm not one of those guys that's like, "That one sucks. I don't want to use it." I actually think both have their place. I know I kind of sound like the typical 'this versus that' blog post in technology right now, but I will say that there are a few questions you should ask yourself, and ask your team, if you're considering changing up your approach to CSS. One of those is: what is the preferred developer experience of your team? That's something we've been talking about a lot at my company.

What we mean by that is: how enjoyable is it to work in a particular codebase? Do your developers mind writing their CSS inside a JavaScript file? Or would they prefer something where they can actually read the HTML and see, just by looking at the elements and the classes being applied there, what the layout looks like and what's being applied to it? Questions like that can definitely help determine what direction you should go.

Again, like I said, one thing I love is that both approaches remove the developer's burden of having to come up with clever class names for all the different use cases or states of your components. I think you're going to get a win either way there. However, if you're using React, we already have a ton going on in our JSX, so if adding a series of class names to define your design is overwhelming for you, then maybe give styled-components a try, because it can definitely simplify your JSX output a little bit.

If the idea of having JS handle your CSS seems a little fishy to you -- I think one of the cons of using styled-components is that we're asking JavaScript to do yet another thing for us -- then give functional CSS a try. There's definitely more than one flavor out there, and I've tried out almost all of them and liked them. They all have their nice tidbits. I'll say, if you prefer to have easy theming -- if you're working on a project that has to have a lot of theming involved -- then I might lean more toward styled-components. If design consistency and design patterns are a priority, then maybe functional CSS is the option for you.

In both of those scenarios, easy theming and design patterns are possible with both; it's just that each is harder in one than in the other. With styled-components, there are implementations of it in Vue and Ember, but they don't seem to be very widely used yet, so I'd say use them at your own risk, or contribute to open source to make them better. For right now, if you don't mind being locked into using React, then styled-components is pretty cool. It also allows you to transfer your styled-components from one project to another without having to worry about [inaudible] all the CSS, because that's a big deal. I would ask myself those kinds of questions to determine which solution I would choose.

JEFFREY: I am interested in the performance implications of these tools. You touched earlier on how, by using tools like this, you can definitely shrink your CSS payload by a lot. So what is the actual output of using these tools? What does it look like? Do you still end up with an external CSS file, or is it a scenario where you're getting some CSS injected into your markup?

KRIS: Yes with one and no with the other. With functional CSS, you typically have a CSS file that is compiled -- maybe you're using Sass or PostCSS to do the compilation for you -- so you get an output CSS file that you fetch on the frontend of your application. Like I said, [inaudible] these files are between 10 to 20 kilobytes, which is remarkably small. With styled-components, it's a totally different approach, and a totally different implementation as well. Unless you have a normalize or perhaps a reset CSS that you're fetching on the frontend, all the styles are actually -- I guess I should start earlier. What happens is you define your styles in your JavaScript, and at runtime, the component reads the styles that you set and then creates a unique class for that instance in the head of your document. It's basically a single-use class for that component. All in all, it gets compiled at runtime. There's no additional CSS file being downloaded; it's actually being created dynamically in your actual HTML document.

One of the things I was first concerned about was: what are the performance implications of having JavaScript do this for us? Over the last couple of years, as the technology has been maturing, performance has been an issue, but I think it was with v4 -- version 4 of styled-components, which was just released, I want to say, in September or October -- that they made tremendous performance improvements. It's really fast now, so it's not something you need to be concerned about.

I'd say again, it's one of those things: can you get over the mental hurdle of having JavaScript do one more thing for you? And can you get used to the developer ergonomics of writing CSS inside a JavaScript file? It takes a little bit of getting used to because it just seems so strange, but you also get a lot of perks by doing it: you have easy access to theming, and you get to use some of the power of JavaScript to help build out your CSS, which can be nice as well, even if it feels weird.

JEFFREY: Yeah, I agree the ergonomics do feel weird, but you get some benefits out of doing that. With styled-components generating styles on the fly, I'm happy they've figured out the performance, the runtime implications of that, but it seems to me that you no longer have cached CSS and you're having to generate it all the time, on the fly. Maybe for some systems that's okay, but that's definitely something I would miss -- the idea that your CSS can actually be a little heavier because it will always be cached.

KRIS: That is totally a valid point. One thing I do like about the styled-components approach is that when you delete a component, all the styles go with it, so you no longer have orphaned styles hanging out there getting shipped down to your user -- and orphaned CSS is definitely one of the key contributors to growing bloat in your files over time. I agree that the caching is definitely a concern, but I just start to weigh the pros and cons and again, you have to decide what works best for your product and your team.

JEFFREY: That is very nice.

KRIS: Yeah.

DAVID: That ties into a question that I was wondering about. As your projects get larger and more complex, your style sheets tend to grow along with them, and one of the issues you might have as time goes along is visual regressions -- styling bugs. How does using functional CSS or styled-components deal with that or prevent it?

KRIS: With functional CSS, like I said, you have those patterns built in out of the box. As long as you don't change the value that a particular class renders -- say someone goes in and, without thinking about it, changes the second value of your spacing scale to some other number of pixels just because they felt they needed to -- you're not going to have visual regressions creep in over time, because the whole premise of functional CSS is that those values don't change. They're basically meant to be seen as immutable values, immutable classes. You shouldn't have style regressions popping up over time as people continue to work on your application.

With styled-components, you still have that risk unless you are setting all those same values in the theme properties that you can use within styled-components. Unless you're setting all your different spacing values there, you potentially run into the same issue. But actually, now that I think about it, maybe not, because with styled-components, every component's styles are completely unique. So again, unless you're changing those core values of your theme, you should not have visual regressions happening in a totally separate part of your application when you're working on, say, the home page. I would say both handle that pretty well, now that I think of it.

DAVID: That's cool.

KRIS: This reminds me: I'm working on a particular feature at my work and we don't use functional CSS or styled-components yet. I'm trying to get them to move over, so all this is stuff I do in my free time. I was editing a component and had to make some pretty drastic changes to it, and I thought, "How is this going to break on some other page that I'm not aware of yet?" That required a lot of going back and checking all the various uses of this particular component. We use it in 10 or 11 different places on different pages, so I had to go back to each and every one of those use cases to make sure I didn't break something. Whereas if I was using one of these two approaches, it wouldn't be something I would have to be concerned about. This is what I'm trying to push through at my own job as well.

DAVID: Yeah, it really eats up your time.

KRIS: Yeah.

JEFFREY: I want to talk for a few minutes about media queries, because I'm curious how that situation is handled. It's definitely something I've run into with CSS variables and PostCSS tooling. I'm interested in how the media query story is handled with these setups.

KRIS: With functional CSS, basically, say I have a class that sets a margin on the top of 16 pixels, for example. Let's say that class name is MT-3, meaning 'margin top' at the third position in your spacing scale. If you wanted to apply that style only at, say, your medium breakpoint, different libraries have different ways to approach this, but basically you'd have 'MT-3--medium' or '--desktop': additional characters that you append to the class name to target that specific media query.

Also, depending on the library that you're using, you can target pseudo states: whether or not an element is focused, or on hover, maybe you want the background color to change. There are variations of those classes that allow you to target those specific states, which is really nice in functional CSS. Again, just by looking at the classes that you're adding to your DOM, you don't have to scroll through your styles sidebar to figure out what styles are associated with this class name and why it's being overwritten by 10 other things. You can just look at the HTML and see what's happening, and what should happen if you hover over an element.
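
Here's a small sketch of those breakpoint and pseudo-state suffixes in use. The exact naming is library-dependent; these class names just follow the 'MT-3--medium' pattern Kris describes and are illustrative only.

```jsx
// "MT-1" might apply margin-top at the first spacing-scale position
// by default, "--medium" and "--desktop" scope overrides to those
// breakpoints, and a "hover-" prefix targets the :hover pseudo state.
function SaveBar() {
  return (
    <div className="MT-1 MT-3--medium MT-4--desktop">
      <button className="bg-white hover-bg-blue">Save</button>
    </div>
  );
}
```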

With styled-components, you can still use media queries and all that stuff right out of the box. To keep your media queries consistent -- so that not everybody is having to hand-roll them every single time they need one -- you can actually store your media queries in a separate JS file and import them as you need them. Again, it kind of feels like a weird developer ergonomic for your CSS, but it definitely comes in handy and definitely keeps your styling consistent as you break things out into small pieces or small components. That's how you handle things like media queries.
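
This is a common styled-components pattern for the shared media-query file Kris mentions; the breakpoint names and pixel values below are made up for illustration.

```jsx
// media.js -- shared breakpoint helpers so nobody hand-rolls
// media queries in each component.
import { css } from 'styled-components';

const breakpoints = { medium: 768, desktop: 1024 };

// Builds media.medium`...` and media.desktop`...` template helpers.
export const media = Object.keys(breakpoints).reduce((acc, label) => {
  acc[label] = (...args) => css`
    @media (min-width: ${breakpoints[label]}px) {
      ${css(...args)}
    }
  `;
  return acc;
}, {});

// Elsewhere:
// const Nav = styled.nav`
//   display: block;
//   ${media.medium`display: flex;`}
// `;
```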

JEFFREY: It's interesting how these approaches are pushing the web towards feeling a lot more like native development, where you build your styles and your visual look in conjunction with your component code, and those things are never really separate. I think there are some interesting influences coming in there.

KRIS: One thing I think about a lot, especially when I'm thinking about functional CSS, is this image I saw posted on Twitter somewhere in the interwebs. It was a picture of the Super Mario Brothers start screen, and the entire Super Mario Brothers video game was something like 23 kilobytes, while the image of the start screen itself was like 345 kilobytes. My numbers are probably off, but it was something drastic like that. It makes me think about -- there are a couple of documentaries about this -- the hoops that developers had to jump through in order to make old video games for the Nintendo Entertainment System back in the late-80s. I'm probably dating myself right now, but the things they had to go through and think about in order to make those games work the way they did are insane. I feel like over the last 10 or 15 years, because we've gotten such fast computers and such fast mobile devices and so on, we've kind of gotten crazy.

I look at video games now and they're 100 gigabytes sometimes, and I feel like they didn't need to be 100 gigabytes, and I think --

DAVID: But it's so much cooler because it is.

KRIS: I'm not saying it's not cool, but I think if they really took the time to pay attention to some of these details, we could probably optimize this a little better. Maybe I'm just being biased, but I think we're starting to do the same thing and talk about the same kinds of problems in web development. Just because we can ship 745 kilobytes of CSS without the world falling over doesn't mean we should. Let's take into consideration that not everybody is using a cinema display on a super-fast MacBook to view your application. What about the person who's using an outdated Android device that hasn't been updated in three or four years? How is your product going to work on their device?

I'm really glad that people have actually started to take that kind of thing into consideration. It reminds me a little bit of another area I think a lot about, which is accessibility: how do people with various impairments or disabilities use your software? But also, beyond that, how do people who maybe don't have the money for a fancy mobile device like yours use your product? It dips into the same vein of accessibility when we're talking about how we optimize our products to make them usable by everyone, regardless of their income level or their ability to have a nice desktop or the latest iPhone.

JEFFREY: Awesome.

DAVID: Yeah, definitely. Think about who the majority of users on the internet are these days: really, it's emerging markets with underpowered devices, and the internet is becoming more and more accessible for people to get on and browse, so you've really got to keep those people in mind.

KRIS: Definitely, especially if you're creating software that you want people across the world to be using. A lot of people don't have access to crazy high-speed internet like we do here in the States, so we should probably be taking more time to consider what it looks like for those people to use our product. At my company, what we've been talking about is that we should be testing and basically starting mobile-first with all of our development, to see how things work on a mobile device.

I'm a remote developer, so we're trying to figure out how to get me mobile devices that I can actually test with on our VPN, but it's something that we're trying to move towards at my company as well.

DAVID: Okay, that about wraps up Episode 116, our final episode of 2018. Man, the year has gone by fast. We are the Frontside. We're the frontend specialists who make your large-scale application projects run smoothly by helping your team assemble the right tools and implement the right automated processes. We can ensure your projects move forward on time and without regressions while preserving quality and long-term maintainability.

If that's something you're interested in, please get in touch with us at @TheFrontside on Twitter or contact@frontside.io via email. Do send us any questions you might have or any topics you'd like to hear about in the future. We look forward to hearing from you. Thanks again to the wonderful Mandy Moore for producing today's podcast. You can find her on Twitter at @therubyrep and that's it.

Dec 20 2018

32mins

Play

115: Testing Issues and BigTest Solutions

Podcast cover
Read more

In this internal episode, Charles and Wil talk about testing issues and BigTest solutions. Pieces of the testing story are discussed, such as starting and launching the application, component setup and teardown, interacting with the application and components, convergent assertions, and the network. Then they talk about testing issues: the fact that cross-browser and device-simulated browsers are not good enough, maintainability and when and when not to DRY, slowness and why (acceptance) testing is slow, portability and why tests are coupled to the framework, and reliability. Finally, they talk about BigTest solutions:

  • @bigtest/cli to start / launch (Karma recommended for now)
  • @bigtest/react, @bigtest/vue, etc. for setup & teardown
  • @bigtest/interactor for interactions
  • @bigtest/convergence for assertions
  • @bigtest/network in the future (Mirage recommended for now)


This show was produced by Mandy Moore, aka @therubyrep of DevReps, LLC.

Transcript:

CHARLES: Hello, everybody and welcome to The Frontside Podcast, Episode 115. My name is Charles Lowell, this episode's host and a developer here at the Frontside. With me today to talk some shop is Mr Wil Wilsman.

WIL: Hello.

CHARLES: Hello, Wil.

WIL: How's it going?

CHARLES: It's going good. I'm actually pretty excited to jump into this topic, because we're going to be talking about some of the big things that are happening at Frontside and some of the things that we've been developing for almost the last year.

WIL: Yeah. It's been about a year now.

CHARLES: It's been about a year, and we've talked about it on various podcasts, but we're going to talk about it again because there's just been so much progress that we've made and, I think, a lot of clarity in what we're going for here when we talk about BigTest, testing big, and how we want to roll out the BigTest framework. We just have a lot more experience using it on a number of different projects, so we get to talk about that today.

Before we get started, I just wanted to talk a little bit about what BigTest is, both in terms of the framework and also the philosophy. Wil, you're the one who works the most on BigTest. When you think about it philosophically, what does BigTest mean to you?

WIL: It's the size of your test -- not a physical size, like size in storage, but how much your test actually does. The test itself can be very small, as our tests are, but it tests the whole application, from the user interacting with it down to the network requests. That's the definition of the philosophy of a BigTest to me: it's to test your application from the biggest point of view.

CHARLES: Actually, achieving that can be surprisingly difficult, especially in a frontend JavaScript application, and there are a lot of solutions out there for testing; we've talked about them. One of the questions that arises is: when we talk about BigTest, what exactly are we talking about? Are we talking about a product that you can download and install? Are we talking about the philosophy that you just outlined? Or are we talking about the individual pieces of software that make that philosophy real? I think the answer is we're kind of talking about all three, but we want to take this episode to talk about where we're going with the product.

What we've identified is the subcomponent pieces of that product. In other words, in order to get started testing big, what are the things that you need to think about? What are the things that you need to do? And what are the component pieces? Because one of the things that is very important to us is that you be able to arrive wherever you are in your project -- whatever framework you are using, whatever your current testing solution -- and be able to begin using BigTest. That means you might be using some of it or you might be using a lot of it, but we want to meet you exactly where you are, so that you can get onboarded and start testing big.

WIL: Yeah. Definitely an important distinction that causes confusion is what BigTest is. People assume this whole test suite is BigTest, but we use other pieces ourselves: we use Mocha, which is not part of BigTest; we use Chai, which is not part of BigTest; we use Mirage, which is kind of part of BigTest but definitely didn't originate there, and Karma and things like that. BigTest isn't your testing suite. It's not one thing you grab to start writing tests. It's small pieces that you can use in conjunction with other small pieces, to make it really easy and flexible to test your application.

CHARLES: Exactly. Because it turns out that there's a lot going on in the application. Maybe we should talk about what some of those pieces are that you might want to start using BigTest with -- or that you might need to test big, I guess I should say. What's a good place to start? Let's start by talking about some of the issues you run into when you're testing big. Then we can talk about what pieces of the testing story fit in to solve those issues.

One of them is you need to test that your application works -- like, actually works. That means you need to be able to test on a multiplicity of browsers, for example. We're limiting ourselves to the domain of web applications, and there's actually a shockingly large number of browsers. It's not just Chrome. It's not just Safari. There's Mobile Chrome and Mobile Safari, which are subtly different. There's Edge, and I'm sure Mobile Edge is slightly different too. So you want to be able to test cross-browser, right?

WIL: Yeah, absolutely, and things like Nightmare and JSDOM and other simulated browsers -- we don't necessarily think those are the best tools for writing BigTests, because we want to ensure that those browser quirks are caught and tested as well.

CHARLES: This is not theoretical. Sometimes the parser is slightly different and you have something that throws a syntax error in Safari or in Internet Explorer, and your whole app is completely busted. If you had just taken the time to even try loading the app in that browser, you would have caught that. I've been bitten by that many times.

WIL: Yeah, and one that I just saw come up yesterday, which comes up frequently, is not closing your CSS selector. Chrome doesn't really care -- most browsers don't care too much -- but that will fail in Edge, and depending on what you're missing, how it fails varies too. Mostly, Firefox and Chrome don't care about that kind of thing.

CHARLES: Right. It seems like the majority of testing solutions are focused around Headless Chrome or some variation of Electron, so that entire class of really dumb errors never gets caught. Like I said, to actually catch one takes less than a millisecond of CPU time: just load the app into the browser and see that the thing doesn't work. They can be catastrophic errors, but the problem is: how do you actually do that?

We want to test cross-browser; this is something that we want to do. For me, I just can't imagine shipping an application without having some form of cross-browser testing, some capability of being able to say, "I want to test it," like, "We want to work on these eight browsers and so we're going to test it on these eight browsers." But how do you actually go about doing that?

WIL: Right now, we are working on the BigTest CLI, which will help us launch browsers, but that's not complete yet; it has some bugs. In the meantime, we've been using Karma, which is great. Basically, you just have this service that's able to find the browser binaries on the system and launch them pointing at localhost with your app loaded up, and your normal development server takes care of loading up the tests and running them. Karma and the BigTest CLI are just there to capture output and launch those separate browsers.
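
For context, a Karma setup for this looks roughly like the sketch below. It's a minimal illustration, assuming karma-mocha and the relevant browser-launcher plugins are installed; file paths are hypothetical.

```jsx
// karma.conf.js -- launch real browsers against your bundled tests.
module.exports = function (config) {
  config.set({
    frameworks: ['mocha'],
    files: ['dist/test-bundle.js'],            // your bundler's test output
    browsers: ['Chrome', 'Firefox', 'Safari'], // one entry per real browser
    singleRun: true,                           // capture results, then exit
    reporters: ['progress']
  });
};
```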

CHARLES: Yeah. I remember when I was first working with Karma -- and I think Testem is another tool in this space. There's Testem, Karma, and BigTest, where we're developing a launcher, because launching is something that you're going to need, but it's such a weird problem. I feel like with the browser launchers, there are three levels of inversion of control, because you're starting a server that then starts another process, which then calls back to your server, which then loads the app resources, which then loads the tests and then runs the tests. There's a lot of sleight of hand that has to happen and --

WIL: Including injecting the adapter that you use -- the Mocha adapter, the Jasmine adapter -- that ends up reporting back to the CLI. That's something that Karma and Testem and BigTest will handle for you.

CHARLES: Right, so you're fanning the test suite out to a suite of browsers and then collecting the results, but basically, you need some sort of agent living inside the browser that's going to act on behalf of the test suite to collect the results. I remember when I first came into contact with Karma and Testem, I was like, "This is so unnecessarily complex." But then, having used it for a while -- and I think there is some complexity that can be removed -- if you want to do cross-browser testing, a certain amount of that ping-ponging is just necessary. It's something quite complex that you need to have in your toolbox if you want to truly test big.

WIL: Yeah, and all these solutions have mechanisms for detecting when the browser has launched, restarting the browser based on health checks, et cetera -- things you wouldn't think about when just loading up a browser yourself, but that you need to think about when you're doing automated testing.

CHARLES: What is it that sets apart, for example, the launcher solutions? We call this class of solutions launchers: Testem, Karma, the BigTest CLI. What is it that sets the BigTest CLI apart from, say, Karma and Testem?

WIL: We're trying to be as minimal-config as possible, really easy to get started with. Karma has a lot of plugins that you need to make sure are installed and loaded, with options set for each of them. Testem has some stuff bundled, but it still requires this big config block at the beginning that you need to pass in. We're trying to avoid that with the BigTest CLI, and one of the ways we're able to avoid it is by just letting your Bundler handle bundling the tests. In Karma, you need karma-webpack or something; Testem has some stuff that it needs. Really, we just want a testing mode: when you're in the testing environment, just change your index to point at your tests instead of your application, and your Bundler will do all the work. We just serve that file and collect the results.
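
The "change your index" idea can be as simple as switching the bundler's entry point per environment. Here's a minimal sketch with webpack; the file paths and the use of NODE_ENV are assumptions for illustration.

```jsx
// webpack.config.js -- point the build at the tests in test mode.
const isTesting = process.env.NODE_ENV === 'test';

module.exports = {
  entry: isTesting ? './tests/index.js' : './src/index.js',
  // ...the rest of your existing dev config stays the same,
  // so features like watch mode and HMR keep working for tests too.
};
```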

CHARLES: Right, so it doesn't matter if you're using Parcel or you're using webpack or you're using Ember CLI.

WIL: Yeah, Rollup even.

CHARLES: Or even just low-level Broccoli or Gulp or whatever. There's a preponderance of bundling solutions, and that was always something that was a huge pain in the butt with Karma -- just getting to the point where my tests are loaded. With Testem, most of my experience comes through how it's used in Ember CLI, and the histrionics that are undertaken just to bundle all your test assets and your application assets and your vendor assets and bootstrap that thing. It's a lot of work.

WIL: Another thing that the BigTest CLI doesn't need to include, but Karma and Testem do, is the concept of a watcher, because all these Bundlers give you that already: you have HMR -- hot module reloading -- Rollup and things like that come with plenty of plugins for it, and Parcel has it set up out of the box. If you're using your existing Bundler to bundle your tests, you get that watch feature for free, so it's another complexity that the BigTest CLI eliminates.

CHARLES: What it means is we've hidden most of that complexity. Just let the Bundler handle it, right? The Bundler is the part of your project that bundles.

WIL: Yeah.

CHARLES: You should have your launcher actually doing that for you but we still do need to have some way to do that set up and tear down. When we have that testing endpoint, we have some way to say, "We're starting a test, not the application. We're ending the test, tear it down," so how do you abstract that away?

WIL: That's something that we can't really avoid. There is some sort of dependency on the framework itself, your application framework. You need to mount a React app, you need to mount an Ember app, et cetera, and there are different ways to mount those things. This is one of the things that can't be decoupled as much as everything else can, but BigTest has BigTest React and BigTest Vue, and we want to eventually get BigTest Ember. Really, the main export of all these packages is just a simple mount helper that will mount and clean up your application for you in your testing hooks, whether you're using beforeEach from Mocha or before from something else like Jasmine. No matter what you're doing, you just have a hook that mounts your application and then cleans it up on the next mount.
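
A sketch of what that hook might look like, assuming the simple mount helper Wil describes as the main export of @bigtest/react; the exact name and signature are assumptions here.

```jsx
import React from 'react';
import { mount } from '@bigtest/react'; // assumed export, per the description above
import App from '../src/app';           // hypothetical path to your app

describe('my application', () => {
  // Cleanup happens on the *next* mount rather than after each test,
  // so after a failing (or focused) test the app is still rendered
  // in the browser, ready to inspect and debug.
  beforeEach(() => mount(() => <App />));

  it('renders the dashboard', () => {
    // ...interactors and convergent assertions go here
  });
});
```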

CHARLES: It's worth pointing out here that this is a core concern of testing big: being able to mount your application and tear it down with regularity, and having hooks into that process. Whether you're using BigTest or not, can you still use BigTest React and BigTest Vue, even if you aren't using anything else?

WIL: Yeah, absolutely. Like I said, they just export simple mount helpers. I don't even think they have any other internal BigTest dependencies. They just have pure dependencies on their frameworks.

CHARLES: Right, and so you could use it even if you wanted to roll everything else by hand, or you wanted to get started somehow and you needed to do setup and teardown. Again, this is something that's key to being able to test big, so you should be able to use it independently, whether you use the CLI or not, whether you're using any of the other tools or not. All of the tools can be used independently.

WIL: Another feature of BigTest React and BigTest Vue is that the teardown happens before the setup, rather than after your test runs as a separate teardown step. This means that whether your test passes or fails, you can look at the app, play with it, inspect it, and debug it much more easily than if teardown had already run -- otherwise you'd have to disable the teardown or throw a pause in there to keep things around.

CHARLES: Yeah, I love that. When something goes wrong, you can just let the test suite run, and the last test that runs just leaves everything set up. It does the teardown right before the setup.

WIL: Exactly, yeah. At the very end of the whole test run, there's an app there waiting for you to play with.

CHARLES: If you focus in on a single test -- we most commonly use Mocha, so you add a '.only' to run that single focused test -- then you have the state of the application at that test case, set up and ready to go. You can play with it, inspect it, or use it as a starting-off point and interact with the app normally, as you would.

WIL: I want to say Cypress does this too. They do their teardown before their setup as well. That's how you're able to play with Cypress tests.

CHARLES: Yeah, I like that trick. Now, we've talked about launching and about setup and teardown, but we haven't actually talked much about what happens in the test cases themselves. We talked about how to start and launch your test suite, how to do that across a bunch of different browsers, how inside of that you have a separate concern of application setup and teardown, and how you want to lean on the way your app is actually bundled -- because that fits with the philosophy of testing big. You don't want to use an external Bundler for your test suite. You want to use your real Bundler, so the assets look the way they actually will.

But when it comes down to actually writing the tests, you need to be able to interact with the application at the highest level that you possibly can. When I say highest level, I mean we want to verify that users, when they take certain actions, will see certain outcomes, and we want those outcomes -- we already talked about this -- to be reflected in a real DOM, in a real browser. But at the same time, we want the interactions to be as high-fidelity as possible, so you want to be sending real events to the browser: real mouse events, real key events, real interactions.

WIL: Yeah, interacting with the application. That's another core philosophy that we talked about earlier that defines a BigTest: it's the user interacting with your application. We're not calling methods and expecting callbacks or certain arguments to be passed; we're clicking on a button and expecting a message to pop up that says, "Form submitted successfully." These are user-facing things we're asserting on and acting on.

CHARLES: Yeah, and it can be really tricky because these things don't happen synchronously. They're happening inside of your browser's event loop. I click that button and then it goes off, and there's some loading state, and then I might get an error message that pops up, this thing that animates out and then goes away. The state of the browser is in constant flux; it's constantly changing, so it can be very difficult to put your finger on a moment and say, "I want to be in this state," if you are limiting yourself to only reading from the DOM.

Some frameworks -- Ember, for example -- give you kind of a white box where you can actually inspect the state of the Ember run loop and use that to do some synchronization, but it can be very, very hard to coordinate these interactions.

WIL: Yeah. To get to the solution, there's the BigTest interactor, which is basically modern page components or page objects. If you've ever heard of page objects, it's just a way to encapsulate interacting with big pieces of your pages. It's not a new concept; it's been around for a while. But BigTest interactors have a new twist on it: they're immutable, composable interactions that are also convergent -- which we'll get into later -- meaning that if your button's not there, it won't click the button until it is there. They're really powerful, and they make it really easy and fun to write these tests.

CHARLES: Yeah, they're super powerful. I remember we talked about convergences last time we talked about BigTest, but interactors, I think, are definitely a new development. I think we should spend a little bit of time talking about not just the power but also the ergonomics of interactors, because they are like page components or page objects, except they're scoped to the component. Not only do they have all this wonderful stuff where they'll make sure the component exists before starting to interact with it, they're also composable. If I have a button, then there are certain operations that are valid for that button: I can click it, I can hover over it, I can do all these things that are unique to the button. Now, those might actually map to real events.

WIL: Similarly, there are assertions about that button as well, like: is it primary or secondary? If this button is repeated throughout your application, you might want to make sure that your form has a primary and a secondary button.

CHARLES: Exactly. It really encapsulates all the knowledge of how you can interact with that button, both in terms of taking action and reading state from it. It almost feels like an accessibility API. It would be easy to write a screen reader if you had these interactors for every single component on the page.

WIL: That's kind of what it is. You're defining an API around how your user would interact with your application and what your user would expect from it. That's the point of page objects and interactors: you're defining this user API, essentially.

CHARLES: Yeah, and so really the step that interactors take is that they take the classic page object and make it composable. I can have -- you kind of touched on this before -- a modal dialog interactor, which is composed out of two button interactors, one for the primary action and one for the secondary action, and maybe it's aware of its own title text, so you can assert on the title text. But I didn't actually have to write the individual button interactors for that modal dialog interactor. Then I might have a second modal dialog interactor, or a form that's on a modal dialog, just composed of the modal dialog interactor and the individual form components which appear on that particular modal dialog.
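
A hypothetical sketch of that composition, loosely modeled on @bigtest/interactor's style -- treat the exact imports, property creators, and selectors here as assumptions, not the package's exact API.

```jsx
import interactor, { scoped, clickable, text } from '@bigtest/interactor';

// A button interactor encapsulates both actions and readable state.
@interactor
class ButtonInteractor {
  press = clickable();
  label = text();
}

// The modal composes two button interactors without rewriting them.
@interactor
class ModalInteractor {
  title = text('[data-test-title]');
  primary = scoped('[data-test-primary]', ButtonInteractor);
  secondary = scoped('[data-test-secondary]', ButtonInteractor);
}

// In a test, the modal already knows how to drive its buttons:
// await new ModalInteractor('#save-dialog').primary.press();
```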

WIL: It's essentially how we've been building applications lately with components, but this is for the page objects in your tests, if you want to mirror that. You don't have to have one-to-one mappings of an interactor to a component, but if you do, it's really powerful.

CHARLES: Yeah. I found that when we have one-to-one interactors, that's when it just feels the best.

WIL: Yeah, and on top of this, if you have a component library and your component library exports the interactor that it uses for its component tests -- like we said, these BigTest technologies can be sprinkled in anywhere; we don't have to use interactors only in big acceptance tests, we can use them for smaller component tests too -- then if you ship these component interactors with the component library, the application consuming that component library can now test those components for free, without having to write its own interactors. It can just compose the interactors exported by the library.

CHARLES: Man, I almost want you to repeat that word for word, just so it can sink in. It's so awesome. Because when you actually go to write your tests, you're not starting from ground zero, like, "How do I do this?" Instead it's, "I'm writing some tests for this thing and I'm using these components, so I've already got the prepackaged interactions for those components." If your tests are a 10-story building, it's like you're starting on floor 7 and you only have to walk up to floor 10, instead of slogging up all 10 stories.

WIL: One really helpful interactor that we work with in the open source stuff we've been building is a date-picker interactor, because date-pickers can be really complex. Having that common interactor for a date-picker that appears on multiple forms, where we can use that one interactor, means we don't have to tell every single test how to interact with that date-picker. We just say "pick date" and pass the date.

CHARLES: Yeah, it's so awesome. That is actually a great example. It doesn't feel scary to write a test for a page that has a date-picker on it, or two. If you're doing a date range or something like that, you're like, "Oh my God, I don't want to write the selectors to test this." You just import your date-picker interactor, you set the date, it worries about all the low-level events, and there you go. It feels like you're operating at a much higher level.

WIL: Yeah. With the interactor API, essentially, you're telling the test what the user would be doing and what the user would be seeing.

CHARLES: Yeah. It's worth pointing out again: we've identified starting and launching, we've identified setup and teardown, but interaction is a core concern of testing big, no matter what tool you're using. One of the things that we found is that interactors are something you can sprinkle on literally any test suite if you're testing an interface, and it makes it better. We've used them inside big acceptance tests. We've used them inside Jest, doing little component tests. There are people in the BigTest community who have used them to write component tests against JSDOM, and while philosophically you want to make those tests as big as you possibly can, you can use that piece anywhere in your test suite.

Even if you are using a simulated DOM, whether you're running in Node or in a browser, these interactors will still work, and you're going to get high-fidelity test cases that are resilient to asynchrony and are composable. And if you do move to a full-fledged test suite, you can reuse these interactors. They are a really awesome power-up that you can bring into your test suite.

WIL: And they are not tied to the framework at all. We use them in React for our stuff, but we've also written some in Ember, and Robert's written some in Vue and ported some tests. One of the beautiful things we've seen from this is that one interactor goes everywhere. You just write the interactor once and you can use it in Ember, in React, in Vue, in those test suites. If the rest of your test suite is framework-agnostic, you can jump frameworks and your test suite still works and can test your application with high fidelity.

CHARLES: Yeah, it's fantastic. I remember when we first tried using interactors inside an Ember test suite -- Ember comes with a big kitchen sink of testing setup -- but interactors just slotted right in and there was absolutely no issue.

WIL: Yeah, and there's actually even a speed boost, because most of the Ember test helpers hook into the Ember run loop and interactors don't. There's actually a good speed boost just from using interactors.

CHARLES: Yeah. This is a good point and a good segue, because typically we think of acceptance tests as being really slow, and one of the reasons people [inaudible] acceptance tests or testing big is they think it's going to take a long time. We found that we've actually been able to maintain a happy medium of testing big while also having those tests be really, really fast. When you say you saw a speed boost from using interactors with Ember, where does that speed boost actually come from?

WIL: I mentioned that the Ember test helpers hook into the Ember run loop and interactors don't, and the reason for this is that interactors are convergent: they wait for things in the DOM to exist before interacting with them. Instead of waiting for the framework to settle, an interactor just waits for the thing to appear and then interacts with it immediately. If you're asserting something about a button toward the top of the page, you don't really care that another button at the bottom of the page hasn't rendered yet -- unless, of course, you have an assertion about that. Because they're convergent, you don't need to hook into the run loop and wait for the entire page to load just to interact with one piece of it.

CHARLES: Right. You just say, "I'm expecting something to happen, and the moment I detect it" -- no matter what else is going on, the page could be taking 30 seconds to load -- "if that button appears and I can interact with it, I can take my action then, or I can make my assertion then." It's about removing artificial gates.

WIL: Yeah. Another common thing that helps with is animations. With most tests that are hooked into the run loop, you have to wait for some of these animations to finish before you can even interact with the element, and that means if a modal has a half-second animation where it flies in and you have 30 tests around this modal, those tests are extremely slow now, because you have to wait for that modal to come in, whereas --

CHARLES: -- Straight up flaky.

WIL: Yeah, straight up flaky. Whereas in the actual DOM, that modal is inserted pretty much immediately and can be interacted with pretty much immediately. With interactors, you don't need to wait for the animation to finish; they can just immediately interact with that modal. But of course, if you do need to wait for the animation to finish, there are options for that as well.

CHARLES: Yeah. If there's some fade-in that needs to happen, you can assert on any state, and as long as it's achieved at some point, the interactor will recognize it -- and recognize it at the soonest possible time it could. I remember getting bitten on one project where the modal animations in particular were brutal. Not only were the tests flaky, they were just slow because of all these manual timeouts. It wasn't even a paper cut. It was like a knife cut, like someone sitting there slashing you with a pocket knife. It was a constant source of pain in your side.

WIL: Yeah, and that's how you end up with things like waits and sleeps in your test suite. When you need to wait for an animation or something to happen, you just see a sleep for four seconds with a comment saying we have to wait for the component to load in. That's kind of a code smell.

CHARLES: Yeah, that's just asking for trouble, both in terms of slowness and in terms of it getting flaky again. That has been one of the most freeing things about working with interactors and the convergent assertions on which they're based: you just don't ever have to worry about asynchrony. Really, truly, most of the time you write your tests like it's all synchronous, and that makes sense, because from the user's perspective, their consciousness is synchronous and they don't care about the internal run loop. They're making observations in serial, and at some point they're going to observe something, so the interactor sits at that point and really observes the application the way that your user would.

WIL: Yeah. We've mentioned a few times now the convergent assertions, which interactors are based on. A little caveat there: if you're using interactors and you're making non-convergent assertions, they might fail or be flaky. That's because interactors wait for the thing to be there to interact with, so as soon as the button's there, it clicks it, but it doesn't wait around after that event has fired for your application to react to it. For that, we need something like our convergent assertions that can converge on that state and wait for that state to be true before it considers itself passing, or it times out.

CHARLES: Maybe we should dig a little bit into convergent assertions. I think the last time we had a public conversation on the podcast about this, this is kind of where we were, like we hadn't built the interactors, we hadn't built these other component pieces of the testing story. We were really focused on the convergent assertion. We've talked a little bit about this but I think it's worth rehashing a little bit because it's a unique way of approaching the system but it's also kind of horrifying when you see how it works under the covers.

I think people are shocked when we tell them that it's basically polling underneath the covers. The timeout is configurable, but it's basically polling every 10 milliseconds to observe a state. I remember the first time being confronted with this idea, I was horrified and the programmer hackles on the back of my neck raised up and I was like, "Wait a minute. This is going to be slow. It's going to be computationally intensive."

WIL: Yeah. That was my exact thought too: this is going to be slow. If acceptance tests are slow and we're running an assertion every 10 milliseconds, it's going to be really slow. And that's actually not the case at all. It's the opposite. They're extremely fast.

CHARLES: It is shockingly fast. You've got to try it to believe how fast it is, how fast you can run acceptance tests.

WIL: Yeah, we're talking like 100 tests in just tens of seconds.

CHARLES: Right. You're basically gated by how fast your framework can render. Your tests are not part of the slowness. Your test --

WIL: And also, memory leaks can be costly too. We experienced that recently where we had memory leaks that were slowing down our tests, but we fixed those up and got our speed back.

CHARLES: Yeah, because basically, running the assertion or running the convergence is very fast. It's just a very light ping. I kind of think of it as being as light as the brush of a photon bouncing off of a surface so that you can observe it. It's extremely light, and most of the time it's just waiting, so the test and the convergence really just get out of the way. Just because they can run a hundred or a thousand times in a second doesn't mean they gum things up. What it does mean is that your tests run as fast as your application will run. You get back to that point -- was it React where the key insight was that JavaScript is not the bottleneck? Well, your tests are not the bottleneck either.

WIL: Yeah.

CHARLES: I guess this is what it is. I don't know if there's anything else that you want to say about convergences.

WIL: No, I think we pretty much summed it up there, and that's what interactors are based on. That's how they're able to wait for things in the DOM. It basically polls the DOM until the thing exists, and then it moves on and actually does the interaction.
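[Editor's note: a toy illustration of the polling idea described above, not the actual @bigtest/convergence implementation: re-run the assertion every 10 milliseconds until it stops throwing, or reject once the timeout elapses.]

    // Toy convergence: polls an assertion until it passes or time runs out.
    function converge(assertion, timeout = 2000, interval = 10) {
      return new Promise((resolve, reject) => {
        const start = Date.now();
        (function poll() {
          try {
            resolve(assertion()); // assertion passed: converge immediately
          } catch (error) {
            if (Date.now() - start >= timeout) {
              reject(error); // out of time: surface the last failure
            } else {
              setTimeout(poll, interval); // not yet: try again shortly
            }
          }
        })();
      });
    }

    // Usage inside an async test: resolves the moment the modal appears.
    it('shows the modal', async () => {
      await converge(() => {
        if (!document.querySelector('.modal')) {
          throw new Error('modal not rendered yet');
        }
      });
    });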

CHARLES: Once again, this is a very low-level thing on which BigTest is based, but it's also something that you can use independently. You can write your own convergent assertions. You can write your own convergences that honestly have nothing to do with testing at all. It's a free-standing library that you can use in your test suite or elsewhere, should you choose.

WIL: There doesn't need to be a DOM for BigTest convergence, either. I use BigTest convergence in BigTest CLI to converge on the browser being launched. Instead of waiting for the browser to report that, I can just poll and see how that process is doing, and the convergence waits for that process to start before moving on.

CHARLES: Right. I guess the best way I've thought about it is that it's a way to synchronize on observations and not on callbacks. It's a synchronization mechanism, and 99% of the synchronization mechanisms that we're used to involve some sort of callback, a promise, an event listener, things like that -- or even a generator, where control is handed back explicitly to a piece of code when something happens. Whereas this is a fundamentally different synchronization primitive, where you're writing synchronous code that's based on observations: when I observe this, do this. It's extremely robust.

WIL: Yeah, very.

CHARLES: It is a core piece, a fundamental thing on which interactors are based and on which the CLI is based. I don't know if it's core to writing tests but --

WIL: It definitely helps.

CHARLES: It does help. We couldn't have BigTest interactors without it.

WIL: No, definitely not.

CHARLES: Because that's what makes it fast, that's what makes it not flaky at all, and having those things makes it easy to maintain, because you can work at the interactor level -- the level of user interaction -- and you don't have to worry about synchronization, so the flow of your tests is very natural.

WIL: Yeah. You don't have to explicitly wait for a request to be done before making an assertion about your app. That just comes with convergences: waiting for that state in the application to be true.

CHARLES: Let's talk about one more piece of the testing issue, because when you're testing big, when you're testing in the browser, there's always the question of what you're going to do about your API. You've got to have your API running. It's just always an issue, and this is kind of interesting because it sits at the crossroads of testing big and also getting the most utility out of your tests, because in an ideal world, if you're testing really big, you're going to be using a real API. You're not going to poke holes in reality.

WIL: Yeah. One of the things that we avoid in BigTest is poking holes. We're not shallow mounting the components and testing the methods and the results. We're fully mounting these things and fully interacting with them through the full DOM API.

CHARLES: Yeah, exactly, using real browsers. It just occurred to me the irony of us talking about reality being things that are still running inside of a computer processor. I think we've inherited this term from that talk that Justin Searls gave at AssertJS in 2017. It's a really, really excellent talk -- I think he also gave it at RubyConf. It's the 'Don't mock me' talk.

WIL: Yeah, it's one of my favorite talks.

CHARLES: Yeah, it's a great talk. In it, he talks about how the value of a test is a balance of how many holes you poke in reality, and sometimes you encounter a test that's all holes in reality: you're mocking this, you're mocking that, you're mocking the DOM, you're mocking the browser, you're mocking your network layer, you're mocking this external API, and the more holes you poke, the less useful it's going to be. The network is one of those places where it can be very difficult not to poke holes in that reality, because it's a huge part of your application -- how your frontend interacts with the server -- but at the same time, servers are gigantic pieces of software themselves, each with their own dependencies, each with their own setup and teardown --

WIL: Have their own concerns.

CHARLES: Yeah, exactly. They might be in a different language. They've got their own runtimes; they might need external C libraries and crazy stuff like that. They're their own beast. To get a true big end-to-end test, you're going to have to stand up your server, but the problem that presents is you also want your tests to be isolatable. If I'm a developer, I can go to a repo, I can install my dependencies and I can run the tests without needing anything external beyond the repository and the language in which I'm working.

This is one where we've tried to walk the line of not wanting to poke holes in reality but also having the tests be containable to the actual application. In order to do that, you need something that presents a high-fidelity version of the network. You can kind of have your cake and eat it too: you want something that acts like a server, really acts like a server, but is actually not a server.

WIL: And still poke as few holes as possible in the application and how it's all set up. We don't want to be intercepting methods and responding with fake data. That's not a good way to mock the network.

CHARLES: Right. We want to be making actual fetches, actual XMLHttpRequests -- ideally, if you've got service workers, actual service worker requests.

WIL: Basically, as far as the application is concerned, it's talking to a real server.

CHARLES: Yeah and that's kind of the litmus test for is it a hole in reality or is it just a really great illusion?

WIL: Yeah and that's a good name for Mirage, right? It's a really great illusion.

CHARLES: Yeah. It is a simulation of reality, so we use Mirage, which is something from the Ember testing world but something that we have extracted and made available as BigTest Mirage.

WIL: Yeah. The main difference is just that we've taken away the Ember dependencies and the run loop stuff. It's plain JavaScript Mirage. It works exactly the same as it does in Ember, minus the auto-imports and the file... Oh, man. I can't think of that word. Aside from automatically importing your files for your server config -- you have to do that manually because Ember is what provides that -- it's full Mirage. You define models and serializers and factories and all the good stuff.
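[Editor's note: a sketch of the kind of server definition being described. The config DSL here (routes, factories, createList) follows standard Mirage conventions; the import path and constructor shape are assumptions, since BigTest Mirage's exact setup differed slightly from Ember's.]

    import { Server, Model, Factory } from '@bigtest/mirage'; // assumed import shape

    const server = new Server({
      models: {
        user: Model
      },
      factories: {
        user: Factory.extend({
          firstName: i => `User ${i}`
        })
      },
      baseConfig() {
        // Real fetch/XHR requests from the app are answered from in-memory data.
        this.get('/users');
        this.get('/users/:id');
      }
    });

    // Prepopulate a scenario: 50 users, no real backend required.
    server.createList('user', 50);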

CHARLES: Right, and then you can use those factories and those models to really give a high-fidelity server. If you're building something in whatever framework, you can use BigTest Mirage to simulate that network layer. Again, we've used it in a number of different scenarios, and having that in place means that you're going to be able to have those high-fidelity tests where your application is actually making XMLHttpRequests, but it's all isolatable, so it can be run in-repo. This isn't really related to testing, but it also has a fantastic capability where you can use the factories to prepopulate your server with data, so that you can use the application without the actual server being implemented.

WIL: Yeah. That's extremely powerful. That's what we were talking about earlier with scenarios, which are setting up specific fixtures, essentially -- except you're generating these fixtures. Factories are essentially higher-level fixtures, a network of fixtures.

CHARLES: Yeah, higher order of fixtures.

WIL: Yeah, so the scenarios are just setting up these fixtures for a situation in your application, like the backend is down, or the list only responds with two items as opposed to 5,000 items, something like that. You want to be able to not only test these things but develop against them, and Mirage makes that really easy, because you can just start your app with Mirage enabled, point it at that scenario and you're there. You have that exact scenario to develop in.

CHARLES: If you've never used Mirage, it is really hard to understand just how incredibly powerful it can be. We've used it now on at least four projects where we developed the entire first version of the product without any backend whatsoever. It's an incredible product development tool, even apart from testing, that then informs the shape of what the API is going to be. I know we've talked about this on the podcast before, but it's really an incredible technology and it's available to you no matter what framework you're using. I think it's one of the best kept secrets in JavaScript development.

WIL: Yeah, it's definitely great. That said, it does have some shortcomings. It's great, but it can be a little slow sometimes, so we're eventually working on a BigTest network layer, another piece of the BigTest pie that you'll be able to sprinkle into your application. But in the meantime, praise Mirage.

CHARLES: Yeah. We're going to be offering an alternative, or maybe collaborating on another version of Mirage. Hopefully we'll be able to make this thing faster, so that it can use service workers and be used in a bunch of different scenarios.

Just to recap, we've talked about a lot of different components, but over the past couple of years, these are the things that we've identified as key components of your acceptance testing and really your whole testing stack: how are you going to start and launch these things? How are you going to set them up and tear them down? How are you going to interact with the application as a user, both in terms of making assertions and taking action on the user's behalf, and still have it be maintainable, resistant to flakiness and performant?

BigTest is the answer to that for those particular areas of the testing story, and in some areas we're using existing components -- we use Karma, we use Mirage to date. Those we did not develop, but where we saw key pieces of the puzzle missing is where we started writing the BigTest solutions, things like the interactor. Eventually, we're going to make BigTest into a product that you'll be able to use out of the box, just like you might install Cypress, where it's a very quick setup and we make all of the decisions about the components for you.

But in the meantime, we're really trying to take our time, identify those pieces of the puzzle and build the software component that fits each piece the absolute best, so that when they're polished, we can use them in a more comprehensive product. Things like convergence, things like interactor, things like BigTest React, BigTest Vue and very soon, BigTest Ember -- these are things that you can use today to make your tests that much bigger and that much better, especially interactor. It's been an incredible journey this past year as we've developed these individual pieces, and there's just going to be more goodness to come.

WIL: Absolutely. Right now, I'm working on a validation-type API for interactor that I'm hoping to land soon. That'll open up the possibility of hiding away those convergent assertions a bit more in your tests and handling them automatically. It'll be pretty good.

CHARLES: It's really exciting. Writing tests has gotten easier and more fun for us over the last year, and I think we're already starting from a pretty good place. If you have any questions about BigTest, how would folks get in touch with us?

WIL: We have a BigTest Gitter channel. You can find a link to that on the BigTest website: BigTestJS.io. Just ask us questions on Gitter and we'll try to answer them.

CHARLES: And as always, you can ask us directly. You can send email to Contact@Frontside.io or reach out to us on Twitter at @TheFrontside or you can actually reach out to the BigTestJS Twitter account directly and just call us on Twitter at @BigTestJS. Thank you very much, Wil.

WIL: Thank you, Charles.

Nov 29 2018

50mins


114: The Business Case for Experimentation with Elm with Dillon Kearns


Guest:

Dillon Kearns: @dillontkearns | GitHub | Incremental Elm

In this episode, Dillon Kearns joins the show to talk about techniques for experimentation with Elm, making those experiments safe, the concept of mob programming, why you would want to experiment with Elm in the first place, and how you too can begin to experiment with Elm.

Resources:

This show was produced by Mandy Moore, aka @therubyrep of DevReps, LLC.

Transcript:

CHARLES: Hello, everybody and welcome to The Frontside Podcast, Episode 114. My name is Charles Lowell. I'm a developer here at the Frontside. With me today as co-host on the show is David. Hello, David.

DAVID: Hey, guys.

CHARLES: David is also a developer here at Frontside, and we are going to be talking about something that we've been talking about a lot recently: Elm. I think we first started talking about this several years ago and then it kind of simmered down a little bit, but recently it's been top of mind. With us to talk about Elm today is Dillon Kearns. Welcome, Dillon.

DILLON: Thank you so much for having me.

CHARLES: I understand that you are a full-time Elm consultant. You have a background as a Lean and Agile coach but have recently transitioned to doing Elm consulting full time. Now, what exactly does it mean in 2018 to be an Elm consultant?

DILLON: Actually, a lot of my motivation for getting into Elm consulting in the first place is that I realized Elm, to me, is just an extension of the things I was passionate about with Agile and software craftsmanship. I'm trying to help teams have a better experience with their code: make it more maintainable, make it easier to change, make it easier to drive things based on customer feedback, and I really believe that Elm helps people do that. I use a lot of the background and experiences that I've had with Agile and Lean coaching, and a lot of those same skills, in order to help organizations adopt Elm.

One thing I've seen a lot of teams struggle with is trying out a lot of different frameworks. I've encountered teams that have spent months, very painfully, trying out different frontend frameworks and having trouble coming to consensus about them. One of the things that I think really helps address that is having an experimental and iterative approach, where you can really use the scientific method to focus on learning, rather than getting it right the first time. I think there's a real need to help teams through that process of introducing a new frontend framework like Elm, so that's why I've gone into full-time Elm consulting.

CHARLES: That's an interesting process. It sounds like you really need to be constantly sending out spikes, doing research on whether it's Elm or some other technology to help you bridge the chasm to the next generation. How do you actually do that as an organization? I guess this is a question independent of Elm, but maybe we can talk about how you see it play out in the context of Elm.

DILLON: Right, and actually, for any listeners interested in that question, I would very highly recommend Grant Maki's ElmConf talk from this year. He spoke about exactly that topic at ElmConf, and it's relevant whether your team is considering Elm or looking at other frameworks. I think that the key is you need to get good at experimenting in a way that's low risk and in a way that lets you constantly learn and see how these different technologies fit in your codebase and fit for your team.

There's a quote that I really like from Woody Zuill. Have you guys heard of mob programming before?

CHARLES: I heard of mob programming from a paper by Richard Garfield a long, long time ago, almost 20... I don't know if it's the same concept.

DILLON: Yes. It's gained a lot of momentum these days. Mob programming is essentially pair programming but with more people involved. I've really enjoyed that process, actually. I think it's a great way to experiment with different technologies, because you get all of the minds together and it's a very good way to transfer knowledge and explore things together. But Woody Zuill talks about mob programming and he likes to ask the question, "Why did we begin doing mob programming for the team at Hunter Industries that originally started mob programming?"

People would give answers like, "Because it cuts code review out of the process, because you have lots of eyeballs on it in real time," or, "Because it reduces bugs," or, "Because it gives you better quality code. It gets all the best ideas into the product in real time," and all of those things are valid points that are really good benefits of mob programming. But he says those things may be true, but actually, they're not why we tried mob programming. The reason we tried mob programming was because the team wanted to try it. That's a really important point.

The team needs to be experimenting with things that they're passionate about and they need to be exploring things on their own terms. That said, another lesson from the story of his team at Hunter Industries discovering mob programming is that the team didn't discover mob programming in a vacuum. Really, the team discovered mob programming because the team became really excellent at experimenting and evaluating those experiments. And they like to talk about this phrase that Kent Beck coined, 'turn up the good.' When something is working well, we often focus on the negative things and try to eliminate them, but what happens if we take the things that are working well and turn that dial up to 11?

CHARLES: Yeah, I love that. I remember the original layout of extreme programming talking about turning all the things that were working up to 11: testing, refactoring, incremental releases and things like that.

DILLON: Exactly.

CHARLES: I actually had one question that's maybe a little bit of a diversion. This is actually the first time I've heard of this kind of mob programming. It's definitely not the same sort of mob programming I learned about in Richard Garfield's paper. I think that was more referring to massively distributed open source of the form that's really commonplace now on GitHub -- maybe an obsolete definition of mob programming. But how many people would be in a mob? Like two, three, four, five, six, seven, ten?

DILLON: That's a great question. Really, the answer is, of course, it depends. That's a consultant's favorite answer, but it really does. My rule of thumb is that three people is usually a very nice size for a mob. I find that mobs tend toward around three or four people, but that being said, it's important to note that mob programming is all about this question of what the true cost of programming really is.

I think that often we look at programming as the act of writing code initially, and that's a very limited way of looking at coding, because of course 90% of our effort is spent maintaining code, making decisions around code, reproducing bugs, fixing bugs, communicating with customers about bugs. Bugs are extremely expensive, and the farther out they get -- until eventually a customer discovers them -- the more expensive they become. They're an extremely expensive part of software, so if we can minimize bugs, that's very valuable.

When you look at programming on this bigger scale and look at the bigger picture of programming, then you realize that you may be able to get one person to write the code faster, but then that person needs to get it code reviewed. That person needs to go and ask somebody a question down the line when they don't have context, because they weren't involved in the decision making.

For example, maybe there's a UX person who doesn't have context on certain choices that were made, so there's a lot of churn. You can eliminate that churn by getting all the relevant people involved right away, and that's the idea. In my experience with mob programming, it works really well to keep a core of around three people. Sometimes somebody goes off to have a conversation with somebody, takes a break or answers somebody's question -- maybe somebody from another team has a question, that type of thing -- and the team can keep coding as a pair or whatever. But ultimately, the idea is that you get faster because you're building up this shared context and you're not spending as much time down the line answering questions, doing code reviews and things like that.

CHARLES: Right. I see.

DAVID: That kind of matches my experience mob programming on previous teams. The way we had it set up is there was a regular mob programming chat session that the whole team was invited to, but it was optional. You could just show up if you wanted to, and that sort of made it so there was a set of people who regularly attended -- three to five people in a session -- and they were the core group, essentially.

DILLON: Right. That's another great point. Invitation is a powerful technique. Rather than mandating that people try an experiment or work in a certain way, ultimately it's much more powerful to let the team experiment on their own and follow their passion, and they'll discover great things. It's about experimenting, rather than choosing specific experiments.

While we're on this topic of the real cost of coding, I think this is a good point to talk about this quality in Elm, because I think this is one of the things that really motivates me to use Elm myself and introduce it to others: Elm really gets something about programming. There's a sort of superficial ease to certain techniques that Elm goes beyond, saying, "Actually, let's optimize for a different set of things that we think make code more maintainable and more delightful to work with in the long run."

CHARLES: I wanted to transition back -- we went on a little diversion about mob programming -- but do you use mob programming as an explicit technique for introducing Elm when a team is considering adopting it?

DILLON: That's a fantastic question. I absolutely do. Of course, I honor the ways of working in a particular organization or team. I think that's important to do but I do strongly encourage using mobbing as a technique for knowledge sharing and when I'm on-site with a client, I find it extremely powerful as a technique for knowledge sharing and also, let's say you do an experiment, somebody is off in a corner and they're trying out Vue.js or they're trying out Elm or they're trying a particular coding technique. Then they come back to their team and they say, "Hey everybody, I tried this great thing," and now they have to spend this time convincing everybody and saying like, "Wait a minute. You didn't try this, you didn't try it that way. It wouldn't actually work in our context because of this." I think that it's very powerful to have everybody kind of involved in that process so that you can evaluate it together as a team.

CHARLES: Because the thing is, when you experience a win or you experience a failure, it's a very visceral feeling, and that's the thing that sells you or turns you away. You can argue until you're blue in the face, but words have a very limited capacity to convince, especially compared with physical and emotional feeling. It sounds like if you can get everybody to have that shared experience, whether for the good or for the bad, you're going to arrive at a decision orders of magnitude more quickly, and the conviction behind that decision will be spread around the team.

DILLON: Exactly. I think that hits the nail on the head. We have this sort of skepticism of arguments from theoretical conversations. It's 'show me the money': actually try solving a real problem with this, and that's exactly as it should be. I think that's one of the big antidotes to this problem I've seen in a lot of environments where there's analysis paralysis, especially with the state of the JavaScript ecosystem these days. I think that one of the keys to improving that situation is to get good at trying things, rather than theorizing about things.

We have a tendency to want to theorize and when we do that, then we say, "Can it solve this problem? Can it solve this problem? Can it solve that problem?" You can talk about that until the cows come home but it doesn't get you anywhere and it doesn't really convince anybody of anything. The key is to find very small experiments and what I really recommend and what I'm dead focused on when I'm initially working with a client is getting something into production.

Now, that doesn't mean you need a road map for turning your entire application into Elm. In fact, that's the whole point: you're not trying to do that. The point is to get as realistic an experience as possible of what problems might occur if we do this. Will the team enjoy working with this language? Will it work well with our build pipeline? Will there be any unforeseen issues? You don't know until you actually try it, so you've got to try it, and you've got to try it in tiny, tiny steps and low-risk experiments.

CHARLES: Right but you've got to try it for real. You don't want to try it with a TodoMVC.

DILLON: Exactly. It needs to be meaningful, to really have a good understanding of what it's going to be like.

CHARLES: I would say that I tend to agree, but I've definitely encountered the counterargument, and I think it makes sense too -- or perhaps here's where the pushback lies: if I'm constantly experimenting, then what I'm doing is internally fragmenting my ecosystem, and there is power in similarity. Any time you introduce something different, any time you introduce one fragment, you're introducing complexity -- a mental complexity -- like maybe I have to maintain my Elm app and I also have to have my Legacy... Or not Legacy, I've got my other JavaScript toolkit that does it one way. Maybe I've got a couple more because I've run these other experiments. I'm not saying that there is one way, but there is power in uniformity and there is power in diversity. Where do you find the balance?

DILLON: Those are all excellent points. To me, I think really the key is it's about the scientific method, you could say. The thing with the scientific method is that we often forget the last part. We get really good at hypothesizing about things. Sometimes people leave it at that, which we kind of just discussed. Sometimes, people go past the hypothesizing stage and they actually run the experiment and that's great. But then, the majority of people, if they get to that point, will forget to do the last step which is to evaluate the results.

I think the key here is you need to be experimenting and this is what it means for it to be a low risk experiment. It means that you're not setting yourself off in a direction where you can't turn back. You want to set it up in such a way that you can turn away from it with minimal cost. One of the things that is really helpful for that is if you build a tiny, independent, little widget in your application, try building that in Elm. Some people will do that with a little sort of login badge in the corner of their application.

On one of the teams where I introduced Elm at a Fortune 10 company, we started out with just a tiny little table on one page, and if we had wanted to back that out, it would have been trivially easy, but we decided that we wanted to go further and invest more.

CHARLES: That makes a lot of sense. Effectively, you need to have a Plan B. Don't sink all of the available time that you have into the experiment. Make sure that you have a Plan B, and if you need to do this widget or this table in Angular or React or Ember or whatever, you've thought about how that would work.

DILLON: Exactly and the thing with experiments is the purpose of an experiment is not to build something. It's to learn. I really like this kind of ethos of lean startup, which I think is really getting much more into the mainstream in the software industry, which is a wonderful thing. The idea of lean startup, the kind of core concept is this idea of validated learning. Basically, in an environment where there's uncertainty, which is pretty much most of the things you're doing in software, the main goal is you're not shipping a product like you would be if you're trying to manufacture cars as quickly as possible. The main thing that you're producing is what they call 'validated learning' and so, you want to minimize the amount of time it takes to validate or invalidate your assumptions about something and then, you want to make it as cheap as possible to move on from that.

CHARLES: I like that. So if you're going to organize your development process around this principle -- or maybe not organize it, but integrate it into your development process -- how do you know that you're conducting a healthy number of experiments versus too many? Is there a metric that you can look at, like we need to have this many experiments running at all times, or this is just too many?

DILLON: That's a really interesting question. I think I would tend to think about that, again, more as looking at the way the experiments are run, rather than whether there are too many experiments. Having too many experiments is just not a problem that I've seen. The pain we tend to see in environments where experiments are hurting teams is in the way the experiments are being done: it's hard to backtrack from them and, as you were saying before, you put yourself down a path you can't walk back, and you create this sort of rift in the way the code is being written, which makes it more difficult to work in that codebase.

The thing with experiments is they can have really big payoff. Now, you want to make sure that you're not just going in and picking up every shiny object that you see. One thing that can keep you honest with that is if you're kind of coming up with a hypothesis before you start. If you're saying, "This is the value to our business and to our team if we attempt this thing and this is what will prove that it seems to have that value and this is what will tell us, 'Actually, it doesn't have that value and we should drop it and cut our losses.'"

CHARLES: That's a great heuristic. As you were saying that, I was imagining how it might have saved my bacon in the past, because I've definitely made the mistake of playing with too many shiny objects and picking things because I didn't fully evaluate what I thought the value was. I wasn't explicit with myself about what value it was going to bring to this project or this business. I had a theory about it, but I wasn't thinking, what is my hypothesis and how am I going to validate or invalidate it? I was thinking, I've got a short-term pain that I'm experiencing and I'm grasping for this thing which I think will solve it, and I'm not properly evaluating how it's going to affect me long term.

DILLON: Right and that could be a great team practice to play around with is often, teams will kind of come up with action items out of retrospectives. One thing that I think can be really beneficial for teams is to kind of flip that notion of doing action items which again, it's really just doing the middle part of the experiment where you're conducting the test but you cut out the hypothesis part and the evaluation part. Try to bring that into your team's retrospective and try to have explicit hypotheses in the retrospectives and then, in the next retrospective, evaluate the results.

CHARLES: All right. I will definitely keep that in mind but this feels like a fresh take on kind of how you manage software development that I haven't encountered too much, being more scientific about it. It sounds like science-oriented development.

DILLON: Right.

DAVID: I like that.

DILLON: There are a lot of buzzwords these days in software development in general, and it's really becoming a problem, I think, in the Agile community, but really, what it boils down to is these basic elements, and basing decisions on feedback is one of those fundamental units. You can call it the scientific method, you can call it lean startup and validated learning, you can call it agile, you can call it whatever you want, but ultimately, you need to be basing things on feedback.

I think of it almost like our nerves. There's actually a disorder that some people have, which can be fatal, which is that their nerves don't tell them when they're feeling pain. I think this is a great analogy for software because that can happen to companies too. They don't feel the pain of certain decisions not landing well. Because they're not getting feedback from users, they're not getting feedback from metrics and recording, they're not getting feedback from doing that final evaluation step of their experiments, so when you fall on the ground, a small cut could be extremely harmful because you don't know the damage it's doing to you.

CHARLES: I think that is a good analogy. One of the things that I'm curious about is we've been discussing a lot of techniques for experimentation and how you can integrate that into your process and how you can make your experiments safe, so let's talk a little bit about -- first of all, two things -- why would I want to experiment with Elm in the first place? Because ultimately, that's why we're here and why we're having this conversation. What's compelling about it that would make me want to experiment? And then how can I begin to experiment with Elm?

DILLON: I actually just published a blog post yesterday. It's called 'How Elm Code Tends Towards Simplicity.' To your question of why would a team consider Elm, I kind of talk a little bit in this blog post about a case study at a Fortune 10 company where I introduced Elm to a few of the teams there. One of the teams there, we had actually seen an Angular project that they had worked on and often, in an enterprise environment, you have projects moving from one team to another.

I actually had my hands on this Angular project. It kind of moved over to another team and we were experiencing some major pain trying to make changes in this codebase. Even making the simplest change, we were finding that there were a lot of bugs that would be introduced because there's some global variable. There's some implicit state. Sometimes, it was even reaching in and tweaking the DOM and really, the topic of conversation at our team lunches was how afraid we were to touch this codebase.

Fast forward a few months, and this team was asking my advice on picking a new frontend framework, and I introduced them to Elm. They took a run with it, and it was pretty remarkable to see: this same team that had really struggled with AngularJS, that didn't really have a strong sense of what the best practices were and wasn't getting any guidance from the framework itself or the tooling around it, actually loved the experience of working with Elm. They were saying, "This is amazing. Maybe it takes a little time to figure out how to solve a particular problem in Elm, but once we do, we know that we've done it in a solid way."

One of the things that I think is most powerful about Elm is that it keeps you from shooting yourself in the foot. I think that's a really good headline summary of what I love about Elm. For example, tweaking the DOM. Now, it might seem like a pretty obvious thing that we just won't tweak the DOM, and that's fair enough. That might not be a problem for a lot of teams; people wouldn't even reach for that technique because they're disciplined about it. But at a certain point, you go beyond those basic things that make your code unreliable and unsafe, like tweaking the DOM, and you start getting into the realm of best practices.

There's so much discussion these days in the JavaScript community about best practices, which is great -- it's great to discuss that -- but my concern is that there's a new best practice each week, and the team has to agree on it, you have to find techniques for enforcing it, people have to make sure these best practices are being followed in code reviews. Then when you look at a given piece of code, you have to trust that those best practices were followed, so it requires a lot of work to make sure, in your reducers in Redux, that you're not mutating anything. With Elm, data is just immutable. That's just how it is.

There are a lot of these kind of things that are baked into the language and the expressivity of the type system allows you to bake in your own constraints. One of the things that I find really compelling about Elm is its design really prevents you from shooting yourself in the foot and it gives you tools for making sure that you take it even a step further and it helps you enforce these best practices at a compiler level.

CHARLES: Now, what's interesting here is that it's almost the opposite tension from experimentation at work, right? Here we have an example of uniformity being the more powerful track, whereas at the macroscopic level of your process, you want a lot of experimentation and diversity. But at the microscopic level, inside your application, it sounds like you want less experimentation, and you derive a lot of strength from that but --

DILLON: That's a great point.

CHARLES: -- Experiments that are possible, yeah.

DILLON: I think that there is a lot of pain these days in the JavaScript community. We hear people talking often about JavaScript fatigue and it's a real thing. It takes a lot of work to stay on top of the latest best practices and frameworks and that can be a lot of fun. I love learning about the latest new frameworks and tooling but ultimately as you're saying, we don't want that experimentation so much about the fundamentals. We want some dependable, solid fundamentals and then we want the experimentation to happen within there. I think that's exactly what we see in the Elm ecosystem.

We have a single kind of data store, a single way of managing state in Elm. It's called the Elm Architecture -- in fact, it's what Redux is based on -- and it works extremely well. You don't have to experiment with different data stores in Elm, because that's just what Elm code looks like. Now, if you want to experiment in Elm, there is a lot of innovation happening. One of my favorite things about Elm is that the compiler and its expressiveness have sparked a lot of creativity. A great example is the library called Elm UI.

Actually, a client that I'm working with right now is a really interesting case study. They're a very small startup that just branched off of a larger startup, and they're building some tooling for this ecosystem. They were engineers at a company called Procore that does cloud document management for construction companies. They wanted to get a product ready for a big conference for their potential clients, and the reason they brought me in to help was that they wanted to reach this ambitious target of demoing a brand new product at this conference, and they wanted to iterate very quickly. One of the things that really drew them to Elm in the first place was this library, Elm UI.

Richard Feldman gives a talk on Elm UI where he uses the analogy of treating CSS and HTML as bytecode for your views, and I think that's a really apt way to put it. If you break down this idea of CSS -- Cascading Style Sheets -- it removes the cascading part of CSS and it removes the sheets part of CSS. What you're left with is a way of expressing style, and it's a way of expressing style that parts ways with all of the baggage of the entire history of backwards-compatible decisions that CSS has ever made.

If you want to vertically align something, you just say align vertically -- center vertically. If you want to center something horizontally, you say center X. It creates a high-level language for expressing views. Elm UI may not be the right choice for every team, but I love it. I use it on all of the projects that I maintain personally, because it gives you that same sense of invincibility when refactoring that you get with Elm, which is remarkable -- that you could have that feeling while managing views.
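[Editor's note: a small sketch of the style of API being described, using the mdgriffith/elm-ui package; centering an element is a pair of layout attributes rather than a CSS puzzle.]

    module Main exposing (main)

    import Element exposing (centerX, centerY, el, layout, text)
    import Html exposing (Html)

    -- Center a piece of text both horizontally and vertically:
    -- no cascading, no sheets, just layout attributes.
    main : Html msg
    main =
        layout []
            (el [ centerX, centerY ] (text "Hello, Elm UI"))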

CHARLES: It's definitely something that feels like a dark art. It's an art, or a science for some people, but it's historically been a dark art and something fiddly to work with. In terms of making the experiment with Elm -- we talked a little bit about why you might want to experiment with it in the first place, what the business case is -- I guess the question that immediately comes to mind is: supposing we've decided to experiment with this, how do you mitigate that experiment?

We talked about lowering the cost: having a way to turn away from it, having a way to make it inexpensive. For example, one of the things that I think of when evaluating a new technology is how well I can use it with old technologies. I have a lot of best practices in my tool bag. We all do. We've got our favorite libraries and pathways that are just familiar to us. One of the things I've noticed when adopting a new technology is that what makes it easy to experiment with is how well it works with existing technologies. We talked about Elm UI rethinking style, CSS and your views, and Elm itself is a completely different language within JavaScript. That can be liberating, but it can also be limiting in the sense that I can't reach back for my existing tools if no tool exists in this new space.

The kind of experience I've had where this has really worked is with systems like JRuby or Clojure, where there's a very clear pathway to use Java libraries from those environments, so you always have kind of an escape hatch. What's that like in Elm?

DILLON: This is a really interesting conversation because it highlights, in some ways some of the most defining features of Elm. In terms of how do you kind of pull Elm into an existing application, there are a lot of different techniques for that. It's pretty straightforward to create a little Elm app. We usually don't call them components for reasons that we can get into if we want to but that's a whole can of worms.

But if you've got a little Elm application that you want to use to render a widget on your page, then it's as simple as calling Elm.YourModule.init and rendering it onto the page there. That's quite straightforward, and if you want to interface with your existing code, there are several ways to do that. There's something called ports in Elm, which is how you communicate by sending messages and data back and forth between your Elm app and JavaScript.
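[Editor's note: a minimal sketch of the ports mechanism being described; the module name, port names and messages are invented for illustration.]

    port module Widget exposing (main)

    import Browser
    import Html exposing (Html, text)

    -- Typed message channels out to and in from JavaScript.
    port toJs : String -> Cmd msg
    port fromJs : (String -> msg) -> Sub msg

    type Msg
        = GotMessage String

    main : Program () String Msg
    main =
        Browser.element
            { init = \_ -> ( "waiting...", toJs "Elm is ready" )
            , update = \(GotMessage s) _ -> ( s, Cmd.none )
            , view = \model -> text model
            , subscriptions = \_ -> fromJs GotMessage
            }

And on the JavaScript side:

    const app = Elm.Widget.init({ node: document.getElementById('widget') });
    app.ports.toJs.subscribe(message => console.log('from Elm:', message));
    app.ports.fromJs.send('hello from JavaScript');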

Now, this is one of the decisions that I think defines Elm as a language, and the reason it's important is that Elm decided not to make the choice that a lot of other compile-to-JS languages do. For example, look at ReasonML or PureScript or, a more extreme example, TypeScript. TypeScript is a superset of JavaScript, so it's trying to let you gradually introduce it to get some incremental improvements for your JavaScript code. That makes it extremely easy to experiment with, and we've talked about the importance of experimentation.

Now, the tradeoff with this technique is that while it becomes very easy to transition into -- and that's an excellent strategy for the goals of TypeScript -- Elm has a different set of goals. The thing Elm is focused on giving you is a truly type-safe experience. When you're working with Elm, if your Elm code says that this data is a float, then it is a float. Either it is a float or that code is not being run, and that's very different from the experience in TypeScript, where you have these escape hatches.

This is an inevitable choice for any compile-to-JS language: are you going to have escape hatches or not? Elm is really the only language out there, I think, that chooses not to have escape hatches, and that is actually the thing that I love about the language, because that's the only way you can truly have guarantees, rather than, "Yeah, I'm pretty sure that these type guarantees hold."

DAVID: Yeah, wishes and dreams.

DILLON: Yeah.

CHARLES: What does it mean to have no escape hatches? Because you talked about ports. Does it mean it's impossible to use an external JavaScript library?

DILLON: That's an excellent question. You absolutely can use JavaScript libraries. It means being explicit and upfront about the fact that there's uncertainty in these areas. That's what it comes down to. Take, for example, dealing with JSON. In a JavaScript application, when you're dealing with JSON, you make a request up to the server, you have some callback that receives the data you get back, and then you start pulling bits and pieces off of it: you say response.users[0].firstName and you hope that none of those things are null and none of those types are different than what you expected.

In a way, it's letting you pretend that you have certainty there when in fact you don't, and with Elm, the approach is quite different. You have to explicitly say, "I expect my response to have this shape. I expect it to be a list of things which have a first name and a last name, which are strings," and then Elm says, "Okay, great. I'm going to check your assumptions." If you're right, then here you go: you're in a well-typed space where you know exactly what the types are. And if you're wrong, then that's just another type of data, so it's just a case statement where you say, "If my assumptions were correct, do this, and if my assumptions were incorrect, decide what to do from there."
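[Editor's note: a sketch of the explicit JSON decoding being described; the user shape with first and last names is the hypothetical example from the conversation.]

    import Json.Decode as Decode exposing (Decoder)

    type alias User =
        { firstName : String
        , lastName : String
        }

    -- State the assumptions about the JSON shape up front...
    userDecoder : Decoder User
    userDecoder =
        Decode.map2 User
            (Decode.field "firstName" Decode.string)
            (Decode.field "lastName" Decode.string)

    -- ...then handle both outcomes: Ok if the assumptions held, Err if not.
    decodeUsers : String -> Result Decode.Error (List User)
    decodeUsers json =
        Decode.decodeString (Decode.list userDecoder) json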

CHARLES: Right. For me, it sounds like, ultimately, I'm going to be getting unstructured JSON back from the server -- and maybe I have some library that's doing that for me and adding value to that JSON in some way -- but at some point, I can present it to Elm. What you're saying is I need to be complete in making sure that I handle each case, explicit about saying what happens if the assumptions Elm wants to make turn out not to be true. Elm is going to make me handle the case where those assumptions were not true.

DILLON: Exactly. I think that TypeScript's any type is the perfect illustration of the difference. TypeScript's any type is sort of allowing you to say, "Don't type check this. Trust me here," and Elm's approach is more, just be explicit about what you want me to do if your assumptions are incorrect. It doesn't let you come in and say, "No, I know I'm going to be right here."

CHARLES: Right but there is a way to pass data structures back and forth.

DILLON: There absolutely is, and actually, there's a technique that's starting to gain some traction now which I'm really excited about. Rather than using the sort of JavaScript interop technique we talked about -- which, again, is very much like communicating with a server, where you're sending messages and getting messages with data back -- there's an alternative, which is using web components. There's actually quite good support for them, assuming that you don't need to be compatible with Internet Explorer.

Basically, in a nutshell, if you can wrap a declarative web component around anything -- it could be the Google Maps API, it could be a syntax-highlighting JavaScript library, something that you don't have an Elm library for but you want to use -- it's actually quite a nice experience. You just render that custom element using your Elm code, just as you would any other HTML in Elm.

CHARLES: Yeah, I like that, so the HTML becomes the canvas for composition with other JavaScript, and the semantics are very well defined, and that interface is actually pretty thin.

DILLON: Exactly, and the key again is that you want to define a declarative interface rather than an imperative one, where you're just doing a series of statements: "Do this, and then set this value, and then call this, and then set this callback." Instead, you're saying, "Render this Google Maps custom element, which is centered around these coordinates and has this zoom level." You declaratively give it the bits of information that it needs to render a particular view.
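[Editor's note: a sketch of rendering a declarative custom element from Elm, as described; "google-map" stands in for whatever hypothetical web component wraps the JavaScript library.]

    import Html exposing (Html, node)
    import Html.Attributes exposing (attribute)

    -- Declaratively render a custom element: hand it the coordinates and
    -- zoom level, and let the web component handle the imperative parts.
    mapView : Float -> Float -> Int -> Html msg
    mapView latitude longitude zoom =
        node "google-map"
            [ attribute "latitude" (String.fromFloat latitude)
            , attribute "longitude" (String.fromFloat longitude)
            , attribute "zoom" (String.fromInt zoom)
            ]
            []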

CHARLES: Okay. Then I guess the final question that I have around this area is about being able to integrate existing tools and functions inside of an Elm application, because it sounds like you could theoretically develop large parts of your application that way. Is there a way to have other areas of your application that are not currently invested in Elm still benefit from it, in the sense of JavaScript APIs that Elm can make available?

DILLON: Right, so you're talking about the reverse of that: instead of Elm reaching out to JavaScript, you're asking, can JavaScript reach out to Elm and benefit from some of its ecosystem?

CHARLES: Exactly. I see that as another potential vector for experimentation.

DILLON: It's a really interesting thought. I haven't given it too much thought, to be honest, but I actually have heard it come up before, and my gut feeling is that it's probably more fruitful to explore the inverse, reaching out to JavaScript from Elm. The reason is that the main appeal of Elm is that when you're operating within Elm, you have this sense that if it compiles, it works, because again, this central decision not to allow escape hatches is what allows you to have that sort of robustness. You have this feeling of bulletproof refactorings and adding new features seamlessly, where you change your data modeling to say, "Here's this other case that can be represented," and then suddenly the Elm compiler says, "Tell me what to do here, tell me what to do here and tell me what to do there," and you do it and your app is working. That's the real appeal of Elm, I think, and you don't really get much of that by just calling out to an Elm library from within JavaScript. That's my gut feeling on it.

CHARLES: Okay, that's fair enough. On the subject of interop and dealing with things like JSON, you actually maintain a GraphQL library for Elm, so you probably have a lot of experience with this. Maybe we can talk about that as a concrete case that highlights the examples.

DILLON: Yeah. To me, this is one of the things that really highlights the power of Elm to give you a really amazing refactoring and feature-creating experience. A lot of Elm packages are prefixed by the author's name, so it's dillonkearns/elm-graphql. I spoke about it this year at ElmConf.

In a nutshell, what it does is generate code based on your GraphQL schema. For anyone who doesn't know, GraphQL is just a language for expressing the shape of your API and what types of data it can return. What dillonkearns/elm-graphql does is look at your GraphQL schema and generate an API that allows you to query that server. Using this library, you can actually guarantee that you're making a valid query to your server. Again, you get this bulletproof experience of refactoring in Elm, where you can do something like make a change in your API, recompile your Elm code and see whether you've made a backwards-incompatible change.

All of this effort of JSON decoding I was talking about earlier, where you have to explicitly say, "These are my assumptions about the shape of the JSON that I'm getting back," goes away. When you're using this library, you no longer need to make any assumptions, because you're able to rely on the schema of your API. When you're requesting data, you don't have to run it, see if it works, then tweak it and run it again, that whole cycle of checking your assumptions at runtime.

It moves the assumptions you're making from runtime to compile time. When you compile your application, it can tell you, "Actually, this data you're requesting doesn't exist," or, "It's actually called this," or, "This is actually the type of the data."
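
For contrast, here is a minimal sketch of the hand-written decoding approach Dillon describes, using Elm's core Json.Decode module; the User shape and its field names are made up, and they are exactly the kind of runtime assumption the generated GraphQL code eliminates.

    module Decoders exposing (User, userDecoder)

    import Json.Decode as Decode exposing (Decoder)

    type alias User =
        { name : String
        , age : Int
        }

    -- Every field name and type below is an assumption about the JSON
    -- coming back from the server; if the server changes, this fails
    -- at runtime, not at compile time.
    userDecoder : Decoder User
    userDecoder =
        Decode.map2 User
            (Decode.field "name" Decode.string)
            (Decode.field "age" Decode.int)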

CHARLES: Right. I love that. How do you do that? Because it seems like you've got a bit of a chicken-and-egg problem: the schema is defined outside of Elm, so you have to be able to parse and understand the schema in order to generate the Elm types to be able to compile Elm code against them. Maybe I'm not --

DILLON: That's exactly right. That's exactly what it does. Now, the nice thing is that GraphQL is really designed for these types of use cases. It supports them in a first-class way. If you have a GraphQL API, that means you have built into it, whether you know about it or not, a way to introspect the schema. All of the queries for interrogating that GraphQL server, asking what types of data it returns and what queries you can run, are built in by the framework, so that comes for free. Getting up and running with this package I built is as simple as running a little npm CLI, pointing it at either the URL of your server or, if you prefer, the JSON form of your schema, and then it generates the code for you.
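
A sketch of what that workflow might look like: the Api.* module names below are generated from your schema, so the "human" field, its argument, and its "name" selection all assume a hypothetical Star Wars-style schema, while Graphql.SelectionSet and Graphql.Operation come from the library itself.

    -- Generate Elm modules from the schema, roughly:
    --   npx @dillonkearns/elm-graphql https://example.com/api --base Api
    module Queries exposing (query)

    import Api.Object.Human as Human
    import Api.Query as Query
    import Graphql.Operation exposing (RootQuery)
    import Graphql.SelectionSet exposing (SelectionSet)

    -- A selection set the compiler can check against the schema: if
    -- "human" or its "name" field ever disappears or changes type,
    -- this stops compiling. Argument types depend on your schema.
    query : SelectionSet (Maybe String) RootQuery
    query =
        Query.human { id = "1001" } Human.name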

CHARLES: Wow, that sounds fantastic. It feels like it would be cool if I could just start using this library to manage the GraphQL of my application, where I'm consuming that GraphQL from other JavaScript but it's the Elm code that's managing it. Do you see what I mean?

DILLON: I hadn't considered that. I guess you could. You're right. Maybe I’m so smitten with Elm that it's hard to see an in-between but I guess, you could get some benefits from that approach.

CHARLES: Right and as an experiment of course.

DILLON: There you go. There you go.

CHARLES: All right. With that, I think we'll wrap it up. Thank you so much, Dillon for coming on and talking with us on the podcast.

DILLON: My pleasure. I really enjoyed the conversation.

CHARLES: I actually got so many great tidbits from so many different areas, from software development in Elm but also just other things that I'm interested in trying. It was a really great conversation.

DILLON: I had a lot of fun and I love discussing these things. For any listeners who are interested in this stuff, feel free to reach out to me on the Elm Slack or on Twitter. I'm at @DillonTKearns. I'm also offering a free intro Elm talk for any companies that are kind of entertaining the idea of doing an experiment with Elm. If you go to IncrementalElm.com/Intro, you can find out about some of the talks that I'm offering.

CHARLES: All right. Well, thank you very much. We, as always, are the Frontside. We build software that you can stake your future on, and you can get in touch with us at @TheFrontside on Twitter or contact@frontside.io on email. Please send us any questions you might have or any topics that you'd like to hear about. We look forward to hearing from you and we will see you next week.

Nov 08 2018

50mins

Play

113: There and Back Again: A Quest For Simplicity with Philip Poots

Podcast cover
Read more

Guest:

Philip Poots: GitHub | ClubCollect

Previous Episode: 056: Ember vs. Elm: The Showdown with Philip Poots

In this episode, Philip Poots joins the show again to talk about the beauty of simplicity, the simplicity and similarities between Elm and Ruby programming languages, whether Elixir is a distant cousin of the two, the complexity of Ember and JavaScript ecosystems (Ember helps, but is fighting a losing battle), static vs. dynamic, the ease of Rails (productivity), and the promise of Ember (productivity, convention).

The panel also talks about the definition of "quality", making code long-term maintainable, and determining what is good vs. what is bad for your codebase.

This show was produced by Mandy Moore, aka @therubyrep of DevReps, LLC.

Transcript:

CHARLES: Hello, everybody and welcome to The Frontside Podcast, Episode 113. My name is Charles Lowell. I'm a developer here at the Frontside and with me today are Taras Mankovski and David Keathley. Hello?

DAVID: Hey, guys.

TARAS: Hello, hello.

CHARLES: And we're going to be talking with a serial guest on our serial podcast, Mr. Philip Poots, who is the VP of engineering at ClubCollect. Welcome, Philip.

PHILIP: Hey, guys. Thanks for having me on.

CHARLES: Yeah. I'm actually excited to have you on. We've had you on a couple of times before, and we've been trying to get you back on the podcast for about a year, to talk about what I think is kind of a unique story in programming these days. The prevailing narrative is that folks start off with some language that's dynamically typed and object-oriented, then at some point they discover functional programming, then at some point they discover static typing, and they march off into a promised land of Nirvana where no bugs ever happen again. It seems like it's pretty much a straight line from one point to the next, passing through those way stations.

When I talked to you, I guess... Gosh, I think you were the first person that really introduced me to Elm, back at Wicked Good Ember in 2016, and it seemed like you were kind of following that arc. But actually, that was a bit deceptive, because the next time I talked to you, you were saying, "No, man. I'm really into Ruby and kind of diving in and trying to get into Ruby again," and I was like, "Record scratch." You're jumping around the points. You're not following the preordained story arc. What is going on here? I just wanted to have a conversation about that and find out what the deal was and then, what's kind of guided your journey.

PHILIP: There was one event and that was ElmConf Europe, which was a fantastic conference. Really, one of the best conferences I've been to, I guess because of the nature of an early language and a small conference environment. There were just a lot of things happening, there were a lot of people. Evan was there, Richard Feldman was there, the leading lights of the Elm community were there and it was fantastic.

But I guess one thing that people have always said to me is that the hallway track is the best track of the conference, and it's not something I really appreciated before. During the breaks, I ended up talking to a guy called Michel Martens. He's the founder of a Redis hosting company and I guess this was just a revelation to me. He was interested in Elm. He was friends with the guys that organized the conference and we got talking and he was like, "I do this in Ruby. I do this in Ruby. I did this in Ruby," and I was like, "What?" and he was like, "Yeah, yeah, yeah."

He's a really, really humble guy, but as soon as I got home, I checked him out. His GitHub is 'soveran' and it turns out he's written, I don't know how many gems in Ruby, all with really well-chosen names, very short, very clear, very detailed. The best thing about his libraries is you can print them out on paper. What I mean by that is they're tiny. They were so small and I guess I'd just never seen that before. I got into Ruby through Rails -- that was my first exposure to programming, that was my first exposure to everything -- and with Rails, often when you hit problems, you'd start to dive a bit deeper, and ultimately you dive so deep that you sank, essentially, and you just accepted, "Okay, I'm not going to bend the framework that way this time. Let's figure out how everyone else goes with the framework and do that."

Then with Ember, when I moved into frontend, it was a similar thing. There were so many layers of complexity that I never felt like I had a real handle on it. I kind of just thought this was the way things were. I thought it's always going to be complex. That's just the nature of the problem. That's just the problem they're trying to solve. It's a complex problem and therefore, that complexity is necessary.

But it was Elm that taught me, I think, that choosing the right primitives and thinking very carefully about the problem can actually give you something that's quite simple but incredibly powerful. Not only something quite simple, but something so simple that it can fit inside your head. There's this concept of a program fitting inside your head, and with Rails, I don't know how many heads I'd need to fit Rails in, or Ember for that matter, and believe me, I tried. But with Elm, there was that simplicity.

When I came across this Ruby, a language I was very familiar with but a Ruby that I had never seen before -- a clear example is his templating library, he calls it 'mote' -- including comments, it's under a hundred lines of code and it does everything you would need it to. Sure, there were one or two edge cases that it doesn't cover, but that's the tradeoff. It almost feels like [inaudible] because he was always a big believer in "You ain't gonna need it." Let's go for that 80% win with 20% effort, and this was like that taken to the extreme.

CHARLES: I'm just curious, just to put a fine point on it, it sounds like there might be more in common, like a deeper camaraderie, between this style of Ruby and the style encouraged by Elm, even though on the surface, one is a dynamically typed object-oriented language and the other is a statically typed functional language. And yet, there's a deeper philosophical alignment that seems like it's invisible to 99% of the discussion that happens around these languages.

PHILIP: Yeah, I think so. I think the categories we use are off, and this is something Richard Feldman talks about. He's a member of the Elm community. He does a lot of talks and also has a course on Frontend Masters, which I highly recommend. He often says the frame of the conversation is wrong, because you have good statically typed languages and you have bad statically typed languages. You have good dynamic languages and you have bad dynamic languages. For all interpretations of good and bad, right? I don't want to start any wars here.

I think one of the things that Elm and Ruby have in common is the creator. Matz designed Ruby because he wanted programming to be a joy, you know? And Evan created Elm because he wanted programming to be a delight. I think if you experience both of those, like developing in both of those languages, you gain a certain appreciation for what that means. It is almost undefinable, intangible, although you can see the effects of it everywhere. In Ruby, everything is an object, including nil. In Elm, it's almost the opposite: Evan's taken everything away that could potentially cause you to stumble.

There's a lot to learn with Elm in terms of getting your head around the functional mindset and also working with types, but as far as that goes, people often call it Haskell Lite, which I think is a disservice to Elm because it's got different goals.

CHARLES: Yeah, you can tell that. In my explorations with Elm, the personality of Elm is 100% different than the personality of Haskell, if that's even a term you can apply to a programming language. For example, the compiler has an identity. It always talks to you in the first person: "I saw that you did this, perhaps you meant this. You should look here," or, "I couldn't understand what you were trying to tell me." That's literally how the Elm compiler talks to you. It actually talks to you like a person and so, it's very... Sorry, go ahead.

PHILIP: No, no, I think the corollary to that is the principle of least surprise in Ruby. You know, is there going to be a method that does this? You type it out and you're like, "Oh, yes there is," which is why things like inject and reduce are both methods on Enumerable. You don't have to choose one over the other. It was just like, "Let's make it easy for the person who's programming to use what they know best."

I think as well, maybe people don't think about this as deeply, but the level of thought that Evan has put into designing Elm is crazy, like he's really thought this through. I'm not sure if I said this the last time, but I went to a workshop in the early days in London, which was kind of my first real exposure to Elm, and Evan was giving the workshop. Someone asked him, "Why didn't you do this?" and he was like, "Well, that might be okay for now but I'm not sure that would make so much sense in 10 years," and I was like, "What?" Because JavaScript and that ecosystem is changing practically hourly, and this is a guy who's thinking 10 years into the future.

TARAS: You might have answered it already, but I'm curious what you think the difference is. Maybe it just comes down to that long-term thinking, but we see this in the JavaScript world a lot: this kind of almost indifference to APIs. It almost doesn't matter what the API is; for whatever reason, there seems to be a big correlation between the API that's exposed and the popularity of the tool. There are some patterns -- things that are really simple, like jQuery and React, have become popular because of the simplicity of their APIs. What's the flip side to that? What other ways of creating APIs do we see in the JavaScript world? Because we're talking about these beautiful APIs, and I can relate to some of the work that Charles has been doing, and I've been doing, on microstates, but I wonder what the alternative to that kind of beautiful API would be.

PHILIP: I don't know if anyone is familiar with the series of essays 'Worse is Better', like East Coast versus West Coast, from Richard Gabriel. The problem is, I guess -- and maybe this is just my understanding or my paraphrase of it, I'm not too familiar with it -- that good APIs take time and people don't have time. If someone launches a V1 first and it kind of does the job, people will use that over nothing, and if they're happy with it, they'll continue to use it and develop it, and ultimately it seizes market share. Then that's just the thing everyone uses, and the other guy's kind of left behind going, "But this is so much better."

I guess this is a question -- I think it was after Wicked Good Ember, I happened to be on the same train as Tom Dale on the way back to New York and we started talking about this. I think that's his big question. I think it's also a question that still has to be answered, which is, "Will Elm ever be mainstream? Will it be the most popular thing?" aside from the question of whether it has to be or not. For me, good API design comes from understanding the problem fully --

CHARLES: And you can't understand the problem fully without time.

PHILIP: Exactly, and often what happens -- at least this is what happens in my experience with the production software that I've written -- is that you don't actually understand the problem until you've developed a solution for it. Then when you've developed a solution for it, often the pressures, the commercial pressures or in open source [inaudible] the pressures of backwards compatibility, mean that you can never refactor your way to what you think the best solution is, and often you start from scratch. The reality is people are too far along with the stuff you wrote in the past versus the thing you're writing now. Those are always kind of at odds.

I think there are a lot of people that are annoyed with Elm because the updates are too slow, because it relies on Evan, because they want to have a pull request accepted -- all of these things where they don't necessarily recognize that the absence of them is what makes Elm Elm, if you know what I mean. The very fact that Evan does set such a high standard and does want everything to go through his personal filter is why you gain the benefits that Elm gives you. The tension is very real in terms of "I want to ship my software now and it becomes easier then."

Compare that to a language like JavaScript, which has all of the escape hatches that you need to be able to chop and change, to edit, to do what you need to do to get the job done. And let's be quite honest, with Elm, that's the challenge for someone who's not at an expert level, like me. Once you hit a roadblock, you say, "Where do I go from here?" I know if I were using JavaScript, I could just hack it and then the clients are happy and everything's fine. You know there's a bit of stuff in your code that you'd rather wasn't there, but at the end of the day, you go home and the job's done.

DAVID: Have you had to teach Elm to other people? You and I did some work together and I've seen you pair with someone and guide them through the work that they need to get done. Have you had a chance to do something like that with Elm and see how that actually happens, like how a developer's mind develops as they're working through it and using the tool?

PHILIP: Unfortunately not. I would actually love to go through that experience. I hope none of my developers are listening to this podcast, but secretly, I want to push them in the direction of Elm on the frontend. But no, I can at least speak from my own perspective. I found it very challenging at first because I was a Ruby developer and, also, I would never say that I understood JavaScript as much as I would have liked. Coming from a dynamic language with no functional experience to a functional language with types, it's almost like learning a couple of different things at the same time, and that was challenging.

I think if I were to take someone through it, I would maybe start with the functional aspects and then move on to the type aspects, or vice versa, like try to break it down clearly, and it's difficult because those two are so intertwined at some level.

Gary Bernhardt of the Destroy All Software screencasts -- I watch quite a bit of his stuff, and I had sent him an email to ask him some questions about one of the episodes that he did. He told me that he'd done the programming languages course, I think it's on Coursera, from Dan Grossman, so [inaudible] ML, which is kind of the father of all the MLs like Haskell and also Elm. I found that really helpful because he broke it down at a very basic level, and his goal wasn't to teach you ML. It was to teach you functional programming. It would be a very interesting exercise, I think.

I think the benefit that Elm gives you is that you get to experience that delight very quickly: "Oh, it's broken. Here's a nice message. I fix it. It compiles. Wow, it works." Then there's a very big jump whenever you start talking about effects, whenever you want to actually do something like HTTP calls or dealing with time -- I guess what you would call the impure stuff in Haskell-land -- and that was also a bit weird.
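
A minimal sketch of that jump, assuming Elm 0.19 with the elm/http 2.x API (the URL and message name here are made up): effects are described as plain data, and the runtime performs them.

    module Greeting exposing (Msg(..), fetchGreeting)

    import Http

    type Msg
        = GotGreeting (Result Http.Error String)

    -- A Cmd is a description of an effect; the Elm runtime performs the
    -- request and feeds the result back into `update` as a GotGreeting
    -- message, so no impure code appears in the program itself.
    fetchGreeting : Cmd Msg
    fetchGreeting =
        Http.get
            { url = "https://example.com/greeting"
            , expect = Http.expectString GotGreeting
            }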

CHARLES: Also, there's been some churn around that, right?

PHILIP: That's right. When I started learning, they had signals, then they kind of pushed that all behind the scenes and made it a lot more straightforward. I'd just mastered it and I was like, "Yes, I know it," and then I was like, "All right. I don't need to know it anymore."

This is the interesting thing for me, because at work, most of our work now is in Elixir and Phoenix. I'm kind of picking a little bit of it up as I work with them. I think Elm's architecture behind the scenes is based, I believe, on Erlang's process model, so the idea of a mailbox and sending messages and dealing with immutable state.

CHARLES: Which, kind of ironically, is very object-oriented in a way, right? It's functional, but also the concept of mailboxes and sending messages -- essentially, if you substitute object for process, you have these thousands and thousands of processes that are sending messages back and forth to each other.

PHILIP: Yeah, that's right. It's like that on a grand scale, on a distributed scale. Although I wouldn't say that I'm far enough along with Erlang and Elixir to appreciate the reality of that yet, that's what they say, absolutely.

CHARLES: Now, Phoenix and Elixir -- Elixir is a dynamically typed functional language. Does it share the simplicity? One of the criticisms you had of Rails was that you couldn't fit it in your head. It was very difficult. Is there anything different about Elixir that makes it a spirit cousin of Elm and the simple Ruby?

PHILIP: I think so, yes. Absolutely. I don't think it gets to the same level, but I think it's headed in the right direction, specifically on the framework front. I mean, in a sense Phoenix is like the antitype to Rails because it was born out of people's frustrations with Rails. José Valim was pretty much one of Rails' top core committers. Basically, every Rails application I wrote at one period had 80% of its code written by José Valim, if you included all the gems, Devise and all the rest of it.

Elixir in many ways was born out of the limitations of Ruby with Rails, and Phoenix was also born out of frustrations with the complexity of Rails. While it's not as simple as, say, Michel Martens' Syro, which is his web framework, a successor to Cuba if people have heard of that, it is a step in the right direction. I don't understand it all, but I certainly feel like I could. They have Plug, which is kind of analogous, but not identical, to Rack, and then the whole thing is built out of plugs.

I remember Yehuda Katz giving a presentation, 'The Next Five Years', essentially about Rails 3.0. This is going way back, and Phoenix is in some ways the manifestation of his desire to have the Russian doll pattern, where you could nest applications inside applications, have them side by side, put them inside each other and things like that. Phoenix has this concept called umbrella applications, which does that. And Ecto is a really, really nice abstraction for working with the database.

CHARLES: I see. It feels like, as opposed to functional versus object-oriented or static versus dynamic, the question is: how do you generate your complexity? How do you cope with complexity? Because I think you touched on it at the beginning of the conversation, where you thought: my problems are complex, so the systems that I work with to solve those problems must necessarily also be complex. One of the things that I've realized in the later stages of my career is that the first part is true. The problems that we encounter are extremely complex, but you're much better served if you achieve that complexity by composing radically simple systems and recombining them. The composability of your system is going to determine how easy it is to work with and how well it can cope with complexity. What really drives a good system is the quality of its primitives.

PHILIP: Absolutely. After ElmConf, I actually invited Michel to come to my place in the Netherlands. He lives in Paris, but I think he grew up in Buenos Aires, in Argentina. To my amazement, he said, "Yes, okay," and we spent a couple of days together, and there he talked to me about Christopher Alexander and the patterns book that patterns and design patterns actually grew out of.

One of his biggest things was that the code is the easiest part: you've got to spend 80% of your time thinking deeply about the problem, like literally go outside, take long walks. I'm not sure if this is what Rich Hickey means with Hammock Driven Development. I've never actually gotten around to watching the talk.

CHARLES: I think it's exactly what he means.

PHILIP: And he said that once you've got that, the code just comes. I think you should really check out Michel's work. I'll send you a link to put in the show notes, but everything is built out of really small libraries that do one thing and do it really well. For example, he has a library, a Redis client, but the Redis client also has something called Nest, which is a way to generate the keys for nested hashes. Because that's well-designed, the Redis client is literally just a layer on top.

If you understand the primitive, then you can use the library on top really well. You can embed Syro applications within Syro applications. I guess there you also need the luxury of time, and I think this is where my role as VP of engineering, which is kind of my first role of that kind, comes in. When you're working under commercial pressure, if you turn around to a business guy and say, "Yes, we'll solve this problem, but can we take three weeks to think about it?" -- it's never going to happen --

CHARLES: No.

PHILIP: Absolutely, it's never going to happen. Although the small things that I try to do day to day now are: get away from the computer, write on paper, write out the problem as you understand it, attack it from different angles, think about different viewpoints, etcetera.

CHARLES: I think if you're able to quantify the cost of not thinking about it for three weeks, then the business person you're talking to, their ears are going to perk up, right? But that's so hard to do. When we're asking, "What technologies are you going to choose? What are the long-term ramifications of making this decision, in terms of dollars or euros or whatever currency you happen to be in?" -- I wish we had more support in thinking about that, but it is kind of a one-off every time. Anyway, I'm getting a little bit off track.

PHILIP: No, not at all. This is a subject I love to talk about, because we had a bit of turbulence: we thought maybe we should get product people in, maybe we should get a product team going, and what I found was -- and this is maybe unique to the size of the company -- that it actually made things a lot more difficult, because you've got too many heads in many ways. Sometimes, it's better to give the developer all of the context so that he can think about it and come up with the best solution, because ultimately, he's the only one who can understand -- I wouldn't say understand the dollars and cents, but he understands the cost implications of doing it in inefficient ways, which often happens when you're working in larger teams.

TARAS: One thing I find really interesting about this conversation is that the definition of good is really complicated here. I've observed Charles work on microstates and I've worked with him -- I wrote a lot of the code -- and we went through five or six iterations, and at every point it got better, but it is so difficult to define that. Then when you take that conversation outside of the code context and you start to introduce business into the mix, the definition of good becomes extremely complicated.

What do you think about that? How do we define it? Are there cultures -- engineering cultures or societal cultures -- that have a better definition of good, one that's relevant to doing quality work like this?

CHARLES: That's a deep question.

PHILIP: Wow. Yeah, a really, really deep question. I think often for business -- purely commercially driven, money-oriented business -- good is the cheapest thing that gets the job done, and often that's very short-term. As you alluded to, Charles, people don't think about the cost of not doing the right thing, so to speak, in our eyes. And there's a huge philosophical discussion about whether our definition of good, as programmers and people who care about our craft, is even analogous or equal to good in a commercial context.

CHARLES: Yes, because ultimately -- and this is, if you have read Zen and the Art of Motorcycle Maintenance, one of the things that Pirsig talks about -- what is the definition of quality? How do we define something as good or bad? One of the definitions that gets put forward is how well something is fit to purpose. Unless you understand the purpose, you can't determine quality, because the purpose defines a very rich texture, a very rich surface, and quality is going to be the thing that maps very evenly and cleanly over that surface. When it comes to what people want in a program, they're going to want very different things.

A developer might need stimulation: this is something that's very new, something that's going to keep my interest, or it's going to keep my CPU maxed and I'm going to be learning a whole lot. A solution that actually solves for that purpose is going to be a high-quality solution. Or: this is going to be fast, we're going to be able to get to market very quickly. That might be one of the purposes, and so a solution that is fast fits the purpose, so it's going to be good.

Also, I think developers can be self-indulgent, looking for the next best thing, something that's going to keep their interest -- we're all guilty of that. But at the same time, we're going to be the ones maintaining the software, both in our current projects and, collectively, when we move to a new job and we're responsible for someone else's code, paying the cost of those decisions. We want to minimize the pain for ourselves and minimize the pain for others who are going to come and work in our code, to make things long-term maintainable. That's one axis of purpose and therefore, an axis of quality.

I think in order to measure good and bad, you really have to have a good definition of the purpose. That surface is so rich, but the more you can map it and find out where the contours lie, the more you're going to be able to determine what's good and what's bad.

TARAS: It makes me think of like what is a good hammer. A sledgehammer is a really good hammer but it's not the right hammer for every job.

CHARLES: Right.

TARAS: I think what you're saying is understanding what it is that you're actually doing and then matching your solution to what you're actually trying to accomplish.

PHILIP: Yeah, absolutely. In my experience -- we have a Ruby team building a Rails application, that's our monolith, and then we have a couple of Elixir teams with services that have been spun out of that. This isn't proven, this is just gut feel right now, but Elixir is sometimes slower for developing and shipping the same feature, while in the long term it's more maintainable. I haven't actually dived into React and all of the amazing frameworks it has for getting things up and running quickly, but in terms of a full-scale application, I still think, 10 or 11 years on, Rails has no equal for proving a business case in the shortest time possible.

CHARLES: Yeah. I feel very similarly, but the question is: does your development team approach the problem as proving a business case, or do they approach it as "I want to solve this set of features"?

PHILIP: Yes. Where I'm working at the moment, I started out just as a software developer. I guess we would qualify for 37signals' -- or sorry, Basecamp's -- definition of a calm company --

CHARLES: Of a what company?

PHILIP: A calm company. Sorry. They just released a new book, at one point called 'The Calm Company' and now 'It Doesn't Have to Be Crazy at Work'. In my first couple of months, I was given a problem. It was business-oriented and it had to be solved, but it had to be solved well from a technical perspective, because we didn't want to have to return to it every time. It was standardizing the way that we exported data from the database to Excel. You know, I was amazed, because it was literally the first time that I'd been given the space to actually dive in on a technical level to do that kind of stuff.

But I think even per feature, that varies, and that's sometimes challenging when handing work on, because you've got to say, "With this one, literally, we're just trying to prove whether, if we have this feature, people will use it," versus, "This is a feature that's going to be used every day and therefore needs to be of good technical quality." Those are the tradeoffs that, I guess, keep you in a job. Because if it were easy, you wouldn't need anyone to figure it out, but it's always a challenge.

What I like is that our tools are actually getting better. With Elm, for example, its major selling point is maintainability, and yet there haven't been that many companies that have used Elm over a period of years and lived to tell the tale. Whereas we certainly know Rails applications have done well, like Basecamp and GitHub. For sure, they can be super maintainable, but the fact that it took GitHub until just now to move to Rails 5, I believe -- the fact that it took them years and years while they were running off a fork of Rails 2.3 -- shows the scale of the problem.

You know, Phoenix also went through a few issues, moving its architecture from the classic Rails style to more of a domain-driven design model. I think we're getting there slowly, zig-zagging towards a place where we better understand how to write software to solve business problems. I was really interested in microstates when you shared it at Wicked Good Ember, because that, to me, was attacking the problem from the right perspective. It's like: given that the ecosystem is always changing, how can we extract the business logic such that these changes don't affect the logic of our application?

CHARLES: Man, we've got a lot to show you. It has changed quite a bit in the last two years. Hopefully, for the better.

TARAS: It's been reduced to almost a quarter of its size while maintaining the same feature set, and it's faster, it's lazier, it's better in every respect. The ideas have actually been fairly consistent; it's just the implementation that's evolved.

CHARLES: Yeah, it's been quite a journey. It parallels the story that we're talking about here, in the sense that it really has been a search for primitives and a search for simplification. One of the things we've been talking about is having these Ruby gems that do one thing and do it very, very well, or the way that Elixir is architected around some very good primitives, or Elm being the same kind of thing -- spiritually aligned, even though on the surface it might share more in common with Haskell. There's actually a deep alignment with a thing like Ruby, and that's a very surprising result.

I think one of the things that appeals to me about the type of functional programming that is, ironically, not present in Elm is the concept of type classes; I actually love them for their simplicity. I've become disenchanted with things like Lodash, even though they're nominally functional. The fact that things like monoids and functors aren't first-class participants in the ecosystem means you have to have a bunch of throwaway functions.

The API surface area is very large. Whereas if you do account for those things, these ways of combining data, that's how you achieve your complexity: not with a bunch of one-off methods like in Lodash, where they're all provided for you so you don't have to write them yourself. That is one level of convenience, but having access to five powerful primitives -- I think that's the power of the deeper functional programming types.
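
To make that concrete, here is the functor idea sketched in Elm terms, using only core functions: one concept, map, covers lists, maybes, and results alike, instead of a pile of one-off helpers.

    module FunctorSketch exposing (..)

    -- `map` applies a function inside a structure without disturbing
    -- the structure itself; this single idea covers many containers.
    double : Int -> Int
    double n =
        n * 2

    -- [ 2, 4, 6 ]
    mappedList : List Int
    mappedList =
        List.map double [ 1, 2, 3 ]

    -- Just 6
    mappedMaybe : Maybe Int
    mappedMaybe =
        Maybe.map double (Just 3)

    -- Ok 6
    mappedResult : Result String Int
    mappedResult =
        Result.map double (Ok 3)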

PHILIP: And Charles, do you think that gives you the ability to think at a higher level about the problems that you're solving? Would you make that link?

CHARLES: Absolutely.

PHILIP: So, if we're not doing that, then we're actually doing ourselves a disservice?

CHARLES: I would say so.

PHILIP: Because we're actually creating complexity, where it shouldn't exist?

CHARLES: Yeah. If you have a more powerful primitive -- think of things like async functions and generator functions -- there's a common thread between async functions, generator functions, promises and arrays: they're all functors. For me, that's a very profound realization, and there might be a deeper spiritual link between, say, an async function and an array, in the same way that there's a deep spiritual link between Ruby and Elm. If you see that, you're able to think at a higher level and you have a smaller tool set where each tool is more powerful; if you don't, you're doing yourself a disservice.

PHILIP: You did a gist, I think it was a repository with a README, where you boiled down what people would term -- what I would term -- the scary functional language stuff to very simple JavaScript. Did you ever finish that? Did you get to the monads?

CHARLES: I did get to the monads, yeah.

PHILIP: Okay. I need to check that out again. I found that really, really helpful, because I think one of Evan's big things with Elm is that he doesn't ever use those terms. He avoids them like the plague, because I think he believes they come tinged with the negative experiences of people trying Haskell and essentially getting laughed at, right?

CHARLES: Yes. I think there's something to that.

TARAS: But we're doing that in microstates as well, right? In the microstates documentation, even though microstates is written completely with these functional primitives, on the outside there's almost no mention of it. One of the things that's really powerful with microstates is this idea that you can return another microstate from a transition, and what that will do is what a flat map would do, which is replace that particular node with the thing that you returned.

A lot of people might not know that that's what a flat map does, but the microstate will do exactly what they wanted, without them even realizing it should just work like that. I think a lot of the work that we've done recently is to package all these things up so they're powerful but accessed through concepts that are very familiar -- something you don't need to learn. You just use it and it just works for you.

CHARLES: Right, but I feel like there's unharvested value for every programmer out there in these type classes: monads and monoids and functors and co-functors, covariant functors, contravariant functors, blah-blah-blah, that entire canon. I wish there was some way to reconcile the negative connotations and baggage that it has, because we feel kind of the same way and I think that Evan's absolutely right. You do want to hide that, or make it so that the technology is accessible without having to know those things. But at the same time, these concepts are so powerful, both in terms of having to think less and having to write less code, but also as a tool to say, "I've got this process. Is there any way it could be a functor? If I can find a way that this thing is a functor, I can save myself so much time and take so many shortcuts."

PHILIP: And in order to be able to communicate that, or at least communicate about that, you need to have terms to call these things, right? Because you can't always just refer to the code or the pattern. It's always good to have a name. I'm with you. I see value in both, like making it approachable, so the people who don't know the terms are not frightened away. But I also see value in using the terms that have always existed to refer to those things, so that things are clear and we can communicate about them.

CHARLES: Right. Definitely, there's a tradeoff there. I don't know exactly where the line is, but it would be nice to be able to have our cake and eat it too. We didn't really get to talk about static versus dynamic typing in the greater context of this whole conversation. We can explore that topic a little bit.

PHILIP: Well, I can finish with this: I think the future is typed Erlang. Maybe that's Elm running on the BEAM.

CHARLES: Whoa. What a take, right there, folks. I love it. I love it, but what makes you say that? Typed Erlang doesn't exist right now, right?

PHILIP: Exactly.

CHARLES: And Elm definitely doesn't run on the BEAM.

PHILIP: I don't know if I'm allowed to say this, but when I was at that workshop with Evan, he mentioned it, and I'm not sure whether it was just a throwaway comment or whether it's part of his 20-year plan. But the very fact that Elm is designed around Erlang -- the signal stuff was designed around the way Erlang does communication and processes -- means I know he at least appreciates that model. From my point of view, with my experience of Elixir and Erlang in production usage -- it's not huge scale, but it's enough scale that we'd need to start doing performance work on Rails -- it's striking to see how effortless things are with Elixir and with Erlang.

I think Elm on the backend would be amazing, but it would have to be a slightly different language, I think, because the problems are different. We began this by saying that my story was a little different from the norm because I went back to dynamic languages, to the dark side, but in Elixir, for example, I do miss types hugely. They have a little bit of a hack from Erlang, where you return a lot of tuples with :ok and then the object. You know, it's almost like wrapping it up in a [inaudible].

There are little things like that, and there's Dialyzer for type checking, and I think there are a few projects which do add types to Erlang, etcetera. But I think something that works would need to be designed from the ground up to be typed and to run on the BEAM, rather than being a squashed version of something else made to fit somewhere else, if that makes sense.

CHARLES: It makes total sense.

PHILIP: I think so. Just to finish, I recently read a book by Scott Wlaschin, I think -- 'FSharpForFunAndProfit' is his website. It's written with F#, but it's about designing your programs in a typed functional language. Using the book, you could probably design your programs on paper and only commit to code at the end, because you're thinking right down to the level of the types and the processes and the pipelines, which to me sounds amazing, because I could work outside.

CHARLES: Right. Alrighty, I will go ahead and wrap it up. I would just like to say thank you so much, Philip, for coming on and talking about your story, as unorthodox as it might be.

PHILIP: Thank you.

CHARLES: Thank you, Taras. Thank you, David.

TARAS: Thank you for having us.

CHARLES: That's it for Episode 113. We are the Frontside. This is The Frontside Podcast. We build applications that you can stake your future on. If that's something that you're interested in, please get in touch with us. If you have any ideas for a future podcast, things that you'd like to hear us discuss, or any feedback on the things that you heard here, please let us know. Once again, thank you, Mandy, for putting together this wonderful podcast, and we will see you all next time.

Oct 25 2018

45mins

Play

112: Language Formation with Amanda Hickman and Amberley Romo

Podcast cover
Read more

Guests:

In this episode, Amanda Hickman and Amberley Romo talk about how they paired up to get the safety pin, spool of thread, and the knitting yarn and needles emojis approved by the Unicode Committee so that now they are available for use worldwide.

They also talk about how their two paths crossed, how you can pitch and get involved in making your own emojis, and detail their quest to get a regular sewing needle approved as well.

This show was produced by Mandy Moore, aka @therubyrep of DevReps, LLC.

Transcript:

ROBERT: Hello, everyone. Welcome to Episode 112 of The Frontside Podcast. I'm Robert DeLuca, a software developer here at the Frontside and I'll be your episode host. With me as co-host is Charles Lowell. Hey, Charles.

CHARLES: Hello, Robert. Good morning.

ROBERT: Good morning. This is an exciting podcast. Today, we're going to be discussing writing a proposal to the Unicode Committee, getting it accepted and rejected. This is basically making emojis which I think is really awesome. We have two guests today who have an amazing story, Amanda Hickman and Amberley Romo.

Thank you both for joining us. You two have an amazing story that I would really love to dive into, and we're going to do that today. It's about creating your own emoji and getting it accepted so everybody can use it, which I think is super, super cool -- something that I've always kind of wanted to do as a joke, and it seems like that's kind of where your story began. So, do you two want to jump in and start telling it? I think Amanda has a great beginning to this.

AMANDA: Sure. I mean, hi and thanks for having me. I don't know where to begin, but really, for me, this starts with learning to sew my own clothes, which is an incredibly exasperating and frustrating process that involves a lot of ripping stitches back out and starting over. Instagram was a really big part of me finding patterns and finding other people who were sewing their own clothes and learning from the process.

I wanted to be able to post stuff on Instagram, and it started to drive me absolutely crazy that there are emojis for wrenches and nuts and hammers and there are no textile emoji. The best I could find was scissors, which is great, because cutting patterns is a place where I spend a lot of time procrastinating, but that was it. I knew a woman, Jennifer 8 Lee, or Jenny, who had led a campaign to get the dumpling emoji into the Unicode character set. I knew she'd succeeded in that, but I didn't really know much more about how that had worked.

I started thinking, I'm going to write a sewing emoji proposal. I can do this. I can lead this campaign. I started researching it and actually reached out to Jenny, and I discovered that she has created an entire organization called Emojination, where she supports people who want to develop emoji proposals.

CHARLES: Before you actually found that support system, you made the decision that you were going to do this. You know, from my perspective, I see emoji as this thing that is static; it's there, it's something that we use, but the idea that I, as an individual, could actually contribute to it? Having come to that fork in the road, I probably would have said, "Nah, it is what it is and I can't change it." What was the process in your mind to actually say, "You know what? I'm actually going to see if I can have some effect on this process"?

AMANDA: It definitely started with a lot of anger and being consistently frustrated, but I knew that someone else had already done this. It was sort of on my radar that it was actually possible to change the emoji character set. I think that if I didn't know Jenny's story -- and it turned out I didn't know Jenny's story at all, I just thought I did -- but if I didn't know that basic thing, that somebody I knew, a mere mortal like me, had gone to the Emoji Subcommittee of the Unicode Consortium and petitioned them to add a dumpling emoji, I am sure that I wouldn't have bothered. But I knew from talking to her that there was basically a process, that there was a format that they want proposals in, and that it's possible to write them a proposal. I knew that much just because I knew Jenny.

I think at that point, when I started thinking about this, Emoji 9 -- I should be more of an expert on emoji releases, actually -- but a new release of emoji had come out. There were a bunch of things in that release and it got a little bit of traction on Twitter. I knew that the Unicode Consortium had just announced a whole new slate of emoji, so I was also generally aware that there was some kind of process by which emoji were getting released and expanded and updated.

ROBERT: That's interesting. Do you know when that started? Because it seems like Apple started to add more emojis around like iOS 7 or something but it was pretty static for a while right? Or am I wrong?

AMANDA: I actually am tempted to look this up but the other piece that is not irrelevant here is that at the time, I was working at a news organization called BuzzFeed that you may have heard of --

ROBERT: Maybe, I don't know. It sounds kind of familiar.

AMANDA: I do feel like people kind of know who they are. I was surrounded by emoji all the time at BuzzFeed, an internet native of the highest order. We had to use emoji all the time, and I had to figure out how to get emoji into blog posts, which I didn't really know how to do before that. I could put them on my phone, but that was it. I was immersed in emoji already. I knew that there was a project called Emojipedia that was a whole kind of encyclopedia of emoji.

One of my colleagues at BuzzFeed, a woman named Nicole Nguyen, had written a really great article about the variation in the dance emoji. If you look at the dance emoji, one of the icons that some devices use is this kind of woman with her skirt flipping out behind her who looks like she's probably dancing a tango, and then one of the icons that other character sets and other devices use is a sort of round, yellow, lumpy figure with a rose in its mouth that you sort of want to hug, but that's definitely not going to impress you with its tango skill. She had written this whole article about how funny it was that you might send someone this very cute dumpling man with [inaudible] and what they would see was the sexy tango woman.

I think there was some discussion -- it was around that time also that Apple replaced the gun emoji with a water gun -- there was some discussion of the direction that various emoji face. One of the things that I learned around that time was that every device manufacturer produces their own character set that's native to their devices, and they look very different. That means that there's a really big difference between a kind of frustrated face with a gun pointing at it -- which I don't really think is very funny, but that sort of "I'm going to shoot myself" -- and pointing the gun the other way, which is very much "I'm shooting someone else." So these distinctions, what it means that the gun emoji can point two different ways when it gets used, was also a conversation that was happening. None of that answers your question, though, which is when the rapid expansion of emoji started to happen.

ROBERT: I feel like that's setting the scene there, though, because it seems like there's a little bit of tension: everything was kind of diverging a little bit, and it's sort of driving back towards maybe standardization.

AMANDA: There's actually, as far as I know, no real move toward standardization, but the Unicode Consortium has this committee that has representatives from, definitely, Apple and Microsoft and Google, and I forget who else is on the consortium. Jenny 8 Lee is now on the consortium and she's on the Emoji Subcommittee, but they actually do get together and debate the merits of adding additional emoji, whether they're going to be representative. One of the criteria is longevity, and I tend to think of this as the pager problem. There is indeed a pager emoji, and I think that the Unicode Consortium wants to avoid approving more emoji like the pager, because that was definitely a short-lived device.

CHARLES: Right. I'm surprised that it actually made it. Emoji must be older than most people realize.

AMANDA: My understanding is that very early Japanese computers had lots of emoji. There are a lot of different Japanese holidays that are represented in emoji, and a lot of Japanese food as well, so if you look through the foods, there's a handful of things that have been added recently, but a lot of the original emoji definitely covered Japanese cuisine very well.

ROBERT: I definitely remember, when I got my first iPhone that could run iPhone OS 2, you would install an app from the App Store that would then allow you to toggle on the emoji keyboard -- you had to install an app to do it -- and that's kind of where the revolution started, for me at least. I remember everybody starting to send these things around.

AMANDA: But if you look at Emojipedia, which has a nice rundown of historical versions of Unicode, back in 1999 they added what I think of as the interrobang, which is the exclamation mark and question mark together, and a couple of different Syriac crosses. Over the years, the committee added a whole series of writing icons and flags that all make sense, but then it's around, I would say, 2014, 2015 that you start to get the zipper mouth and rolling your eyes and nerd face and all of the things that are used in conversation now -- the unicorn face.

ROBERT: My regular emojis.

AMANDA: Exactly.

CHARLES: It certainly seems like the push to put more textile emoji ought to clear the hurdle for longevity, seeing there’s kind of like, what? Several millennia of history there? And just kind of how tightly woven -- pun intended -- those things are into the human experience, right?

AMANDA: Definitely. Although technically, there are still no weaving emojis.

CHARLES: There's no loom?

AMANDA: There's no loom and I think that a loom would be pretty hard to represent in a little 8-bit graphic but --

CHARLES: What are the constraints around that? Because ultimately, we've already touched on the fact that emoji themselves are abstract representations, and there are a couple of examples, like the dancing one, where the representation can vary quite widely. How do they put constraints around the representation versus the abstract concept?

AMANDA: You don't have to provide a graphic, but it definitely smooths the path if you do, and it has to be something that's representable in that little bitty square that you get, something representable in a letter-size square. If it's not something that you can clearly see at that size, it's not going to be approved. If it's not something you can clearly illustrate at that size in a way that's clearly distinct from any other emoji, and also clearly distinct from anything else that image could be, it's not going to be approved. So, being able to actually represent it at that little bitty size matters. And I don't know... one of the sort of sad facts of having ultimately worked with Emojination on the approval process is that we were assigned an illustrator, and she did some illustrations for us, so I never had to look at what the constraints were for the illustration, because it wasn't my problem.

ROBERT: Sometimes, that's really nice.

AMANDA: Yes, it's very nice. I ended up doing a lot of research. What made me really sad -- and I don't want to jump too far ahead, but one of the things that made me really sad -- is we proposed the slate, and the one thing that didn't get approved was the sewing needle. It also didn't get rejected, so it's in this sort of strange nether space. It's kind of stuck in purgatory right now. I did all this research and learned that the oldest known sewing needle is a Neanderthal needle, so it predates Homo sapiens, and it's 50,000 years old.

CHARLES: Yeah. Not having a sewing needle just seems absurd.

AMANDA: Yeah. We have been sewing with needles since before we were actually human beings.

ROBERT: That's a strong case.

AMANDA: Yes, that's what I thought. If I sort of go back to my narrative arc, I wanted to do a sewing needle and started researching it a little bit --

CHARLES: Sorry to keep interrupting you, but that's literally the one that started this whole journey.

AMANDA: Yes, I wanted a sewing needle. I really wanted a sewing needle. I did a little research and then I reached out to Jenny to ask her if she had any advice. She said, "You should join my Slack," and I was like, "Oh, okay. That's the kind of advice." She and I talked about it and she said that she thought it made more sense to propose a kind of bundle of textile emoji, and I decided to do that. We talked it through, and I think the original was probably something closer to knitting than yarn, but we decided knitting, a safety pin, and thread and needle were the ones that made the most sense.

I set about writing these four proposals, and one of the things that they ask you to show is that the emoji is frequently requested. One other thing that I will say about the proposal format is that they have this outline structure that is grammatically very wonky. They ask you to assert the image's distinctiveness and they also ask you to demonstrate that it is frequently requested. I found a couple of really interesting resources. Emojipedia, which is this sort of encyclopedia of emoji images and history, maintains a list of the top emoji requests. I actually don't know how they generate that list or who's requesting these things and where, but I think it's things that they get emailed about and things people request in other contexts, and sewing and knitting had been on that list -- they started compiling it in 2016.

ROBERT: To be a part of the proposal process, you have to show that it's requested. Without that resource, you'd just be scouring Facebook and Twitter and history, shouting at people like, "I really want this emoji. Why doesn't it exist?" That seems pretty hard.

AMANDA: Actually our proposals all have Twitter screenshots of people grousing about the absence of knitting emoji and yarn emoji and sewing emoji. I know that Emojipedia does a bunch of research, so they go out and look at what people are grousing about on Twitter. They look at places where people are publicly saying, "It's crazy that there's no X emoji," and that's part of their process for deciding what kinds of emojis people are asking for. Their research was one resource, but we also took screenshots of people saying that they needed a safety pin emoji and that was part of making the case.

One of the things that I found as I was doing that research was that, I guess at this point it was almost two years ago, when the character set that included the dumpling emoji came out, there was a bunch of grousing from people saying, "Why is there not a yarn emoji?"

There was a letter-writing campaign that I think Lion Brand had started. Lion Brand Yarn had put out this tweet saying, "Everyone should complain. We need a yarn emoji," but it doesn't matter how much you yell on Twitter. If you don't actually write a proposal, you're not going to get anywhere. I had been told that the Emoji Subcommittee is really disinclined to accept proposals that have a corporate sponsor, so they weren't going to create a yarn emoji because Lion Brand Yarns wanted them to create a yarn emoji.

ROBERT: Right, so the campaign was actually working against the proposal.

AMANDA: Right. But as I was digging around, the other thing I found was this woman in... I actually don't know if you're in Dallas or Austin, but I found Amberley, who had also put a post on Twitter and had started a petition, asking people to sign her petition for a yarn emoji proposal or a knitting emoji. I don't remember if it was a yarn emoji or a knitting emoji, but I found her petition and reached out to her to ask if she was interested in co-authoring the proposal with me because she had clearly done the work. She had actually figured out how the system worked at that point. I think she knew who she was petitioning, at least.

I reached out to Amberley and we worked together to refine our proposal and figure out what exactly we wanted to request. I think there were a bunch of things that were on the original list like knitting needles, yarn and needles. I think crocheting would have been on the original list. We were sort of trying to figure out what was the right set of requests that actually made sense.

ROBERT: So then, this is where Amberley's story comes in, and it's interesting too because she has an entirely different angle on this. Maybe not entirely different, but different nonetheless. This kind of ties back to us being a software podcast. It kind of ties back to the software aspect, right?

AMBERLEY: Yeah. I think really, they're kind of separate stories on parallel tracks. My motivation was also two-fold, like Amanda's was. I started knitting in 2013 and I had a really good group of nerd friends at a little yarn shop up in DC, like a stitch and ditch group --

ROBERT: I love it.

AMBERLEY: It was a constant sort of, "Where's the [insert emoji here]?" like, "Where's the yarn emoji? Where's the knitting emoji?" And we would sort of sarcastically use the spaghetti emoji because it was the most visually similar. That was something that was in the back of my mind, but it teaches you a lot about yourself too, because I was like, "Oh, this is fiber art, and emoji are kind of technical, like a tech space," and I didn't really connect that it was relevant or that I might have any power to change it. It just didn't occur to me at the time.

ROBERT: Interesting. I feel like a lot of people are in that same situation, not realizing that you can actually make change on this.

AMBERLEY: Right, so my brain didn't even make the jump of, "Why isn't this a thing? Let me look at how to make the thing." When that happened for me -- Amanda mentioned using emoji and everything in the BuzzFeed space, and I love how you explained BuzzFeed a while ago, it's my favorite description of BuzzFeed I've ever heard -- something similar happened for me: I was a software developer and in 2016, the Yarn package manager was released and that kind of turned something on in my head. I was seeing all these software engineers now be like, "Where's the yarn emoji?" and I'm like, "Welcome to the club."

ROBERT: "Do you want to join our Slack? We can complain together."

AMBERLEY: Right. It had been a pretty decent amount of time of me semi-seriously ranting and complaining to my coworkers, who were primarily male software engineers. I remember I went to [inaudible] in the Frost Bank Tower after work and was just like, "I'm going to figure out how this happens," and I spent a couple of hours at the coffee shop. I found the Unicode site and I found their proposal process and their structure for the proposal and everything, and I just started doing the research and drafting up a proposal specifically for yarn. Maybe it was a bit naive of me but to me it was like, "Okay, here's the process. I follow the process. Cool." I mean, you have to make a case and it has to be compelling and it has to be well-written and it has to be supported and all that, but to me it was like, "Okay, there's a process."

At the same time, I did read about the dumpling emoji, but I didn't connect it to Emojination and the Kickstarter they had started. We should talk about this later, but I think the issue of representation on the committee and who gets to define language is really interesting. I saw that they had done the Kickstarter and that there was a campaign aspect to it, so I ended up just building this simple site so that if anyone Googled it, they would find yarn emoji. It's still up at YarnEmoji.com and that was how Amanda found me.

I got this random email -- I had had this burst of energy and did all the research and wrote the draft, sort of piecemeal, filling out the different sections the way they have it outlined on the Unicode site, and then a month or two went by and I had kind of not looked at it for a bit -- and then I get this random email through this website that I had almost forgotten about. It was like, "Hey, I'm working on this series of proposals. If you're working on knitting or yarn or whatever, maybe we could work together," and I was like, "Well, that's sweet." Then she opened up this whole world to me. There's this whole Emojination organization, sort of 100% devoted to democratizing the process of language formation through creating emojis, and so then I got really into that. My primary motivator was yarn.

CHARLES: So what's the status of the yarn and the spool -- those emoji -- right now?

AMBERLEY: The yarn, the spool of thread and the safety pin are all approved emoji for the 2018 release. Amanda and I actually found out toward the end. A couple of months ago, I saw someone use the spool of thread emoji for a Twitter thread -- you know how people will be like all caps THREAD and have a thread of tweets -- I saw someone do that just out of the blue. I was like, "Oh, my God. Is it out?" The thing about the individual vendors is that it sort of gets released piecemeal, so at the time Twitter had, I think, released their versions of this series of new emoji but others hadn't.

CHARLES: How does that work? Because you'd think Twitter would be device-dependent, depending on what browser you're using, like if you're on a Windows or a Mac or a Linux box, right?

ROBERT: -- They have their own emoji set, right? I know Facebook does this too.

AMBERLEY: I'm painfully aware that Facebook does it because I can't use the crossed finger emoji on Facebook because it actually gives me nightmares.

ROBERT: I have to go look at this now.

AMBERLEY: Because it's so creepy-looking.

CHARLES: Okay. Slack, for example, is another one. It's a software-provided emoji set.

AMANDA: Right.

AMBERLEY: I'm not totally sure that Slack actually adheres to the standard Unicode set. I think it's kind of its own thing but I might be wrong about that.

AMANDA: Sorry, Slack definitely supports the full Unicode set. They also have a bunch of emoji that they've added that aren't part of the set.

AMBERLEY: Slack emojis?

AMANDA: Yes.

CHARLES: Yeah and then every Slack also has its kind of local Slack emoji.

AMBERLEY: Right.

CHARLES: But how does that work with --?

ROBERT: Okay, this crossed-finger Facebook emoji is... yes, I agree with you, Amberley.

AMBERLEY: Thank you. I had yet to find someone who disagrees with me about that.

AMANDA: I have never seen it before and I'm now like, "What is going on?"

CHARLES: Yeah, so how does it work if a vendor like Twitter is using a different emoji set? How does that work with cut and paste, like if I want to copy the content of one tweet into something else? Are they using an image there?

AMANDA: They're using an image. I think it doesn't happen as much anymore, but for a long time, I would often get texts from people and the text message would have that little box with a little code point in it and you were like --

AMBERLEY: More like an alien thing?

AMANDA: Yeah. Definitely, if you don't have the emoji character set that includes the glyph that you're looking at, you're going to get that little box that has a description of the code point, and I think what's happening is that Twitter is using JavaScript or, generally, "programming." There were air quotes but you can't see them.

Twitter is using their software to sub in their emoji glyph whenever someone enters that code point. Even if you don't have the most up-to-date Unicode fonts on your computer, you can still see those emoji on Twitter. If I copy and paste one into a text editor on my computer, what I'm going to see is my little box that says '01F9F5' in it, but if I paste it into Twitter, it shows up. I can see them on Twitter but I can't see them anywhere else.
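For the curious, Twitter open-sourced its approach as the twemoji library; roughly, it scans text for emoji characters and swaps each one for an image named after its code point in hex. A minimal sketch of the idea in TypeScript -- the CDN URL here is hypothetical, the character match is approximate, and real implementations also handle multi-code-point sequences like skin tones and ZWJ combinations:

```typescript
// Sketch of vendor-style emoji substitution. The asset host is hypothetical.
const EMOJI_CDN = "https://example.com/emoji";

// U+1F9F5 is the "thread" code point mentioned above.
const THREAD = String.fromCodePoint(0x1f9f5); // "🧵"

// Replace each emoji-like character in `text` with an <img> whose filename
// is the character's code point in hex, e.g. 1f9f5.png.
function substituteEmoji(text: string): string {
  // \p{Extended_Pictographic} approximately matches emoji (needs the u flag).
  return text.replace(/\p{Extended_Pictographic}/gu, (char) => {
    const hex = char.codePointAt(0)!.toString(16);
    return `<img class="emoji" src="${EMOJI_CDN}/${hex}.png" alt="${char}">`;
  });
}

console.log(substituteEmoji(`THREAD ${THREAD}`));
// THREAD <img class="emoji" src="https://example.com/emoji/1f9f5.png" alt="🧵">
```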

AMBERLEY: Damn, you really have the code point memorized?

AMANDA: No, I --

CHARLES: Oh, man. I was really hoping --

AMBERLEY: Oh, man.

ROBERT: You live and breathe it.

AMANDA: No, I'm not that compulsive.

AMBERLEY: We definitely have our emojis on our Twitter bios, though.

AMANDA: Absolutely.

ROBERT: If you see Amanda's bio, it's pretty great.

AMANDA: They started showing up on Twitter and I think that somebody in Emojination probably told me they were out and that was when I first started using them. Amberley might have actually seen it. It sounds like you just saw it in the wild, which is kind of amazing.

AMBERLEY: I saw it in the wild with this tweet thread and yeah, it's just [inaudible]. I was like, "Amanda, is it out?"

CHARLES: Yeah, I feel like I saw that same usage too, although I obviously did not connect any dots.

AMANDA: This last week, October 2nd -- I'm also looking things up; I'm just going to own up to the fact that I am on a computer looking things up so I can fact-check myself -- Apple actually released their emoji glyph set, so by now any updated iOS device should have the full 2018 emoji, which in addition to a kind of amazing chunky yarn and safety pin includes a bunch of other stuff. There's a broom and a laundry basket. There's a bunch of really basic, kind of household stuff that certainly belongs in the character set alongside wrenches and hammers.

AMBERLEY: I think one of the big ones too for this year was the hijab?

AMANDA: No, the hijab actually came out with the dumpling. The hijab has been available --

AMBERLEY: It's been up, okay.

ROBERT: So did it come with iOS 12 or 12.1? I don't know for sure. I just know --

AMANDA: I'm looking at it and it's 12.1. I really feel that I should be ashamed that I have used the internet and searched for this.

AMBERLEY: I would say, I have no idea what their release numbers are.

AMANDA: [inaudible] as it appeared for the first time in iOS in 2018 with today's release of iOS 12.1 Beta 2 for developers.

ROBERT: That is amazing. Do you get some kind of satisfaction -- like you have to, right? -- from people using the emoji and it's starting to make its way out there?

AMANDA: So much. Oh, my God, yeah.

AMBERLEY: I didn't really expect it, like seeing that random tweet using the spool of thread for a tweet thread. I just got so psyched. For me, I'm a knitter. I have knitter friends and it started with yarn, and then Amanda -- and through Amanda, Jenny -- really broadened my idea of what it all meant. Seeing someone use it in the wild for a totally different application than I had ever thought of was like, "That's legit."

AMANDA: I definitely have a sewing emoji search in my TweetDeck and sometimes, when I'm feeling I need a little self-validation, I'll go look over there and find people who are saying things like, "Why is there no sewing emoji?" and I'll just reply with all the sewing emoji, like it is part of my work in this life to make sure that not only do they exist but people know about them.

ROBERT: That is awesome. I would do the same thing though, to be honest. You should be proud of that.

AMANDA: Totally.

ROBERT: Were there any hitches in the proposal process? I know we've kind of alluded to it, but the thing that you started off with, Amanda, didn't make it, right?

AMANDA: I know.

ROBERT: So how did that process go, from you two meeting each other, to going through the actual committee and the review process, to being accepted? What did that look like?

AMANDA: The process is actually incredibly opaque. We wrote this whole proposal and a bunch of people edited it, which is one of the other nice things about collaborating with Emojination. There's a whole bunch of people who are just really excited about emoji and the kind of language-making that Amberley was talking about, and they just jumped in and gave us copy edits and feedback, which was super helpful. Then there was a deadline and we submitted it to the committee, and it actually shows up in the Unicode document register, which is also a very official kind of register. I was a little excited about that too, but then they have their meeting.

They first have a meeting and do a rough pass, then the Emoji Subcommittee makes formal recommendations to the Unicode Consortium, and then the consortium votes to accept or reject the Emoji Subcommittee's recommendations. It's a very long process, and unless you're going and checking the document register and meeting minutes from the Unicode Consortium meetings, you're never going to know that it happened.

AMBERLEY: -- And you need someone connected in there, because one of the things in our first pass, it wasn't that it was rejected; it was that we needed to modify something. We did have art for knitting needles with yarn because at one point, I think we weren't totally sure that a ball of yarn would be visually distinct enough at emoji size to look like yarn, and so we had put it with a sort of knit piece on knitting needles.

AMANDA: Oh, that's right. There was a little bit of knitted fabric.

AMBERLEY: Right, and I think that, probably through Jenny or the people actually in the room, the feedback I remember is that there was a crocheter in the room who was like, "Yeah, why isn't there a yarn emoji instead of a knitting needle?" That's how I think we ended up going from knitting needles with a fiber piece to a ball of yarn, maybe.

AMANDA: I think that sounds right. I'm not actually sure of that, but it does coincide with my recollection. There were some things that they had questions about and that happened really fast, because I feel like we only had a couple of days, and we could have stuck to our guns and said, "No, we're only interested in the knitted bit of fabric." Also, we worked with an illustrator and went back and forth with her, because in the initial piece that she had illustrated, the knitting needles were crossing in a way that is not how knitting works, so there was a little bit of back and forth around that as well.

But then once they decided that they liked the thread, yarn and safety pin, those moved to the next stage. I actually had to go back and look at the minutes to find the two reasons that they didn't move the sewing needle on to the next stage: they thought it was adequately represented by the thread, which I wholeheartedly disagree with, and they thought it wasn't visually distinctive. That one's much harder to argue with, because a sewing needle, which is really just a very fine piece of metal with an eye at the end -- you get down to a really small size and it is maybe a little hard to know what you're looking at.

But I think there's such a big difference between the static object -- the spool or the thread, which represents a lot of things and is important -- and the needle, which is the active tool that you use to do the making, to do the mending, to do the cobbling.

CHARLES: Yeah. I'm surprised it isn't almost reversed, when certainly in my mind, the needle is the one that's more culturally important in terms of the number of places in which it appears... Yeah.

AMANDA: Yeah, and I think that the thread and yarn are important, and I think that the decision to have a ball of yarn rather than a bit of knitting makes sense, because there are a lot of things that you can use a ball of yarn for that aren't just knitting and they think that --

AMBERLEY: And it's the first step too that doesn't exclude anyone in the fiber art community.

AMANDA: But there are so many things -- like in sutures and closing wounds, you're not using a little spool of cotton thread or polyester thread for that, and in things like embroidery and beadwork, you might be using thread or fiber of some sort that started on a spool, but you might not. Embroidery floss is not sold on a spool, and there are all these places where we use needles of all kinds of different sizes and you don't always use thread.

Sometimes, you're using yarn. Sometimes, you're using leather cord. Sometimes, you're using bits of, say, yucca -- you're using plant fibers to make baskets -- and in all of these different practices, that process of hooking it through the eye and sewing is how the thing is actually made. It still sort of mystifies me why they haven't accepted it, but they also didn't reject it, which is really interesting. I don't know how many other emoji are sitting in this weird nether space, because sometimes they just reject them outright. I think there was a proposal for a coin that they just said no to.

ROBERT: They were like, "A coin?" That would be [inaudible].

AMANDA: Oh, God.

ROBERT: They have to add one for every --

AMANDA: [inaudible].

CHARLES: Literally, the pager of 2017.

AMANDA: Exactly.

CHARLES: So what recourse is now available to you all and to us, by extension, to get the sewing needle?

AMANDA: I'm actually working on a revised proposal and I've been trying to figure out all the arguments that I'm missing for why sewing and the needle are not adequately represented by the thread and yarn. A friend of mine named Mari, who's half-Japanese, half-American but lives in Guatemala and does all this kind of art and textile work, pointed out that there's a whole holiday in Japan devoted to bringing in your broken needles and thanking them for their service. I thought that was really cool.

I've been trying to formulate all of the arguments for the necessity of both a needle and a spool. If anybody has interesting ways to phrase that, I would love more arguments.

CHARLES: Yeah, but it's hard to imagine any arguments being more compelling than the ones you just laid out. You named about seven contexts -- shoemaking, medicine, different fibers -- where the needle operates completely and totally independent of the thread. It looms so large in our collective consciousness that there are holidays dedicated to it. Except maybe the Cro-Magnon pager, which is made out of stone, I believe, being the artifact that pre-dates it...

AMANDA: There's the idiom landscape as well. Things like finding a needle in a haystack, that has a very specific meaning --

ROBERT: And for puns. I've been resisting saying a pun this whole time.

AMANDA: Oh, share your pun with us.

AMBERLEY: Yeah, you have to say it.

ROBERT: Well, you could say that trying to get this through the committee is like threading a needle. Butchered but --

AMANDA: There's a biblical quote about getting into heaven -- a camel through the eye of a needle. I forget actually how it...

CHARLES: It's easier to thread a camel through the eye of a needle than for a rich man to get into heaven.

AMANDA: Exactly, and there's the bit from Do-Re-Mi: "Sew, a needle pulling thread." There are all these places where it's about the needle and somebody had --

CHARLES: It's primarily ancient.

AMANDA: I know.

CHARLES: It is the prime actor. Maybe this is a good segue into talking about the makeup of the committee and the decision-making process, and how these seemingly very clear arguments might not be received as such.

AMANDA: I certainly don't want to say anything bad about anybody on the committee.

CHARLES: No, no, I don't think that there's anything bad. I think that being receptive to things which are familiar to us versus with things that aren't is a very natural human thing and it can be interesting to see that at work and at play.

AMANDA: The Unicode Consortium is also evaluating all of these requests for whole language glyph sets. There are lots of languages and lots of character sets that are kind of obvious -- there has to be an Arabic character set -- but there are a lot of languages that have been left out of that because they're very small minority languages, or they're historical languages where the writing is no longer written the same way but there are historical reasons to be able to represent those characters.

One of the reasons why the Emoji Subcommittee cares about what gets into the formal character set is that everybody has to accommodate it, and there's already been, I think, some grousing -- people moaning and groaning about how there are too many emoji and it's too hard to find things.

CHARLES: And there's no take backs.

AMANDA: There's no take backs. You can't undo it. The committee is made up of representatives from a lot of tech companies primarily, although there are a couple of other kind of odd additional folks on there. I did try to find the committee list and I can't find it right now.

AMBERLEY: I have it from Emojination. I don't know if it's up to date but Oracle, IBM, Microsoft, Adobe, Apple, Google, Facebook, Shopify and Netflix. The other voting members --

ROBERT: Shopify?

AMBERLEY: Yeah, right? The others being the German software company, SAP and the Chinese telecom company, Huawei and the Government of Oman.

AMANDA: Yeah, the Government of Oman is a fascinating one. I don't think they're the ones pushing back on us on this. Especially for those tech companies, every time the emoji character set adds 10 or 12 emoji, they have to accommodate them on their devices. They have to put illustrators on it, they have to deal with everyone saying that the crossed fingers emoji in Facebook looks like I-don't-even-know-what.

AMBERLEY: Hey, Amanda.

AMANDA: It's all your fault. There's a whole process and there's non-trivial work associated with every single new emoji, so wanting to put the brakes on a little bit and be intentional about where and when they apply that work, it doesn't seem crazy to me. I just want them to approve the thing that I want.

AMBERLEY: I like the way that Emojination captures it. I looked at their website earlier and they've actually taken it down, but their goal, quote: "Emojination wants to make emoji approval an inclusive, representative process." There has to be a process and there's overhead involved, but the makeup of the decision makers is not a trivial question.

CHARLES: Right. This is a great example -- [inaudible] metaphor -- but these little artifacts, these emojis, are literally being woven into the fabric of a global culture. Everybody uses them and they become part of the collective subconscious. It does seem very important to be democratic in some way -- it sounds like there is a process -- and to make sure that everyone has a stake.

AMBERLEY: Yeah.

ROBERT: What was the reason they gave for not accepting the needle and thread? Was it a soft no? You said it's just hanging out, not really rejected but not accepted. We're going to drop a link in the show notes to the proposal and your GitHub and everything. I'm looking at the PDF that was put together and it seems like it was all a package deal, like we talked about. Do they just, like a lawyer would, cross it out like, "Well, no, but we'll take the other ones"?

AMANDA: Yes, basically. What they do is they meet -- I don't know how long they've been meeting -- and they need to discuss all of the proposals that have been supplied by a particular deadline and --

ROBERT: That sounds painful.

AMANDA: Yeah, I mean, it's --

ROBERT: Just imagine the power of thinking about emojis.

AMANDA: One of the things that they rejected, I think because there's already the smiling poo face: somebody wanted a frowning poo face and they rejected that. There's a bunch of things that actually do get rejected. I don't know that they really cared about a smiling poo face versus a frowning poo face.

ROBERT: What about an angry one?

AMANDA: We got all the feelings of poo.

ROBERT: We got important work to do here.

AMANDA: But they go through them all when they're trying to figure it out. I think to some degree, you want to get to them when they're not tired. The status that it's listed under right now is committee pushback, so they've set it aside, as in, "We have some concerns. We're not going to reject it outright, but we're not really sure why this isn't adequately represented." Then at their most recent meeting, they just kind of passed on reconsidering it, which is fine because I think I was traveling and my proposal is not done. I really want to make sure that I have consolidated every imaginable argument in one place so --

ROBERT: And make it strong as possible.

AMANDA: Yeah. If people want to help, the other thing that would be amazing is any and all idioms that you can think of, especially ones that are not in English -- idioms in Central European languages, idioms in Asian languages -- that refer to needles: either translations of the kind of classic 'finding a needle in a haystack,' or idioms that are unusual and specific to a culture outside of what I have experience with. That would be amazing for making the case that this is an international need.

ROBERT: Did they give any specific or actionable feedback, or did they just say, "We're going to push back on this. We're just not quite sure"?

AMANDA: The two things that were in the minutes -- there are minutes and they publish them to Unicode.org -- were that it was not visually distinct, which is not totally crazy. We actually worked with an illustrator to get a different image. The first image was almost at 90 degrees -- it was kind of straight up and down and it is a little hard to see -- and the second is --

ROBERT: Especially because it's thin.

AMANDA: The second image is actually a kind of stylized needle: it's a little fatter and the eye is bigger, but it's much more distinctively a needle. I'm hoping that that will also convince them, but you have to be able to tell at a very small size that it's a needle. The other thing that they said was that sewing was already represented by the thread -- that we didn't need thread and needle -- but it was literally one line in the minutes that referenced that. Then it's sort of like, "Did you have somebody in the room or not?" If there is somebody on the committee who is willing to tell you what their concerns really were, then you have some sense of what they're looking for and why they're pushing back.

You can very much see in the earliest emoji character sets that there's a hammer and there's a wrench -- and I use them -- but they're these very conventionally male tools. We have all of the kind of office supplies, but all of homemaking and housekeeping and textile production, none of them were there until very, very recently. I think it does reflect the gender of the people who've been making these tools: that sewing and knitting weren't seen as important enough as human practices to be included in this glyph set.

AMBERLEY: I guess that's non-trivial to mention, because that wasn't an argument I made in my original yarn draft, before Amanda and Jenny pushed to open it up to this whole slate of craft emoji. I didn't realize it until they brought it up. I took a stroll through pretty much the whole slate of emoji, and you can count on almost one hand the number that represented creative endeavors -- the sort of things more traditionally known as creative, like the camera or the painting palette. It was extremely limited.

AMANDA: I think that's about it. There are a few different variations on the camera and then there's the painting palette, and that's it.

AMBERLEY: Oh, there's the theater mask.

AMANDA: Oh, that's right. There is the theater, the happy and sad --

AMBERLEY: And I don't know it exactly and I haven't read the minutes like Amanda has but I think and I hope that that was a particularly compelling piece of that argument.

AMANDA: I think they definitely heard it.

AMBERLEY: Yeah.

CHARLES: Opening it up then, what else is coming in the way of craft? It sounds like this is historical, but these gaps are being filled in not only by the work that you all are doing but by other emoji that are appearing.

AMANDA: Yes.

CHARLES: And are you in contact with other people who are kind of associated with maybe craft and textiles and other kind of what you're labeling historically creative spaces?

AMANDA: I don't think there are any more, with a possible exception: someone's working on a vinyl record proposal, which I think is great.

CHARLES: Yeah, that's awesome.

ROBERT: Antiquated, though.

AMANDA: Maybe not, I don't know.

AMBERLEY: Take a stroll through the Emojination Slack; people have discussed that.

AMANDA: Yeah. If you poke around Emojination.org, the whole Airtable database is on there. There's not a lot of other creative ones. A friend of mine got really bent out of shape about the lack of alliums and wrote a whole slate proposal for leeks and scallions and garlic and onions.

ROBERT: Oh, there is a garlic one, right?

AMANDA: No. I mean, there is --

AMBERLEY: Actually, I'm looking at the Unicode page for current emoji candidates. I forget the exact order they get listed in -- they become draft candidates and then provisional candidates, or vice versa -- but I don't see any further creative ones pending. Garlic and onion are on there, though.

AMANDA: Yes.

ROBERT: That makes the Italian in me a little happy.

AMANDA: I think there are some prostheses -- the mechanical leg and the mechanical arm -- a guide dog --

AMBERLEY: Ear with hearing aid, service dog.

AMANDA: Yeah, there's a good chunk of interesting things that had been left out. They've been approved by the subcommittee but are still waiting on final approval by the Unicode Consortium.

ROBERT: Okay. What are the next steps we can take to help push the needle proposal through? You mentioned a couple of things, like coming up with idioms in different languages and whatnot, but how can we contact you to help push this effort?

AMANDA: That's such a good question. I don't even know. I mean, I am Amanda@velociraptor.info and you're totally welcome to email me if you want to help with this and I will --

ROBERT: That's a great domain, by the way.

AMANDA: Unfortunately, there's no information about velociraptors anywhere on that site.

ROBERT: That's the way it should be.

AMANDA: But also, if you're excited about working on emoji proposals, Emojination is an incredibly great resource, and folks there, including me, will actually help you identify things that are on other people's wish lists that you could work on if you just want to work on something. We'll help you refine your proposal if you know what you want, and we'll help you figure out whether it's worth putting the time in or not and how to make it compelling. You can definitely check out Emojination.org. I think there's a path to get onto the Slack from there.

AMBERLEY: Oh, yeah. The Slack and the Airtable.

AMANDA: Yeah.

ROBERT: It sounds like there's a whole community that was born out of this, where everybody is trying to help each other and collaborate and get their shared ideas across.

AMANDA: Definitely, and there's a woman, Melissa Thermidor, who is fantastic. She's actually a social media coordinator -- that's her actual title -- but she works for the National Health Service in the UK and was tasked with getting a whole series of health-related emoji passed. There's a bunch of things that she's --

AMBERLEY: Is she the one doing blood?

AMANDA: She's doing blood.

AMBERLEY: That's a good one.

AMANDA: Because there are a lot of really important health reasons why you need to be able to talk about blood and getting blood and blood-borne illnesses and --

AMBERLEY: That one was listed on the emoji candidate page for blood donation and medicine administration.

AMANDA: Yeah.

ROBERT: That's really interesting. So she works for the government, right? And it was part of her job to do that?

AMANDA: Yes.

ROBERT: That's awesome, actually. I love that.

AMANDA: Yeah, I think the drop of blood, the bandage and the stethoscope are the three that are in the current iteration, which is interesting because the existing medical emoji were the pill and that gruesome syringe with little drops of fluid flying off of it, which do not do a lot to encourage people to go to the doctor.

ROBERT: No, not at all.

AMANDA: So, a few more welcoming medical emoji.

ROBERT: You have a GitHub. Is that where you're doing the follow-up and the prep work for the sewing emoji?

AMANDA: Yeah, that's probably the best place. I do have a Google Doc somewhere, but GitHub is probably a better place to connect, even better than my ridiculous velociraptor email. The GitHub --

ROBERT: But it's still awesome.

AMANDA: It is awesome. I won't lie. I'm very proud of it. I am AmandaBee -- like the bumblebee -- on GitHub, and the original sewing emoji proposals are there. I will make sure that there is information about how to plug into the revised needle proposal there as well. You guys are a tech podcast, so if people want to just submit suggestions as issues on that repository, that's awesome. We'll totally take suggestions that way.

ROBERT: That would be pretty rad. Well, I appreciate you two being on the podcast. I love hearing your stories and how they ran on parallel tracks but ended up converging on the same goal. Still unfinished, right? Let's see if we can help push this over the finish line and get it done, because I would really like to see a needle. I could definitely use that in many of my conversations already, making all kinds of puns. Thank you, Amanda, for coming on and sharing your story.

AMANDA: Thanks for having me.

ROBERT: And thank you, Amberley for also coming on and sharing your story. This was super awesome.

AMBERLEY: Yeah and thank you for connecting us to finally have a voice conversation.

AMANDA: I know. It's great to actually talk to you, Amberley.

CHARLES: Oh, wow, this is the first time that you actually talked in audio?

AMANDA & AMBERLEY: Yeah.

ROBERT: We're making things happen here. The next thing we have to do is get this proposal through and accepted.

AMANDA: Yes.

CHARLES: You've converted two new faithful sewing and needle partisans here and I'm in.

AMANDA: Awesome.

ROBERT: I know you've already gotten, what, three accepted?

AMANDA: Yeah.

ROBERT: Like we talked about, that's got to be really awesome. I think I want to try and jump in and get that same satisfaction, because a lot of people use emojis.

AMANDA: Exactly.

CHARLES: It definitely makes me think: you look at every single emoji and there's definitely a story. Especially for the ones that have been added more recently, there's a lot of work that goes into every single pixel. That represents a lot of human time, which I'm sure you all know, so thank you.

AMANDA: Thanks for having us on.

AMBERLEY: Yeah, thank you guys.

ROBERT: Cool. That is the podcast. We are Frontside. We build UI that you can stake your future on. I really loved this episode because it wasn't necessarily technical, but it had a lot of interesting conversation about how to work a proposal and probably make a bigger impact than any of us do with software, just because the sheer reach that emojis have is insane. The fact that you can influence this process is new to me and really cool, so I hope a lot of other people learned from that too.

If you have any feedback that you would like to give us on the podcast, we're always open to receiving it. We have our doors and ears open, so send an email to Contact@Frontside.io, or shoot us a tweet or DM at @TheFrontside on Twitter. We'd love to hear it.

Thank you, Mandy for producing the podcast. She always does an amazing job with it. You can follow her on Twitter at @TheRubyRep. Thanks and have a good one.

Oct 11 2018

57mins

Play

111: Accessibility in Single Page Applications

Podcast cover
Read more

In this episode, Robert, Charles, and Wil talk about the whys and hows of accessibility, as well as what makes single page applications special, why they are they harder for accessibility, and frameworks that can do this for you.

Resources:

This show was produced by Mandy Moore, aka @therubyrep of DevReps, LLC.

Transcript:

ROBERT: Hello everyone. Welcome to The Frontside Podcast. This is Episode 111. I'm Robert DeLuca, a software developer here at the Frontside and I'll be your episode host. Today, we're going to be discussing accessibility in single page apps. With me as co-hosts are Charles Lowell. Hey, Charles.

CHARLES: What is up Robert?

ROBERT: And Wil Wilsman. Hey, Wil.

WIL: Yo, yo, yo, yo.

ROBERT: Sounds like we're ready to drop a diss track. We're not going to be dissing anybody here, though; we're going to be talking about helpful things with accessibility in single page apps. Before we get into the nitty-gritty of accessibility in single page apps -- because we're getting into some deep stuff -- I want to cover a lot of the 'how', because accessibility talks are usually about why you should be doing it, and then they touch on things like, "You should be using alt attributes for your images." For single page apps, I think we need to go further.

CHARLES: It always ends up like, "Then draw the rest of the accessibility owl."

ROBERT: Yeah, here's your two circles -- the alt-tags are your circles -- and then the rest of the freaking owl is focus management and everything else that comes with it. Before we get there: what is accessibility? If we trim the giant umbrella down a little bit from everything that accessibility can be in physical space -- like wheelchair-accessible ramps and things like that -- what about just technology?

WIL: Some people need assistive tech to interact with technology, such as switches or keyboards, and obviously the screen reader is a big one. But when it comes to the software itself, you could even talk about colors and people who are colorblind, so red might not be red to everybody.

ROBERT: Speaking from experience there?

WIL: Yeah. I'm colorblind. Red is brown to me.

CHARLES: Things like hearing and all of it, right? It really is like just designing it in such a way that it can be used by as many people as possible.

ROBERT: Right.

WIL: And that includes your mom, who may not be the best with technology but still needs to pay her bills online or something.

ROBERT: Exactly, yeah. For context, for listeners who might not know, my mom is 100% blind, so that's kind of where this comes from.

CHARLES: But my mom is not but she has all kinds of problems.

WIL: Yeah, same with my mom.

CHARLES: That also falls under the category of accessibility, right?

ROBERT: Absolutely.

CHARLES: Right, accounting for age and culture.

ROBERT: We're blending into the 'why' of accessibility, which is perfect. It's such a good segue, because what people are starting to realize -- and I think why accessibility is really starting to catch wind and get some traction -- is that a lot of people who grew up in the technology age use technology a lot. Our parents probably did not use technology heavily. That's definitely the case for my dad. He still has a flip phone and he says he's a low-tech man living in a high-tech world and just refuses to pick up technology. Those are the people who just didn't use technology. But now we have a lot of people who grew up with technology and use it a lot, and they're aging into more disabilities and they're going to need that accessibility, which I think is really interesting to think about, because that's a lot of buying power that's going to start needing accessibility, right?

CHARLES: That's true. I know I may need glasses pretty soon, so colors and fonts are going to matter a heck of a lot more to me in the next five years than they have over the last 20.

ROBERT: Yup, exactly, and that's going to be huge. That's one piece of the why, so what are the other reasons you'd pick up accessibility, other than people saying that it's morally correct? I don't like starting conversations about accessibility with it being the thing that you should be doing.

WIL: I think it even goes to user experience, like power users that don't like using a mouse. That's me. I really prefer to just use my keyboard for everything. When the new Firefox browser came out and I couldn't navigate through the menus the way I was used to, I went back to Chrome.

ROBERT: That's interesting.

CHARLES: I still have not found a good workflow for navigating tabs with the keyboard without just kind of twisting my wrist all out of shape. You'll have to share that with me. But again, that's another thing: that's an impediment that sits between me and the application, that actually --

ROBERT: Which is really interesting: you're getting into keyboard navigation and focus management, which is really the crux of accessibility for screen reader users and switch users.

CHARLES: What I'm hearing is that in this case, by including good keyboard and focus management in your application -- i.e. making it accessible to screen readers -- you are at the same time enabling your power users. I think a great point is that by introducing these very low-friction workflows, you're actually going to be enabling other parts of your customer base and not just catering to one; there are ripple effects throughout your system.

ROBERT: It makes it better for everyone.

WIL: Yeah, not only people who need it but people who don't know they need it.

ROBERT: Yes, exactly. Think about the WCAG spec as user experience guidelines. They're not telling you how to implement a thing specifically for a screen reader. They're telling you how to implement it in a way that works for everyone, regardless of what ailment they have. It could be a temporary ailment or a permanent one where they have to use a screen reader -- any kind of ailment that you can think of. They're not considering just one use case. It's a broad thing that shows how you can make your application better for everyone. I think that's a better way to look at the WCAG spec than, "I need to read through this and make sure that this autocomplete works for a screen reader."

If you look at the guidelines, they're not telling you how to make things work just for a screen reader. They're telling you how to make them work for a switch, for someone who is colorblind, or for someone who is using dictation software. I tend to look at them as UX guidelines, which really helps me build a better app overall, because when you nail down that user flow, everyone benefits from it; it's pretty well thought out at that point.

CHARLES: I like that too, and I think that thinking of it as user experience and making sure that you have a complete user experience is a good way to think about it, because it separates the concerns of, "I've got HTML, but is my application really HTML, or is it a set of workflows and the data over which those workflows operate?" It really forces you to think that my application is not a set of React components or web components or Ember components or what have you, but that there's a deep structure to it. It makes you shine a light on that deep structure and try to map its surface. Then, if you really, really know it and you capture it, you can represent it in any medium.

I think it's a win not just for one niche group of users but for all of your users, and also for future users that you don't have yet, because your application is designed better and is going to work. Who knows? Maybe there's some new interface or some new medium, some new device that comes out that hasn't even been invented yet. If you really have a strong internal representation of what your application is, you're going to be able to be the first to move to that market.

ROBERT: Right, and like I said earlier, people are aging into needing accessibility. If you need a dollar figure to justify it, that's going to be a big reason coming up. As part of the new WCAG 2.1 spec, there is a zoom requirement. I forget the success criterion number, but the new criterion says your app basically needs to be responsive. That maps directly to what you said earlier, Charles, about needing glasses soon: fonts and being able to zoom are going to be really important.

We see that pop up everywhere. To give a counterexample that's not just about accessibility: when we give demos on low-resolution projectors or screencasts or anything, when we zoom the screen, the app should still be usable and you should still be able to demo it. There are both sides to that, where accessibility works out for everybody. People on the call probably don't need glasses, but to see that tiny screen that's being shared, you should be able to zoom it.

CHARLES: Right. I'm just restating what you said, but I want to call it out explicitly because it was definitely an aha moment for me: if you build yourself an accessible website, you've built it so that it works on projectors and mobile devices too. You kill two birds with one stone, and you don't have to make a special effort, because zooming in on the screen is tantamount to viewing it on a phone or a tablet.

WIL: Yeah, we mentioned it earlier about accessibility and physical space and one of those things is being able to access something from anywhere no matter the device that person is using.

ROBERT: Yes. That's a lot of the 'why' of accessibility, and I think we did a pretty good job of staying away from the moral argument, because while morally it is the right thing to do, a lot of the time it's not in front of you and it's hard to do, and I want to make sure that you don't feel bad if you're not building accessibility into your apps. It's not easy. I don't like it when people get up and say, "Accessibility is easy. Just do this," because it's not --

WIL: If it was easy, everybody would do it.

ROBERT: Exactly. I always come back to accessibility being just like UX -- user experience -- because if it were easy, everybody would have a really great user experience too. It's a hard thing to boil down and simplify, right?

CHARLES: Right, and like every other kind of nonfunctional requirement -- I say 'nonfunctional requirement', though that's a little bit of a contradiction in terms -- it becomes very hard if you haven't done it from the beginning. Maybe you didn't start with an accessible app: you weren't thinking about it, or you inherited the app, or it just wasn't on your radar for whatever reason. If you've got a codebase that's two years old, going back and trying to make it accessible is extremely hard. It's expensive, expensive, expensive.

WIL: Yeah, it's going to cost more in terms of money and time to add it after the fact.

ROBERT: Exactly. I spent a year helping on Visa Checkout, and there was some accessibility considered in the designs, but a lot of the time my feedback wasn't just, "Yeah, sprinkle some ARIA attributes on there and you're good." It was, "Do we really need a carousel here to represent your list of cards? Because it's really hard to navigate around that." A lot of the time, it ends up boiling down to: is this the best way we can represent this data? Is this the best way we can navigate and build this user flow? A lot of times --

CHARLES: And if the answer is no, it's so painful, right?

ROBERT: Yeah, exactly, so all that work that was put into building that carousel, and all the components that made up that carousel, is thrown out because it's just not a pattern that should be used there. Doing it after the fact is really hard and really expensive and usually ends up in refactoring anyway.

CHARLES: I just want to point that out because a lot of people find themselves in that situation, where they're staring down a pretty big cost. The reasons why your app may not be accessible are many and good, and you shouldn't feel bad if you find yourself in that situation.

ROBERT: I gave a talk at Nodevember a couple of years ago called Accessibility Debt, and it's just like any technical debt. Your app's going to have it. It's okay. Don't get beat up about it -- especially if anybody is trying to beat you up over it, don't listen to them. It really is just like any other form of technical debt: it's something you'll have to deal with and work through. It's not world-ending. It's just another problem to work through.

What are some of the things everybody usually talks about -- the basics for making your app more accessible -- like using alt attributes, using a button instead of a div with an on-click handler, or not overusing ARIA attributes? Are there any other basics of accessibility that I missed there?

WIL: The biggest is the HTML structure. Screen readers and other assistive tech were built with the standards in mind, so if you're doing nonstandard things like putting divs inside an H1 and adding ARIA attributes in there, you're not going to have a great time.
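To make that concrete, here's the classic 'div as button' mistake next to the platform version -- a sketch in React-flavored TypeScript, since React came up earlier (the component names are just illustrative):

```tsx
import React from "react";

// Anti-pattern: a div with a click handler. It's not in the tab order,
// announces as plain text to screen readers, and ignores Enter and Space.
function DivButton({ onSave }: { onSave: () => void }) {
  return <div className="btn" onClick={onSave}>Save</div>;
}

// Platform version: a real <button> gets focus, keyboard activation, and
// the "button" role from the browser for free.
function SaveButton({ onSave }: { onSave: () => void }) {
  return (
    <button type="button" className="btn" onClick={onSave}>
      Save
    </button>
  );
}
```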

ROBERT: Right, or splattering ARIA roles all over the place -- probably not a good idea, and that will be harder to debug. Also, fun fact: with ARIA roles, even if you implement directly to the spec, you may still have bugs across the different screen reader and assistive tech combinations.

CHARLES: Keep it super simple is what I'm hearing, like use semantic markup. If you're going to introduce a custom button, still make sure that it's a button.

ROBERT: Use the platform.

CHARLES: Yeah, use the platform. Don't fight the platform. Probably the best example of that is people implementing their own select boxes. That's the classic example.

ROBERT: Wil and I, our lives have centered around that for a little while. It's so true. Usually, it's the first thing people reach for to reimplement, because selects are just ugly. I think Firefox now has the ability for you to style select options -- you can change the color and the font -- but you can't style the popup that comes up; that's usually the system dialog, which a lot of designers don't really like. So that's usually the first thing that people go and reimplement, and it's often the first thing that stops somebody from signing up. In a lot of sign-up forms that I see, if your date of birth is in a select, that will probably hinder somebody who uses assistive tech from signing up.
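As a small 'use the platform' illustration for that sign-up case, a date-of-birth field built from a native select keeps the keyboard and screen reader behavior for free -- a sketch, with illustrative field names:

```tsx
import React from "react";

// A native <select> comes with keyboard support, screen reader semantics,
// and the platform picker UI at no extra cost.
function BirthMonthField() {
  const months = [
    "January", "February", "March", "April", "May", "June",
    "July", "August", "September", "October", "November", "December",
  ];
  return (
    <label>
      Birth month
      <select name="birth-month">
        {months.map((month, i) => (
          <option key={month} value={i + 1}>
            {month}
          </option>
        ))}
      </select>
    </label>
  );
}
```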

CHARLES: Yeah. You just basically bounced that entire person. The thing is people don't appreciate the cost. This gets into the whole economics of accessibility: how much money would that person or group of people have actually brought into your site, versus the cost you spent redoing that select box? You might be thinking, "It only took this developer two weeks," but you're not factoring the lifetime cost into the decision to go with a custom select box. Just in our experience, the low-cost option per quality of experience is almost invariably going with the platform select, right?

ROBERT: Yeah. Your secret sauce and the best UX that you're going to provide is not going to be a nicely styled select box. Seriously, a battle that I had to fight a lot was: if you really want to implement a custom widget, decide if this is what you want to spend your time on, because custom widgets aren't just quick and easy things to implement. You're not going to implement a select that's fully accessible across all the AT combos in a couple of days.

CHARLES: It's a lot of work.

ROBERT: Yeah. You're going to fall down a huge rabbit hole.

CHARLES: Yeah. You're committing to that work over the lifetime of your application.

ROBERT: Exactly, and you have to maintain that now. If you implement custom checkboxes and custom radios and custom inputs that are contenteditable for some reason, I think --

WIL: I think what I see is like custom date pickers --

ROBERT: Oh, I just had a rant about that.

CHARLES: Date pickers are hard and there's not really a good option. You just have to open your eyes to the true cost.

ROBERT: Right, exactly. That's what I always try to explain: you have ownership over this now, you have to maintain it, and you can't regress.

CHARLES: Right and if you do regress, it's your neck that gets choked.

ROBERT: Yes. A good way to put it. We've talked a lot about general accessibility but nothing specific to single page apps. I do want to say, a lot of the other things people recommend are: if you're using React, use the jsx-a11y ESLint plugin to help you analyze whether you're writing any JSX that might not be accessible, or use HTML_CodeSniffer or aXe to statically analyze the DOM that you have.
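For context on what those checkers do, here's a hedged sketch of running aXe programmatically with the axe-core package; the surrounding test harness is up to you:

```typescript
import axe from "axe-core";

// Run aXe against the rendered document and throw on any violations.
// This catches static issues (missing alt text, bad contrast, invalid
// ARIA), but as discussed next, it says nothing about routing or focus.
async function checkAccessibility(): Promise<void> {
  const results = await axe.run(document);
  if (results.violations.length > 0) {
    const summary = results.violations
      .map((v) => `${v.id}: ${v.help} (${v.nodes.length} nodes)`)
      .join("\n");
    throw new Error(`aXe found accessibility violations:\n${summary}`);
  }
}
```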

Those tools are great, but I see a lot of people [inaudible] those things like, "Look, this automated checker says I'm accessible, so I'm accessible," and that's not the case, especially in single page apps. You can have 100% passing automated checkers but your app could also be 100% broken. And why is that?

WIL: I don't know.

ROBERT: I'm lobbing the routing question here.

WIL: Yes. I guess that would just be due to routing. Single page applications handle their own routing, whereas on static websites and whatnot, the routing is handled by URLs and the browser and reloading pages.

ROBERT: Right. There are probably other major differences with single page apps, but the biggest is that the routing is managed by the client, the JavaScript. On a server-rendered page, when you click a link, it's going to rerender the entire page for that next page, and the screen reader and the browser know exactly what to do with that: put the focus back at the top of the page and start working through the page for you. But with single page apps, you're just replacing a piece of the DOM, and the screen reader has no idea what's going on.

An automated checker cannot check for this. It cannot tell you if your routes are accessible. The reason I'm bringing this up is because this is never talked about. I don't see it in accessibility talks, and this is the thing that is actually most broken and makes your app pretty much unusable to anybody using assistive tech. There are ways that people, if they're savvy, can navigate around it, but if they don't know what's going on, they're going to think, "All right, I pressed this button and nothing happened, so I'm just going to leave now because it's not working."
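
To make that concrete, here's a minimal sketch of running an automated checker programmatically, assuming the axe-core package. The point is exactly what Robert describes: a clean report here says nothing about whether focus moves correctly when routes change.

```typescript
// A minimal sketch of running axe-core against the current DOM,
// assuming the `axe-core` package is installed. A clean report only
// means no *static* violations were found; it says nothing about
// whether focus is managed correctly across route transitions.
import axe from 'axe-core';

axe.run(document).then((results) => {
  if (results.violations.length === 0) {
    // "100% passing" -- and yet the app could still be unusable
    // with a screen reader if route changes never move focus.
    console.log('No static violations found');
  } else {
    for (const violation of results.violations) {
      console.log(violation.id, violation.description);
    }
  }
});
```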

CHARLES: There's a great video that you pointed me to, I believe from a guy in the UK, who has recorded a bunch of his experiences using websites and apps --

ROBERT: Yes, I want to dig it up and put it in the show notes.

CHARLES: Yeah. Those are great to watch because you'll really get it.

ROBERT: Yeah and that's a really unique case because he's really savvy. I believe, I could be wrong, but I think he might have said that he definitely works in tech somehow --

CHARLES: Right. He knows the workarounds and he knows these things but in some of the cases it's like, "If I didn't work in tech, there's no way that I would be able to use this website."

ROBERT: Yeah. This is kind of the crux of it: if you ever listen to anybody that is an accessibility consultant, they'll say, "You will never, ever be able to automate accessibility," and this is why I tie accessibility so much to user experience. Would you ever have a user experience test that can tell you, in a binary fashion, yes or no, that your app has a great user experience? No, because it's pretty subjective.

CHARLES: Of how do your users feel, right?

ROBERT: Yeah.

CHARLES: You can rate, "My users feel great."

ROBERT: Accessibility is like that because there's a lot of context that you have to carry around. It's all about context. When I transition from this page to the next, does the user have enough context of where they're coming from and where they're going to be able to operate on that page? Is there enough information to achieve the task they want? That is pretty much why there is no binary yes or no for that: it's contextual. It varies from person to person, but you do your best to make sure that it works and that you provide enough information to do something.

That's kind of the crux of single page apps. This is why we have a philosophy of testing as a whole. We don't test components individually, because again, you can make all of your components individually accessible, but as a whole, they might not work together because you're not providing enough context on an entire page. We did this in one of the apps that we work on, which is open source, so we can link to the PR that Wil wrote for this, and I wrote an entire routing documentation around our philosophy and the different things that we tried. How did we manage the focus in that application? I'm kind of just going to lob it over to you, Wil, since you did the work. Do you want to give some context on e-holdings and how that all came together?

WIL: Yes. One of the features of the app is these panels. It's like this three panel system. When you click an item in a list, a third panel pops up, and this goes back to the context thing: if you're using a keyboard and can't see the screen, you click an item in a list, how do you know that third panel popped up? The solution isn't in a component. The component can't be responsible for this; the list can't focus the pane it opens, or vice versa. This is definitely an application concern, where we needed to check against the route and see whether or not the pane is opening, was already open, or wasn't open before. When this pane opens for the first time, we just focus it, and that gives a lot of the context the user needs when they click it: they click an item in the list, it focuses this third pane and they're on the third pane.

ROBERT: Right. We didn't even just focus the entire div. We focused the heading of the thing that you selected.

WIL: Yeah, because focusing the div might read something off, but not all the time. The main thing we focus when that third pane opens is the heading, to let them know that for the item they clicked, they are now on that page, and it reads that heading. This is the same thing that would happen if you loaded that page up statically: the screen reader would usually just focus that first heading.
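
As a rough illustration of the pattern Wil describes, a pane that focuses its own heading when it opens might look something like this in React; the component and prop names are made up for illustration, not from the actual app.

```tsx
// Hypothetical sketch of focusing a pane's heading when it first opens.
import React, { useEffect, useRef } from 'react';

function DetailPane({ isOpen, title }: { isOpen: boolean; title: string }) {
  const headingRef = useRef<HTMLHeadingElement>(null);

  useEffect(() => {
    // Only steal focus when the pane opens, so the screen reader
    // announces the heading of the item that was just selected.
    if (isOpen) {
      headingRef.current?.focus();
    }
  }, [isOpen]);

  return (
    <section aria-label={title}>
      {/* tabIndex={-1} makes the heading programmatically focusable
          without adding it to the tab order. */}
      <h2 ref={headingRef} tabIndex={-1}>
        {title}
      </h2>
      {/* ...pane content... */}
    </section>
  );
}
```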

ROBERT: Right. That helped a lot. For a little more context, there's kind of a master-detail thing going on here, and the middle pane that we have is an infinite scrolling list. You have different things there, and one of them is like a package. If you select a package, then without the focus management focusing that heading, you would have to go through every single package in that list, which could be a thousand of them, before you actually get over to the pane that you just opened, because of source order. You have to go through each one of those.

WIL: And the same thing is true for power users, not just screen reader users. If they want to use tab once that pane opens, they have to tab through the entire list.

ROBERT: Exactly. It wasn't keyboard accessible, and it made it really hard to navigate around with a keyboard because the focus just wasn't being managed. There was a lot of work that we did there. I want to focus on the routing situation, because if you can't navigate around with a keyboard in your single page app, like you click a link and it doesn't select the next thing that should be focused and provide the right amount of context, your app probably won't be usable without a lot of trial and error, and depending on what your product is, users may not have a lot of patience for trial and error, right?

WIL: Yeah.

ROBERT: It's really important to try and nail down the routing situation, and there are some frameworks and things out there that can do it. The app that we're talking about is a React app and we use React Router, but we don't actually really hook into the router to handle that, and that's because React Router doesn't provide very much information.

WIL: Yeah. You can think of React Router as more of an outlet system, where your routes can render anything, anywhere on the page. That's kind of dangerous for accessibility, and it's kind of the reason React Router can't handle accessibility very well: at any point, a route can just change a button on the page into something else.

CHARLES: Yeah. Things just pop in and pop out. There's no deeper model, right? There's not --

WIL: Yeah, there's no tree.

CHARLES: Right, there's the internal state --

WIL: Like a component tree, yeah.

CHARLES: The internal state of what is happening is completely opaque. You can only analyze the second and third order effects of the React tree.

ROBERT: Right, and especially if you have nested route components, it's really hard to determine what to focus. One of the things that I've seen people do is focus on mount, which is a very naive approach, because if you have three nested routes, they're all going to focus on mount, the last one that mounts wins, and that's a focus war in which nobody wins.

CHARLES: You all tried that, right?

ROBERT: Yes. That's part of the document that I wrote up. We tried three different approaches, and because we were using React Router, we ended up landing on something that was more of an application state thing: we were checking props, because we knew what the user flow was. We knew what the user coming to this page is trying to accomplish, so we were able to figure out from props where they came from or where they're going and focus the right things for them.

WIL: One of the examples, like we talked about the third pane opening and focusing the heading inside, but you know what happens when you close that third pane? You kind of lose context again, so we have to focus the previously active list item, because if focus goes back to the top of the list, they've essentially lost their place in the list of results. In that case, we couldn't use on-mount: that list item is already mounted. We have to listen to props, and we have to look at the route through those props to determine if that third pane was open, if it just closed, and if an item in the list is active and should have focus. There's a lot of logic going on. You have to really understand your app in order to make it a very good accessible app. You can't just sprinkle in ARIA attributes and focus-on-mount everywhere and think it'll work fine because you're "accessible." You have to really know the flow.
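
A sketch of that "listen to props" idea might look like this; `paneIsOpen`, `isActive`, and the component itself are hypothetical names for illustration, not the actual app's code.

```tsx
// Hypothetical sketch: when the route says the third pane just closed,
// return focus to the list item that opened it.
import React, { useEffect, useRef } from 'react';

function ListItem({
  paneIsOpen,
  isActive,
  children,
}: {
  paneIsOpen: boolean;
  isActive: boolean;
  children: React.ReactNode;
}) {
  const itemRef = useRef<HTMLLIElement>(null);
  const paneWasOpen = useRef(paneIsOpen);

  useEffect(() => {
    // The item is already mounted, so "focus on mount" can't work here.
    // Instead, compare the previous route state to the current one:
    // pane open -> closed, and this item is the active one.
    if (paneWasOpen.current && !paneIsOpen && isActive) {
      itemRef.current?.focus();
    }
    paneWasOpen.current = paneIsOpen;
  }, [paneIsOpen, isActive]);

  return (
    <li ref={itemRef} tabIndex={-1}>
      {children}
    </li>
  );
}
```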

CHARLES: Right. All those signposts point towards having a deeper application state, a deeper understanding of your application than just the render tree. At the render tree, it's too late, and by that, I'm definitely lobbying for a Reach- or Ember-style router.

WIL: Yes. We talked a lot about React Router and how we can't really be too accessible with it, but the creator of React Router went out and there is now Reach Router. Robert, have you heard any good things about that? You're the accessibility expert here.

ROBERT: I haven't played with it myself, so [audio glitch] things that were lost in the React Router three-to-four transition, I think, and also it's actually accessible routing, which is nice because it comes out of the box and you know how to implement it. I haven't played with it, but I know Gatsby has adopted it for their V2; that's their default router now, which makes me really happy, because a lot of static sites being built with Gatsby were very inaccessible and broken, which made me sad, but now with V2, they're not.

I haven't played with Reach Router, but one of the things that I think it provides, which was missing from React Router, is transition hooks. It has a concept of where you're going from and to for your routes, which really helps you figure out what you need to focus. The one thing I will say about Reach Router, and I'm sure it's got to be configurable somewhere but I haven't really looked, is that in the demos I saw, they were just focusing the div of the content being rendered; it kind of just wraps the content with a generic div with tabindex="-1" and then focuses it. If you have a lot of content inside of that div, it probably will be confusing, but at least it manages the focus somewhere.

Instead of being off in no man's land with no idea what's going on, at least it focuses that. But if you really want to nail the experience, I would recommend trying to figure out what you should focus inside of the route that you just transitioned to: what is the best thing. That's for React. There are other things out there for other frameworks. I'm on the Ember accessibility team, and we have ember-a11y, which provides a focusing outlet that you can just drop into your app; when the route changes, it does the same thing. It focuses the wrapping div of the outlet that was just rendered.

Going back to the e-holdings work we did, I want to emphasize that we were focusing the heading, because that told you exactly where you're at and what package or what thing you're on. You know the name of it, and now you know how you can go and navigate through it.

CHARLES: But it's conceivable, so how would you do that with the Ember router? Would you just introduce some way to delegate down to a particular component? Like, when I'm rendered into an outlet, focus this component?

ROBERT: That would be interesting. I actually don't know. It's been a long time since I've messed around with the Ember router and Ember accessibility. It's definitely a great first step, and that's where it kind of came from. I think there probably needs to be some work done to let you control what you want to focus. That's where I was going when we were first exploring the React Router stuff in e-holdings: I wanted to have this higher-order component that knew your routing tree and knew where you're coming from and where you're going.

From there, it would just tell you: this is the route we're going to and focus management needs to happen, like if this prop exists, then you can pick inside of that component what you want to focus. It leaves it up to the implementor to decide what to focus, but there could probably just be a fallback to the general div, because that's not bad; it's a good first step. There needs to be a little bit of work to get that done, probably.

CHARLES: Yeah. But it is worth pointing out that by starting from a position of having an externalized application state via the route structure, in an Ember application, you're starting 95% of the way there. In an Ember application, at any point, you know where you are, and during a transition, you know where you're starting and where you're going to end up; you know when you leave and you know when you got there.

ROBERT: Yeah. Having an Ember route pivot handler there, like where you're pivoting from and to is just so nice and it kind of made it click together a lot easier.

CHARLES: Right. Reach router looks interesting but as I understand, it's still couched in React components and it feels to me like this is a problem that ought to be solved, that ought to be framework agnostic. Because if I use something like Reach or I use some other routing library for another framework, some other tool might come out that I want to use and I shouldn't be locked in --

ROBERT: Right, like what if I really like redux-little-router? Then I have to make a choice between the Redux state that I like and accessibility.

CHARLES: Exactly, and that feels like a false dichotomy to me. What I would want is a platform-independent or framework-independent routing library that really just helps you represent your application state and the concrete state your application is in at any moment, and then also represents the transitions between those states fully and completely. If you have that, you could embed it into any framework.
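
As a back-of-the-napkin sketch of what such a framework-agnostic transition layer could look like (entirely hypothetical; this is not an existing library):

```typescript
// A rough sketch: the router owns the application state and tells any
// framework binding both endpoints of a transition, which is exactly
// the context that focus management needs.
type RouteState = { name: string; params: Record<string, string> };

type TransitionListener = (from: RouteState | null, to: RouteState) => void;

class TransitionObserver {
  private current: RouteState | null = null;
  private listeners: TransitionListener[] = [];

  onTransition(listener: TransitionListener): void {
    this.listeners.push(listener);
  }

  transitionTo(next: RouteState): void {
    const from = this.current;
    this.current = next;
    // Every listener knows where the user came from and where they
    // are going, independent of any rendering framework.
    this.listeners.forEach((listener) => listener(from, next));
  }
}
```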

ROBERT: Right. That would be really nice to have. Just like pull it off the shelf and help you out there.

CHARLES: If anybody listening wants something to do for the next 18 months, no one will thank you for 18 months, but you should get on that. We'll give you lots of thanks then.

ROBERT: [inaudible] compare with you. That would be awesome.

CHARLES: I would love to be part of that.

ROBERT: Yeah, that would be awesome. To kind of tie back to other things, I don't know too much about Angular but I'm sure there are solutions out there to help with Angular. I know Marcy Sutton used to do a lot of work in the Angular world, so I'm sure there's something out there that helps with that. I just don't know off the top of my head right now.

I wrote a Medium 'think piece'... No, I wrote a little Medium blog post about how all single page app routers are broken. I was pleasantly surprised by Vue. Vue brings its own router, and their router has a concept of beforeEach: before each route transition, you can run some code, and that really helped with implementing the live announcer pattern, where you use an ARIA live region to announce something. But even beyond that, if you wanted to dig in there, you could probably figure out where you're transitioning from and to, and give that next route some kind of attribute that says, "This needs to be focused. Figure out what you need to focus," and inside that route, you can focus wherever you want. I thought that was really awesome.
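
A minimal version of that live announcer pattern might look like the sketch below, assuming Vue 2 and vue-router 3; it uses afterEach, a sibling of the beforeEach hook mentioned above, since announcing fits naturally once the transition completes. The routes and the element id are placeholders.

```typescript
import Vue from 'vue';
import VueRouter from 'vue-router';

Vue.use(VueRouter);
const router = new VueRouter({ routes: [] /* your routes */ });

// Assumes an ARIA live region somewhere in the page, e.g.:
// <div id="route-announcer" aria-live="polite" class="visually-hidden"></div>
router.afterEach((to) => {
  const announcer = document.getElementById('route-announcer');
  if (announcer) {
    // Screen readers will read this text out when it changes.
    announcer.textContent = `Navigated to ${String(to.name ?? to.path)}`;
  }
});
```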

That's kind of the crux of what I was missing from React Router. I wanted the concept of knowing where I'm coming from and where I'm going, and that would help with everything, but unfortunately that kind of doesn't exist, because they're just components. For better or for worse, they're just components.

CHARLES: The world is so much more than just components.

ROBERT: Yeah, a little bit off topic, I think it's kind of funny how React kind of just shoved everything into the view layer, just to make it all a component.

CHARLES: Yeah, it's very easy.

ROBERT: Yeah, until it's not.

CHARLES: Yeah, exactly. It's easy but it's not simple.

ROBERT: That's a lot of talking about how routes need to be done and what you can do to manage focus. It's really about managing the context and how you can provide the most context. Can somebody operate on that information and complete the thing? What can you do to make sure you've done this properly? What steps can you take to make sure that you actually are accessible and that your routes work?

WIL: Manually testing with a screen reader is probably the biggest thing, you know?

CHARLES: Yeah. I think the biggest thing is really watching someone who uses assistive tech on a regular basis use your application and then trying to use it yourself.

WIL: Yeah.

ROBERT: Right. Yeah, I'll definitely --

CHARLES: You can get a little bit of this yourself, but then it's kind of like someone who has never used a mouse before trying to learn something new. What really helps is seeing someone who's good at it and seeing how their expectations are either being met or not being met.

ROBERT: Yeah, it's definitely the best way. If you can find somebody that actually relies on assistive tech, there's nothing that beats that kind of feedback. If they get to your app and they're really confused, I see some people just dismiss it as "they just don't understand." That is the best feedback you can actually get. If they don't understand what's going on, you have --

CHARLES: That's on you.

ROBERT: Yeah, that is going to be what happens for everybody that uses that. Well, maybe not everybody, because everybody has different experiences, but it's probably going to be a thing that pops up everywhere. If you don't have access to people that are actually using assistive tech regularly and are pros at it, WebAIM provides really great tutorials on how to use a screen reader. If you're using a Mac, you have a screen reader built in called VoiceOver that you can use. If you ever want to turn it on or off, it's Command-F5.

WIL: You might have to have that shortcut enabled, though.

ROBERT: Really? I'm pretty sure it's on by default.

WIL: I thought I had to enable mine but I could be wrong.

ROBERT: It's interesting.

CHARLES: Yeah. It's got a great tutorial too. It notices the first time you turn it on and tries to help you navigate bullet lists and select boxes and input fields and checkboxes and all kinds of good stuff.

ROBERT: Yeah. It gives you a little bit of a boot camp, but WebAIM also helps with things specific to websites, because one of the things we ran into while working on the e-holdings project is that they're transitioning from a native app to a web app, and there are just things you can do in a native app that you cannot do in a web app. Keep that in mind when you're testing: there are things that will behave differently. You're not going to have a lot of crazy keyboard shortcut commands; you're not going to press Command-F to get to the search box, even if your app has a prominent search box, because that is going to clobber a bunch of other assistive tech key commands.

WebAIM is really good at giving you tutorials on how to test a web app in many different screen readers: they have one for JAWS, for NVDA, for VoiceOver on iOS, and for TalkBack, which is the Android screen reader. There are a lot of really good resources there for you to start using a screen reader and testing your app.

I highly recommend using it with the screen off. You gain a lot of context by looking ahead of your cursor and --

CHARLES: Yeah. It's true.

ROBERT: -- The example I usually give people is: I worked on Visa Checkout. I went through that checkout flow pretty much every day for a year, and eight months in, even then, turning off the screen, I was still lost. Even though I knew what each screen was and what each component was, I would find myself confused about what just happened, and it's because sometimes you'll get into something like a dropdown that sets focus inside the dropdown, and then the dropdown disappears from the DOM and your focus is nowhere. You're like, "What is going on? What am I doing? I don't even know where I'm at," and I turn the screen back on and I'm like, "Oh, now I know what's going on."
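
That "focus is nowhere" failure mode has a simple guard: hand focus back before removing the element that holds it. A rough sketch, with illustrative element names:

```typescript
// Before removing a dropdown that currently contains focus, move focus
// back to the element that opened it; otherwise the screen reader's
// cursor is left pointing at nothing.
function closeDropdown(dropdown: HTMLElement, trigger: HTMLElement): void {
  if (dropdown.contains(document.activeElement)) {
    trigger.focus();
  }
  dropdown.remove();
}
```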

CHARLES: Right. It really is a lot like interacting with your application as though you were interacting with Siri or Alexa, but with a keyboard instead of voice commands. That's an excellent point: where would Amazon be if Alexa couldn't successfully navigate those situations? The flip side of that is if you model your web application in such a way that it can handle that type of serial interaction, instead of the highly parallel environment where you can perceive huge amounts of information concurrently like you can on a screen, if that were effectively serialized, that means you could represent your application through nothing but voice commands.

ROBERT: Yeah. "Did you provide enough context for me to build this mental map?" is really what I'm getting at.

CHARLES: Yup.

ROBERT: Which I always thought was really interesting, because I always wanted to know how my mom visualizes what a website looks like, because it's wildly different from any of us; she's never been able to see what a website looks like. Does it look like a bunch of nodes and graphs and webs connecting to each other, and how do things piece together? It's just a different way. But it's not only about screen readers, right? You can use a keyboard to navigate, and that's definitely what we did with e-holdings: can we tab through this [audio glitch] list, hit enter, go into the detail record, edit it, close it, go back and edit another one? Is that something we can do with just a keyboard, not even a screen reader?

WIL: And with the keyboard, we're not talking about shortcuts to hit edit. We're talking about like tabbing over and hitting enter like people with accessibility issues would have to do.

ROBERT: Right, and that's kind of a good segue into creating use cases. If you want to know if your app actually works, if your screen reader users or your keyboard users or your dictation users are going to be able to navigate this app, create use cases: actual user flows, like how is somebody actually going to use this app? What task are they going to want to complete? In this case, e-holdings is an electronic holdings management system for libraries. Users probably want to get in there, add a couple of books or whatever to their library, and get out.

A use case could be: "I'm going to search for something specific, go through the list, find the thing that I want, add it, close the pane, go back, and then remove one thing." Can they complete that flow successfully without running into any issues or blockers or showstoppers? I can tell you, before we did the routing management stuff, you would hit search and that was it. There was nothing else you could do.
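
One way to pin a use case like that down is an automated keyboard-flow test. Here's a hypothetical sketch with React Testing Library, where `App`, the button, and the item names are all placeholders:

```tsx
// Hypothetical test: does selecting a search result move focus to the
// detail pane's heading after the route change?
import React from 'react';
import { render, screen, fireEvent } from '@testing-library/react';
import { App } from './App'; // hypothetical app under test

test('selecting a result moves focus to the detail heading', () => {
  render(<App />);

  fireEvent.click(screen.getByRole('button', { name: /search/i }));
  fireEvent.click(screen.getByText('Some Package'));

  // The whole flow hinges on this assertion: after the route change,
  // focus must land on the heading of the pane that just opened.
  const heading = screen.getByRole('heading', { name: 'Some Package' });
  expect(document.activeElement).toBe(heading);
});
```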

CHARLES: Yeah. It wouldn't even announce that anything had happened.

ROBERT: Exactly.

WIL: Yeah, and with these librarians, it's not necessarily that they can't see the screen. It's that they don't use the mouse, because they're power users.

ROBERT: Right. They don't have any disabilities, but keyboard access was essential to their workflow.

CHARLES: Yeah, exactly. With that change, you just lowered the activation energy of that workflow by... what, three orders of magnitude?

WIL: Yeah, at least.

ROBERT: Before, it was: let me click here, and now I can tab, I can tab, I can tab. That's not as nice as being able to do it completely through the keyboard. By making it super keyboard accessible, it also became super screen reader accessible, and people who use dictation were able to work through the app and use it now, which was really cool. It really helps when you go for those things and create use cases to figure out how a user is actually going to work through the app. That's the best way. With Visa Checkout, we were like, "Can somebody buy something?" Or if they don't have an account, can they sign up and buy? Those were some of the use cases we had because, it turns out, those are actually pretty important to the business.

All that being said, you can also test your components for accessibility individually, because even at a smaller level, some of your components might have to manage focus. The best example I can think of is modals. When you open a modal, you should trap focus inside of it, you should be able to hit escape to close it, and when it closes, focus should go back to whatever triggered the modal to open. These are all individual things that can be tested.
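
A bare-bones sketch of that modal focus contract, in plain DOM code with no framework; the Tab-trapping part is omitted for brevity, and the modal element is assumed to have tabindex="-1":

```typescript
// Remember what had focus, move focus into the modal, close on Escape,
// and restore focus to the trigger when the modal closes.
function openModal(modal: HTMLElement): () => void {
  const previouslyFocused = document.activeElement as HTMLElement | null;

  modal.hidden = false;
  modal.focus(); // assumes the modal element has tabindex="-1"

  function onKeydown(event: KeyboardEvent): void {
    if (event.key === 'Escape') {
      close();
    }
    // A full implementation would also trap Tab/Shift+Tab inside
    // the modal; that part is omitted here.
  }

  function close(): void {
    modal.hidden = true;
    document.removeEventListener('keydown', onKeydown);
    // Returning focus to the trigger keeps the user's context intact.
    previouslyFocused?.focus();
  }

  document.addEventListener('keydown', onKeydown);
  return close; // caller can also close the modal programmatically
}
```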

But just because your components are individually accessible does not mean your application as a whole is accessible. I don't want to paint the picture that you don't have to care about your components' accessibility individually, because you do. It really does help, but I think a lot of people miss the whole of the application while focusing on the individual components.

CHARLES: Right. The components are the individual stitches but you have to follow the thread throughout the entire garment.

ROBERT: Right, exactly.

CHARLES: Man, what I really want to do is I want to find out how your mom visualizes website navigation and use that as a visualization technique.

ROBERT: That might be a fun webcast or something.

CHARLES: Yeah and then see like could we actually use it as a tool because I have to imagine, it's probably pretty simple. It's much simpler than what we think of when we think of a website because it has to be really condensed down to its essence.

ROBERT: Right. Yeah, and [inaudible] users, they're not dumb. They have different ways. They know the gotchas; they know things that happen. They know the different ways of getting trolled: my mom knows about focus jumping, and she gets irritated when it happens, but she generally knows what to do, and it depends on her patience level. If the focus is jumping, she's like, "I don't need to use this thing. See you," because it's just not worth the frustration. But there are different ways to navigate around: if you're a real power user, you might recognize that the routing is inaccessible and navigate by headings or by regions or different landmarks. There are many different ways for users to navigate, but to use those navigation methods, you need a really strong, coherent document structure. Your H1s have to actually be H1s, and you have to have landmarks they can navigate to in order to work around those things.
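
For reference, the kind of document structure that makes that heading and landmark navigation possible might look like this; purely illustrative, with placeholder names:

```tsx
// Landmarks (header, nav, main, footer) and a real heading hierarchy
// give screen reader users jump points even when route-level focus
// management falls short.
import React from 'react';

function Layout({ children }: { children: React.ReactNode }) {
  return (
    <>
      <header>
        <h1>App Name</h1>
        <nav aria-label="Primary">{/* site navigation */}</nav>
      </header>
      {/* One <main> per page; users can jump straight to it, or
          navigate by the headings nested within it. */}
      <main>{children}</main>
      <footer>{/* footer content */}</footer>
    </>
  );
}
```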

It's interesting. Basically, do everything you can to help those situations: provide semantic markup, give it a proper structure, do the focus management. It's going to help everybody. It really will. When we went through the use cases and defined those things, we actually learned a little bit more about our product, because we had to put ourselves in a different seat and think about it. You get really close to it: you go through the same flow nine million times and you pick up power user tricks, like, "I got this. I'll click this button. All right, we're good." When you put yourself in the seat of somebody going through it for the first time, it opens your eyes a little and makes the experience better for everyone. I think that's the underlying tone there. That's the message.