OwlTail


Mark Nunnikhoven

10 Podcast Episodes

Latest 28 Jan 2023 | Updated Daily


Data Privacy Week 2022 | Redefining The Relationship Between Privacy And Technology With Jennifer King, Ph.D, Nikole Davenport, And Mark Nunnikhoven

ITSPmagazine

It's 2022 and time for another opportunity to take a look at our digital world and what we are doing with and to privacy. During this special live panel we will take a journey to see how and where we might redefine privacy.

THE CONVERSATION
Do we need a new definition for privacy? Is technology friend or foe? Is it too late? Can we take privacy back?

Ways to watch the on-demand live stream: LinkedIn, YouTube, Facebook, Twitter

Guests
Jennifer King, Ph.D
On LinkedIn | https://www.linkedin.com/in/jenking/
On Twitter | https://twitter.com/kingjen

Nikole Davenport
On LinkedIn | https://www.linkedin.com/in/nikoledavenportprivacy/

Mark Nunnikhoven
On LinkedIn | https://www.linkedin.com/in/marknca/
On Twitter | https://twitter.com/marknca

Resources
Learn more about Data Privacy Week: https://staysafeonline.org/data-privacy-week/
Watch the live (or on-demand) panel for Data Privacy Week 2022 on ITSPmagazine: https://youtu.be/0CSIX7JJF9E
Listen to this podcast: "Book | Privacy Is Power - How To Take Back Control Of Your Data | Redefining Society With Dr. Carissa Véliz": https://itsprad.io/redefining-society-845

This Episode's Sponsors
BlackCloak 👉 https://itspm.ag/itspbcweb

To see and hear more Live Panels on ITSPmagazine, visit: https://www.itspmagazine.com/live-panels
Are you interested in sponsoring an ITSPmagazine Channel? 👉 https://www.itspmagazine.com/podcast-series-sponsorships

1hr

24 Jan 2022


Mark Nunnikhoven: Providing clarity about security. [Cloud strategy] [Career Notes]

CyberWire Daily

Distinguished Cloud Strategist at Lacework, Mark Nunnikhoven has gone from taking technology to its limits for his own understanding to providing clarity about security for others. Mark fell in love with his Commodore 128, and once he realized he could bend the machine to his will, it set him on the path to technology. While he had some bumps in the road, dropping out of high school and not following the traditional path in college, Mark did complete his master's in information security. His professional life took him from the Canadian public service to the private sector, where Mark noted the culture shift was an eye-opening experience. Mark always looks to learn something new and share it with others, as evidenced by teaching being a facet of his career. We thank Mark for sharing his story with us.

8mins

24 Oct 2021



Episode #72: Serverless Privacy & Compliance with Mark Nunnikhoven (PART 2)

Serverless Chats

About Mark Nunnikhoven

Mark Nunnikhoven explores the impact of technology on individuals, organizations, and communities through the lens of privacy and security. Asking the question, "How can we better protect our information?" Mark studies the world of cybercrime to better understand the risks and threats to our digital world. As the Vice President of Cloud Research at Trend Micro, a long-time Amazon Web Services Advanced Technology Partner and provider of security tools for the AWS Cloud, Mark uses that knowledge to help organizations around the world modernize their security practices by taking advantage of the power of the AWS Cloud. With a strong focus on automation, he helps bridge the gap between DevOps and traditional security through his writing, speaking, teaching, and by engaging with the AWS community.

Twitter: https://twitter.com/marknca
Personal website: https://markn.ca/
Trend Micro website: https://www.trendmicro.com/
Watch this episode on YouTube: https://youtu.be/QXZT-DQwGk0

Transcript:

Jeremy: Yeah. So you mentioned two separate things. You mentioned compliance and you mentioned sort of the legality, or the legal aspect of things. So let's start with compliance for a second. You mentioned PCI, but there are other compliance frameworks: there's SOC 2 and ISO 9001 and 27001 and things like that. All things that I only know briefly, but they're not really legal standards, right? They're more of this idea of certifications, and some of them aren't even really certifications; they're more just saying, here, we're saying we follow all these rules. So there's a whole bunch of them. And again, I think ISO 27018 is about personal data protection and some of these things, and the rules that they follow. So I think these are really good standards to have and to have in place. So what do we get... Because you said you have to make sure that your underlying infrastructure has the compliance that's required.
So what types of compliance are we getting with the services from AWS and Google and Azure and that sort of stuff?

Mark: Yeah. So there's two ways to look at compliance... Well, there's three ways. Compliance you can look at as an easy way to go to sleep if you're having trouble: just read any one of those documents and you're out like a light. And then the other two ways to look at it are a way of verifying the shared responsibility model, and a way of doing business in certain areas. So we'll tackle the first one because it's easiest. Us as builders, building on GCP or Azure or AWS or any of the clouds: they all have, in their trust centers or on their shared responsibility page, a compliance center that shows you all the logos of the compliance frameworks they adhere to. And what that means is that the compliance organization has said, you need to do the following things. You need to encrypt data at rest or encrypt data in transit. You need to follow the principle of least privilege. You need to reduce your support infrastructure. Here are all the good things you need to do. And what the certifications from the cloud providers mean is that they've had an audit firm, one of the big four, Ernst & Young or Deloitte, come in and audit how they run the service. So Azure saying, "Hey, we are PCI compliant for virtual machines," means that they are meeting all the requirements that PCI has laid out to properly secure their infrastructure. So that, as a builder, means we know they are doing certain things in the background, because we're never going to get a tour. We're never going to get the inside scoop of how they do updates and how they do patching. And frankly, we shouldn't care. That's the advantage of the cloud, right? It's like, it's your problem, not mine; that's what I'm paying you for. So compliance lets us verify that they're holding up their end of the bargain.
So that's a huge win for everybody building in the cloud, whether or not you understand the mountain of compliance frameworks. The big ones are basically PCI; 27001 from ISO, which is basically just general IT security (we don't set our passwords to "password," that kind of stuff; it's basic hygiene); and then the SOC stuff, which is around running efficient data centers, right? So it's like, we don't let Joe wander in from the street and pull plugs; we have a process for that kind of stuff. So great there. And the others are if you're in a specific line of business. So if you're in the United States and you're doing business with the government, you need a cloud provider that is FedRAMP certified, right? Because the government has said, if you want to do business with us, here's the standard you need to meet. Therefore, FedRAMP is this thing that vendors and service providers can adhere to, which means they meet the government's requirements to do that. And most of these are set up like that. So even PCI is a combination of the big credit card processors. They've formed this third-party organization that said, anybody who wants to do business with us, so anybody who wants to take credit cards, needs to adhere to these rules. If you don't take credit cards, you don't care about those rules. So that's the different way of looking at compliance; it's very case by case. If we're building a gaming company, if we're taking in-app transactions, like Fortnite through the App Store, that's a huge bonus they get: Apple covers the PCI side of it. If they were doing it themselves, they would then have to be compliant. So if we're not falling under any of those, if we're just making a cool little game where you upload a photo and we give you back a funky version of that photo, we don't have to comply with anything, right? As long as it's just a promise to our users. So that's the general gist of compliance.
I don't know why I did wavy jazz hands, but there it is.

Jeremy: Well, no, I think that makes sense. I mean, you need to do something to make compliance exciting, because I think for most people, you're right, it's a document they could read and easily fall asleep to. If you have insomnia, then compliance documents are probably the way to go.

So the other thing you mentioned, though, is that, again, you are always responsible for your data. And I think up until fairly recently, there were no super strict laws on the books about privacy in general. And so obviously we get GDPR, right? What does it even stand for? The General Data Protection Regulations, right? Did I get that right?

Mark: Yeah, you did.

Jeremy: So, that is European, it has to do with the European Union, and that came out and it was really strict. And what they said was essentially, "Hey, I don't care if you're hosting in the United States, if you're Amazon or Google or wherever you are, if it is a European user's data that you have, then you are subject to these laws." And then very recently, I mean the same type of law, I don't know if they were modeled together, but the CCPA, the California Consumer Privacy Act, came out for the United States. And again, it was just for California residents' data, but it also extends and applies in all these different places.

So these are very strict privacy control rules. I mean, it even gets to the point where you're supposed to have a privacy control officer and some of these other things, depending on the size of your company. If we go back to this idea of where our data is being stored, think about this: I am writing an application that uses DynamoDB, and my DynamoDB application has to denormalize data in order to make it faster to load some different access pattern. Or I'm using Redis, or I'm using a SQL server that I'm backing up transactions to, or I'm running data through Kinesis or through EventBridge.
I mean, you've got hundreds of places where this data could go. Maybe it ends up in S3 as part of a backup so I can run it through Athena and do some of these things. Now somebody comes along and says, "Hey, I have a right through GDPR and CCPA for you to delete my data and for you to forget me." Finding that data in this huge web of other people's services is not particularly easy.

Mark: Correct. So a little additional context around that: CCPA is relatively new. When it was initially proposed, it was fantastic, and then it got lobbied down significantly to the point where it doesn't even apply unless you make at least $25 million a year. So it's not even...

Jeremy: Welcome to America.

Mark: Yeah, exactly. But it is a first test at scale in the United States as to whether or not legislation like that will work. And the reason it's in California is, very specifically, that a lot of the tech is based there. It is a good first step. So let's use GDPR as an example, because it's been out for two years now, and there was a preview for two years before that, and it was 27 different nations coming together to figure out where they wanted to go. And we've got a lot more examples around it, but the core principles are the same. The United States is moving closer, but it's going to take a long time just because of the cultural differences, the political differences.

So GDPR really boils down, for the users, to something very simple. As a European citizen, I have the right to know what you know about me and what you're doing with that information. And if there's any issue with that, I have the right to ask you to remove it or to change anything that's incorrect. That's the user side of GDPR. And now there's a whole bunch of stuff behind that on the business side of GDPR. You already laid out one of the biggest challenges: how the hell do I answer that question? Right?
Especially if you're not building it fresh, if you have an existing application that was never designed with this in mind.

Now, the interesting thing about GDPR is that there are two very big sticks associated with it, which is why, as a security and privacy guy, I love it. It's not perfect. But the first stick is that if you do not take reasonable steps to provide security controls for your infrastructure, you can get a fine of up to 4% of your global turnover. So not profit, 4% of your global take. So if you make a billion dollars, you could be fined up to 4% of a billion dollars, whether that's profit, paying off debt, or whatever. So that's for not doing the due diligence of adhering to something like ISO 27001, or the basic security controls, right? So if I'm setting my passwords to "password," I can get a big, big fine.

The second big stick of GDPR is that if I know there's a breach and fail to tell you about it, I can get hit with another 2% of my overall global take for failing to tell you within an appropriate amount of time, and that appropriate amount of time is 30 days or less. The average law in the United States says it's best-case effort for notification, or at most 45 days. So GDPR is a very big stick, with lots of reasonability behind it from the user's perspective. But from a builder's perspective, what you just laid out runs counter to most of the things we're looking for, right? We are trying to optimize, we're trying to denormalize data. You mentioned S3; think about Glacier. Just the costs alone: if I archive your personal data and shove it into Glacier, not only do I have to find it, I then have to pull it out, either remove it or modify it, and then put it back.
That is a huge thing.

But again, like we talked about earlier, if you plan for this stuff ahead of time, it's not nearly that bad, because it turns out that when you look at this kind of data management through your application, there are actually a lot of benefits just to building your application to be able to trace a piece of data through your system, to know what I know about you, Jeremy, as a user of my application. There are huge benefits, because you lose these sorts of legacy bugs where it's like, "Oh, you opened your account before 2008? Well, you have this check mark instead of this one." That kind of stuff gets solved.

So for new businesses, I think if you understand it, it's a minimal cost, because it's really just getting the expertise in to help you do that design work. For existing businesses, though, it is a nightmare. Literally, people spent two years getting ready for GDPR, and then the regulators still gave them another year before they hit anybody with any substantial fines, because of the massive undertaking it is to actually build that kind of infrastructure.

Jeremy: Yeah, no, I mean, and that's the other thing. I guess my advice would be, if you're designing these systems and you're building these systems, I think people hopefully think about tenancy or multi-tenancy when it has to do with building sort of bulkheads around different clients. So especially if you're a SaaS company and you have a big client, you might want to separate out their data from other people's data and have those in separate places. You can do that to some extent even with user data, right? And so knowing what data you're logging, where you're saving data, using an identifier that maybe obscures that user.

I mean, one way that we tried to handle this in the past was only having personally identifiable data associated in one place where it could be removed.
So even though there was a unique identifier for that person, as long as it's removed in that one place, right (backups aside), then you would essentially forget. So you'd have the data anonymized; you essentially forget it. Now, does that go far enough? I don't even know. I've read the GDPR documents before. And I mean, I've read summaries of GDPR, of what the rules are, that I think were longer than the actual rules themselves, because again, it is kind of confusing to go through. So I think that's one thing: again, people think about GDPR, think about CCPA.

The other thing that's been around for quite some time around privacy for children has been COPPA, which I always thought stood for the Child Online Privacy Protection Act. But I think you told me that it actually is the rule.

Mark: Yeah, the A is just made up.

Jeremy: The A is just made up. So thinking about Fortnite and YouTube and TikTok and all these things that children like to use and like to share stuff on, and are very quick to say, "Oh, I was born in 2007? I'm going to say I was born in 2005 so that now I'm over the age limit." And of course, there's no verification, there's nothing that stops somebody from doing that. So I'd love to talk about this, because this is something... I mean, again, I'm a dad, I have a 14-year-old and a 12-year-old, and I will not say whether or not my 12-year-old is using a Fortnite account that has an incorrect birthday on it, but it's possible she is, in order to get access to that stuff. So what do we have to do from a privacy perspective and from a legal perspective in terms of protecting ourselves? Because this one has a lot of teeth.

Mark: It does. It absolutely does. So if we take it from the builder perspective, not the parental perspective... and we can cover the parental in a second, because both of mine are under 13, so it's a double whammy.
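The single-place-for-PII approach Jeremy describes above can be sketched roughly as follows. This is an illustrative sketch only; the store names and helper functions are hypothetical, not code from any system discussed in the episode.

```python
import uuid

# Hypothetical illustration of the "PII in one place" pattern:
# all personally identifiable data lives in one deletable store,
# while every other record references the user only by an opaque id.

pii_vault = {}     # user_id -> personally identifiable data
activity_log = []  # denormalized/operational data, opaque ids only

def create_user(name, email):
    user_id = str(uuid.uuid4())  # opaque identifier reveals nothing by itself
    pii_vault[user_id] = {"name": name, "email": email}
    return user_id

def record_event(user_id, action):
    # Events never carry name or email, only the opaque id.
    activity_log.append({"user_id": user_id, "action": action})

def forget_user(user_id):
    # Deleting the single PII record effectively anonymizes every other
    # reference: events remain for aggregate analytics but no longer
    # identify anyone.
    pii_vault.pop(user_id, None)

uid = create_user("Jeremy", "jeremy@example.com")
record_event(uid, "uploaded_photo")
forget_user(uid)
assert uid not in pii_vault    # the identifiable record is gone
assert len(activity_log) == 1  # the event survives, now anonymous
```

As Jeremy notes, backups complicate this; one common extension is to encrypt each user's PII with a per-user key and delete only the key, so even backed-up copies become unreadable.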
But from a builder perspective, this is where you see, in the terms of service that, again, nobody reads, that it says you can't open an account if you're under 13. And I'm not a lawyer, thank God, I didn't even play one on TV, but what that means is they're trying to shift liability to the user and saying, "If you lie to sign up, that's on you, not me."

Because the nice thing about COPPA and its design is that it actually has a reasonable structure to try to prevent companies from tracking kids under 13 online. So you see a lot of its impact in advertising. And so on YouTube at the start of the year, there was a huge push where YouTube basically asked anybody uploading any videos: is this made for kids? Do you think kids might be interested in this? Because if so, we're not putting ads against it, because we don't have the ability to turn off all the tracking in the backend, so it's ads or nothing. And there was a huge uproar around it, and they've softened that interpretation, but it's because they got hit with a $170 million fine against this rule because they weren't following it.

So from a builder perspective, it's being aware that if you're serving children, like if you have an application that is... So let's back up for two seconds and ignore the case where people are lying to get on, right? You need to make that reasonable effort and say, for the Fortnite example, "Hey, we rated it 13+, so it's not marketed towards children. We've said it's 13+ for maturity level, just like movies are rated and games are rated, and we've added in the terms of service that you shouldn't open an account unless you're 13+." So we're pushing liability to the user. So in that case, you should probably be covered.

But if you're actually making something that covers kids and families, this is a very real thing: you need to adhere to the rules of the act, which essentially say you can't track kids and you can't advertise directly to them.
So where this falls, a question I get a lot, is around schools, especially now that kids are back in school, or going back to school even remotely: G Suite for Education and G Suite are the same thing software-wise, but very different things legally. And so school boards need to understand which services from Google fall under the G Suite for Education license, because that license follows COPPA to the letter and says, we're not tracking kids, we're not moving this, that, and the other thing. So when you're signed in as a child from the school board and then surf YouTube, the normal tracking doesn't happen on YouTube; it actually creates a shadow account that's not associated with your account and tracks that, and doesn't link it back to you as a child. Whereas if you're a normal G Suite user who starts to surf YouTube, all that activity is linked back to your G Suite account.

So as a builder, if you're designing something that could be targeting kids legitimately, you need to understand that there's a very hard line: you can't do a bunch of tracking, you can't take the same level of PII, if any at all, and you need to provide adult controls. There's a whole bunch of things here that are worth consulting an expert on to make sure that you don't fall afoul of them. On the parenting side, it's a great excuse to say no to social media, if you want to say no to social media for the young kids. When they're like, "I want Insta," you're like, "You legally can't have it. You're not 13."

Jeremy: Right, right. Well, I mean, again, I think about those situations that pop up all the time where people are building things that affect kids. I mean, Fortnite is a good example in the sense that, yeah, kids are going to play Fortnite. I mean, it's clearly made for kids. And again, you say 13 and older, and that's fine, but think about Netflix profiles, right? You create a profile for your kid and it tells you what your kid was watching. Now, that's good, right?
Because you can go and see what your kid watches. But are they using that data for advertising or optimizing what they might show to your kid? If they're using that for recommendations, where does the privacy line sit for those things?

Mark: Yeah. And that's a very good use case, because a lot of kids I know, unless they're really young, don't want the Netflix Kids interface. They want the actual Netflix interface, right? Because Netflix Kids, for little ones, is great, because it just shows you Dora and Teletubbies: cool, I can just click on the icon I want. For kids, once they pass five, they're like, "I want to search, I know what I want, I want Transformers or Glitch Techs," or whatever.

The interesting thing about COPPA is that optimizing your service is almost always the out for a lot of these privacy regulations, and there's always outs. And this is where it really comes down to: none of these regulations are perfect. There's the letter of the law and then there's the intention behind the law, and that really depends on company culture. Almost every company follows the letter of the law, or what they think they can argue that letter to be, because COPPA has very real fines behind it. "Silicon Valley" on HBO had an episode where they were freaking out because their chat app was popular with pre-teens, and I think it's a $48,000 or $58,000 per-user fine, right? So if you have millions of users, it's an insanely high fine. And that's great. We want that as parents, for that protection.

But the line, what your question really hits on, even with GDPR, even with CCPA, is: what is personal information? That's really the core question. And there's no clear answer, because what you think of as personal information and what the law thinks are very different. This is where a pet peeve of mine with Facebook comes in. When they were arguing in front of Congress, the first time Zuckerberg testified, they directly asked him and said, "Do you sell user data?"
And he, honestly, with his Data-like face, said, "No, we don't sell user data." Because they don't sell user data: their understanding of user data, by their definition, is data that you have uploaded to the service. So your status updates, the photos and movies that you upload, are user data; the things you type in are user data.

But what Facebook sells is access to your behavioral and demographic profile that they have created from user data. So what Facebook sells is not user data; Facebook sells access to data about users. Now, that seems like a super fine semantic hairsplitting thing, and it is, but that's the fundamental thing that you're talking about, even with the Netflix example. What your kids watch, is that user data, or is that data about users? Because all this regulation protects user data, not data about users, and there's a multi-trillion dollar economy dealing in data about users, and they don't care about user data.

Jeremy: Right, yeah. And I don't think we have enough time to get into all the details of that. But what I will say is, I do know that for me personally, I don't like to share a lot of my personal data. Facebook is more than just a pet peeve; never mind pet peeves about it, I mean the whole thing. I'm not a fan of it, just because I do feel like it's very exploitative. And it's something that, again, once our parents got on it, it just ruined the whole thing anyway.

But I do think that there are valid use cases for taking data about a user, maybe not user data, for optimizing your application. So I do think that does make a ton of sense. I mean, again, what are the most popular products being clicked on? If you couldn't record that and then use it to decide which products to show, I mean, that would be pretty bad. But knowing your particular preference for certain things, and then being able to sell that to an ad company that can combine it with something else that then serves you up targeted ads: good for the ad companies.
Maybe good for you, if you're getting really relevant ads, but at the same time, a lot of that feels dirty and creepy to me when you start getting very specific on profiling individual users.

But hey, all you've got to do is read these privacy documents, these terms and conditions, and you'll see exactly what they're doing. So if you don't have a problem with it... I mean, it's kind of hard, because I think most people would just glaze over them.

So another one, though, another law that has a ton of teeth, and that I think is going to be more important given the fact that we are now living in a COVID-19 world and more people are building telehealth apps, or they're building other apps, like even tracking COVID cases and some of these other things. For a very, very long time, there has been a law called HIPAA, right, that is to protect medical data and all that kind of stuff. Where does privacy play in with that, especially now with all of this medical data, a lot of it being shared?

Mark: Yeah. Yeah. And that's a really interesting example, and I'm glad you brought it up, because there's lots of opportunity here. If you're building an application to service the medical community, and that medical community extends far beyond doctors, it's all the third-party connections and things, they all have to follow HIPAA when it comes to health information. So now we're talking about PHI, personal health information, in addition to personally identifiable information, PII, right? So you have both of these in this application. And HIPAA dictates what you're allowed to do with that health information. And again, it comes down to a lot of required transparency, because we as patients want that data shared.
If you give a history to your doctor and your doctor refers you to a specialist, you want your doctor to share that history with the specialist, because you don't want to go to the specialist and take half an hour of that first appointment reiterating what you already told the first person, right?

And when you go get X-rays or an MRI, you want the results of that to be sent back to your doctor. You don't want to walk around with a USB stick and go, "I brought my data, please analyze it," right? So there's a lot of efficiency to be had. And HIPAA dictates the flow of information between different providers. And that's why there are a lot of consent forms involved with HIPAA. So when you sign up with your doctor, if you go to a GP for the first time, they're going to get you to sign a data-sharing document that basically says they're allowed to share information with other specialists that they refer you to, and with insurers, in the States, again, an outlier given how the rest of the world works with insurance.

But the interesting thing, again, is that as a builder, you need to make sure the service you're dealing with is HIPAA compliant, otherwise you cannot be HIPAA compliant. But specific to COVID, HIPAA has an exemption, like most privacy acts, that says if it's in the interest of public health, all of these controls can be forgone and that data can be shared with a centralized health authority. So in the States, that information could be shared with the CDC. So everything you've told your doctor could theoretically be shared with the CDC if it's in the interest of helping prevent the spread of COVID-19.

Now, there are lawyers on every side of it. That is the one advantage of the United States: while you lack the overall frameworks, you have more than enough lawyers to make up for it. So the originator, your doctor's office, is going to push that through a lawyer before they release all the information up to the HMO.
The HMO is going to go through their legal team before they send it to the CDC, and so forth. But it is an interesting exemption, saying essentially: I don't necessarily care about your history of back issues or sports injuries or blah, blah, blah, but I want to know every patient that has tested positive, or had any test for COVID-19, because we need those stats up, right? And we need to roll that up at the municipal level, at the state level, and at the federal level, because it's in the public interest.

And in this case, and it's a common challenge, and it's always all shades of gray: your personal privacy is not more important than the general health of the community, or of the state or the nation in certain cases. And I think a global pandemic provides a lot of argument on that front. But we had a case here in Canada where law enforcement made an argument early in the pandemic that they wanted a central database they could query that would tell them if someone they were dealing with had tested positive for COVID. And that went up to our privacy commissioner, then to our federal privacy commissioner. Their argument, the first responders', the police's argument, was: we could be potentially exposed and we want to know, and there's validity to that argument. But the flip side was: well, is it enough to breach this person's personal privacy? And the current result, the last time I checked, was no, it wasn't.

Whereas the aggregate stat, if I test positive or negative, that stat is absolutely pushed up, with not my name, but the general area where I live. So in my case, my postal code, which is the same as a zip code, gets pushed up, because that's not a breach of my privacy, but it helps with the information, right? I shouldn't say it's not a breach; it's a tiny breach compared to the big benefit that the community gets. So fascinating, all shades of gray, no clear answers, but it is an exemption, and those are not uncommon.
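The roll-up Mark describes, sharing counts by postal code rather than named test results, can be sketched as follows; the record fields here are hypothetical, not from any real reporting system:

```python
from collections import Counter

# Individual results stay in the source system; only per-area counts
# are pushed up to the municipal/state/federal level.

results = [
    {"name": "Alice", "postal_code": "K1A", "positive": True},
    {"name": "Bob",   "postal_code": "K1A", "positive": False},
    {"name": "Carol", "postal_code": "M5V", "positive": True},
]

def aggregate_by_area(records):
    # Count positive tests per postal code; names never leave this function.
    return dict(Counter(r["postal_code"] for r in records if r["positive"]))

report = aggregate_by_area(results)
assert report == {"K1A": 1, "M5V": 1}  # counts by area, no identities
```

In practice an aggregator would likely also suppress areas with very small counts, since a count of one in a sparsely populated postal code can itself be identifying.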
There are also exemptions in privacy laws for law enforcement requests, right? If a law enforcement officer goes to a judge and gets a subpoena or a warrant, all the privacy protections are out the window.

Jeremy: Right. Well, it's funny, you mentioned this idea of the contact tracing application, and obviously there are privacy concerns around that if it's about specific people. But from the law enforcement perspective, I'm sure you've been paying attention to what's been going on in the United States. If somebody has preexisting conditions, even using a taser on somebody, which is probably a bad idea in most situations anyway, but if there were underlying health concerns, they could say, "Oh, this person does have, I don't know, asthma, or a heart condition," or things like that, where using different types of force or different methods could cause more harm than good. That would be interesting data to potentially share with a police officer or a law enforcement officer, or maybe not, right? So that's the problem. You're right, there's data that could be shared that could be beneficial to the public good, but on the other side, there's probably a lot that you don't want to share.

Mark: Yeah. So the more common example with law enforcement, outside of health information, is your cell phone, right? So your cell phone location. Not only getting access to the phone, but the fact that your phone constantly pings the network in order to get the best cell service, right?
So at any given time, if you're in the city, there are normally five cell towers you could be bouncing off of, and one or two of them are going to be better than the rest because they're physically closer.

And you see this on TV all the time: depending on the jurisdiction, law enforcement may or may not need a warrant to get your location information. With certain cell providers, they can file a request and get the location of your SIM card, or your phone's identifier, right now, without going through a significant legal process. It's just a simple request from the investigating officer or from the DA, instead of going through a judge.

And that's an interesting one, because that's never been argued out in public. And I'm a big fan of the idea that there's no wrong answer; you just need transparency so people understand, and a lot of these decisions are made behind the scenes. Because you can see the argument either way, right? If a kid is lost, besides sending out an amber alert, which I think you guys have, we have them here, where they blast everybody's phone, if you could ping that kid's phone and know where they were, you might be able to retrieve that child. And we know when children are missing, every minute counts, right? The outcomes are significantly better the faster you find that kid, regardless of the situation. But on the flip side, if they're tracking you because they suspect you of a crime that hasn't been proven, is that a violation of your rights?

So when it comes to privacy, one of the reasons I love diving into it is because it is all nuance and edge cases and specific examples overriding things. But most of it's done behind the scenes, which is the one thing I do not like, because I think it needs to be out in the open so people understand.

Jeremy: Yeah.
And I think the other thing that's interesting about this electronic surveillance debate versus the traditional analog version, I'll give you this example. I remember there was a big to-do going around on Facebook or one of these places where people were saying, do not put your home address into your GPS in your car, because if somebody breaks into your car, they can just look at your GPS and get your home address. Or they could open your glove box and take out your registration, which also has your home address on it, right? So that's the kind of thing where, to me, that was kind of dumb.

Now, the other thing, going back to the police example, is that I read somewhere, and I'm really hoping this is true because it's so hard to trust information nowadays, that if a police officer or a detective wants to wait outside someone's house and follow them, that's perfectly legal for them to do. If they have some reasonable suspicion, they can follow somebody, they can tail somebody, whatever they call it. But to put some sort of tracking device on their vehicle, that they can't do, although it's sort of the same thing, except one requires a human to be watching and the other one doesn't. And again, I'm not a huge fan of surveillance, so I'm definitely on the more restrictive side of these things. But at the same time, those arguments just seem really strange to me: it's legal in one sense if it's analog, but illegal if it's digital.

Mark: Yeah. An even clearer example: in most states, and again, not a lawyer, you require a warrant to get the passcode or password for a user's phone, but you do not require a warrant to use a biometric unlock. So I can force you to use your thumb or your face to unlock your phone, but I can't force you to give me your passcode. Both unlock the phone, right?
The passcode, the biometric, I mean, there are technical differences in the implementation, but at the end of the day you're doing the same thing. But it's the difference between something you are and something you know, and you can't be compelled to incriminate yourself in the United States. So there's that difference, right? And if you go to the border, all of this is moot, because there's an entire zone at the border where all your rights are essentially suspended.

I think it comes down to transparency, but for all the examples you just mentioned, the law is always about 10 to 15 years behind technology. This comes back to one of my core experiences as a forensic investigator. Now, I've never testified in court, though I'm qualified to; most of the reports and things I worked on were at the nation-state level and never got to court. But the interesting thing is the number of cases I've reviewed from court findings that had forensics done, they're all over the map, right? Take the claim "an IP is an identifier." An IP is not an identifier of a specific device. If it goes into a building that has 150 devices behind a NAT gateway, which of those devices committed the act in question? You can't prove it with just an IP address, right?

But it's been accepted in a ton of court cases, because again, and this comes back to privacy as well, the law is way behind the capabilities. And this is the challenge of writing regulations, of writing law: you need to keep it high enough that it's principled, and then use examples and precedent for the specific technologies, because they keep changing. Some of the stuff, like the tailing example, is such a ridiculous difference between the two, but it is legally a difference. Similarly, to tie this back to the main audience of builders building stuff in the cloud, especially around serverless, how we handle passwords is always very frustrating for me. So let me ask you a question.
Why does a password field obscure the information you're entering into it? This is not a trick question; I'm not trying to put you on the spot.

Jeremy: So people looking over your shoulder can't see it. Or so that you don't remember what password you typed in.

Mark: Both are true, but yes. The design of that security control is to prevent a shoulder surf, right? To prevent somebody looking over your shoulder as you type. So the question is, why do we have that for absolutely every password field everywhere, when there's a very low likelihood, in most situations, of someone looking over your shoulder? Which is why I love that amazon.com, and a bunch of other sites, are starting to add a "show my password" option, to reduce the number of errors users make. Because if I'm physically alone in my office, and this is a real background, nobody is looking over my shoulder watching my password, so why can't I see what I'm typing, right? Similarly, when people say, "I'm going to prevent copy and paste into a password box."

Jeremy: Oh, I hate that.

Mark: Absolutely prevent copy a hundred percent of the time. But paste is how password managers work. And what's the threat model around pasting into a box, right? If you paste the wrong password, who cares? So understanding the control and why you're applying it is really, really critical. Same thing with passwords themselves: people freak out when I tell them, "Well, write it down. If you don't have a password manager, write it down on a piece of paper." They're like, "What? Oh my God, why would I do that?"

Well, writing it down on a piece of paper and putting it under your keyboard in an office is a dumb idea, because that's not your environment. But we're both at home, and if I put my password under my keyboard, it's my kids and my partner who are the threat, or potentially someone I invite into my home.
But if I've already invited them into my home, or it's someone in my family, there's a bunch of other stuff they already know anyway, whereas having it written down will actually help me. Again, no bad decisions, just understanding the implications of your decisions and making them explicitly. That covers security, and it covers privacy.

Jeremy: Right. And I can think of 10 people off the top of my head who could probably answer every security question I've ever given to a bank or something like that, because they know me.

Mark: Exactly, right?

Jeremy: And I think that's a good point about some of these security controls we put into place. Again, there's just friction, and it gives security a bad name. My payroll interface is all online, and whenever I have to make a deposit for my taxes, it tells me how much money it's going to take out of my account. Well, I move money into separate accounts for those things. So I like to take that amount, copy it, and paste it into a transfer window in another browser, so the money is in the account it will be deducted from. I cannot copy from that site; it won't let me copy information. And I think to myself, why? I can print it to a PDF and copy it from the PDF. I can print it, I can do other things. So why add that level of friction, which potentially creates mistakes? Like you said, that's why the show-password thing is so important.

So anyway, I want to go back to the HIPAA thing for a second, because this is something where we may have gotten a little off topic. I think it was all great discussion; this stuff to me is fascinating. But the point I wanted to get to with HIPAA is: if I'm sharing your x-rays, okay, I get it, I've got to be HIPAA compliant.
But where is the line for builders who are building these peripheral applications around medical services, medical devices, medical professional buildings, hospitals, whatever? Where's the line? You see this all the time; I just started dealing with it. We just got a new dentist. My old dentist retired, and he was completely analog; I don't even think they had an email address. Everything was phone calls. I mean, they were excited when they could print out a little paper card and give it to you with your next appointment on it. So I moved to a new dentist, and this new dentist has a hundred percent online scheduling, right? It's great: you pick your hygienist, you say when you want your appointment. And I think about this for doctors' offices as well, because I know my doctor's office is part of some larger, I don't know, coalition or whatever it is, and they have this health center you can log into. I don't think you can make appointments, but there's some stuff there.

But let's say someone's building a simple application that is just a scheduling app, right? Maybe you're a little doctor's office or a dentist's office, and you want this scheduling capability. If I'm booking an appointment with a general practitioner, okay, probably not that big of a deal. But what if I'm booking with an oncologist? What if I'm booking with an obstetrician? What if I'm booking with Planned Parenthood or something like that? That gets into really specific things about, obviously, my health, or my spouse's health, my kids' health, whatever it is. When you start booking with specific types of doctors, even though you're saying, "Well, we're not sharing any information about your medical record," that reveals a lot, right? So when does that get triggered? When does HIPAA get triggered?

Mark: Yeah.
And you'd have to consult a lawyer to get the actual official answer, because it's case by case. And I always have to say that; most of my conversations start with big disclaimers. The challenge here, if I take this from a builder perspective, focusing on the audience who are listening or watching, who are probably building applications like this or interacting with them, is that it's easier to take the stricter approach from the builder side, because you're never going to regret having taken more precautions if you take them early, so that you're not introducing friction later. Treating everything as personal health information or personally identifiable information is going to give you a better outcome. Because if you say, "Oh, I treated that as health information," and it wasn't, the cost is almost minimal when you've designed for it from day one.

Because even, you said the GP is not that big a deal. Well, seeing a doctor is still a big deal, because it means something is of concern, even if it's just a checkup. And if you have a notes field that asks why you're requesting the appointment, Lord knows what people are going to type in there, right? They're going to assume it's just between them and their doctor, and they'll write, "Well, I have a lump on my neck that I want to get checked out." Oh my God, right there, diagnostic or symptomatic information is health information, right? Even if they just said, "Oh, everything's fine," that can still be treated as profile information. Now the problem is, just like in most privacy legislation, there's this concept of user data versus data about the user.

HIPAA mainly focuses on user data, which is, again, what you're typing in, the specific entry. So, and I believe this to be true, but I'd have to double-check, the fact that you're seeing a doctor is not necessarily protected under HIPAA.
So the fact that you've booked in with the oncologist, or the pediatric surgeon, or whatever the case may be, is not a specific class of data that needs to be protected. The information you share with them, your diagnostic results, all your blood work, all that kind of stuff, absolutely is. But the fact that you just booked an appointment, or spoke to them on the phone, isn't necessarily protected. I believe it should be, because as an attacker, I don't care what type of cancer the target has, I just care that they have it. That's something I, as a cyber criminal, can manipulate to get what I want, right?

So that's a big problem: there isn't that clear line. A different example under medical is genetic testing, right? 23andMe, ancestry.com: hey, test your genes at home. They all advertise, "Hey, we keep your data super secure. We protect your health information," and blah, blah, blah. But they aggregate your genetic code and use it for a whole bunch of stuff on the back end. And they say, "Well, we don't tie it to you." Well, it's easy enough, if somebody gets that piece of information and has access to the backend systems anyway, to tie it to you. And that's the challenge we deal with with all this data, and specifically with healthcare: it's very rarely one piece of information that is the biggest point of concern. It's the multiple pieces of information that I can put together in aggregate to get a better picture of what I wanted as a malicious actor.

So I don't care that you spoke to this doctor, but I do care that you went from no doctor's appointments to five in a month, right? I don't know what's wrong, but I know it's something big, because who goes from not seeing a doctor in the last year to seeing five in a month? Something is happening.
And so if you put your bad-guy hat on: if I'm trying to break into the government, and you work for the government, and I've realized something is wrong, there's a good chance I can make a cash offer. You're looking at a mountain of medical bills, and I could probably compromise you that way and say, "Hey, here's a million bucks. I need your access," and I'm in. Even without knowing what was wrong, just knowing that the pattern has changed.

So again, a lot of nuance and a lot of challenge. But from a builder perspective, if you treat everything in a health application as PHI, your cost is not going to increase significantly. If you plan early enough, your friction isn't going to increase, but you're definitely going to protect your users to a higher level, which is actually a competitive differentiator as well.

Jeremy: Yeah. Well, I think the best advice out of that is: seek professional legal help for these sorts of things. And that's the thing that makes me a little bit nervous. Whenever you're building something new, you might have a great idea, or you're pivoting a little bit, and when these things come up, you need solid legal advice around them to protect yourself.

All right, we've been talking for a very long time, and thinking back on what we've covered, we've probably scared the crap out of people, who are thinking, "Oh my goodness, I'm not doing this anymore." But let's bring it back down, because I don't think anything we talked about was hyperbole. These things are very, very real. The laws are real, the compliance regulations are real. Making sure that the data being saved is encrypted, and putting these levels of control in place, those are all very, very real things.

But you gave a talk last year at Serverlessconf New York.
And I thought it was fascinating, because essentially what you said was, "The sky is not falling." We are not now opened up to all these new attack vectors; yes, there are all these possibilities, but let's rebuild everyone's confidence now that we've broken it all down. Let people know that, building in the cloud and building in serverless, yes, you have to follow some security protocols and do the right things, but it's nowhere near the gloom and doom that you see a lot of people talking about.

Mark: Yeah, for sure. And I think that's absolutely critical. If there's a second takeaway besides "find legal advice," it's: stop worrying so much. And that's where I have challenges and interesting discussions with my security contemporaries, because when we're talking amongst ourselves, we talk about really obscure hacks, interesting vulnerability chains, zero-day attacks, criminal scams, all that kind of stuff, because we're a niche set of experts talking about our field, right?

Whereas when you're talking about general building and trying to solve problems for customers, you have to look at likelihood. Because risk is really two things: the probable impact of an event, and the probability that the event will occur. Security, outside of our security communities, is very good at talking about the probable impact of an event. We say, "Oh my God, the sky is falling. If this happens, your entire infrastructure is owned." But what we don't talk about is the likelihood of that happening. If we added, "Yeah, the chances are one in two trillion," you'd say, "Well, I don't care then."

The nerd in me says, "That's interesting and I like that." But the reality is that it's often the simple things that cause breaches. Look at the S3 buckets you mentioned in one of the early questions. I followed those breaches very, very closely.
If you want to follow along at home, Chris Vickery from UpGuard has focused his career over the last couple of years almost exclusively on S3 bucket breaches; fantastic research from him and his team. In every single case, it has not been a hack that found the data; the data was simply, accidentally exposed.

As for probability: yes, zero-day vulnerabilities are real, cyber crime and attacks are real. You will see them over the course of your career. Your infrastructure will be attacked simply because it's connected to the internet; that's just the reality. But you don't have to worry about that nearly as much as you think, if at all. What you need to focus on is building well: building good, resilient systems that are stable, that work reliably, and that do only what you want them to do. That fixes 95% of the security issues out there, and then you can worry about the other stuff. The S3 buckets are just mistakes, just misconfigurations. Even Capital One, who unfortunately got hit with a $70 million fine because of it, made a far more complicated mistake, but it was still a mistake.

Basically, they had a WAF that they custom-built on an EC2 instance in front of S3, and that WAF's role had too many permissions. That's it, right? A mistake was made, and it cost them, literally and reputationally. So the likelihood is that you are going to mess up. Putting tools in place, things like Cloud Custodian, a great open source project, a little testing around security configurations, AWS Config, Google Cloud's Security Command Center, all these tools at your disposal that are free or almost no cost, helps prevent mistakes. The idea to keep in your head as a builder is: you drew it up in PowerPoint, that's great. You drew it on the whiteboard, that's fine.
You need something that checks to make sure production is what you drew, and that's going to cover the vast majority of your security concerns. If you get that done, then you can worry about the obscure, cool stuff.

Jeremy: Right. Yeah. And I think you're totally right. If you cover the bases, and like I said, especially with serverless, it almost all comes back to application security. I did an experiment, two years ago at this point, where I was able to upload a file to S3 whose file name was, basically, a SQL injection attack. So when a piece of code tried to load the ID, or whatever the file name was, and didn't think about SQL injection, maybe because it assumed the source was trusted, there's the problem. How many people are going to even try that one? That's one thing.

And the other thing is, again, if you're building solid applications and following best practices, you should be thinking about SQL injection. That's a very real thing, and of course there are so many tools now that you just shouldn't be shipping SQL injection anymore, but people still do. But again, I think there is a sense of doom and gloom, or FUD, you know what I mean? Trying to get people to buy these security applications. Because if you don't think there's a problem, you're not going to buy a solution to fix it, right?

And for the very few people who do get hacked, they say, "Oh, I really wish I'd bought that solution." So I don't know what the right level of advice is. I think you're right to say the likelihood is very low, but I still think people should think about security and put it first, in a way that says: maybe I don't have to worry about the obscure attack, but I should make sure I'm following best practices.
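The principle Mark describes, verifying that what is actually deployed matches what you drew, can be sketched in a few lines. The snippet below is a toy illustration in Python: plain dicts stand in for the real cloud state that tools like AWS Config or Cloud Custodian would fetch from provider APIs, and the bucket names and settings are made up for the demo.

```python
# A toy "drift detector": compare the architecture you intended (the
# whiteboard drawing) against what is actually deployed. Real tools pull
# the "actual" side from cloud APIs; here both sides are plain dicts so
# the idea stands on its own.

INTENDED = {
    "assets-bucket": {"public": False, "encrypted": True},
    "logs-bucket": {"public": False, "encrypted": True},
}

def find_drift(intended, actual):
    """Return human-readable differences between the plan and reality."""
    findings = []
    for name, want in intended.items():
        have = actual.get(name)
        if have is None:
            findings.append(f"{name}: missing from deployment")
            continue
        for key, value in want.items():
            if have.get(key) != value:
                findings.append(
                    f"{name}: {key} is {have.get(key)!r}, expected {value!r}"
                )
    return findings

# Simulate the misconfiguration behind most S3 "breaches": a bucket
# accidentally flipped to public.
deployed = {
    "assets-bucket": {"public": True, "encrypted": True},
    "logs-bucket": {"public": False, "encrypted": True},
}

for finding in find_drift(INTENDED, deployed):
    print(finding)  # assets-bucket: public is True, expected False
```

The same shape scales up: the "intended" side lives in version control next to your infrastructure templates, and the check runs on a schedule or in CI, so a bucket quietly flipped to public shows up as a finding instead of a headline.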
And like you said, I like that idea of using open source projects to make sure that your proposed infrastructure and your actual infrastructure match.

Mark: Yeah. So let me say this, coming from a vendor. Trend Micro is obviously a vendor, one of the top vendors out there. I don't agree with everything we put out from a marketing perspective, because sometimes it does skew negative. We try not to, but it still happens, because like you said, if you don't believe there's an issue, you're not going to buy a product, and all the vendors are generally guilty of this.

But I think it's a misunderstanding of what security's role is in the build process and in building applications. If you're sitting at home right now listening to this, thank you for sticking along; it's been a long episode, or broken up into a couple. I think it's all important and interesting, but it's not just academic, like you said. There are real issues here, real things going on. But the best way to look at security controls is actually from a builder's perspective.

You mentioned SQL injection. There is an open source, fully tested, phenomenal input validation library for every language out there. There is no reason you should ever write your own. Just import one of these, they're available under all sorts of licenses, and let it do your validation. And that's an example of security controls more broadly.

Security controls can help ensure that what you're writing does what it's supposed to, and only that. Input validation is a great example. If I have a Lambda function that takes a first name and a last name as input, well, I need to verify that I'm taking in a valid first name and last name, and not a SQL injection command, and not a picture, or somebody trying to upload a data file, things like that.
That's a security control.

And if you see it not as trying to stop bad stuff, but as making sure that what you want to happen is the only thing that's happening, you start to adjust how you see these security controls. Take anti-malware, the classic security control. You might say, "Well, I'm not going to get attacked by malware." Don't think of it as something to stop attacks, even though it will. Think of it as: I'm an application that takes files from the internet, there are bad things on the internet, and I cannot possibly write a piece of software, in addition to building my solution, that scans those files to make sure only good things are in there.

Well, there's an entire community of security vendors that do that for a living. Pay for one of those tools, not to stop attacks, but to make sure the data you're taking in is clean. When you adjust that thinking and realize security controls are just there to help you make sure what you think is happening is actually what's happening, you change your perspective: "There's great open source stuff, there's great stuff for purchase, but I don't need to buy anything and everything. I don't need all this crazy advanced threat-hunting stuff. I just need to make sure what I'm building does what I want, and only that."

Jeremy: And I think if we tie this back to the original topic: the privacy of your users' data is going to depend on the security measures and the rules you follow to keep that data secure. So Mark, listen, thank you so much for spending all this time with me and sharing that perspective. This was an episode I was really excited about doing, because these are things people don't necessarily think about. I know we didn't talk a ton about serverless, but I really feel like it all ties back to it, and to cloud development in general.
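Mark's first-name/last-name Lambda example can be sketched as a small validator. This is illustrative only, not any particular library's API; in practice you would import a maintained validation library as he suggests, and the character rules below are deliberately strict and made up for the demo.

```python
# A sketch of the input-validation control Mark describes: accept a name
# and only a name, so an injection payload or a blob of data never makes
# it past the front door. The field names and rules are illustrative.
import re

# Letters, then up to 63 more letters, apostrophes, spaces, or hyphens.
NAME_RE = re.compile(r"^[A-Za-z][A-Za-z' -]{0,63}$")

def validate_name_event(event):
    """Return (ok, errors) for a {'first_name': ..., 'last_name': ...} event."""
    errors = []
    for field in ("first_name", "last_name"):
        value = event.get(field)
        if not isinstance(value, str) or not NAME_RE.match(value):
            errors.append(f"{field}: expected a name")
    return (not errors, errors)

print(validate_name_event({"first_name": "Mark", "last_name": "Nunnikhoven"}))
print(validate_name_event({"first_name": "Robert'); DROP TABLE users;--",
                           "last_name": "x"}))
```

The second call shows the point of the control: a classic injection payload submitted as a "name" is rejected before it can reach a query, a file store, or anything downstream.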
So again, thank you so much for being here. If people want to find out more about you, watch your video series, things like that, how do they do that?

Mark: Yeah. And thank you for having me, I've really enjoyed this conversation, and hopefully the audience and we can all continue it online as well. You can hit me up on Twitter and most social networks @marknca. My website is markn.ca, and everything's linked from there, my YouTube channel and all the rest. So happy to keep this conversation rolling in the community, because even though we weren't specifically talking about a lot of serverless aspects, I think the principles apply to everybody. The good news is, if you're building in a serverless environment with a serverless design, you're already way further ahead than most people, because you've delegated a lot of this to the cloud providers, which is a massive win, and one of the reasons I'm such a huge fan of the serverless community.

Jeremy: Awesome. All right. Well, we'll get all your contact information into the show notes. Thanks again, Mark.

Mark: Thank you.

57mins

26 Oct 2020



Episode #71: Serverless Privacy & Compliance with Mark Nunnikhoven (PART 1)

Serverless Chats

About Mark Nunnikhoven

Mark Nunnikhoven explores the impact of technology on individuals, organizations, and communities through the lens of privacy and security. Asking the question, "How can we better protect our information?" Mark studies the world of cybercrime to better understand the risks and threats to our digital world. As the Vice President of Cloud Research at Trend Micro, a long-time Amazon Web Services Advanced Technology Partner and provider of security tools for the AWS Cloud, Mark uses that knowledge to help organizations around the world modernize their security practices by taking advantage of the power of the AWS Cloud. With a strong focus on automation, he helps bridge the gap between DevOps and traditional security through his writing, speaking, teaching, and by engaging with the AWS community.

Twitter: https://twitter.com/marknca
Personal website: https://markn.ca/
Trend Micro website: https://www.trendmicro.com/
Watch this episode on YouTube: https://youtu.be/aPg7WE3Q3SQ

Transcript:

Jeremy: Hi, everyone. I'm Jeremy Daly, and this is Serverless Chats. Today I am speaking with Mark Nunnikhoven. Hey, Mark. Thanks for joining me.

Mark: Thanks for having me, Jeremy.

Jeremy: So you are the Vice President of Cloud Research at Trend Micro. Why don't you tell listeners a little bit about your background and what Trend Micro is all about?

Mark: Yeah, so Trend Micro is a global cybersecurity provider. We make products for consumers all the way through to massive enterprises. And I focus in our research wing. We have a really large research component, about 1,400 researchers in the company, which is a lot of fun, because we get to dive into the minutiae of anything and everything related to cybersecurity, from the latest cybercrime scam to where I focus, which is the cloud.
So a lot of what I'm looking at is how organizations are adapting to the new reality of things like the shared responsibility model, keeping pace with cloud service providers, adjusting to DevOps philosophies, that kind of thing, which is a lot of fun.

And for me, I come from a very traditional security background, if there is such a thing. I've been at Trend for a little over eight years. Before that, I was with the Canadian federal government for a decade, doing all sorts of different security work, a lot of nation-state attacks and defense, things like that. And my background and education are actually in forensic investigation. So that nerd in the lab on your favorite crime drama, when they come in with the burned-out hard drives and say, "Fix this," and somehow they do, it's all BS, but that's technically what I do.

Jeremy: Very cool. All right. So I have wanted you on the show for a very long time, because I've been following the stuff you've been doing. I love the videos you make, the blogs you write. You're just out there. And I know you're on the edge of the serverless space, I know you do a lot of stuff in the cloud as well, but you're obviously into serverless too. And just recently I came across this impact assessment video series that you're doing. I don't know if it's a regular series, but it was really good. You were talking about Fortnite and Apple, and I want to get into that. But what really made me think about things a little more deeply, beyond the surface-level billionaires arguing with billionaires, is this idea of privacy, and how important our online privacy is. And I thought it'd be really interesting to talk about where serverless and privacy, since it's all in the cloud, align with all the stuff you're sharing. So let's start. First of all, why is privacy, or online privacy, so important?

Mark: Yeah. That's a really broad and great question.
So yeah, this new video series I'm doing, Impact Assessment, is going to be regular. I was doing a livestream called "Mornings with Mark" for the last few years, did, I think, like 200 episodes where it was mainly talking about cybersecurity issues of the day, and a lot of those are privacy. And where I wanted to go with this new series was just a little broader audience, which is why Apple and Fortnite and Twitter hack and stuff like that are coming up, because I think privacy is a really important aspect, and it mirrors security. You can't have one without the other. And it's directly related to the audience, to people who are building in a serverless space or in any space.But privacy, a traditional definition of privacy is really your right as a person to not be observed, essentially to be alone and to have control over your data and your well-being. And when you go into the digital world, it's infinitely more complicated than a physical world, right? You can lock yourself away in a room in the real world and be relatively confident that nobody is invading that space, that you have kind of control over that space, so if you want to just sit there and veg out, if you want to read a book, that's an activity just amongst yourself, right? When you come to the digital world, everything we do leaves a trail somewhere. There are tons of exposures potentially. You as a user don't really have a ton of control over your data.And one of the things that I wanted to do with this video series and with a bunch of my other work was just enlighten people to help sort of expose this so that they're aware, because one of the challenges I get on the security side of what I do, and it directly relates to the privacy side, is that people assume there are correct decisions. And really, the only incorrect decision is one that you are unaware that you're making. 
So you could make the argument that it's okay that you're tracked everywhere on the internet, and I think the trade-off you get for the free services may be worth it, but if you're unaware that that is the trade-off, I think that's the problem. So that's the intention behind this video series, is to look at privacy issues, to look at some security issues, to help people just make a conscious decision instead of just being pulled along for the ride.
Jeremy: Right. Yeah, no, and I think that that's probably something that a lot of people miss, is that people say, "Well, I'll sign up for Facebook, and I will share every photo, every place that I visit, all my friends, all my likes, all my dislikes." And what I think people say is, "Oh, well, whatever. It's free." And they don't realize that they're the product, and most of that is because they are giving up so much of their privacy.
And it's actually funny. This just happened the other day to me, and I didn't even realize. I knew it was coming out, but Chrome just released a new update that blocked third-party cookies if they weren't... I think you had to have like "secure" on and some of these other things. So no user is going to have any idea what that actually means. But what happened for something we were doing is, we were loading a third-party cookie behind the scenes for something, and all of a sudden that stopped working. And so the whole flow of this module or this modal pop-up thing completely broke because of that extra bit of security. And I remember way back in the days of early web development dealing with IE5 and IE6 and the browser wars, like what works on this browser and what works on that browser. Now privacy seems to be the new browser war, and people are conflating those two things.
But anyway, so that's one thing, but let's go to this idea of the Fortnite and Apple thing, because I have two kids, two daughters. They've played Fortnite more this summer than I think...
I don't know how anybody could play Fortnite more than that. But they love it. And then I told them the other day, because you and I were talking, I saw your assessment video about them not releasing it on iOS because of the whole Apple Store thing and all this kind of stuff. But why is it a good thing, I guess? And maybe we can talk more about Fortnite. I mean, I'm not really into it. I know you are, but I'm not really into it. But maybe we can talk more about why that review process, why that purchase process through Google Play or through the App Store, why is that important to your security and to your privacy?
Mark: Yeah, and I thought this was really interesting. So I got into Fortnite a couple of years ago when I did a piece on it for my regular radio column here in Canada. And I thought it was interesting because it's a microtransaction-model game, so it's always taking a lot of money from people, but not to win the game. It's purely cosmetic. And I thought that in general, especially as a parent myself, that was a really positive thing, because it wasn't like a bunch of these games where you need to pay to actually have a realistic chance at winning. The only thing you're paying for in Fortnite is to make things look different. There are no performance differences, right? And then there was this great Saturday Night Live sketch a couple years back on Fortnite, where the character Adam Driver was playing was the stepfather, solely there to learn how to be better at the game, to show off to the kids. And I always think, "That's me," even though... Just trying to be cool to the kids. But I do play regularly.
I thought it was interesting, you know, seeing Fortnite pulled into this drama, because most of the drama between Epic and Apple, and Google somewhat, right now is related to the business side, because the Apple policy... and Google's is the exact same, but we'll just use Apple because it's more prominent right now...
the policy basically says, as a condition of being in the App Store, you need to follow a whole bunch of these rules. And the rule that Epic is calling out is the one around transactions, and it says basically, if you're taking money through the app, so directly in-app, Apple gets a 30% cut. That's their fee as a middleman for bringing you customers. And as a part of that, Apple will facilitate the transaction. So for Apple users, you're well familiar with this. For Android users, it's similar. But that's why you can use Face ID to authorize a transaction through Apple Pay, and you don't actually have to enter a new password. You don't have to give them your credit card information. All of that stuff is handled by Apple as a proxy for those businesses.
And so Epic, they make north of $300 million a month from Fortnite. And they said, "You know what? 30% of the chunk we make from mobile, which is north of $100 million, is too much." So they are contesting that, and they actually have plans, and in their legal filings are saying, "We're not going for the money. We want the right to be our own app store." So there's a really interesting business case there, and there are really petty low blows being traded, which is fascinating and fun to watch from the outside.
But I did a video in the assessment around, what do we actually get from a security and privacy perspective? Because everybody is saying, "Oh, 30% is a huge amount," even though it's not uncommon in the retail space or in other business transactions. But there's a lot of stuff that goes on behind the scenes, and that's really beneficial to us. So when you submit as a developer, Apple makes sure that there's no obvious malware, though this week there was a case where they actually approved malware, which is one incident in eight years of the App Store, which is not bad. They look for malware. They look for the use of undocumented APIs, which could create vulnerabilities.
They look for your use of personal data, which is what I really dug into: they have restrictions around what developers can do with your data, how they can track you, what they have to ask permission for.
And that actually goes for your transactions as well, because a lot of the stuff that happens behind the scenes that we don't even think about is when you go to a store, like a retail store, if you still can in these days, and use your credit card, most of the larger retailers actually track that credit card usage within their physical store. So they will take a hash of your number instead of storing your actual number, and they will look for that hash being reused to create a profile for you, even if you're not actually signed up for the loyalty rewards thing. The same thing happens online. So not only is the money important, but having someone between you and your customer means you can't track them as much. So from a business perspective, they're saying, "I want the data to be able to track Jeremy and Mark more accurately." But as a user, we want Apple or Google in between us, Apple definitely more so than Google, given the business models, because they're that blocker. They're preventing us from having our privacy unknowingly breached, in that people are tracking our transactions online. And that's part of the big thing we get through the App Store.
Jeremy: Yeah, and I think that having that broker in between is another major thing that dramatically helps with privacy, just from a... Not only privacy, but I guess security as well. And I never use anything but PayPal on most sites that are not amazon.com, because I don't trust some little site.
I mean, actually the funny thing is, I just bought something that I was almost... It was from one of those... What's the store there? My mind is drawing a blank here, but the... Shopify, right? It was a Shopify store. And essentially Shopify says, "Yeah, anybody can build a store."
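The card-tracking technique Mark described a moment ago, hashing the card number rather than storing it so that repeat visits can be linked to one profile, can be sketched in a few lines (a minimal illustration; the salt and card number are made up for the example):

```python
import hashlib

def card_token(card_number: str, salt: str = "per-retailer-salt") -> str:
    """Derive a stable token from a card number without keeping the number itself."""
    return hashlib.sha256((salt + card_number).encode()).hexdigest()

# The same card always yields the same token, so purchases can be linked
# into one profile even though the raw number is never stored.
visit_one = card_token("4111111111111111")
visit_two = card_token("4111111111111111")
assert visit_one == visit_two
assert "4111111111111111" not in visit_one
```

Worth noting: a plain hash of a low-entropy value like a card number is trivially brute-forceable, so this is a tracking mechanism, not a privacy protection, which is exactly Mark's point.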
I don't even think they check, and I may be wrong on that, so I apologize if that's wrong, but it seems like it, because there are a lot of stories of Shopify scams. And there was this thing listed, and it was actually a pool for... It was one of those Intex pools, those temporary pool things. We just needed something. You couldn't buy them anywhere unless it was like thousands of dollars, which was crazy. So I saw this deal, and I'm like, "I'm going to buy it, but I know it's a scam. I'm almost 100% sure it's a scam." But I used PayPal, and I knew that the worst-case scenario was I'd have to send a few emails back and forth and I'd get my money back. It turned out to be a scam.
But if I hadn't, if I had given that person my credit card number, who even knows if that credit card number would have gone to a valid processor, or if it would have been run through some third-party thing, or it would have had thousands of dollars of transactions across the dark web or whatever. So I do think that there is a tremendous amount of added benefit to having that middleman protect your privacy.
Mark: Yeah, and that's an interesting example. And I'm sorry that you got scammed. And I understand, especially in these times, trying to get those items in. Because at the start of the pandemic, it was like basketball nets, trampolines, bikes. You couldn't get this stuff, right? And the nice thing is, PayPal as a middleman works. There's some downside when you're the one collecting through PayPal, for sure. But Visa and MasterCard have the same protections in place. It's very rare that you're going to be financially on the hook. But the difference is, it's a pain in the butt to go back and review where you have your normal subscriptions charging to your credit card and things like that, to redo all of that.
So even though you're not out money necessarily, you're still out time and frustration.
And that happened to me pre-pandemic when I was traveling, literally one time when I crossed through US customs here in Canada. We cross in the airport itself, and I found out when I tried to buy some food that, oh no, my credit card had been blocked, and so I had to get a new one shipped and all that kind of stuff. So I wasn't out any money. I was just out of frustration.
But there are important aspects, both advantages and disadvantages, to the middleman. But specifically when it comes to that online piece, a great example there of knowing that there's a good potential for a scam, understanding the risk of, okay, a couple of emails? It's not that big of an impact to you to try. And the upside where, if they did actually ship you the inflatable pool, you're the hero to the kids and happy and cool. So it's finding that balance. And again, like we said in the intro, for me, there's no bad decision. It's just making it explicitly. So you just gave a fantastic example of explicitly understanding that you might get scammed here. There's a high chance of it. But then you used a way of protecting yourself. You had four options to pay, and you picked the one that was going to provide you the most protections, because you were aware of the situation. And I think that's commendable. I think the flip side is, most people are unaware on that scale of what we're doing in the online world, of the types of ramifications of those decisions.
Jeremy: Right. And so speaking of being unaware, I mean, one thing that I think people might not understand when they make financial transactions or they share data is that they're often giving it to a machine, right? And we think it's super secure if we just slide our credit card in with a little chip on it, or if I enter my information on a website somewhere, or I save my password or something like that and I know it's only saved locally.
The problem with people is people, right? And I love people. Don't get me wrong. But once you introduce the human factor into any of these security or privacy issues, or potential privacy issues, it gets exacerbated, because people are fallible and people make mistakes.
I think the most important one that happened recently is this Twitter hack. And people are like, "Oh, Twitter got hacked." Well, it depends on what you mean by hacked, because nobody brute-forced their way in and broke into the system and figured out somebody else's password. They literally scammed people who had access to this stuff. It was a social engineering attack. So how do you prevent against that?
Mark: Yeah, and this is the challenge. And for those people who look into sort of the history of my work, I always feel like I'm an outlier, because a popular sort of feeling in the security community is what you just said taken to the extreme, that the users are the problem. Everything would be great if we didn't have users. Well, we wouldn't have jobs if we didn't have users, so put that aside. But the reality is, people very rarely are trying to do something in their daily work to cause harm. Criminals, obviously, that's their daily work. They are trying to cause harm. But in this Twitter case, the people who were doing the support work were just trying to support users and to get their job done, right? Now, it turns out that Twitter was a little lax and they had about 1,500 people with access to the support tools.
But if you step back for a second... Okay, ignore the hack. It totally makes sense, if you're running a service that supports 330 million people, that you as a business are going to need some tools to be able to reset passwords, to adjust email addresses, to give people access back to their accounts, because someone's going to forget their password and not have access to the email that they signed up with legitimately.
They're going to change phone numbers, so they don't have the SMS backup. Stuff happens, especially when you have 300 million plus users. So building a tool to help you deliver better customer service 100% makes sense. The problem in this case, as you pointed out, is that it was also a vulnerability, because the controls around it, the process around it, was a little too lax. These cybercriminals didn't do any crazy hack. And I think if there's one fallacy on the security side of things, it's that, and it's partially because of all the TV and movies, which makes for great TV and movies, but very rarely do big-name hacks actually use anything remotely resembling state-of-the-art hacking. Nine times out of 10 it's a phishing email. Actually, 92% of all malware infections start with a phishing email, because they work. They're super easy to send and they confuse people. I always remember a talk from Adrienne Porter Felt, who's at Google. She was on the Chrome team at the time. They'd done a massive, million-plus-person study, and basically the key result was, nobody reads security prompts. So it doesn't matter what you prompt the user with, they're just going to click okay. Which is frustrating, because you're trying to educate them and move forward.
So with the Twitter thing, it was just a social engineering attack. They got some extra access by basically just tricking a support employee, which then got them access to the Slack. In the Slack channel, to make the support team's lives easier, they had some credentials posted that said like, "Hey, to get into the big super tool here, here's the login. Here's the URL." Which, I mean, you totally understand, working with a team, that you drop stuff in Slack like that all the time, because the assumption is you're in a private room, right? And in this case, that wasn't it.
And thankfully it was a very visible hack, so it got shut down very, very quickly.
But it's these kinds of things that I think are interesting, because my point in that particular video was, most people who use an account assume it's theirs, when actually you're just using it, you're renting it, kind of thing. And they aren't aware that there's a support infrastructure behind it that gives people access legitimately, because if that was you who lost your password, you'd want access back to your account. You've worked hard to grow your social media following. So it's, again, being aware of those trade-offs.
Jeremy: Yeah. And again, there are so many examples of things where people are sharing a lot of information that's getting recorded, and they probably aren't even aware that it's being recorded. I mean, every time you talk to Alexa... "Alexa, cancel," because she's just going to come up on me. And then every time you talk to Siri, every time you type on your computer if you have Grammarly installed, all of that information is being sent up somewhere. And so even if you have the best security protocols in the world, and you're in AWS Cloud or in Google Cloud and you're all locked down, you still have that potential that somebody could simply accidentally share their password to some super tool, like you said, and your information gets shared. I mean, think about S3 buckets, right? Apparently S3 buckets are just... It has been one of the biggest... Or I guess the Capital One breach, right? It's this idea that you just make your things public, or you make it easy for them to be copied, or whatever it is. You don't do it on purpose, but those are human mistakes that are causing those issues.
Mark: Yeah, and there's a lot of trust there. So there are a couple examples that I think are really interesting that you gave there.
So the voice assistants are popping up more and more in court cases where law enforcement is actually requesting access through legal process to the records of what they have heard, because Alexa is a good example, and sorry if I triggered yours or any of the audience's. I have a good voice for that, apparently. But if you go into the app, you'll actually see a history of all your commands, everything you've asked for and whether or not it worked. Because you can provide feedback like, "Yes, it gave me what I wanted. No, it didn't."
You had mentioned keyboards on phones and stuff. So Grammarly is a good example. When iOS started allowing third-party keyboards, I thought it was really interesting, because one of the prompts that people don't read that pops up says, "You are providing this keyboard with full access to everything you type." So everything you literally are typing, even if you delete it, is being sent to the cloud and back. Is that a bad thing? Not necessarily, but if you don't know that that's happening, you can't make that choice. And that's really the thrust of a lot of what I'm doing, is understanding that flow. Because at the end of the day, one of the things I hear often on the privacy side is, "Well, I have nothing to hide. I don't care." And a lot of the time that may be true, but you still need to be aware of those data flows that are going out from you into the world. And that's where things get more and more complicated the more technology we add.
Jeremy: Right. Yeah, I totally agree. All right, so let's take this into the serverless realm here, because this is a serverless podcast, but I think this is super exciting, because I'd be interested to get your perspective on where serverless and privacy meet.
And I think if we take a step back and we look at security first, I think we know, I think this has been demonstrated, that the security of a serverless application is stronger, just based on the shared responsibility model and how little you need to do from a server-maintenance standpoint. There's no direct TCP/IP access into a Lambda function, for example, right? That all has to be routed through a control plane. So you just have all these levels of security. So the majority of the security concerns from a serverless perspective are going to come down to application-level security. And we have talked about that at length.
And again, people make application security mistakes all the time, right? And the social engineering aspect of it is something where you're giving someone your password to an admin tool that you build for your customers... But I want to take it a little bit further and go beyond just this idea of, maybe we make an application mistake, maybe something gets compromised, maybe someone shares a password here. So from a serverless perspective, if I'm building a serverless application, how do I start building bulkheads around my application to protect some of this private user data?
Mark: Yeah, and that's a really good setup. It's a good explanation. I 100% agree that by default serverless gives you a better chance at security, because you're pushing almost all the work to the service provider, right? That's a huge advantage, which is why I'm a massive advocate of serverless designs.
So maybe it's easier just to clarify for the listener as well, because we've been bouncing back and forth, focusing on privacy, talking a bit about security. I said you can't have one without the other. And really, security is a set of controls that allow you as the builder, or even you as the user, to dictate who and what has access to that data. And then privacy is just the flip side of that, of me going, "This data is about me, and I want to know who I'm entrusting it to."
And security is then the controls that you... If I entrust you with my personal information, security is the controls you're putting on top of that information to enable privacy, right? So they're intertwined. They are linked concepts.
So if you as a builder are creating an application that is handling personal data, or handling any type of data, you're fighting an inherent sort of conflict, in that we've been taught as developers for the last few years that the more data we have, the better, right? The more data that we're tracking, the more awareness. We can get better fine-tuning on our application. We can increase the performance. We can increase the reliability. We get a better operational view the more data we have. From a privacy and a security point of view, the more data you have, the bigger the liability you also have.
So you need to first go through and make sure you understand what type of data you have. Cold start time on a Lambda, total route time for a request, those kinds of things aren't sensitive, user-specific data. They're sensitive somewhat to your application, but in general, that's not something you need to take... You don't need to lock it in a vault that's encased in concrete and thrown into the ocean so that nobody can ever get to it. If I'm dealing with your social security number, that's a far more private piece of information that I need to take further steps to protect. If I'm dealing with your health record, same kind of thing. So the first step for anybody building any application is just listing the types of data you're actually hosting and processing, and then mapping out where in the application they're required.
So for permissions, we have on the security side the principle of least privilege, which is essentially, "I am only going to give you the bare minimum permissions you need to access something," which is the S3 problem at its core.
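The principle of least privilege Mark names can be made concrete as an IAM-style policy that grants exactly one action on exactly one resource prefix, plus a quick check for the classic over-granting mistake (a sketch; the bucket name and policy are hypothetical, not from the episode):

```python
# A least-privilege policy: one action, one resource prefix, no bare wildcards.
least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],  # only read objects, nothing else
        "Resource": ["arn:aws:s3:::example-bucket/reports/*"],  # only this prefix
    }],
}

def grants_everything(policy: dict) -> bool:
    """Flag the over-granting mistake: a bare '*' as an action or resource entry."""
    for stmt in policy["Statement"]:
        if "*" in stmt["Action"] or "*" in stmt["Resource"]:
            return True
    return False

assert not grants_everything(least_privilege_policy)
```

The point mirrors what Mark says next about S3: access starts at zero and every grant beyond the minimum is where breaches come from.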
When you create an S3 bucket, only the user or entity that created it has access rights by default, and then everything else has to be granted. And all of these breaches, billions and billions of records over the last few years, have been because somebody made a mistake in granting too many permissions.
So it's understanding what the data is and where it actually needs to flow, and saying, "You know what? This health information isn't going to flow to the standard logs. We're going to keep it in a Dynamo database, and that Dynamo table is going to be encrypted with this KMS key, and it's actually going to break our single-table design, because this information is sensitive enough to merit its own table, because I don't want to take the risk of encrypting column by column, because I think I might mess that up. So I'm going to just separate it completely, to make it a logical separation, to make it easier." So really, step one is mapping that out and then restricting the breadth of that data, or where that data touches, and that does a huge amount of the work to maintain privacy right there.
Jeremy: Right, yeah. And so if you're taking that data, though, and you're... And again, I think this makes complete sense. You're saying, "Look at what it is you're saving. If I'm saving somebody's preference, even if I'm saving somebody's like, whether they like a particular brand or something like that, is that really personally identifiable information? Is that something that I have to lock away and encrypt? Or can I be more lax with that? What about usernames and passwords and things like that?" And I think that all makes sense. Think about it that way. But I think where I'm curious where this goes is, you only have so much control over that data if you are saving it in DynamoDB, right?
If you are capturing it through CloudWatch Logs because it's coming in, and maybe it's not encrypted... I mean, even though you are using SSL or TLS, you come through and the information is encrypted from the user's computer or their browser into the inner workings of AWS, for example. Then once it gets into that Lambda function, that's all decoded, that's all unencrypted. Right? That's all ready for you to do whatever you need to do. So then you need to take that and put that into a database, or send that off somewhere, or call an API, or any of these other things. When you do that and you save that data into, let's just start with DynamoDB, there are backups, those are automatic backups. Right? There's again the CloudWatch logs. So this data is going all different places, so it seems like a lot of effort to make sure that a credit card number, or social security number, or anything that you want to be very careful about, that you have to take a lot of extra steps to make sure that's encrypted.
Mark: Yeah, and I think this is a spot-on example. And I think this is the number one failing of the security community over the last 20 years or so, and there are a lot of logical reasons for it: right now, the vast majority of security work, the security work to ensure the privacy of data, is done after the fact. Right? So if you think of your DevOps wheel and you've got the development side and the ops side, security exists almost entirely on the ops side. Which means we're taking whatever's already been built and then doing the best thing we can. So we end up with this very traditional castle-wall sort of scenario of, like, I've made a safe area, drop things into it, and they will be safe from anything outside of that wall. But unsafe from anything inside that wall.
And that's had mixed results, I think, is a generous way of saying it.
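Mark's earlier point about deciding which fields may flow to the standard logs, so a social security number never lands in CloudWatch alongside cold-start timings, can be sketched as a redaction step in front of the logger (a minimal illustration; the field names are hypothetical):

```python
# Fields classified as sensitive never reach the standard logs verbatim.
SENSITIVE_FIELDS = {"ssn", "credit_card", "health_record"}

def loggable(event: dict) -> dict:
    """Return a copy of the event that is safe to send to CloudWatch-style logs."""
    return {k: ("[REDACTED]" if k in SENSITIVE_FIELDS else v)
            for k, v in event.items()}

event = {"user_id": "u-123", "cold_start_ms": 412, "ssn": "078-05-1120"}
safe = loggable(event)
assert safe["ssn"] == "[REDACTED]"
assert safe["cold_start_ms"] == 412  # operational metrics still flow through
```

The classification set is the output of exactly the data-mapping exercise Mark describes: list the data types first, then the code enforces where each may travel.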
And realistically, if you think about it, security is really a software quality issue, and we know you're not going to do testing only in production. You're going to do testing in the early stages, you're going to have unit tests, you're going to have integration tests, you're going to have deployment and functionality tests, you're going to do blue-green deployments to make sure that things are running before they hit prod. There's a whole bunch of testing we do as builders before we get to actually interacting with users. We need the same thing from security, because what you just mentioned is a lot if you're thinking about it once you've designed the solution, but if you're designing the solution with those questions in your mind as you're going forward, it's actually not a lot of additional effort to map out these security things as you're sitting there.
If we're starting up a new application, me and you, we're doing Jer and Mark's super cool app, right? And we go, okay, we're going to start logging in users. Well, we look at that and go, well, at the bare minimum, we have a username and a password. So we're going to have to do something with that, we need to know what that flow is. So maybe we're going to loop in something like Cognito. Maybe we go, you know what, Cognito is not quite where we need it to be, so we're going to go to a third party, Auth0. So now, if we're building it in AWS, we're outside of our cloud, into a third party with a whole different set of permission sets. But if we're designing that from day one, we can map that out and go, okay, we know we get TLS from the browser to Auth0.
We know that TLS doesn't actually guarantee who we're talking to at Auth0; it just guarantees that a communication is secure in transit from A to B. It doesn't tell you who A or B are, which is a mistake a lot of people make.
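Mark's caveat, that TLS secures the channel but you still have to verify who is on the other end, is exactly what a default SSL context enforces in Python: the peer must present a valid certificate and its hostname must match the one you dialed (a stdlib illustration, not specific to Auth0):

```python
import ssl

# The default context refuses connections whose certificate or hostname
# doesn't check out; disabling these checks is the mistake Mark warns about,
# because an encrypted channel to the wrong party is still a breach.
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED  # peer must present a valid cert
assert ctx.check_hostname is True            # and it must match the target host
```

Certificate verification is what turns "secure in transit from A to B" into "secure in transit to the B we intended."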
But then we go, okay, we're going to Auth0, fine, we've got a secure connection for the user there, we verify who that user is through Auth0, and our app will verify Auth0. And then we're going to take that data, and we're going to make sure that we don't actually store it, that we don't actually log it, because what we've done is we've never taken the password out of Auth0. We've just gotten a token, and now we map to that.
So I think if you go after the fact to try to do this, it's really difficult. So even if we just simplify the example down to encryption, you know, you always see Werner's shirt: "Dance like nobody's watching, encrypt like everybody is." Love that shirt, it's amazing. But if you take an existing application and say, okay, we're going to go encrypt everything in transit and at rest, that's an annoying, massive project that has no direct, visible benefit to the customer. It's really hard to get those things past the product manager, because you're like, hey, I want to take four sprints to do this work that will save us potentially if something bad happens, like if a cybercriminal attacks us, we will be protected. But our customer's not going to see anything for four sprints because we're not doing any feature work. That's a hard sell.
Whereas when you're designing that out of the gate, and you say, I'm just going to add a parameter and it's going to encrypt everything in transit, and I'm going to add a KMS parameter to the Lambda, and everything's going to be encrypted at rest, that took five minutes and we're done. Nobody's going to bat an eye, and you get the same end result. So it's really about planning ahead, I think.
Jeremy: Yeah. Well, I think security first. I mean, I think that's the first thing, just with the cloud and so many of these problems that happen from breaches that are, again, not necessarily a vulnerability of the cloud, it's just more of these social engineering things.
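The "add a KMS parameter" step Mark mentions maps, for DynamoDB, to the SSESpecification block in the create-table request. A sketch of those parameters as you would pass them to boto3's DynamoDB client (the table name and key ARN are placeholders, and the actual AWS call is shown in a comment rather than executed):

```python
# Parameters for a KMS-encrypted DynamoDB table, as you'd pass to boto3:
#   boto3.client("dynamodb").create_table(**table_params)
table_params = {
    "TableName": "sensitive-health-records",  # hypothetical table name
    "AttributeDefinitions": [{"AttributeName": "pk", "AttributeType": "S"}],
    "KeySchema": [{"AttributeName": "pk", "KeyType": "HASH"}],
    "BillingMode": "PAY_PER_REQUEST",
    "SSESpecification": {
        "Enabled": True,
        "SSEType": "KMS",
        # Placeholder ARN for a customer-managed key
        "KMSMasterKeyId": "arn:aws:kms:us-east-1:123456789012:key/EXAMPLE",
    },
}

assert table_params["SSESpecification"]["Enabled"] is True
```

This is the "five minutes at design time" version: one extra block in the table definition, versus a multi-sprint retrofit later.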
Then again, thinking about security right off the bat is a huge thing. And I guess here's another thing, and I know that with DynamoDB, for example, you can do encryption at rest. Right? And things like SQS and SNS, I think those have encryption in transit as well... Right? So there's a lot of that security built in. But again, all of those really great tools that the cloud provides, and the encryption and whatever else, that goes away the second you build an admin utility that someone can log into and just query that data. Right?

So what do you need to do around that? What should you be thinking in terms of, I mean, are there multiple layers, should we be thinking... You always hear things like tier one, tier two support, things like that. Are those levels of access that they have to your private data? How would you approach that when you're building a new application?

Mark: Yeah. And the tiering system, as frustrating as it is for a lot of users, a lot of it does have that. If we use the AWS term, it's about reducing the blast radius. You don't want everyone in support to be able to blow up everything, and the Twitter hack was actually an interesting example. Somebody raised the question and said, "Why didn't the president's account get hacked?", "Why wasn't it used as part of this?" And it's because it has additional protections around it, because it's the leader of the free world, ostensibly, so you want to make sure that it's not the average temporary employee on a support contract being able to adjust that. So the tiering actually is a strong play, but also understanding defense in depth, which is something we talk about a lot in security. It gets kind of a bad rap, but essentially it means don't put all your eggs in one basket.

So don't use one control to stop just one thing. So you want to do separation of duties.
You want to have multiple controls to make sure that not everybody can do certain things, but you also want to still maintain that good customer service. And I think that's where, again, it comes down to a very pragmatic business decision. If you have two sprints to get something out the door and you go, well, am I going to build a proper admin tool, or am I just going to write a simple command that my team can run, that will give them the access? You're just going to write a command that does the job. And you know what, in your head, you always say the same thing.

You put it in your ticket notes, you put it in your Jira, and you say, we'll come back to this and fix it later. Later never happens, so most admin tools are this hacked-together collection of stuff just to get the job done. And I totally get it from a business perspective. It makes sense. You need to go that route, but from a security and privacy perspective, you need to really think holistically. And I think this is a question I get asked often. Actually, somebody just asked me this on my YouTube channel the other day. They said, "I'm looking for a cybersecurity degree, and I can't find one. All I can find is information security. What's the deal?" And I said, well, actually, what you're looking for is information security. In the industry, and especially in the vendor space, we talk cybersecurity because that's typically the system security.

So locking down your laptop, locking down your tablet, locking down your Lambda function, that's cybersecurity, because we're taking some sort of cyber thing and applying security controls to it. Information security, as a field of study in general, is looking at the flow of information as it transits through systems. Well, part of those systems are people, the wetware. Right? Or the fact that people print things out. This was a big challenge with work from home: you said, well, your home environment isn't necessarily secure.
And you said, well, yeah, it has different risk models. But the fact that I can connect into my corporate system and download a bunch of stuff and then print it, that's information that still needs to be protected.

So I think if you think information security, you tend to start to include these people and go, wait a minute, Joe from support, we're paying him 15 bucks an hour, but he's got a mountain of student debt. He's never going to get out of it. That's a vulnerability that we need to address, not by locking it down, but by helping that person out and making them feel included, making them feel like part of the team, so that they're not a risk when a cybercriminal rolls up with some cash and says, hey, give me access to the support tools.

Jeremy: Right. Yeah. No, I mean, and the other thing too, when you're talking about, I guess, people having access to things, one is having access to data. Right? And so if you have an admin account that can create other accounts, right? And you get into the admin account, or the admin account does everything, for example. That's really hard to prevent against if you let that go. Right? But there's another vulnerability with admin accounts, especially when it comes to the cloud, which is any time somebody has access to a production environment. Right?

So with AWS, if people are familiar, and I'm sure this is true with all the other cloud providers, you have multiple log-ins that are called roles in AWS, and you can grant access to certain things. And the easiest thing to do, when someone's like, hey, I can't mess with that VPC, or I'm trying to change something here, I'm trying to do that, is to say, okay, fine, I'll just give you admin access. Right? So admin access gives you everything except for billing access, for some reason, but it gives you everything in the AWS cloud. And I'm not saying... I mean, somebody needs to have admin access at some point.
But when you're writing code that could potentially expose data, by maybe having a vulnerability in an admin tool or just giving too much control in an admin tool, there needs to be a process that separates out the development environments, the staging environments, and then that production environment where all that actual production user data is going to go.

So I always look at this as... And maybe people don't think about it this way, but to me, CI/CD, having a really good workflow, whether it's Gitflow or something like that, that has control, is a place where there are very, very, very few people who have keys to that main production account. Everything else is handled through some sort of workflow, with approval processes and things like that. And I mean, to me, that is the staple of saying: you want a secure environment, you have to set up CI/CD.

Mark: Yes, 100%. So my general rule of thumb is nobody should ever touch production; systems should touch production. And the pushback I get on that a lot, especially from people that are still in virtual machines or instances, is like, well, no, I need to get data off of there. You should have a system that pulls data out and logs it centrally so you can analyze it, because if you need to make a change, you push it through the pipeline. Because not only is that better for security, that's better for development as a practice in general. For those of you who are watching this episode, you can see how much gray and white is in my beard. For those of you just listening, think like Santa levels of white, and I've been doing this a long time. And inevitably... I used to be a keyboard jockey doing the Saturday night maintenance windows for nationwide networks.

And you're typing the same thing into system after system after system. You had your checklist, you did everything you possibly could to prevent yourself from making a mistake.
You still ended up making at least two mistakes per change window, per Saturday night, because it's late at night, you already worked all week, you're only human. Mistakes happen, and enforcing a consistent process through a CI/CD pipeline not only gives you the security benefits, but it gives you the reliability that if a mistake did happen, it's the same mistake consistently across everything, which means you can fix it a lot easier. It's not that there was a different typo in every system; there's the same thing on every system, so you can roll forward. And that's an absolutely critical thing to do, because a lot of the time people see security as this extra step you need to take, as this conflicting thing that's going to slow you down. At the end of the day, security is trying to achieve the same thing you are as a builder.

We want stable, reliable systems that only do what you want them to, that only act as intended, as opposed to some vulnerability or mistake being made that people could leverage into making that system do something unintended. And that CI/CD pipeline is absolutely critical. You mentioned roles. There are equivalents in GCP and Azure as well. My big thing is accounts should have no permissions at all, other than the ability to assume a role. So if you can assume a role as an account or as an entity, then for specific tasks, you have a role for every task.

So if I need to roll a new build into the CI/CD pipeline, don't give me permanent rights to kick off builds. Let me assume a role to kick off a build, to kick off the pipeline, because then I don't make a mistake, but also we get an explicit log saying, at this time I assumed this role. It's cryptographically signed, it shows that chain: my system made that request in the backend, and then after assuming that role, I then kicked off this build. And you just get this nice fidelity, this nice tracking for observability on the backend.
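The "no permissions except the ability to assume a role" pattern Mark describes boils down to one small IAM identity policy per human, with the real permissions living on per-task roles. A minimal sketch, with a placeholder role ARN:

```python
import json

def assume_only_policy(task_role_arn: str) -> dict:
    """The only permission the human identity gets: assuming one specific
    task role via STS. The task's real permissions live on the role, and
    every AssumeRole call leaves a signed CloudTrail record."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AssumeTaskRoleOnly",
                "Effect": "Allow",
                "Action": "sts:AssumeRole",
                "Resource": task_role_arn,
            }
        ],
    }

# One role per task, e.g. a build-kickoff role:
print(json.dumps(
    assume_only_policy("arn:aws:iam::123456789012:role/ci-kickoff-build"),
    indent=2,
))
```

The role itself then carries only the permissions needed to kick off the pipeline, which is what produces the clean "who assumed what, when" audit chain Mark mentions.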
We're so obsessed with observability and traceability in production. You need it for who's feeding what into the system as well. And that way I don't make a mistake and we get clarity. So roles are a massive win if you use them right.

Jeremy: Yeah. And I think things like CloudTrail and some of the other tools that are built into AWS, I'm sure a lot of people aren't looking at them, but they should be. But so the other thing, it's funny, you mentioned this idea of doing late night support. I think we've all been there. I mean, if you're as old as us... I have as much gray as you do. I try to hide it a little bit, but I remember doing that as well. And I still have some EC2 instances that I have to deal with every now and then. And one of the most frustrating things about trying to do anything, and I think this is why people try to find workarounds for it, is that security creates friction. Right? So the more friction you have... You can't access my SQL database in a VPC from outside, unless you set up a VPN or some other tunnel that you can get into. Right?

I think about every time I log into my EC2 instances, the first thing I do is sudo su. Right? Because I just know I don't want to try to go to the logs directory and not be able to get to the logs directory because security is preventing me from doing that. And so again, being able to have ways in which... It's almost like people have to build those additional tools. Right? So you mentioned only machines should be touching these things, or systems should be interacting with it. But those are systems that somebody has to set up, systems that somebody has to understand. Right? So again, I totally agree with you. I'm 100% with you.
It's just one of those things where these tools are not quite as prominent, or they don't seem quite as prominent, as some of these other workflow tools are. Again, even with CI/CD, you could build a whole bunch of security measures into CI/CD, but I think people just don't.

Mark: Yeah, I agree. And so I'll give you a good example, and I can't remember who told me, but after a talk at an AWS Summit two years ago, somebody gave me a brilliant example that they had set up, that I thought was a really good demonstration of how security should be. Now, it almost never is, but it should be. And it was exactly that problem: they still had cases where people had to log into EC2 instances. And they were trying to figure it out, and they knew they couldn't just say no. So what this team had set up was a very simple little automation loop, so that as soon as somebody logged in over SSH to an EC2 instance, CloudWatch Logs would pick it up, it would fire off a Lambda, and it would send a Slack message to that user.

And it would provide a button. And it would say, Jeremy, was this you logging into EC2 instance ID blah, blah, blah? Yes or no. And if you hit yes, it would then provide a second little message, just like, hey, we're trying to cut down on this. Can you let us know what your use case was? What were you missing? Why did you have to dive in? Why did you have to log in? But if you said no, it would kick off the incident response process, because somebody logged in as you, and it wasn't you. And I thought that was a really good example of saying, look, we know where we want to be. We can't get there yet. So we're going to take a nice little friction-free approach instead of sending out the standard survey to everybody, like, how many EC2 instances do you log into? Nobody cares.

But catch them in the moment. And I think, further to the bigger question, yes, somebody has to build those tools. Somebody has to develop those.
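The automation loop Mark recounts can be sketched end to end. This assumes the login events arrive through a CloudWatch Logs subscription (which delivers gzipped, base64-encoded payloads), that each log line is a small JSON record naming the user and instance, and that the Slack side uses simple interactive buttons; all names and message shapes here are illustrative, not from the episode.

```python
import base64
import gzip
import json

def decode_cloudwatch_event(event: dict) -> dict:
    """CloudWatch Logs subscriptions deliver their payload gzipped and
    base64-encoded under event['awslogs']['data']."""
    raw = base64.b64decode(event["awslogs"]["data"])
    return json.loads(gzip.decompress(raw))

def build_confirmation(user: str, instance_id: str) -> dict:
    """The 'was this you?' Slack message. 'yes' would trigger a quick
    use-case question; 'no' would kick off incident response."""
    return {
        "text": f"{user}, was this you logging into {instance_id}?",
        "attachments": [{
            "callback_id": "ssh-login-check",
            "actions": [
                {"name": "answer", "type": "button", "text": "Yes", "value": "yes"},
                {"name": "answer", "type": "button", "text": "No", "value": "no"},
            ],
        }],
    }

def handler(event, context=None):
    """Lambda fired by the subscription filter on SSH logins; builds one
    confirmation message per log line (assuming JSON log records)."""
    data = decode_cloudwatch_event(event)
    records = (json.loads(le["message"]) for le in data["logEvents"])
    return [
        build_confirmation(r.get("user", "unknown"), r.get("instanceId", "unknown"))
        for r in records
    ]
```

A second, hypothetical Lambda behind the Slack button callback would then either record the stated use case or open an incident, which is the "catch them in the moment" part.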
And again, if you try to get that past a product manager, it's not going to happen, because there's no direct customer benefit; it's against a theoretical issue down the road. The challenge, or the failure, on the security side is that for the longest time the security teams have been firefighting nonstop and have developed this reputation, rightfully so, of being grumpy, of saying no, of putting roadblocks in place that prevent people from achieving their goals. So people just ignore them or work around them.

That's not what we as a security community need to do. We need to work directly with teams. We need to hear a thing like you just said and say, okay, no problem, we're going to build that for you, we're going to make sure we build some sort of flow that gives you the information you need in a way that we're comfortable with from a security side, so that there's no friction in place. And that is a huge challenge, because it's cultural, and the security teams continue to firefight and can't get their head above water long enough to go, oh, we could do this in a way better way, instead of just continually frustrating ourselves and everyone we work with.

Jeremy: Right. Yeah. And I mean, the idea of being proactive versus reactive would be very nice. I know every time you get that thing where you're like, okay, something is not right, you can hear everybody in IT, all the developers, just sigh at once, because you're like, ah, this is going to be a long night. We're going to have to figure out what exactly is happening here.

All right, so let's go back to privacy for a second, or maybe for more than a second. Another piece of this that is important is we are saving data into somebody else's systems. Right? We mentioned DynamoDB, we mentioned SQS, and some of these things are encrypted, and that's great. But you've got GCP, you've got Tencent, you've got Alibaba, you've got Microsoft Azure, you've got Auth0. Right?
So you're saving data, personal data, into other people's systems. So I guess where my question is from a privacy standpoint: I can put all these controls in place and I can say, oh yeah, I've done all my tiering, I have all the security workflows, I've got CI/CD set up, my admins are locked down, and, you know, whatever. But where does your responsibility as a developer start and end when it comes to privacy, when it's being saved on somebody else's system?

Mark: Yeah. And that's a very good question, because there are legal responsibilities and then there's "the right thing," quote, unquote, for various definitions of the right thing. I think most users' expectation is that they have a modicum of control over their data. Now, the interesting thing here is we start to get into a few international differences. So people who have been listening to the episode so far have probably figured out that I'm Canadian by my accent, and Canada has a very strong privacy regulation. We're not quite as strong as it used...

Jeremy: I did not say tell us about yourself though.

Mark: Which is fair. So from the Canadian perspective, we have a legal framework and a different expectation. The European expectation is completely different. The outlier when it comes to privacy is actually the United States. Now, the interesting thing is the United States is also the generator and the creator of the vast majority of the technology that the rest of us use. So when we look at the legal requirements, there are different things. When we look at what you should be doing and what the expectation is, it really comes down to culture. So what a European citizen would expect to happen with their data is very different than somebody in the United States, because there is a cultural and a legal expectation in the EU for their data to be treated very, very differently.
So the generic answer is, when you're building out serverless applications specifically, you need to make sure that, whatever level of data you're dealing with, the service that you're leveraging can support the controls you want around that data.

So if we look at PCI, which is the Payment Card Industry framework, there is a legal requirement, if you're taking credit cards, to have certain security controls in place. You need to be PCI certified, which is why a lot of smaller businesses just go to a provider, but for bigger businesses it's worth your time to set yourself up like this. There are legal requirements for the controls around it, which means if you're building in a serverless design, regardless of the cloud you're using, the aspects of your design that are processing payment cards, so processing Mastercard, Visa, Amex, need to be on services that are also PCI certified. So you can't achieve the certification if the service you're building on isn't also certified. So there's that aspect of it in general, that you need to just sort of go with that compliance, but it's really tricky because it comes down to what do you want to do versus what do you need to do?

And that's sort of a difficult thing to respond to, because sometimes there are very real legal penalties for not complying. But the good news from the serverless aspect is that the shared responsibility model says you're always responsible for two things: configuring the services you use, and your data. Right? So all the providers give you little knobs and dials that you can change, so you can encrypt or not encrypt, you can optimize for this or that. You need to understand and configure that service. But you are always responsible for your data, always. At no point do you cede responsibility for your data.

If you leverage a third party...
So if I'm the business and you're my user and you give me personal information, or information you want private, I am on the hook for it. Regardless of who I use behind me, it's me. So I need to make sure that the services I'm leveraging in my serverless design have the controls that I'm comfortable with to make and follow through on that promise to you as a user. And that changes, but it's always you, and you need to verify as a builder that you're leveraging services that meet your needs.

Jeremy: Yeah. So you mentioned two separate things. You mentioned compliance and you mentioned sort of the legality, the legal aspect of things. So let's start with compliance for a second. So you mentioned PCI, but there are other compliances. There's SOC 2 and ISO 9001 and 27001 and things like that. All things that I only know briefly, but they're not really legal standards. Right? They're more of this idea of certifications. And some of them aren't even really certifications. They're more just like saying, here, we're saying we follow all these rules. So there's a whole bunch of them. And again, I think ISO 27018 is about personal data protection and some of these things, and they're rules that companies follow. So I think these are really good standards to have and to be in place. So what do we get... Because you said you have to make sure that your underlying infrastructure has the compliance that's required. So what types of compliance are we getting with the services from AWS and Google and Azure and that sort of stuff?
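Mark's scoping rule, that the parts of a design touching card data must sit on services that are themselves PCI certified, lends itself to a simple design-time check. This is a hypothetical sketch: the allowlist below is made up, and the real list lives on each provider's "services in scope" compliance pages.

```python
# Hypothetical allowlist; consult your cloud provider's current
# "services in scope" compliance documentation for the real one.
PCI_ELIGIBLE_SERVICES = {"lambda", "dynamodb", "sqs", "sns", "api-gateway", "kms"}

def pci_scope_violations(card_handling_services: set) -> set:
    """Return the services in the card-handling part of a design that are
    not on the PCI-eligible allowlist; any hit means the overall
    certification can't be achieved on this design."""
    return set(card_handling_services) - PCI_ELIGIBLE_SERVICES
```

Running a check like this against an architecture review catches the "you can't certify on top of an uncertified service" problem before any code is written.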

54mins

19 Oct 2020


Making the Cloud More Secure with Mark Nunnikhoven

Screaming in the Cloud

About Mark Nunnikhoven
Is this system safe? Is my information protected? These are hard questions to answer. Mark Nunnikhoven works to make cybersecurity and privacy easier to understand. A forensic scientist and security leader, Mark has spent more than 20 years helping to defend private and public systems from cybercriminals, hackers, and nation states. A sought-after speaker, writer, and technology pundit, his message is simple: secure and private systems are a requirement in today's world, not a luxury.

Links Referenced:
Trend Micro: https://trendmicro.com/
Twitter: https://twitter.com/marknca
Mark's website: https://markn.ca/

Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Cloud Economist Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: This episode is sponsored by a personal favorite: Retool. Retool allows you to build fully functional tools for your business in hours, not days or weeks. No front end frameworks to figure out or access controls to manage; just ship the tools that will move your business forward fast. Okay, let's talk about what this really is. It's Visual Basic for interfaces. Say I needed a tool to, I don't know, assemble a whole bunch of links into a weekly sarcastic newsletter that I send to everyone. I can drag various components onto a canvas: buttons, checkboxes, tables, etc. Then I can wire all of those things up to queries with all kinds of different parameters: post, get, put, delete, etc. It all connects to virtually every database natively, or you can do what I did, and build a whole crap ton of Lambda functions, shove them behind some API Gateway, and use that instead. It speaks MySQL, Postgres, Dynamo—not Route 53, in a notable oversight; but nothing's perfect.
Any given component then lets me tell it which query to run when I invoke it. Then it lets me wire up all of those disparate APIs into sensible interfaces. And I don't know frontend; that's the most important part here: Retool is transformational for those of us who aren't front end types. It unlocks a capability I didn't have until I found this product. I honestly haven't been this enthusiastic about a tool for a long time. Sure, they're sponsoring this, but I'm also a customer, and a super happy one at that. Learn more and try it for free at retool.com/lastweekinaws. That's retool.com/lastweekinaws, and tell them Corey sent you, because they are about to be hearing way more from me.

Corey: This episode is brought to you by Trend Micro Cloud One™. A security services platform for organizations building in the Cloud. I know you're thinking that that's a mouthful because it is, but what's easier to say? "I'm glad we have Trend Micro Cloud One™, a security services platform for organizations building in the Cloud," or, "Hey, bad news. It's going to be a few more weeks. I kind of forgot about that security thing." I thought so. Trend Micro Cloud One™ is an automated, flexible, all-in-one solution that protects your workloads and containers with cloud-native security. Identify and resolve security issues earlier in the pipeline, and access your cloud environments sooner, with full visibility, so you can get back to what you do best, which is generally building great applications. Discover Trend Micro Cloud One™, a security services platform for organizations building in the Cloud. Whew. At trendmicro.com/screaming.

Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. I'm joined this week by Mark Nunnikhoven, currently a VP of cloud research at Trend Micro. Mark, welcome to the show.

Mark: Thanks, Corey. Long time listener, first time screamer.

Corey: Excellent. So, you work at Trend Micro, an antivirus company, which seems like a very hip and relevant thing for 1998.
Now, it seems like... so, you're effectively the equivalent of John McAfee, only they didn't put your name on the company? And yes, I understand that comparing you to John McAfee may very well be the nastiest thing anyone has ever said to you in your life.

Mark: Well, while my hair is as crazy as John McAfee's is generally, yeah, Trend definitely has that reputation. The good news is, and this may be eye-opening for the listeners, antivirus like it's 1998? That was old for Trend in 1998. Trend's been around 31, 32 years now. Started with the early products like PC-cillin, for those of you who have been around long enough. And now, anything that needs to be secured, Trend's got products or research that does it, whether that's in the Cloud—relevant, obviously, to this audience and to you and me—or smart cars, smart factories, anything like that. Really, really cool. But yeah, that company built its power base, its customer base, on antivirus, and that's still a big, strong part of the business.

Corey: Which I wouldn't doubt. The problem, though, is that I remember using Trend Micro. It was a solid product. There was a lot of, to be honest, crap in that market, where the systems were worse than the problems that they were designed to prevent against, and I don't talk about this too often, but I used to be a help desk person turned Windows admin. And at that point, dealing with antivirus was part and parcel. Now, credit where due, Trend Micro's offering was great, but that's not really the marketing angle anyone wants to care about these days, because, "Yeah, we were awesome 20 years ago," is really much more on-brand for IBM than it is for companies that are still, you know, relevant.

Mark: Yeah, fair. Totally fair. And that's why at this point, I would say antivirus is probably the smallest portion of our business.
One of the biggest, and growing by leaps and bounds, is actually helping cloud builders secure their deployments in the Cloud, whether that's servers and instances or all the way through to serverless architectures. So, the one thing about cybersecurity is you can't rest on your laurels. If you were good six months ago, that's not even as relevant as what you have that's viable today. And on top of that, the way I like to describe Trend, jokes aside, is really, we're a company that has a huge amount of knowledge around cybercrime and computer security in general. The products that the company sells are just a manifestation of that knowledge, and they're going to change constantly because technology is changing constantly. What doesn't change is that history and that continuing quest to keep learning more, to keep pushing more. And that's why, blissfully, in my job, I'm on the research side; I don't really have to worry about the products too much.

Corey: Yeah, that's one of the nice things, I think, about working on the research side. For better or worse, you are a VP. You're not going to get on a podcast like this and then immediately savage the company, nor should you. That says something about a person, none of it good. But I've never gotten a sense from you that you were there to push a narrative or a company line, and we've hung out in passing at an awful lot of various AWS events. Which I think leads me to my next question for you. You, by all appearances, are a very traditional-looking security person. Why do you focus on Cloud?

Mark: Yeah, so I'm going to take that reference to traditional-looking, because I've been dealing with the diversity angle when trying to help initiatives to get diversity in tech.

Corey: Well, you do have earrings, at least one of them—

Mark: I do—

Corey: —so—

Mark: —I have two.

Corey: —it's clear you're not a culture fit for IBM.

Mark: Sure.
Though ironically, I worked for IBM, and at the time, when I was a much, much younger man, I had a multitude of piercings and tattoos while I was working for IBM in an externally facing role. So, I've toned it down as I've gotten older. But yeah, I'm a very traditional security person. My background and my training is in forensic investigation; I worked for the Canadian federal government for a decade working on nation state-level security stuff. And if you'd said, "How do you get to be a security professional?" I think a lot of the stuff I've gone through and a lot of the certifications and training really fits there. But the reason why I've been focusing on Cloud for the last eight or nine years is because it's a huge opportunity to fix everything that is wrong with cybersecurity and information security, and there is a mountain of things that are wrong with the approach to security. Go no further than talking to any team in a large organization and ask them what they think of their security team; hopefully, you'll get a disclaimer about how they're nice people, but then you're just going to hear the complaints roll out about how they're grumpy, how it's the team that says no to everything, they slow everything down, they make things way more complicated than they need to be. And I get it, I understand how things got to that point, because it's one of those baffling things where if you unravel it and look at each decision in turn, they make perfect sense. Back in the 90s, we used to have strong perimeters around everything, so everyone had a big, big firewall, and we used to use antivirus everywhere because that was the only tool we really had. Add up 30, 40 years of these kinds of decisions, and you end up where we are now, where more often than not, security just sucks; it just slows people down when it doesn't have to be that way.
And we're starting to see, now that Cloud is far more mature than it was 10 years ago, you're seeing security really be an enabler, really push things forward—and I hate that word enabler, but it is accurate. Security's there to make sure that whatever you're trying to build does what you intend and only what you intend.

Corey: That's an incredibly nuanced and difficult thing to do in a world of Cloud. Again, this was never easy working in on-prem environments, but at that point, you at least knew, for example, the number of computers you had running in the building, give or take. Now, in the time of Cloud, it's not as easy to understand in some ways. You have a bit of an unbounded growth problem, but you also have the things that are running today could behave in different ways tomorrow, not just in terms of performance, but in terms of capability: "Hey, the provider has now launched an additional aspect of functionality." A classic security example of this is tag-based access control, which you can do with IAM now. And that's awesome and exciting, and it's more than a little confusing, but remember that for 10 years, or whatever it's been now, that was not the case. So, everything was set up to be able to set tags with reckless abandon, it wasn't scoped, people were encouraged to do it, and now, as soon as you wind up doing anything that's using tag-based access control, you wind up accidentally creating security risks out of every single one of those things that are out there, perhaps unknowingly, perhaps not. But it used to be that the worst-case scenario for tags was someone could wind up changing allocation rules or filling your logs with garbage. That was about it. Now, "Oh, wow. There's security problems here." It's a new world. That feels like one of the dark-side sharp edges to the amazing capability story that is Cloud.

Mark: Yeah, absolutely. And that example, I think, is the prime example, because it hits on a few things. Permissions are hard for a lot of people.
So, one of the things I keep getting frustrated at and I keep talking to the AWS teams about—not to call them out specifically—but in IAM, in the Identity Access Management console, there's a bunch of managed policies, which are great. They help you get up and running, they help you figure out what permissions are linked to what actions, but there's a whole bunch of them that end in the two horrible words, “full access.” And while these are great for maybe a limited development environment, you should never see them in a production environment because I have yet to really honestly truly hit a case where a role or a user needed full access to a server or a service in production. So, that's a fundamental problem. Now you add tags on top of this, and like you said, for years, we've been adding them and using them primarily to route billing. And now you're saying, “Well, now you can accidentally grant production rights to somebody because the tag is wrong.” That is an absolutely significant challenge, and definitely, a downside. It actually came up in a discussion I was having with some development teams I was talking to today. They were discussing how they were leveraging the well-architected framework. They're just getting started with it, which is great. They're on their learning journey, but they were discussing it like it was a one-time thing. And they're saying, “Well, this is our architecture, and we've done a well-architected review. And we're good now.” And I brought up your exact point, which is, “Hey, wait a minute. 
You may be done with your architecture, but that's built in the Cloud, and you're using all of these different services, and those service providers are making changes, which then have implications for your architecture, whether those are security implications or not." And that's a huge shift for development in the Cloud, but for security, it's one of those brain-breaking things where, in the traditional model, we used to love to put our arms around it and go, "This is mine, I can protect it. I know where the boundaries are." And that is fundamentally the history of cybersecurity. "I can put my arms around it, I can protect it. I'm good," even though that's just a false belief that a lot of people live with to make it easier to sleep at night. When you're in the Cloud, yeah, you don't know what your environment looks like now versus 20 minutes from now, if you start to see a surge in traffic and things scale. And the flip side of having all of these new features come out from your cloud provider is that they also potentially open up these avenues, like with the tag example.

Corey: That is one of the best aspirational descriptions of cloud, the idea that the entire environment is highly dynamic. However—and maybe this is biased by the customers that I tend to work with—very often, that's not the case. I take a look at the big, expensive environments, and it's always stuff that was provisioned in 2016 or earlier. It's a lot less dynamic than people often believe it's going to be. And that's not, incidentally, just an artifact of large accounts. I've been dealing with a lot of legacy cruft in my AWS account that I launched four years ago, and it's all serverless stuff, but I have a whole bunch of roles, I have Lambda functions now using deprecated runtimes, that do ridiculous things I don't understand.
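(The deprecated-runtime cruft Corey mentions is easy to audit for once you have the function metadata. A minimal sketch: the metadata shape below mirrors what an API listing such as boto3's `list_functions` returns, but the sample functions and the set of deprecated runtimes here are made-up examples, not a current deprecation list.)

```python
# Hypothetical set of runtimes treated as deprecated for this sketch.
DEPRECATED_RUNTIMES = {"python2.7", "nodejs8.10", "dotnetcore2.1"}

def flag_deprecated(functions: list) -> list:
    """Return names of Lambda functions whose runtime is in the
    deprecated set, given metadata dicts like those returned by a
    list-functions API call."""
    return [
        f["FunctionName"]
        for f in functions
        if f.get("Runtime") in DEPRECATED_RUNTIMES
    ]

# Illustrative sample data, standing in for a real API response.
sample = [
    {"FunctionName": "report-gen", "Runtime": "python2.7"},
    {"FunctionName": "thumbnailer", "Runtime": "python3.9"},
]
print(flag_deprecated(sample))  # prints ['report-gen']
```

In a real account you would feed this the paginated output of the provider's listing call instead of a hand-written sample.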
I have a series of escalating challenges in that account where at some point, I want to burn it from orbit and start over, but that's kind of hard to do, and there's still library management problems and the rest. I'm spinning up new accounts constantly, like I work at Wells Fargo. It's awful. That stuff always accumulates in accounts, and fundamentally, no one understands it. I'm the only person that builds stuff in this [era's] account and I still don't understand what all of the moving parts are. And look at how much time I spend on this stuff.

Mark: Yeah. Hundred percent. The challenge and the balance I always have is that 'nerd me' is like, "Look at all the possibilities. Look at the amazing, crazy autonomous self-healing deployments that we can make." This stuff is great. More pragmatic, older, 'I've seen some things' me is exactly what you said. There's legacy, people are slow-moving in general, and the vast majority of cloud usage is not the cool stuff. So, Cloud generally suffers from the same thing that security does in its public image, in the respect that if you look at security conferences, 99.9 percent of the material is about the latest zero-day vulnerability and exploit; it's about the latest cool hack. The reality of cybersecurity is that you spend your day worrying about patching, threat assessments, a whole bunch of stuff that's never talked about publicly. The same challenge exists in Cloud: if you look at what happens at conferences, whether they're physical or virtual, and all the latest blog posts, a lot of it is, "Look at this cool stuff you can do on the cutting edge." It's not, "This is the reality: you have a business app that's working, and there's no way your boss is ever going to let you completely rebuild it from the ground up to make it do the exact same thing in a better way, because at the end of the day, it's doing what you need it to do." So, the reality is very, very messy. But I think the possibility is there.
The biggest challenge is the lack of tooling in general, not just for development, to help people shift their mental model. And this is one of the unfortunate things that I see every year at re:Invent—not to call re:Invent out, it just happens to be the biggest single source of announcements—is that a lot of those announcements—

Corey: You're telling me.

Mark: Yeah, exactly. I mean, you probably burned through a keyboard in November alone. And the challenge is that a lot of those new service announcements are for things designed to help people bridge that gap and get more cloud-y in their thinking, but not to push cloud forward. It's the Cloud coming back to bring them along, which is a very good thing, but a lot of the time it also reinforces some of these older mental models, these older approaches. Because you're very right: in the Cloud, deployments are not nearly as dynamic as they could be, and I think that's not the Cloud or the services not being able to do that; it's that people can't think that way. It's like, if you sit a programmer down and go, "Okay, write me a script that does XYZ," and then you tell them to do the same kind of thing across multiple threads, people's brains don't work in multi-threading mode; they work in serial mode, one thing after the other. And while parallelizing everything would be better in a lot of cases, it's just not how our brains work. And that's the same thing that we see in Cloud.

Corey: We talk about Cloud a fair bit, but let's make it a bit more specific. In addition to your apparently easy, minor day job as a VP at a major company, you also are an AWS Community Hero, which I interpret to mean that you looked at the vast landscape of what causes you could volunteer for, and picked a trillion-dollar company. What's that about? What's it like? What do you do?

Mark: [laughs]. Yeah, so the Hero program's been going for a number of years now.
I've been in it for a while, and it's basically some folks within Amazon recognizing what you're doing out in the community or in a specific technology stack—so there are Serverless Heroes, Machine Learning Heroes, Data Heroes, that type of thing—and it's just an extra recognition from AWS. The advantage for us as Heroes is that it gives us some opportunity to speak at AWS events, to publish on AWS properties, but also to get access, and to give feedback to the product teams a little more frequently than customers normally do. So, it's a nice little recognition; it feels good to see the efforts recognized. For me specifically, it was based on a lot of speaking that I'm doing, a lot of community outreach to help people understand security, so it's a nice recognition there. But yeah, 'volunteering for a trillion-dollar company' is not a totally incorrect way to put it, though the good news for me is that's in addition to volunteering to run youth basketball, and scouting, and stuff like that. But yeah, it's a good program. It gives us a glimpse of some of the insights you get being a prominent influencer, where when you speak, people listen. The Hero program is a little bit of that endorsement from the company itself.

Corey: That's, I think, a fair way to say it. One of the areas where I found the Hero program to be incredibly useful from my perspective—again, I am not a Hero for reasons that should be blindingly obvious. I have never been invited to be an AWS Hero because, let's not kid ourselves here, Amazon hires smart people who can read the room, but I've also only ever interacted with folks on the periphery of it. But one of the values that I find coming out of the Hero program is the fact that it helps me contextualize what a release means in a different context. Because the Heroes don't work for Amazon. There have been a few people who are no longer Heroes because they took jobs at AWS.
At least one other is no longer a Hero because they took a job at Microsoft, but that's a different problem. And I see that this is a group of people who are not beholden to AWS for anything other than a bit of publicity, so if AWS does something egregious, they're in a much better position to sound off about it, or to highlight things that may not make much sense. At least it feels to me like there's not any constraint on saying things because you don't work there; what are they going to do, take away your birthday? You can say what you think. And through that lens, I find that the stories that come out of the Hero program are, for one, a lot more authentic, and also a lot more grounded in use cases you might find in companies that aren't either Amazon or their specific target-customer case studies. Would you agree with that?

Mark: Yeah, I think that's accurate. I think what you're saying there, in a very eloquent way, is that the Heroes tend to provide a little more perspective. So, I'll give you a very pragmatic, very practical example: every year for re:Invent, registration is a disaster. Trying to get a reserved seat in a session is always frustrating because the third-party app goes down, and people who paid good money to go to this conference to see good content have a really hard time getting into those talks, and that's frustrating for absolutely everybody, AWS included. As part of the Hero program, I think three years ago, they actually briefed us all ahead of time because they said, "Listen, we know you guys are looped in on a bunch of social media posts, and people getting really frustrated. Here's what we've done behind the scenes. Here's what we're trying to do to make things whole." And they actually gave us a contact during the week of re:Invent as well, to make it easier to answer community people's questions.
So, they said, "Look, we know you're going to give the best answer you can if somebody is asking for some help, but here's how you guys can make that simpler for everybody, and here's a direct line to somebody on the events team who can handle that." But I think for me, that was a good thing in that they didn't tell any of us to stop complaining about broken sessions and registration; they just said, here's how we're trying to help. And that rolls out to services: you'll see that Heroes are not necessarily going to be disparaging the company or taking it down, but we'll speak our mind, too. So, for me personally, Amazon Neptune, the unheard-of one of many data stores—

Corey: Yes, their giraffe database, named after giraffes, which are of course themselves made-up animals that don't really exist.

Mark: Fair. Totally fair. Because what sound does a giraffe make? Nobody can tell me.

Corey: Exactly.

Mark: So, for Neptune: great service, super cool possibility, and totally wasted for a huge number of customers because it's the traditional spin-up-an-instance-and-pick-your-capacity model, with no serverless capability in it whatsoever, where having a fully serverless graph database would have been amazing. And that's a frustration, so I mentioned that when it was launched, and I've mentioned it a few times since. So, there's definitely that context, and I think the advantage for the Heroes of not being employed by AWS, but also having sort of the insider track on some AWS stuff, is that when new services get released, most of the time we've either used them already or had a chance to provide feedback and help shape them, so we can help people understand the broader context, because that continues to be sort of a weak spot. When a service comes out, everybody looks at it through their own lens and goes, "This solves my problem." And it's like, "Yeah, probably not, but here's the problem it does solve." And then it also, like you said, shows those additional use cases.
So, from the Heroes, you're going to see: here's how you can build serverless applications using Auth0 instead of trying to shoehorn something into Cognito; here's how you can leverage DynamoDB while doing your compute in GCP. That kind of stuff, because we're not restricted, and I think—not to speak for every Hero, but I'll speak for every Hero—we're just trying to help people because we find this stuff interesting, and we just like to help people.

Corey: In what you might be forgiven for mistaking for a blast from the past, today I want to talk about New Relic. They seem to be a relatively legacy monitoring company, and I would have agreed with that assessment up until relatively recently. But they did something a little out there: they reworked everything. They went open source, they made it so you can monitor your whole stack in one place and, most notably from my perspective, they simplified their pricing into something that is much more affordable for almost everyone. There's even a free tier with one user and 100 gigs per month, totally free. Check it out at newrelic.com.

Corey: One of the common threads that I see with the Heroes that I've dealt with has always been a willingness to help others. It's not about, "Look at how awesome I am;" it's, "Let me help lift other people up and teach people things," and they do this on a volunteer basis, in all seriousness, which makes the naming of it, in typical Amazon fashion, terrible. If you call someone an MVP at Microsoft, that tends to be something that you can self-describe: when you're elected MVP for a game, you can say that; it's on your resume. When you call yourself a Hero, though, it always feels weird, at least from a common, baseline English-language perspective. It's like 'entrepreneur.' It's one of those things other people call you far more than it is something that you call yourself. If you say, "That person's a Hero," then yeah, you feel good; that person did something great.
When someone says, "I'm a Hero," oh my God, this person is so full of themselves, I can only assume they're a venture capitalist.

Mark: Yeah, so I'm just thinking in the back of my head: did I use the word Hero in my own intro when this started? I hope not.

Corey: You did not. I was listening for it—

Mark: Okay. [laughs].

Corey: —because I would have needled you. Instead, I had to go to my fallback, and needle you about your employer instead.

Mark: Fair.

Corey: Aren't you glad you went that way?

Mark: Iffy. Depends what my boss says when he hears this. I agree, Hero is a loaded word. I actually liked, or preferred—in Australia, there is a smaller regional program that's somewhat similar, the 'Cloud Champions'—and that was a little more straightforward, I think, because these are people in the country who champion cloud usage and cloud technologies. And I thought that was simpler and easier to describe, because I've been on a number of channels and outlets, like the How to re:Invent series that AWS runs, and a few other venues, trying to explain the Hero program, and the explanation of what we do is pretty straightforward, but using the word Hero is uncomfortable, because, especially given what goes on in the world on any given day, to call somebody who likes to teach people technology a Hero, I think that's a stretch. But I do very much like the program.

Corey: Especially during a time of pandemic, where you have people literally risking their lives to bring you groceries, for example, and you look around and it's like, "What do you do?" "I'm a Hero." It rings a little hollow, I guess, is probably the best way to put it. I do love what they're doing in other geographical locations in AWS. They have a similar program, I believe called Cloud Warriors, which just sounds incredibly badass.

Mark: Yes, for sure. But then I think the problem with warriors is you get this mental image, and then if I come on camera, you're like, "Oh, I didn't know it was nerd warriors.
Not what I was expecting."

Corey: Oh, yeah, in my case, that's all about fitness is really what I'm about: namely, fitness entire burrito in my mouth.

Mark: You got it.

Corey: So, we've talked a bit about AWS, but what do you focus on? Do you spend time with the other cloud providers, specifically GCP and Azure—when we say cloud, that's generally what we're referring to—or are you more of an AWS-focused person?

Mark: Yeah, I actually split my time among all three. The AWS designation and the reference point tend to come down to market share, and also the fact that they were first out, so I've had the longest working relationship with them, but from Trend Micro's perspective, we're officially partners with all three, and I've been involved since the beginning of all of those. Just as a technologist, I enjoy working with all three. They all have their ups and downs, but when I generally talk publicly about Cloud, it's Azure, GCP, and AWS.

Corey: So, you're in a great position to answer this in a probably more objective fashion than I would, given the fact that my entire business revolves specifically around AWS. Compare and contrast the big three: what do you like, what do you hate about each of them? Superlatives, I guess. Go.

Mark: Nice. GCP, I think from a technology perspective, once you get your head around it, I like the basic structure. So, as a security guy, the fact that you need to turn everything on explicitly is a big win for me. Their project focus—as opposed to account focus—makes it a lot easier to implement some interesting boundaries. I also like the way that things like virtual machines are just sliders. Yes, they have templates for instance sizes as in AWS, but it's easy to just say give me more CPU, give me less RAM, that kind of stuff. So, very cool there. What I don't like about GCP is that when you're accessing it through the SDKs, you need to be Google-y. If you are not Google-y, it is going to be a very frustrating time.
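(The "sliders" Mark likes map to GCE custom machine types, which are requested as a `custom-VCPUS-MEMORY_MB` string. A minimal sketch of building one; the validation here is a simplified illustration, since real Compute Engine enforces additional constraints such as memory-per-vCPU ranges.)

```python
def custom_machine_type(vcpus: int, memory_mb: int) -> str:
    """Build a GCE custom machine type string, the 'just move the
    sliders' model: pick CPU and RAM independently rather than from a
    fixed instance-size menu. Checks are illustrative only."""
    # GCE requires memory in multiples of 256 MB; other real-world
    # constraints (vCPU count rules, per-vCPU memory limits) are omitted.
    if vcpus < 1 or memory_mb % 256 != 0:
        raise ValueError("vCPUs must be >= 1 and memory a multiple of 256 MB")
    return f"custom-{vcpus}-{memory_mb}"

print(custom_machine_type(4, 8192))  # prints custom-4-8192
```

You would pass a string like this as the machine type when creating an instance, instead of a named size such as `e2-standard-4`.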
So, I do a lot of work in Python, and the GCP Python SDK is the most un-Pythonic thing I have seen. It is tough to wrap your head around, but once you realize it's really just Java and Go primitives wrapped in Python, it works okay.

For Azure, I think the breadth of services is great. There are some really interesting things like Cosmos DB, which is trying to be all DBs to all people. Not quite getting there, but more than anything, I find the interesting balance where, especially with GitHub being under the Microsoft umbrella, there's a big merging happening there with some of the online coding tools in GitHub. So, I think there's some really good dev tooling in Azure. That being said, what the heck is Azure DevOps? You can't take a philosophy and make it a service. It's not even really one thing, but a combination of a bunch of other services. Really challenging there; very Microsoft-y. Same with their SDK: you'd better like having extra attributes for no reason in your properties, just because that's the way they do it.

For AWS, it's the beast in the room, for sure. Very good at what they do. The biggest challenge for AWS, I think, is trying to figure out which service to use at this point; there are just so many of them. But also, because of their rapid iteration on services, if you don't have the exact use case for a service when it comes out, it gets really frustrating really quickly, because you see what it could be, and it will get there in a year or two, but not out of the gate, so it becomes a question of whether or not you push through. And then, lining up with your opinion that you're never shy about: all three of them have a really, really hard time naming things.

Corey: I have noticed that, and I can't believe I'm going to say this out loud, but I have sympathy for that problem because it turns out that it is easier by four orders of magnitude to make fun of a name than it is to come up with a good one.
I mean, for God's sake, my first version of this company was the Quinn Advisory Group. And it took us two weeks to come up with that.

Mark: Yeah. Yeah, it is hard, for sure—

Corey: Names are super hard.

Mark: —the challenge ends up being consistency. So, you look at AWS, and half the services seem to be named as nicknames, some are named very deliberately for what they do, and for others, you know, the whole AWS-versus-Amazon naming thing is just… okay. And then GCP is better than Google's public-facing Google Meets, meet on Google, Google Hangouts, hang in a Google Meet, meet up in a Google Hang product fiasco, but it's not much better. And then Azure is actually surprisingly direct, with the exception of the DevOps stuff. But again, every once in a while, something more aspirational like Cosmos sneaks in. But at least they shoved a DB in there to make it make a little bit more sense.

Corey: You're absolutely right. And one thing that I think is happening is that, on some level, the folks using Azure and the folks using AWS—on the typical customer side of things, not necessarily the partners, or the folks that interoperate between the two—tend to form two camps that don't talk very often. And when I started playing around with Azure, I found some things to be actively impressive—same story with GCP as well. And it sounds kind of weird, but the best analogy I've got is that when I learned French, I found that I understood English a lot better as a result. When I learn about Azure or GCP, I find that I start to understand concepts about AWS more effectively as well. For example, I still maintain that one of the absolute best descriptions of what a lot of AWS services are is Microsoft's list of analogies and comparisons, which just lists a bunch of AWS services, the Azure competitor, and then—and this is critical—describes what each one of those services does. And wow, how come AWS marketing hasn't gotten in front of this?

Mark: Yeah, [French].
At the end of the day, as much as we poke fun at the naming, "does the service work" is really the key question. But what frustrates me about the naming—you know, again, I was having a discussion last week with some development teams, where they were having this revelatory experience because they'd found a service that addressed their need in one of these clouds. And I just kind of stepped back and went, wait a minute, that service has been out for, like, four years. But because of the name, it never clicked for them what problem it actually solved. So, they were trying a couple of different workarounds, looking down different avenues, and then when they found this, they were like, "Wow, it's super easy. I just had to click a button, create the primitive in the service, and I'm done." And that's where the naming thing—besides just being a fun pastime, poking fun at the names—gets really frustrating: when it's hindering somebody from solving a problem because they don't understand at first glance what the thing does. That's a serious issue.

Corey: It is. And I also take that a step further. Some people think I focus too much on names, and they may be right, but the more I make fun of cloud services, and the more opportunities I have—let's call it that—to speak to the teams that are building these services, and you start tying a human face to these things, you develop—at least I develop—a sense of sympathy, because it's hard to build these things; no one claims otherwise. And the challenge, as a result, is that I don't want someone to release a new feature that they've been working on for months or years, and the first thing that happens is I make fun of it. But no one spent 18 months naming Systems Manager Session Manager, and if they did, they should feel bad.
So, names are a safe thing, and another secret angle to that as well is that you don't need to have 10 years of experience as an infrastructure engineer to appreciate a joke about a bad service name, whereas if I start making esoteric references to XML as applied to the S3 API—yes, I can: you'll want me to absolutely wash my mouth out with SOAP after that one—those kinds of jokes are not going to be nearly as accessible, and it feels like inside jokes among friends. And that's never as engaging, and it acts as a gatekeeping aspect more than it builds a bridge toward inclusivity.

Mark: Yeah, and I think there's another angle to that as well. I think using a joke to break down that barrier, to be more inclusive, also dispels the myth that cloud services are for the wizard on the hill, that you need this crazy level of expertise before you just start experimenting and trying things. And that's one of the things that I think is amazing about Cloud: you can just sort of stumble out of the gates, have something working, and see the fruits of your labor. I spend a lot of time, especially now during the pandemic, looking at and helping kids learn to code. And one of the things that I find amazing now, versus way, way, way back when I learned, is that there are a lot of tools, and kits, and toys that have a physical aspect that links to the digital, so that kids are manipulating something physical with their code, or with their hands, and it's changing the code. So, there's a linkage there that makes it easier to understand. And I think there's a parallel there to being able to make fun of some of these names. Some of them are fine, it's okay, but it's still fun to make fun of others—like Systems Manager Session Manager, which deserves to be shredded to the ground every chance we get—but it does make it a little more accessible as part of that inclusivity.
It kind of demystifies them a bit to say, "Yes, we're joking around about this stuff." It's not only, "I need to be doing serious work with this stuff." You can be having fun with it, which is where we see a lot of work around Alexa skills. There's a mountain of them that aren't ever going to be run more than once, but there's a lot of just little fun stuff that people use as an entry level to start learning how to code, to start learning how to take advantage of cloud services, and that I think is really, really important, because I don't think enough people experiment with technology and play around with it—not just career-wise, but because we're surrounded by this stuff. The more we understand it and demystify it, and know that if it does break, we can fix it, I think that's a win for everybody.

Corey: And I think that that's really what this comes down to. And I've seen that trend in the AWS ecosystem by itself, but I see it across the others as well, where once upon a time, when EC2 first came out, you basically needed a doctorate to get it up and running. There was an entire cottage industry of companies like RightScale that made that interface something a human being could use. Over time, it's become more broadly accessible. Look at things like Lightsail: you don't need to know much about anything within AWS's very vast umbrella to get started with that. And we're seeing it now move even further up the stack in other arenas, too. I'm excited to see what the next five years hold in that context. The idea of low-code and no-code movements and that type of engagement means that suddenly you can have a business idea and not have to spend the next few years learning how computers work in order to enable some of that.

Mark: Yeah, and that's where I think Azure may be the dark horse, especially with GitHub being under the umbrella, but as much as—

Corey: Microsoft gets developers, credit where due.

Mark: This is the thing.
As much as I like to go at Microsoft, they have a long history of enabling development outside the traditional IT umbrella. So, Visual Basic: huge win. In the early Windows days, you could throw a couple of buttons on a form, double-click on one, and write a tiny bit of code that you could, you know—not Google at that point, because Google wasn't a thing—but you could read through the docs or kind of stumble your way through, and you had a working Windows application. You didn't know anything about the Win32 library set, you didn't know anything about how this worked under the covers, but there it was: you typed something in your text box, you clicked a button, and it took an action. And I think we're starting to come back around to that with cloud services. We're seeing more and more of that in the data stack on Google, where with Bigtable and BigQuery you just shove a whole bunch of data in, it makes some intelligent decisions, and then you can single-click or drag-and-drop things to get some really interesting business results out of it, and that's a huge win. Anything we can do to make it easier for non-technical people, or non-deeply-technical people, the better off we are. So, if we go back to the earlier part of our conversation: people like the AWS Heroes, people like yourself, people who are interested and really immersed in this stuff, we're going to figure it out no matter what. We can wade through the documentation, we can look at mountains and mountains of bad JavaScript code and eventually make sense of something. But that's not who we need to worry about. It's a similar challenge we have in security, where we've made security this obscure art when really it's just one concern of many for everybody who's building technology, and we need to make that more accessible, just like we need to make building the technology more accessible.

Corey: I don't think that I've ever regretted making things accessible to a broader audience.
One of the early versions of my "Terrible Ideas in Git" talk was aimed at inside baseball for Git developers. And yes, I had to learn how Git worked before I could give that talk. And the three people in the audience who understood it thought it was great, and the rest didn't know what the hell I was banging on about. By instead turning it into something that was: this is what Git is, this is what it does, and here's how to use it by not using it properly—it's fun, it's engaging, but it means that you don't have a baseline requirement of "you must already have X level of experience" in order to begin using this new and exciting capability. And that is the biggest challenge for all of the cloud providers, from where I sit. And it sounds like I'm not alone, from what you have to say.

Mark: Yeah, I a hundred percent agree with that. And we're seeing it get better and better, with better interfaces, and a better understanding that, yes, we need API-first, but that also assumes a big level of technical skill if you're saying you can only interact with it via an API. And I have a similar experience. Last year—so, 2019—one of the talks I gave at several of the AWS Summits was about advanced security automations made simple, and it was targeted not at security people, but at developers. And the whole idea was breaking down this myth that cybersecurity was super hard, and that it was something they had to hand off to another team, and just showing them, "Look, this is the basic thing. There's a whole bunch of big language around this that doesn't mean anything. Here's what you're trying to accomplish.
You're trying to make sure that if you write a function that creates a TPS report, it can only print TPS reports, and not HR, or finance, or inventory reports.” Which I always say because no one knows what a TPS report actually contains, so I just randomly name other things that it shouldn't be doing, and nobody's ever called me on it so I keep getting away with it.But the concept comes across pretty crystal clear for developers or people who are building something is that I want this to be an apple, and if it is an orange, that is bad. And that's really just cybersecurity in a nutshell. But I don't think many people present it that way. So, as far as making this more approachable—which may be a better word than accessible—more approachable, easier to understand, that's a huge thing for me personally, with security, with privacy, but with cloud in general. And I think the advantage—early in our conversation, we talked about environments not being nearly as dynamic as they could be, or as that we think they are, and I think a lot of that comes down to this tooling and that approachability. Once we get there, and we make it super easy for somebody not to worry, that's really going to pay off. I saw that this week. I was doing a virtual meetup for Cloud Security London, and they had deployed a polling app, like an open-source Kahoot! And it was interesting because the cloud security meetup is run by twin brothers, one works for AWS and one works for Azure, and the third brother is not—Corey: Thanksgiving dinner has got to be awkward.Mark: It gets worse. The third brother, who's not a twin, is a .NET developer who works for AWS. It's a complicated relationship, to say the least.Corey: And an NDA violation waiting to happen.Mark: Oh, massively so. I'm sure even just the recipe for the yams at Thanksgiving is probably under one or more NDAs. But they deployed this open-source version of Kahoot! 
which was this multithreaded sort of polling app, and the nice thing is that they didn't have to know, even though these are really technical folks, they didn't have to know the ins and outs of it. The tooling was simple enough that they actually just ran one script—in this case, I think it was a Terraform—that deployed the whole thing for them, and it worked. And to think about what actually happened behind the scenes is now you have a-real-time highly scalable, dynamic audience polling system in place after just running one command was pretty amazing. And I think that possibility is extremely exciting. We just need to continue to relentlessly chip away at the barriers to make it more and more approachable.Corey: That's, I think probably the best place to leave this. If people want to hear more about what you have to say, where can they find you?Mark: You can find me on social @marknca, M-A-R-K-N-C-A, or at my website, Markn.ca, as in Canada because that's where I'm at.Corey: And we will of course throw links to that into the show notes. Mark, thank you so much for taking the time to speak with me today. I really appreciate it.Mark: Thank you. I appreciate the opportunity.Corey: Mark, Nunnikhoven, vice president of cloud and research at Trend Micro.k I am Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on Apple Podcasts, whereas if you hated this podcast, please leave a five-star review on Apple Podcasts anyway, along with a comment after you update your antivirus software.Announcer: This has been this week’s episode of Screaming in the Cloud. You can also find more Corey at ScreamingintheCloud.com, or wherever fine snark is sold.This has been a HumblePod production. Stay humble.
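Mark's TPS-report analogy, a function scoped so it can produce exactly one kind of output and rejects everything else, can be sketched in a few lines of Python. This is a minimal illustration of the deny-by-default idea behind least privilege; the names are hypothetical and not anything from the episode.

```python
# A sketch of the "apple vs. orange" check: this function is scoped to one
# report type, and any other request fails loudly instead of succeeding.
# (Hypothetical names for illustration only.)

ALLOWED_REPORT_TYPE = "tps"

def print_report(report_type: str, body: str) -> str:
    """Render a report, but only the single type this function is scoped to."""
    if report_type != ALLOWED_REPORT_TYPE:
        # An "orange" showing up where only "apples" belong: deny by default.
        raise PermissionError(
            f"this function is not permitted to print {report_type!r} reports"
        )
    return f"--- {report_type.upper()} REPORT ---\n{body}"

print(print_report("tps", "cover sheet attached"))  # allowed: in scope

try:
    print_report("hr", "salary data")  # denied: out of scope for this function
except PermissionError as err:
    print(f"blocked: {err}")
```

The same shape applies one level up in the cloud: an IAM policy that grants a function access to only the one resource it needs is the same apple-versus-orange check, enforced by the platform rather than by your own code.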

43mins

10 Sep 2020


#7 - Mark Nunnikhoven VP of Cloud Research

Talking Serverless

In this episode, our host Ryan Jones sits down with Mark Nunnikhoven, the VP of Cloud Research at Trend Micro. The episode is also live at https://buff.ly/2UeeNL1 --- Send in a voice message: https://anchor.fm/talking-serverless/message

38mins

15 Apr 2020


Mark Nunnikhoven

Vince in the Bay Podcast

Mark Nunnikhoven is Vice President of Cloud Research at Trend Micro.

6 May 2019


Mark Nunnikhoven

Vince in the Bay Podcast

Mark is Vice President of Cloud Research at Trend Micro. He joined me at RSAC 2018 to discuss developing new email security gateway tools, operational technology in IoT, the new Cybersecurity Tech Accord, information security buzzwords and more!

29 Apr 2018


Mark Nunnikhoven, Trend Micro

DevOps Chat

DevOps.com chats with Mark Nunnikhoven, VP of Cloud Research at Trend Micro. Mark talks about cloud security in a hybrid cloud world. It is a good discussion of DevSecOps, cloud, and the future. Enjoy!

15mins

15 Aug 2016