Harvard in Tech Seattle Podcast Episode 5 - Aran Khanna
Aran Khanna is co-founder and CEO of Reserved.ai. He graduated from Harvard College in 2016. As a student, he discovered that Facebook could track any user’s location to within 1 meter, and he showcased this through his app, Marauder’s Map. He has previously worked at several startups, one of which was acquired by Amazon Web Services (AWS). There he noticed that some clients were spending hours each day tracking cloud usage and making purchasing decisions. He created a solution that automates this process through his company, Reserved.ai. He hopes to help future clients navigate not only AWS but also other services like Azure and Google Cloud. One of the principles he follows, and explains in this episode, is extreme ownership.
When a developer spins up a virtual machine on AWS, that virtual machine can be purchased under one of several cost structures. These cost structures include on-demand instances, spot instances, and reserved instances. On-demand instances are often the most expensive, because the developer gets reliable VM infrastructure without committing to long-term pricing. Spot instances are cheap, spare compute capacity available across AWS infrastructure, with lower reliability. Reserved instances allow a developer to purchase longer-term VM contracts at a lower price. Reserved instances can provide significant savings, but it can be difficult to calculate how much infrastructure to purchase. Aran Khanna is the founder of Reserved.ai, a company that builds cost management tools for AWS. He joins the show to talk about the landscape of cost management, and what he is building with Reserved.ai. Sponsorship inquiries: firstname.lastname@example.org The post Reserved Instances with Aran Khanna appeared first on Software Engineering Daily.
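The tradeoff above can be made concrete with a small break-even calculation: a reservation is paid for every hour of its term, so it only beats on-demand pricing above a certain utilization. The sketch below illustrates the arithmetic; the hourly rates and instance class are illustrative assumptions, not real AWS prices.

```python
# Illustrative sketch: when does a 1-year reserved instance beat on-demand?
# The rates below are assumptions for the example, not actual AWS pricing.

HOURS_PER_YEAR = 24 * 365

def yearly_cost_on_demand(hourly_rate, hours_used):
    """On-demand: pay only for the hours the instance actually runs."""
    return hourly_rate * hours_used

def yearly_cost_reserved(effective_hourly_rate):
    """Reserved: the commitment is paid for every hour of the term."""
    return effective_hourly_rate * HOURS_PER_YEAR

def break_even_hours(on_demand_rate, reserved_rate):
    """Hours of usage per year above which the reservation is cheaper."""
    return yearly_cost_reserved(reserved_rate) / on_demand_rate

# Assumed rates for a mid-size instance (hypothetical numbers):
on_demand = 0.096   # $/hour, billed only while running
reserved = 0.060    # effective $/hour over a 1-year commitment

hours = break_even_hours(on_demand, reserved)
print(f"Reservation pays off above {hours:.0f} hours/year "
      f"({hours / HOURS_PER_YEAR:.0%} utilization)")
```

With these assumed rates, the reservation only wins if the workload runs well over half the year, which is why sizing a reserved-instance purchase requires forecasting utilization.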
When Aran Khanna was a college student, he accepted an internship to work at Facebook. Even before his internship started, he started playing around with Facebook’s APIs and applications. Aran built a Chrome extension called Marauder’s Map, which used Facebook Messenger’s web APIs to track where people lived, what their schedule was, and other highly sensitive information. These were not public features of Messenger, but Aran was able to reverse engineer the APIs. As a result of making Marauder’s Map, Aran’s invitation to work at Facebook was retracted.

Aran remained curious about the norms of publicly available social network data, and the second-order data sets that could be built on top of it. Out of this curiosity, Aran created a tool called Money Trail, which used public Venmo data to model a graph of how users were paying each other. Aran showed for a second time that data that seems innocent to share can be repurposed to identify, classify, and incriminate users. Developers of these online applications face tradeoffs between privacy, convenience, and security. By interacting with these applications, we generate data that suggests how we think, what we like to do, and who we are affiliating with. Google and Facebook probably understand you better than you understand yourself.

Aran Khanna was previously on the show to talk about machine learning at the edge. At the time he worked at Amazon Web Services. He now works as a digital privacy researcher. His background in machine learning makes him well-equipped to talk through the subtleties of modern digital privacy. In this show, Aran returns to talk through the finer points of privacy, data, and artificial intelligence. The post Digital Privacy with Aran Khanna appeared first on Software Engineering Daily.
A modern farm has hundreds of sensors to monitor the soil health, and robotic machinery to reap the vegetables. A modern shipping yard has hundreds of computers working together to orchestrate and analyze the freight that is coming in from overseas. A modern factory has temperature gauges and smart security cameras to ensure workplace safety. All of these devices could be considered “edge” devices. Over the last decade, these edge devices have mostly been used to gather data and save it to an on-premise server, or to the cloud. Today, as the required volumes of data and compute scale, we look for ways to better utilize our resources. We can start to deploy more application logic to these edge devices, and build a more sophisticated relationship between our powerful cloud servers and the less powerful edge devices.

The soil sensors at the farm are recording long time series of chemical levels. The pressure sensors in a centrifuge are recording months and years of data. The cameras are recording terabytes of video. These huge data sets are correlated with labeled events–such as crop yields. With these large volumes of data, we can construct models for responding to future events. Deep learning can be used to improve systems over time. The models can be trained in the cloud, and deployed to devices at the edge.

Aran Khanna is an AI engineer with Amazon Web Services, and he joins the show to discuss workloads in the cloud and at the edge–how work can be distributed between the two places, and the tools that can be used to build edge deep learning systems more easily. To find all of our shows about machine learning and edge computing, as well as links to learn more about the topics described in the show, download the Software Engineering Daily app for iOS or Android. These apps have all 650 of our episodes in a searchable format–we have recommendations, categories, related links and discussions around the episodes.
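The cloud/edge split described above can be sketched in a few lines: a model is fit on labeled sensor history “in the cloud”, serialized into a small artifact, and then evaluated locally on an “edge” device without a round trip to the server. Everything here is illustrative: the function names, the toy threshold model, and the sample readings are assumptions for the sketch, not an AWS API.

```python
# Illustrative sketch of train-in-cloud, infer-at-edge.
# The threshold "model" and all data below are toy assumptions.
import json
import statistics

def train_in_cloud(readings, labels):
    """Fit a trivial threshold classifier on labeled sensor history."""
    positives = [r for r, y in zip(readings, labels) if y]
    negatives = [r for r, y in zip(readings, labels) if not y]
    threshold = (statistics.mean(positives) + statistics.mean(negatives)) / 2
    # The deployable artifact: small enough to ship to a constrained device.
    return json.dumps({"threshold": threshold})

def infer_at_edge(artifact, reading):
    """Cheap inference the edge device can run locally, offline."""
    model = json.loads(artifact)
    return reading > model["threshold"]

# Cloud side: historical soil-chemical readings with crop-outcome labels.
history = [0.2, 0.3, 0.8, 0.9]
labels = [False, False, True, True]
artifact = train_in_cloud(history, labels)

# Edge side: the sensor acts on a new reading immediately.
print(infer_at_edge(artifact, 0.85))
```

A real system would replace the threshold with a trained deep learning model, but the shape is the same: heavy training where compute is plentiful, and a compact artifact pushed out to where the data is produced.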
It’s all free and also open source–if you are interested in getting involved in our open source community, we have lots of people working on the project and we do our best to be friendly and inviting to new people coming in looking for their first open source project. You can find that project at Github.com/softwareengineeringdaily.

Transcript

Transcript provided by We Edit Podcasts. Software Engineering Daily listeners can go to weeditpodcasts.com/sed to get 20% off the first two months of audio editing and transcription services. Thanks to We Edit Podcasts for partnering with SE Daily. Please click here to view this show’s transcript.

Sponsors

The octopus: a sea creature known for its intelligence and flexibility. Octopus Deploy: a friendly deployment automation tool for deploying applications like .NET apps, Java apps and more. Ask any developer and they’ll tell you it’s never fun pushing code at 5pm on a Friday then crossing your fingers hoping for the best. That’s where Octopus Deploy comes into the picture. Octopus Deploy is a friendly deployment automation tool, taking over where your build/CI server ends. Use Octopus to promote releases on-prem or to the cloud. Octopus integrates with your existing build pipeline–TFS and VSTS, Bamboo, TeamCity, and Jenkins. It integrates with AWS, Azure, and on-prem environments. Reliably and repeatedly deploy your .NET and Java apps and more. If you can package it, Octopus can deploy it! It’s quick and easy to install. Go to Octopus.com to trial Octopus free for 45 days.

Digital Ocean is a reliable, easy-to-use cloud provider. More and more people are finding out about Digital Ocean, and realizing that Digital Ocean is perfect for their application workloads. This year, Digital Ocean is making that even easier, with new node types–a $15 flexible droplet that can mix and match different configurations of CPU and RAM, to get the perfect amount of resources for your application.
There are also CPU-optimized droplets–perfect for highly active frontend servers or CI/CD workloads. And running in the cloud can get expensive, which is why Digital Ocean makes it easy to choose the right size instance. The prices on standard instances have gone down too–you can check out all their new deals by going to do.co/sedaily. And as a bonus to our listeners, you will get $100 in credit over 60 days. Use the credit for hosting or infrastructure–that includes load balancers, object storage, and computation. Get your free $100 credit at do.co/sedaily. Thanks to Digital Ocean for being a sponsor of Software Engineering Daily.

Your company needs to build a new app, but you don’t have the spare engineering resources. There are some technical people in your company who have time to build apps–but they are not engineers. OutSystems is a platform for building low-code apps. As an enterprise grows, it needs more and more apps to support different types of customers and internal employee use cases. OutSystems has everything that you need to build, release, and update your apps without needing an expert engineer. And if you are an engineer, you will be massively productive with OutSystems. Find out how to get started with low-code apps today at OutSystems.com/sedaily. There are videos showing how to use the OutSystems development platform, and testimonials from enterprises like FICO, Mercedes-Benz, and Safeway. OutSystems enables you to quickly build web and mobile applications–whether you are an engineer or not. Check out how to build low-code apps by going to OutSystems.com/sedaily.

The post Edge Deep Learning with Aran Khanna appeared first on Software Engineering Daily.