Fighting Fraud with Machine Learning at Shopify with Solmaz Shahalizadeh - TWiML Talk #60
The podcast you’re about to hear is the first of a series of shows recorded at the Georgian Partners Portfolio Conference last week in Toronto. My guest for this show is Solmaz Shahalizadeh, Director of Merchant Services Algorithms at Shopify. Solmaz gave a great talk at the GPPC focused on her team’s experiences applying machine learning to fight fraud and improve merchant satisfaction. Solmaz and I dig into, step-by-step, the process they used to transition from a legacy, rules-based fraud detection system to a more scalable, flexible one based on machine learning models. We discuss the importance of well-defined project scope; tips and traps when selecting features to train your models; the various models, transformations and pipelines the Shopify team selected; and how they use PMML to make their Python models available to their Ruby-on-Rails web application. The notes for this show can be found at twimlai.com/talk/60. For series info, visit twimlai.com/GPPC2017
30 Oct 2017
Building Conversational Application for Financial Services with Kenneth Conroy - TWiML Talk #61
The podcast you’re about to hear is the second of a series of shows recorded at the Georgian Partners Portfolio Conference last week in Toronto. My guest for this interview is Kenneth Conroy, VP of data science at Vancouver, Canada-based Finn.ai, a company building a chatbot system for banks. Kenneth and I spoke about how Finn.ai built its core conversational platform. We spoke in depth about the requirements and challenges of conversational applications, and how and why they transitioned off a commercial chatbot platform (in their case, API.ai) and built their own custom platform based on deep learning, word2vec and other natural language understanding technologies. The notes for this show can be found at https://twimlai.com/talk/61
1 Nov 2017
Deep Neural Nets for Visual Recognition with Matt Zeiler - TWiML Talk #22
Today we bring you our final interview from backstage at the NYU FutureLabs AI Summit. Our guest this week is Matt Zeiler. Matt graduated from the University of Toronto, where he worked with deep learning researcher Geoffrey Hinton, and went on to earn his PhD in machine learning at NYU, home of Yann LeCun. In 2013, Matt founded Clarifai, a startup whose cloud-based visual recognition system gives developers a way to integrate visual identification into their own products, and whose initial image classification algorithm achieved top-5 results in that year’s ImageNet competition. I caught up with Matt after his talk “From Research to the Real World”. Our conversation focused on the birth and growth of Clarifai, as well as the underlying deep neural network architectures that enable it. If you’ve been listening to the show for a while, you’ve heard me ask several guests how they go about evolving the architectures of their deep neural networks to enhance performance. Well, in this podcast Matt gives the most satisfying answer I’ve received to date by far. Check it out. I think you’ll enjoy it. The show notes can be found at twimlai.com/talk/22.
5 May 2017
Intel Nervana Update + Productizing AI Research with Naveen Rao And Hanlin Tang - TWiML Talk #31
I talked about Intel’s acquisition of Nervana Systems on the podcast when it happened almost a year ago, so I was super excited to have an opportunity to sit down with Nervana co-founder Naveen Rao, who now leads Intel’s newly formed AI Products Group, for the first show in our O'Reilly AI series. We talked about how Intel plans to extend its leadership position in general purpose compute into the AI realm by delivering silicon designed specifically for AI; end-to-end solutions spanning the cloud, enterprise data center, and the edge; and tools that let customers quickly productize and scale AI-based solutions. I also spoke with Hanlin Tang, an algorithms engineer at Intel’s AIPG, about two tools announced at the conference: version 2.0 of Intel Nervana’s deep learning framework Neon, and Nervana Graph, a new toolset for expressing and running deep learning applications as framework- and hardware-independent computational graphs. Nervana Graph in particular sounds like a very interesting project, not to mention a smart move for Intel, and I’d encourage folks to take a look at their GitHub repo. The notes for this show can be found at https://twimlai.com/talk/31
5 Jul 2017
Most Popular Podcasts
Using AI to Diagnose and Treat Neurological Disorders with Archana Venkataraman - #312
Today we’re joined by Archana Venkataraman, John C. Malone Assistant Professor of Electrical and Computer Engineering at Johns Hopkins University. Archana’s research at the Neural Systems Analysis Laboratory focuses on developing tools, frameworks, and algorithms to better understand and treat neurological and psychiatric disorders, including autism, epilepsy, and others. We explore her work applying machine learning to these problems, including biomarker discovery, disorder severity prediction and more.
28 Oct 2019
Towards Artificial General Intelligence with Greg Brockman - TWiML Talk #74
The show is part of a series that I’m really excited about, in part because I’ve been working to bring them to you for quite a while now. The focus of the series is a sampling of the interesting work being done over at OpenAI, the independent AI research lab founded by Elon Musk, Sam Altman and others. In this episode, I’m joined by Greg Brockman, OpenAI Co-Founder and CTO. Greg and I touch on a bunch of topics in the show. We start with the founding and goals of OpenAI, before diving into a discussion on Artificial General Intelligence, what it means to achieve it, and how we go about doing so safely and without bias. We also touch on how to massively scale neural networks and their training, and the evolution of computational frameworks for AI. This conversation is not only informative and nerd alert worthy, but we cover some very important topics, so please take it all in, enjoy, and send along your feedback! To find the notes for this show, visit twimlai.com/talk/74. For more info on this series, visit twimlai.com/openai
28 Nov 2017
Marrying Physics-Based and Data-Driven ML Models with Josh Bloom - TWiML Talk #42
Recently I had a chance to catch up with a friend and friend of the show, Josh Bloom, vice president of data & analytics at GE Digital. If you’ve been listening for a while, you already know that Josh was on the show around this time last year, just prior to the acquisition of his company Wise.io by GE Digital. It was great to catch up with Josh on his journey within GE, and the work his team is doing around Industrial AI, now that they’re part of one of the world’s biggest industrial companies. We talk about some really interesting things in this show, including how his team is using autoencoders to create training datasets, and how they incorporate knowledge of physics and physical systems into their machine learning models. The notes for this show can be found at twimlai.com/talk/42.
14 Aug 2017
Trends in Machine Learning & Deep Learning with Zack Lipton - #334
Today we kick off our 2019 AI Rewind Series joined by Zack Lipton, Professor at CMU. You might remember Zack from our conversation earlier this year, “Fairwashing” and the Folly of ML Solutionism. In today's conversation, Zack recaps advancements across the vast fields of Machine Learning and Deep Learning, including trends, tools, research papers and more. We want to hear from you! Send your thoughts on the year that was 2019 below in the comments, or via Twitter @samcharrington or @twimlai.
30 Dec 2019
Automated Machine Learning with Erez Barak - #323
Today we’re joined by Erez Barak, Partner Group Manager of Azure ML at Microsoft. In our conversation, Erez gives us a full breakdown of his AutoML philosophy, and his take on the AutoML space, its role, and its importance. We also discuss the application of AutoML as a contributor to the end-to-end data science process, which Erez breaks down into three key areas: featurization, learner/model selection, and tuning/optimizing hyperparameters. We also discuss post-deployment AutoML use cases, and much more!
6 Dec 2019
Machine Learning at GitHub with Omoju Miller - #313
Today we’re joined by Omoju Miller, a senior machine learning engineer at GitHub. In our conversation, we discuss her dissertation, Hiphopathy, A Socio-Curricular Study of Introductory Computer Science; her work as an inaugural member of the GitHub machine learning team; and her two presentations at TensorFlow World, “Why is machine learning seeing exponential growth in its communities” and “Automating your developer workflow on GitHub with TensorFlow.”
31 Oct 2019
Understanding Deep Neural Nets with Dr. James McCaffrey - TWiML Talk #13
My guest this week is Dr. James McCaffrey, research engineer at Microsoft Research. James and I cover a ton of ground in this conversation, including recurrent neural nets (RNNs), convolutional neural nets (CNNs), long short term memory (LSTM) networks, residual networks (ResNets), generative adversarial networks (GANs), and more. We also discuss neural network architecture and promising alternative approaches such as symbolic computation and particle swarm optimization. The show notes can be found at twimlai.com/talk/13.
3 Mar 2017
Practical Deep Learning with Rachel Thomas - TWiML Talk #138
In this episode, I'm joined by Rachel Thomas, founder and researcher at Fast AI. If you’re not familiar with Fast AI, the company offers a series of courses including Practical Deep Learning for Coders, Cutting Edge Deep Learning for Coders and Rachel’s Computational Linear Algebra course. The courses are designed to make deep learning more accessible to those without the extensive math backgrounds some other courses assume. Rachel and I cover a lot of ground in this conversation, starting with the philosophy and goals behind the Fast AI courses. We also cover Fast AI’s recent decision to switch their courses from TensorFlow to PyTorch, the reasons for this, and the lessons they’ve learned in the process. We discuss the role of the Fast AI deep learning library as well, and how it was recently used to help their team achieve top results on a popular industry benchmark, improving on training time and training cost by a factor of more than ten. The notes for this show can be found at twimlai.com/talk/138
14 May 2018
Philosophy of Intelligence with Matthew Crosby - TWiML Talk #91
This week on the podcast we’re featuring a series of conversations from the NIPS conference in Long Beach, California. I attended a bunch of talks and learned a ton, organized an impromptu roundtable on Building AI Products, and met a bunch of great people, including some former TWiML Talk guests. This time around I'm joined by Matthew Crosby, a researcher at Imperial College London, working on the Kinds of Intelligence Project. Matthew joined me after the NIPS Symposium of the same name, an event that brought researchers from a variety of disciplines together towards three aims: a broader perspective of the possible types of intelligence beyond human intelligence, better measurements of intelligence, and a more purposeful analysis of where progress should be made in AI to best benefit society. Matthew’s research explores intelligence from a philosophical perspective, exploring ideas like predictive processing and controlled hallucination, and how these theories of intelligence impact the way we approach creating artificial intelligence. This was a very interesting conversation; I'm sure you’ll enjoy it.
21 Dec 2017
Building an Autonomous Knowledge Graph with Mike Tung - #319
Today we’re joined by Mike Tung, Founder and CEO of Diffbot. In our conversation, we discuss Diffbot’s Knowledge Graph, including how it differs from more mainstream use cases like Google Search and Microsoft Bing. We also discuss the developer experience with the knowledge graph and other tools, like Extraction API and Crawlbot, challenges like knowledge fusion, balancing being a research company that is also commercially viable, and how they approach their role in the research community.
21 Nov 2019
Systems and Software for Machine Learning at Scale with Jeff Dean - TWiML Talk #124
In this episode I’m joined by Jeff Dean, Google Senior Fellow and head of the company’s deep learning research team Google Brain, who I had a chance to sit down with last week at the Googleplex in Mountain View. As you’ll hear, I was very excited for this interview, because so many of Jeff’s contributions since he started at Google in ‘99 have touched my life and work. In our conversation, Jeff and I dig into a bunch of the core machine learning innovations we’ve seen from Google. Of course we discuss TensorFlow, and its origins and evolution at Google. We also explore AI acceleration hardware, including TPU v1, v2 and future directions from Google and the broader market in this area. We talk through the machine learning toolchain, including some things that Googlers might take for granted, and where the recently announced Cloud AutoML fits in. We also discuss Google’s process for mapping problems across a variety of domains to deep learning, and much, much more. This was definitely one of my favorite conversations, and I'm pumped to be able to share it with you. The notes for this show can be found at twimlai.com/talk/124.
2 Apr 2018
The Fastai v1 Deep Learning Framework with Jeremy Howard - TWiML Talk #186
In today's episode we're presenting a special conversation with Jeremy Howard, founder and researcher at Fast.ai. This episode is being released today in conjunction with the company’s announcement of version 1.0 of their fastai library at the inaugural PyTorch DevCon in San Francisco. In our conversation, we dive into the new library, exploring why it’s important and what’s changed, the unique way in which it was developed, what it means for the future of the fast.ai courses, and much more!
2 Oct 2018
Deep Reinforcement Learning Primer and Research Frontiers with Kamyar Azizzadenesheli - TWiML Talk #177
Today we’re joined by Kamyar Azizzadenesheli, PhD student at the University of California, Irvine, who joins us to review the core elements of RL, along with a pair of his RL-related papers: “Efficient Exploration through Bayesian Deep Q-Networks” and “Sample-Efficient Deep RL with Generative Adversarial Tree Search.” To skip the Deep Reinforcement Learning primer conversation and jump to the research discussion, skip to the 34:30 mark of the episode. Show notes at https://twimlai.com/talk/177
30 Aug 2018
Reinforcement Learning Deep Dive with Pieter Abbeel - TWiML Talk #28
This week our guest is Pieter Abbeel, Assistant Professor at UC Berkeley, Research Scientist at OpenAI, and Cofounder of Gradescope. Pieter has an extensive background in AI research, going way back to his days as Andrew Ng’s first PhD student at Stanford. His research today is focused on deep learning for robotics. During this conversation, Pieter and I really dig into reinforcement learning, a technique for allowing robots (or AIs) to learn through their own trial and error. Nerd alert!! This conversation explores cutting edge research with one of the leading researchers in the field and, as a result, it gets pretty technical at times. I try to uplevel it when I can keep up myself, so hang in there. I promise that you’ll learn a ton if you keep with it. The notes for this show can be found at twimlai.com/talk/28
17 Jun 2017
Bridging the Patient-Physician Gap with ML and Expert Systems w/ Xavier Amatriain - #316
Today we’re joined by return guest Xavier Amatriain, Co-founder and CTO of Curai, whose goal is to make healthcare accessible and scalable while bringing down costs. In our conversation, we touch on the shortcomings of traditional primary care, how Curai fills that role, and some of the unique challenges his team faces in applying ML in the healthcare space. We also discuss the use of expert systems, how they train them, and how NLP projects like BERT and GPT-2 fit into what they’re building.
11 Nov 2019
Growth Hacking Sports w/ Machine Learning with Noah Gift - TWiML Talk #158
In this episode of our AI in Sports series I'm joined by Noah Gift, Founder and Consulting CTO at Pragmatic Labs and professor at UC Davis. Noah and I discuss some of his recent work in using social media to predict which players hold the most on-court value, and how this work could lead to more complete approaches to player valuation. Check out the show notes at twimlai.com/talk/158
28 Jun 2018