
Rank #70 in Technology category

Technology
News
Tech News
Science

The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Updated 2 months ago


Machine learning and artificial intelligence are dramatically changing the way businesses operate and people live. The TWIML AI Podcast brings the top minds and ideas from the world of ML and AI to a broad and influential community of ML/AI researchers, data scientists, engineers, and tech-savvy business and IT leaders. It is hosted by Sam Charrington, a sought-after industry analyst, speaker, commentator, and thought leader. Technologies covered include machine learning, artificial intelligence, deep learning, natural language processing, neural networks, analytics, computer science, data science, and more.


iTunes Ratings

288 Ratings
Average rating distribution: 5 stars: 252, 4 stars: 16, 3 stars: 8, 2 stars: 5, 1 star: 7

Excellent Perspectives in Machine Learning

By Joel Sapp - Feb 26 2019
Love this podcast. Give it a try.

Awesome podcast

By daniel432! - Aug 13 2018
Love this podcast! The perspectives from experts are great


Latest release on Jul 30, 2020


Rank #1: Fighting Fraud with Machine Learning at Shopify with Solmaz Shahalizadeh - TWiML Talk #60

The podcast you’re about to hear is the first of a series of shows recorded at the Georgian Partners Portfolio Conference last week in Toronto. My guest for this show is Solmaz Shahalizadeh, Director of Merchant Services Algorithms at Shopify. Solmaz gave a great talk at the GPPC focused on her team’s experiences applying machine learning to fight fraud and improve merchant satisfaction. Solmaz and I walk step by step through the process her team used to transition from a legacy, rules-based fraud detection system to a more scalable, flexible one based on machine learning models. We discuss the importance of well-defined project scope; tips and traps when selecting features to train your models; the various models, transformations, and pipelines the Shopify team selected; and how they use PMML to make their Python models available to their Ruby on Rails web application. The notes for this show can be found at twimlai.com/talk/60. For series info, visit twimlai.com/GPPC2017.

Oct 30 2017

37mins


Rank #2: Building Conversational Application for Financial Services with Kenneth Conroy - TWiML Talk #61

The podcast you’re about to hear is the second of a series of shows recorded at the Georgian Partners Portfolio Conference last week in Toronto. My guest for this interview is Kenneth Conroy, VP of data science at Vancouver, Canada-based Finn.ai, a company building a chatbot system for banks. Kenneth and I spoke about how Finn.ai built its core conversational platform. We spoke in depth about the requirements and challenges of conversational applications, and how and why they transitioned off of a commercial chatbot platform (in their case, API.ai) and built their own custom platform based on deep learning, word2vec, and other natural language understanding technologies. The notes for this show can be found at https://twimlai.com/talk/61

Nov 01 2017

38mins


Rank #3: Deep Neural Nets for Visual Recognition with Matt Zeiler - TWiML Talk #22

Today we bring you our final interview from backstage at the NYU FutureLabs AI Summit. Our guest this week is Matt Zeiler. Matt graduated from the University of Toronto, where he worked with deep learning researcher Geoffrey Hinton, and went on to earn his PhD in machine learning at NYU, home of Yann LeCun. In 2013, Matt founded Clarifai, a startup whose cloud-based visual recognition system gives developers a way to integrate visual identification into their own products, and whose initial image classification algorithm achieved top-5 results in that year’s ImageNet competition. I caught up with Matt after his talk “From Research to the Real World”. Our conversation focused on the birth and growth of Clarifai, as well as the underlying deep neural network architectures that enable it. If you’ve been listening to the show for a while, you’ve heard me ask several guests how they go about evolving the architectures of their deep neural networks to enhance performance. Well, in this podcast Matt gives the most satisfying answer I’ve received to date by far. Check it out. I think you’ll enjoy it. The show notes can be found at twimlai.com/talk/22.

May 05 2017

24mins


Rank #4: Understanding the COVID-19 Data Quality Problem with Sherri Rose - #374


Today we’re joined by Sherri Rose, Associate Professor at Harvard Medical School. 

Sherri’s research centers around developing and integrating statistical machine learning approaches to improve human health. We cover a lot of ground in our conversation, including the intersection of her research with the current COVID-19 pandemic, the importance of quality in datasets and rigor when publishing papers, and the pitfalls of using causal inference.

We also touch on Sherri’s work in algorithmic fairness, including the necessary emphasis being put on studying issues of fairness, the shift she’s seen in fairness conferences covering these issues in relation to healthcare research, and her paper “Fair Regression for Health Care Spending.”

Check out the complete show notes for this episode at twimlai.com/talk/374.

May 11 2020

44mins


Rank #5: Intel Nervana Update + Productizing AI Research with Naveen Rao And Hanlin Tang - TWiML Talk #31

I talked about Intel’s acquisition of Nervana Systems on the podcast when it happened almost a year ago, so I was super excited to have an opportunity to sit down with Nervana co-founder Naveen Rao, who now leads Intel’s newly formed AI Products Group, for the first show in our O'Reilly AI series. We talked about how Intel plans to extend its leadership position in general-purpose compute into the AI realm by delivering silicon designed specifically for AI; end-to-end solutions spanning the cloud, enterprise data center, and the edge; and tools that let customers quickly productize and scale AI-based solutions. I also spoke with Hanlin Tang, an algorithms engineer at Intel’s AIPG, about two tools announced at the conference: version 2.0 of Intel Nervana’s deep learning framework Neon, and Nervana Graph, a new toolset for expressing and running deep learning applications as framework- and hardware-independent computational graphs. Nervana Graph in particular sounds like a very interesting project, not to mention a smart move for Intel, and I’d encourage folks to take a look at their GitHub repo. The show notes for this episode can be found at https://twimlai.com/talk/31

Jul 05 2017

42mins


Rank #6: Using AI to Diagnose and Treat Neurological Disorders with Archana Venkataraman - #312


Today we’re joined by Archana Venkataraman, John C. Malone Assistant Professor of Electrical and Computer Engineering at Johns Hopkins University and a recipient of the MIT Technology Review 35 Innovators Under 35 award.

Archana’s research at the Neural Systems Analysis Laboratory focuses on developing tools, frameworks, and algorithms to better understand and treat neurological and psychiatric disorders, including autism, epilepsy, and others. In our conversation, we explore her lab’s work in applying machine learning to these problems, including biomarker discovery and disorder severity prediction, as well as some of the various techniques and frameworks used.

The complete show notes for this episode can be found at twimlai.com/talk/312.

Oct 28 2019

47mins


Rank #7: Towards Artificial General Intelligence with Greg Brockman - TWiML Talk #74

The show is part of a series that I’m really excited about, in part because I’ve been working to bring it to you for quite a while now. The focus of the series is a sampling of the interesting work being done over at OpenAI, the independent AI research lab founded by Elon Musk, Sam Altman, and others. In this episode, I’m joined by Greg Brockman, OpenAI Co-Founder and CTO. Greg and I touch on a bunch of topics in the show. We start with the founding and goals of OpenAI before diving into a discussion of artificial general intelligence, what it means to achieve it, and how we can go about doing so safely and without bias. We also touch on how to massively scale neural networks and their training, and the evolution of computational frameworks for AI. This conversation is not only informative and nerd-alert worthy, but we also cover some very important topics, so please take it all in, enjoy, and send along your feedback! To find the notes for this show, visit twimlai.com/talk/74. For more info on this series, visit twimlai.com/openai

Nov 28 2017

58mins


Rank #8: Marrying Physics-Based and Data-Driven ML Models with Josh Bloom - TWiML Talk #42

Recently I had a chance to catch up with a friend and friend of the show, Josh Bloom, vice president of data & analytics at GE Digital. If you’ve been listening for a while, you already know that Josh was on the show around this time last year, just prior to the acquisition of his company Wise.io by GE Digital. It was great to catch up with Josh on his journey within GE, and the work his team is doing around industrial AI now that they’re part of one of the world’s biggest industrial companies. We talk about some really interesting things in this show, including how his team is using autoencoders to create training datasets, and how they incorporate knowledge of physics and physical systems into their machine learning models. The notes for this show can be found at twimlai.com/talk/42.

Aug 14 2017

55mins


Rank #9: Trends in Machine Learning & Deep Learning with Zack Lipton - #334


Today we kick off our 2019 AI Rewind Series joined by Zack Lipton, a jointly appointed Professor in the Tepper School of Business and the Machine Learning Department at CMU.

You might remember Zack from our conversation earlier this year, “Fairwashing” and the Folly of ML Solutionism, which you can find at twimlai.com/talk/285. In our conversation, Zack recaps advancements across the vast fields of Machine Learning and Deep Learning, including trends, tools, research papers and more.

We want to hear from you! Send your thoughts on the year that was 2019 below in the comments, or via Twitter @samcharrington or @twimlai.

To get the complete show notes for this episode, head over to twimlai.com/talk/334

Dec 30 2019

1hr 19mins


Rank #10: Automated Machine Learning with Erez Barak - #323


In the final episode of our Azure ML series, we’re joined by Erez Barak, Partner Group Manager of Azure ML at Microsoft. In our conversation, we discuss:

  • Erez’s AutoML philosophy, including how he defines “true AutoML,” and his take on the AutoML space, its role, and its importance.
  • The application of AutoML as a contributor to the end-to-end data science process, which Erez breaks down into three key areas: featurization, learner/model selection, and hyperparameter tuning/optimization.
  • Post-deployment AutoML use cases and other areas under the AutoML umbrella that are currently generating excitement.

Check out the complete show notes at twimlai.com/talk/323!

Dec 06 2019

43mins


Rank #11: Machine Learning at GitHub with Omoju Miller - #313


Today we’re joined by Omoju Miller, a Sr. machine learning engineer at GitHub. In our conversation, we discuss:

  • Her dissertation, Hiphopathy: A Socio-Curricular Study of Introductory Computer Science.
  • Her work as an inaugural member of the Github machine learning team
  • Her two presentations at Tensorflow World, “Why is machine learning seeing exponential growth in its communities” and “Automating your developer workflow on GitHub with Tensorflow.”

The complete show notes for this episode can be found at twimlai.com/talk/313

Oct 31 2019

43mins


Rank #12: Understanding Deep Neural Nets with Dr. James McCaffrey - TWiML Talk #13

My guest this week is Dr. James McCaffrey, research engineer at Microsoft Research. James and I cover a ton of ground in this conversation, including recurrent neural nets (RNNs), convolutional neural nets (CNNs), long short term memory (LSTM) networks, residual networks (ResNets), generative adversarial networks (GANs), and more. We also discuss neural network architecture and promising alternative approaches such as symbolic computation and particle swarm optimization. The show notes can be found at twimlai.com/talk/13.

Mar 03 2017

1hr 18mins


Rank #13: Practical Deep Learning with Rachel Thomas - TWiML Talk #138

In this episode, I’m joined by Rachel Thomas, founder and researcher at fast.ai. If you’re not familiar with fast.ai, the company offers a series of courses including Practical Deep Learning for Coders, Cutting Edge Deep Learning for Coders, and Rachel’s Computational Linear Algebra course. The courses are designed to make deep learning more accessible to those without the extensive math backgrounds some other courses assume. Rachel and I cover a lot of ground in this conversation, starting with the philosophy and goals behind the fast.ai courses. We also cover fast.ai’s recent decision to switch their courses from TensorFlow to PyTorch, the reasons for this, and the lessons they’ve learned in the process. We discuss the role of the fastai deep learning library as well, and how it was recently used to help their team achieve top results on a popular industry benchmark, improving training time and training cost by a factor of more than ten. The notes for this show can be found at twimlai.com/talk/138

May 14 2018

45mins


Rank #14: Philosophy of Intelligence with Matthew Crosby - TWiML Talk #91

This week on the podcast we’re featuring a series of conversations from the NIPS conference in Long Beach, California. I attended a bunch of talks and learned a ton, organized an impromptu roundtable on Building AI Products, and met a bunch of great people, including some former TWiML Talk guests. This time around I’m joined by Matthew Crosby, a researcher at Imperial College London working on the Kinds of Intelligence Project. Matthew joined me after the NIPS symposium of the same name, an event that brought researchers from a variety of disciplines together toward three aims: a broader perspective on the possible types of intelligence beyond human intelligence, better measurements of intelligence, and a more purposeful analysis of where progress should be made in AI to best benefit society. Matthew’s research explores intelligence from a philosophical perspective, exploring ideas like predictive processing and controlled hallucination, and how these theories of intelligence impact the way we approach creating artificial intelligence. This was a very interesting conversation; I’m sure you’ll enjoy it.

Dec 21 2017

31mins


Rank #15: Building an Autonomous Knowledge Graph with Mike Tung - #319


Today we’re joined by Mike Tung, Founder, and CEO of Diffbot. In our conversation, we discuss: 

  • Their various tools, including their Knowledge Graph, Extraction API, and CrawlBot.
  • How Knowledge Graph was inspired by Imagenet, how it was built, and how it differs from other, more mainstream knowledge graphs like Google Search and MSFT Bing.
  • How they balance being a research company that is also commercially viable.
  • The developer experience with their tools, and challenges faced.

The complete show notes can be found at twimlai.com/talk/319.

Nov 21 2019

44mins


Rank #16: Systems and Software for Machine Learning at Scale with Jeff Dean - TWiML Talk #124

In this episode I’m joined by Jeff Dean, Google Senior Fellow and head of the company’s deep learning research team Google Brain, who I had a chance to sit down with last week at the Googleplex in Mountain View. As you’ll hear, I was very excited for this interview, because so many of Jeff’s contributions since he started at Google in ‘99 have touched my life and work. In our conversation, Jeff and I dig into a bunch of the core machine learning innovations we’ve seen from Google. Of course we discuss TensorFlow, and its origins and evolution at Google. We also explore AI acceleration hardware, including TPU v1, v2 and future directions from Google and the broader market in this area. We talk through the machine learning toolchain, including some things that Googlers might take for granted, and where the recently announced Cloud AutoML fits in. We also discuss Google’s process for mapping problems across a variety of domains to deep learning, and much, much more. This was definitely one of my favorite conversations, and I'm pumped to be able to share it with you. The notes for this show can be found at twimlai.com/talk/124.

Apr 02 2018

56mins


Rank #17: The Fastai v1 Deep Learning Framework with Jeremy Howard - TWiML Talk #186


In today's episode we’ll be taking a break from our Strata Data conference series and presenting a special conversation with Jeremy Howard, founder and researcher at Fast.ai.

Fast.ai is a company many of our listeners are quite familiar with due to their popular deep learning course. This episode is being released today in conjunction with the company’s announcement of version 1.0 of their fastai library at the inaugural PyTorch DevCon in San Francisco.

Jeremy and I cover a ton of ground in this conversation. Of course, we dive into the new library and explore why it’s important and what’s changed. We also explore the unique way in which it was developed and what it means for the future of the fast.ai courses. Jeremy shares a ton of great insights and lessons learned in this conversation, and mentions a bunch of really interesting-sounding papers.

The complete show notes, including links to the fastai library, can be found at twimlai.com/talk/186.

Oct 02 2018

1hr 11mins


Rank #18: Deep Reinforcement Learning Primer and Research Frontiers with Kamyar Azizzadenesheli - TWiML Talk #177


Today we’re joined by Kamyar Azizzadenesheli, PhD student at the University of California, Irvine, and visiting researcher at Caltech where he works with Anima Anandkumar, who you might remember from TWiML Talk 142.

We begin with a reinforcement learning primer of sorts, in which we review the core elements of RL, along with quite a few examples to help get you up to speed. We then discuss a pair of Kamyar’s RL-related papers: “Efficient Exploration through Bayesian Deep Q-Networks” and “Sample-Efficient Deep RL with Generative Adversarial Tree Search.” In addition to discussing Kamyar’s work, we also chat a bit of the general landscape of RL research today. So whether you’re new to the field or want to dive into cutting-edge reinforcement learning research with us, this podcast is here for you!

If you'd like to skip the Deep Reinforcement Learning primer portion of this and jump to the research discussion, skip ahead to the 34:30 mark of the episode.

Aug 30 2018

1hr 35mins


Rank #19: Reinforcement Learning Deep Dive with Pieter Abbeel - TWiML Talk #28

This week our guest is Pieter Abbeel, Assistant Professor at UC Berkeley, Research Scientist at OpenAI, and Cofounder of Gradescope. Pieter has an extensive background in AI research, going way back to his days as Andrew Ng’s first PhD student at Stanford. His research today is focused on deep learning for robotics. During this conversation, Pieter and I really dig into reinforcement learning, a technique for allowing robots (or AIs) to learn through their own trial and error. Nerd alert!! This conversation explores cutting-edge research with one of the leading researchers in the field and, as a result, it gets pretty technical at times. I try to uplevel it when I can keep up myself, so hang in there. I promise that you’ll learn a ton if you stick with it. The notes for this show can be found at twimlai.com/talk/28

Jun 17 2017

54mins


Rank #20: Bridging the Patient-Physician Gap with ML and Expert Systems w/ Xavier Amatriain - #316


Today we’re joined by return guest Xavier Amatriain, Co-founder and CTO of Curai. In our conversation, we discuss:

  • Curai’s goal of providing the world’s best primary care to patients via their smartphones, and how ML & AI can bring down costs while making healthcare more accessible and scalable.
  • The shortcomings of traditional primary care, and how Curai fills that role.
  • Some of the unique challenges his team faces in applying this use case in the healthcare space.
  • Their use of expert systems, and how they develop and train their models with synthetic data through noise injection.
  • How NLP projects like BERT, Transformer, and GPT-2 fit into what Curai is building.

Check out the complete show notes page at twimlai.com/talk/316

Nov 11 2019

39mins


ML and Epidemiology with Elaine Nsoesie - #396


Today we continue our ICML series with Elaine Nsoesie, assistant professor at Boston University. 

Elaine presented a keynote talk at the ML for Global Health workshop at ICML 2020, where she shared her research centered around data-driven epidemiology. In our conversation, we discuss the different ways that machine learning applications can be used to address global health issues, including use cases like infectious disease surveillance via hospital parking lot capacity, and tracking search data for changes in health behavior in African countries. We also discuss COVID-19 epidemiology, focusing on the importance of recognizing how the disease is affecting people of different races, ethnicities, and economic backgrounds.

To follow along with our 2020 ICML Series, visit twimlai.com/icml20. The complete show notes for this episode can be found at twimlai.com/talk/396.

Jul 30 2020

48mins


Language (Technology) Is Power: Exploring the Inherent Complexity of NLP Systems with Hal Daumé III - #395


Today we’re joined by Hal Daumé III, professor at the University of Maryland, Senior Principal Researcher at Microsoft Research, and Co-Chair of the 2020 ICML Conference.

We had the pleasure of catching up with Hal ahead of this year's ICML to discuss his research at the intersection of bias, fairness, NLP, and the effects language has on machine learning models. 

We explore language in two categories as they appear in machine learning models and systems: (1) How we use language to interact with the world, and (2) how we “do” language. We also discuss ways to better incorporate domain experts into ML system development, and Hal’s experience as ICML Co-Chair.

Follow along with our ICML coverage at twimlai.com/icml20. The complete show notes for this episode can be found at twimlai.com/talk/395.

Jul 27 2020

1hr 4mins


Graph ML Research at Twitter with Michael Bronstein - #394


Today we’re excited to be joined by return guest Michael Bronstein, Professor at Imperial College London, and Head of Graph Machine Learning at Twitter. We last spoke with Michael at NeurIPS in 2017 about geometric deep learning.

Since then, his research focus has slightly shifted to exploring graph neural networks. In our conversation, we discuss the evolution of the graph machine learning space, contextualizing Michael’s work on geometric deep learning and research on non-Euclidean unstructured data. We also talk about his new role at Twitter and some of the research challenges he’s faced, including scalability and working with dynamic graphs. Michael also dives into his work on differential graph modules for graph CNNs, and the various applications of this work.

The complete show notes for this episode can be found at twimlai.com/talk/394.

Jul 23 2020

56mins


Panel: The Great ML Language (Un)Debate! - #393


Today we’re excited to bring ‘The Great ML Language (Un)Debate’ to the podcast! In the latest edition of our series of live discussions, we brought together experts and enthusiasts representing an array of both popular and emerging programming languages for machine learning. In the discussion, we explored the strengths, weaknesses, and approaches offered by Clojure, JavaScript, Julia, Probabilistic Programming, Python, R, Scala, and Swift. We round out the session with an audience Q&A (58:28), covering topics including favorite secondary languages, what languages pair well, quite a few questions about C++, and much more. 

Head over to twimlai.com/talk/393 for more information about our panelists!

Jul 20 2020

1hr 33mins


What the Data Tells Us About COVID-19 with Eric Topol - #392


Today we’re joined by Eric Topol, Director & Founder of the Scripps Research Translational Institute, and author of the book Deep Medicine. 

Eric is also one of the most trusted voices on the COVID-19 pandemic, giving those that follow his Twitter account daily updates on the disease and its impact, from both a biological and public health perspective. We had the pleasure of catching up with Eric to talk through several Coronavirus-related topics, including what we’ve learned since the pandemic began and the role of technology, including ML and AI, in understanding and preventing the spread of the disease. We also explore the broader opportunity for medical applications of AI, the promise they offer for personalized medicine, and how techniques like federated learning and homomorphic encryption can offer more privacy in healthcare.

The complete show notes for this episode can be found at twimlai.com/talk/392.

Jul 16 2020

41mins


The Case for Hardware-ML Model Co-design with Diana Marculescu - #391


Today we’re joined by Diana Marculescu, Department Chair and Professor of Electrical and Computer Engineering at University of Texas at Austin. 

We caught up with Diana to discuss her work on hardware-aware machine learning. In particular, we explore her keynote, “Putting the “Machine” Back in Machine Learning: The Case for Hardware-ML Model Co-design” from the Efficient Deep Learning in Computer Vision workshop at this year’s CVPR conference. 

In our conversation, we explore how her research group is focusing on making ML models more efficient so that they run better on current hardware systems, and what components and techniques they’re using to achieve true co-design. We also discuss her work with Neural architecture search, how this fits into the edge vs cloud conversation, and her thoughts on the longevity of deep learning research. 

The complete show notes for this episode can be found at twimlai.com/talk/391.

Jul 13 2020

44mins


Computer Vision for Remote AR with Flora Tasse - #390


Today we conclude our CVPR coverage joined by Flora Tasse, Head of Computer Vision & AI Research at Streem. 

Flora, a keynote speaker at the AR/VR workshop at CVPR, walks us through some of the interesting use cases at the intersection of AI, computer vision, and augmented reality technology. In our conversation, we discuss how Flora’s interest in a career in AR/VR developed, the origin of her company Selerio, which was eventually acquired by Streem, and her current research.

We also spend time exploring the difficulties associated with building 3D mesh environments, extracting metadata from those environments, the challenges of pose estimation, and other papers that caught Flora’s eye from the conference.

The complete show notes for this episode can be found at twimlai.com/talk/390. For our complete CVPR series, head to twimlai.com/cvpr20.

Jul 09 2020

40mins


Deep Learning for Automatic Basketball Video Production with Julian Quiroga - #389


Today we return to our coverage of the 2020 CVPR conference with a conversation with Julian Quiroga, a Computer Vision Team Lead at Genius Sports.

Julian presented his recent paper “As Seen on TV: Automatic Basketball Video Production using Gaussian-based Actionness and Game States Recognition” at the CVSports workshop. We jump right into the paper, discussing details like camera setups and angles, detection and localization of the figures on the court (players, refs, and of course, the ball), and the role that deep learning plays in the process. We also break down how this work applies to different sports, and the ways that Julian is looking to improve on this work for better accuracy. 

The complete show notes for this episode can be found at twimlai.com/talk/389. To follow along with our entire CVPR series, visit twimlai.com/cvpr20.

Thanks again to our friends at Qualcomm for their support of the podcast and sponsorship of this series!

Jul 06 2020

42mins


How External Auditing is Changing the Facial Recognition Landscape with Deb Raji - #388


Today we’re taking a break from our CVPR coverage to bring you this interview with Deb Raji, a Technology Fellow at the AI Now Institute at New York University. 

Over the past week or two, there have been quite a few major news stories in the AI community, including the self-imposed moratoria on facial recognition technology from Amazon, IBM, and Microsoft. There was also the release of PULSE, a controversial computer vision model that ultimately sparked a Twitter firestorm involving Yann LeCun and AI ethics researchers, including friend of the show Timnit Gebru. The controversy echoed into the broader AI community, eventually leading to LeCun’s departure from Twitter.

In our conversation with Deb, we dig into these stories in depth, discussing the origins of Deb’s work on the Gender Shades project, how subsequent work put a spotlight on the potential harms of facial recognition technology, and who holds responsibility for dealing with underlying bias issues in datasets.

The complete show notes for this episode can be found at twimlai.com/talk/388.

Jul 02 2020

1hr 21mins

AI for High-Stakes Decision Making with Hima Lakkaraju - #387

Today we’re joined by Hima Lakkaraju, an Assistant Professor at Harvard University with appointments in both the Business School and Department of Computer Science. 

At CVPR, Hima was a keynote speaker at the Fair, Data-Efficient and Trusted Computer Vision Workshop, where she spoke on Understanding the Perils of Black Box Explanations. Hima talks us through her presentation, which focuses on the unreliability of perturbation-based explainability techniques such as LIME and SHAP, how attacks on these explanation methods can be carried out, and what those attacks look like. We also discuss people's tendency to trust computer systems and their outputs, her thoughts on collaborator (and former TWIML guest) Cynthia Rudin's position that we shouldn't use black-box models for high-stakes decisions, and much more.
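For context, here is a minimal, hypothetical sketch of the perturbation-based idea behind explainers like LIME (the function name and simplifications are ours, not from the episode; real LIME samples in an interpretable representation and weights samples with a locality kernel):

```python
import numpy as np

def perturbation_explanation(predict, x, n_samples=1000, scale=0.1, seed=0):
    """Explain `predict` near the point `x` by fitting a local linear surrogate.

    Returns one weight per feature; larger |weight| = locally more important.
    It is this reliance on (potentially off-distribution) perturbed samples
    that the attacks discussed in the episode exploit.
    """
    rng = np.random.default_rng(seed)
    # 1) Perturb the input with small Gaussian noise around x.
    perturbed = x + rng.normal(0.0, scale, size=(n_samples, x.size))
    # 2) Query the black-box model on the perturbed points.
    y = predict(perturbed)
    # 3) Fit a linear model (least squares, with intercept) as the explanation.
    A = np.hstack([perturbed, np.ones((n_samples, 1))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[:-1]  # drop the intercept term

# Example: a model that only uses feature 0 gets weight ~2 there, ~0 elsewhere.
w = perturbation_explanation(lambda X: 2.0 * X[:, 0], np.array([1.0, 1.0]))
```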

For the complete show notes, visit twimlai.com/talk/387. For our continuing CVPR Coverage, visit twimlai.com/cvpr20.

Jun 29 2020

45mins

Invariance, Geometry and Deep Neural Networks with Pavan Turaga - #386

We continue our CVPR coverage with today’s guest, Pavan Turaga, Associate Professor at Arizona State University, with dual appointments as the Director of the Geometric Media Lab, and Interim Director of the School of Arts, Media, and Engineering.

Pavan gave a keynote presentation at the Differential Geometry in CV and ML Workshop, speaking on Revisiting Invariants with Geometry and Deep Learning. In our conversation, we go in-depth on Pavan’s research integrating physics-based principles into computer vision. We also discuss the context of the term “invariant,” and the role of architectural, loss function, and data constraints on models. Pavan also contextualizes this work in relation to Hinton’s similar Capsule Network research.

Check out the complete show notes for this episode at twimlai.com/talk/386.

Jun 25 2020

47mins

Channel Gating for Cheaper and More Accurate Neural Nets with Babak Ehteshami Bejnordi - #385

Today we’re joined by Babak Ehteshami Bejnordi, a Research Scientist at Qualcomm.

Babak works closely with former guest Max Welling and is currently focused on conditional computation, which is the main driver for today’s conversation. We dig into a few papers in great detail, including one from this year’s CVPR conference, “Conditional Channel Gated Networks for Task-Aware Continual Learning.”

We also discuss the paper “TimeGate: Conditional Gating of Segments in Long-range Activities,” and another paper from this year’s ICLR conference, “Batch-Shaping for Learning Conditional Channel Gated Networks.” We cover how gates are used to drive efficiency and accuracy while decreasing model size, how this research manifests in actual products, and more!
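To make the gating idea concrete, here is a hedged, illustrative sketch (our own simplification, not code from the papers): a small per-input head scores each convolutional channel, and channels whose gate is off can be skipped entirely, which is where the compute savings come from. The papers train gates end-to-end (e.g. with batch-shaping); here we simply threshold a sigmoid at inference time.

```python
import numpy as np

def channel_gate(features, gate_logits, threshold=0.5):
    """features: (C, H, W) activations; gate_logits: (C,) per-channel scores.

    Returns the gated feature map and the binary on/off mask. In a real
    conditionally gated network, masked-off channels would not be computed
    at all, reducing FLOPs per input.
    """
    gates = 1.0 / (1.0 + np.exp(-gate_logits))          # sigmoid scores
    mask = (gates > threshold).astype(features.dtype)   # hard on/off decision
    return features * mask[:, None, None], mask

# Toy example: channel 1 is gated off, channels 0 and 2 pass through.
features = np.ones((3, 2, 2))
gated, mask = channel_gate(features, np.array([10.0, -10.0, 10.0]))
```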

For more information on the episode, visit twimlai.com/talk/385. To follow along with the CVPR 2020 Series, visit twimlai.com/cvpr20.

Thanks to Qualcomm for sponsoring today’s episode and the CVPR 2020 Series!

Jun 22 2020

55mins

Machine Learning Commerce at Square with Marsal Gavalda - #384

Today we’re joined by Marsal Gavalda, head of machine learning for the Commerce platform at Square. 

Marsal, who hails from Barcelona, Catalonia, kicks off our conversation by indulging Sam in their shared love for language, which is what put him on the path to a career in machine learning. At Square, Marsal manages the development of machine learning for various tools and platforms, including marketing, appointments, and above all, risk management. 

We explore how they manage this vast portfolio of projects, and how having an ML and technology focus at the outset of the company has contributed to their success. We also discuss some of Marsal’s tips and best practices for internal democratization of ML, their approach to developing ML-driven features, the techniques deployed in the development of those features, and much more!

The complete show notes for this episode can be found at twimlai.com/talk/384.

Jun 18 2020

51mins

Cell Exploration with ML at the Allen Institute w/ Jianxu Chen - #383

Today we’re joined by Jianxu Chen, a scientist in the Assay Development group at the Allen Institute for Cell Science. 

At the latest GTC conference, Jianxu presented his work on the Allen Cell Explorer Toolkit, an open-source project that allows users to do 3D segmentation of intracellular structures in fluorescence microscope images at high resolutions, making the images more accessible for data analysis. 

In our conversation, we discuss three of the major components of the toolkit: the cell image analyzer, the image generator, and the image visualizer. We also explore Jianxu’s transition from computer science into computational biology. More broadly, we cover how the use of GPUs has fundamentally changed this research, and the goals his team had in mind when they began the project.

Check out the complete show notes at twimlai.com/talk/383.

Jun 15 2020

43mins

Neural Arithmetic Units & Experiences as an Independent ML Researcher with Andreas Madsen - #382

Today we’re joined by Andreas Madsen, an independent researcher based in Denmark whose research focuses on developing interpretable machine learning models. 

While we caught up with Andreas to discuss his ICLR spotlight paper, “Neural Arithmetic Units,” we also spend time exploring his experience as an independent researcher. We discuss the difficulties of working with limited resources, the importance of finding peers to collaborate with, and tempering expectations of getting papers accepted to conferences -- something that might take a few tries to get right.

In his paper, Andreas notes that neural networks struggle to perform exact arithmetic operations over real numbers, but that this can be helped by adding two components: the Neural Addition Unit (NAU), which can learn exact addition and subtraction, and the Neural Multiplication Unit (NMU), which can multiply subsets of a vector.
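The forward passes of the two units can be sketched in a few lines of numpy, based on the formulations in “Neural Arithmetic Units” (a simplified sketch: the paper’s training-time regularizers and clamping schedules are omitted):

```python
import numpy as np

def nau_forward(z, W):
    """Neural Addition Unit: a linear layer whose weights are pushed toward
    {-1, 0, 1}, so each output learns an exact signed sum of inputs."""
    return np.clip(W, -1.0, 1.0) @ z

def nmu_forward(z, W):
    """Neural Multiplication Unit: out[o] = prod_i (W[i,o]*z[i] + 1 - W[i,o]).

    A weight of 1 includes z[i] in the product; a weight of 0 leaves it out
    (contributing a factor of 1), so W selects which subset gets multiplied.
    """
    W = np.clip(W, 0.0, 1.0)
    term = W * z[:, None] + 1.0 - W   # shape (n_inputs, n_outputs)
    return term.prod(axis=0)

# Toy example with z = [2, 3, 4]:
z = np.array([2.0, 3.0, 4.0])
sums = nau_forward(z, np.array([[1.0, -1.0, 0.0]]))   # learns z0 - z1
prods = nmu_forward(z, np.array([[1.0], [1.0], [0.0]]))  # learns z0 * z1
```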

The complete show notes can be found at twimlai.com/talk/382.

Jun 11 2020

30mins

2020: A Critical Inflection Point for Responsible AI with Rumman Chowdhury - #381

Today we’re joined by Rumman Chowdhury, Managing Director and Global Lead of Responsible Artificial Intelligence at Accenture. In our conversation with Rumman, we explored questions like: 

  • Why is now such a critical inflection point in the application of responsible AI?
  • How should engineers and practitioners think about AI ethics and responsible AI?
  • Why is AI ethics inherently personal and how can you define your own personal approach?
  • Is the implementation of AI governance necessarily authoritarian?
  • How do we balance idealism and pragmatism in the application of AI ethics?

We also cover practical topics like how and where you should implement responsible AI in your organization, and building the teams and processes capable of taking on critical ethics and governance questions.

The complete show notes for this episode can be found at twimlai.com/talk/381.

Jun 08 2020

1hr 1min

Panel: Advancing Your Data Science Career During the Pandemic - #380

Today we’re joined by Ana Maria Echeverri, Caroline Chavier, Hilary Mason, and Jacqueline Nolis, our guests for the recent Advancing Your Data Science Career During the Pandemic panel.

In this conversation, we explore ways that Data Scientists and ML/AI practitioners can continue to advance their careers despite current challenges. Our panelists provide concrete tips, advice, and direction for those just starting out, those affected by layoffs, and those just wanting to move forward in their careers.

Topics we cover include:

  • Guerilla Job Hunting
  • Portfolio Building
  • Navigating Hiring Freezes
  • Acing the Technical Interview
  • Presenting the Best Candidate

For more information about our guests, or for links to the resources mentioned, visit the show notes page at twimlai.com/talk/380.

Jun 04 2020

1hr 7mins

On George Floyd, Empathy, and the Road Ahead

Visit twimlai.com/blacklivesmatter for resources to support organizations pushing for social equity like Black Lives Matter, and groups offering relief for those jailed for exercising their rights to peaceful protest. 

Jun 02 2020

6mins

Engineering a Less Artificial Intelligence with Andreas Tolias - #379

Today we’re joined by Andreas Tolias, Professor of Neuroscience at Baylor College of Medicine and Principal Investigator of the Neuroscience-Inspired Networks for Artificial Intelligence organization.

We caught up with Andreas to discuss his recent perspective piece, “Engineering a Less Artificial Intelligence,” which explores the shortcomings of state-of-the-art learning algorithms in comparison to the brain. The paper also offers several ideas about how neuroscience can lead the quest for better inductive biases by providing useful constraints on representations and network architecture. We discuss the promise of deep neural networks, the differences between inductive bias and model bias, the role of interpretability, and the exciting future of biological systems and deep learning. 

The complete show notes can be found at twimlai.com/talk/379.

May 28 2020

46mins

Rethinking Model Size: Train Large, Then Compress with Joseph Gonzalez - #378

Today we’re joined by Joseph Gonzalez, Assistant Professor in the EECS department at UC Berkeley. 

Our main focus in the conversation is Joseph’s paper “Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers,” which explores compute-efficient training strategies, based on model size.

We discuss the two main problems the paper addresses: 1) How can we rapidly iterate on variations in architecture? 2) If we make models bigger, does that actually improve efficiency? We also discuss the parallels between computer vision and NLP tasks, and how he characterizes both “larger” and “faster” in the paper.

Check out the complete show notes for this episode at twimlai.com/talk/378.

May 25 2020

52mins
