Learning Machines 101

Smart machines based upon the principles of artificial intelligence and machine learning are now prevalent in our everyday life. For example, artificially intelligent systems recognize our voices, sort our pictures, make purchasing suggestions, and can automatically fly planes and drive cars. In this podcast series, we examine questions such as: How do these devices work? Where do they come from? And how can we make them even smarter and more human-like?

Popular episodes

The best episodes, ranked by user listens.

LM101-004: Can computers think? A Mathematician's Response

Learning Machines 101 - A Gentle Introduction to Artificial Intelligence and Machine Learning. Episode Summary: In this episode, we explore the question of what computers can do as well as what computers can't do using the Turing Machine argument. Specifically, we discuss the computational limits of computers and raise the question of whether such limits pertain to biological brains and other non-standard computing machines.

34mins

12 May 2014

Rank #1

LM101-078: Ch0: How to Become a Machine Learning Expert

This particular podcast (Episode 78 of Learning Machines 101) is the initial episode in a new special series of episodes designed to provide commentary on a new book that I am in the process of writing. In this episode we discuss books, software, courses, and podcasts designed to help you become a machine learning expert! For more information, check out: www.learningmachines101.com

39mins

24 Oct 2019

Rank #2

LM101-036: How to Predict the Future from the Distant Past using Recurrent Neural Networks

In this episode, we discuss the problem of predicting the future from not only recent events but also from the distant past using Recurrent Neural Networks (RNNs). An example RNN is described which learns to label images with simple sentences. A learning machine capable of generating even simple descriptions of images could be used to help the blind interpret images, provide assistance to children and adults in language acquisition, support internet search of content in images, and enhance search engine optimization for websites containing unlabeled images. Both tutorial notes and advanced implementational notes for RNNs can be found in the show notes at: www.learningmachines101.com.
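
As a concrete illustration, here is a minimal sketch of an Elman-style RNN forward pass in plain NumPy; the layer sizes, weight names, and random inputs are illustrative assumptions, not the episode's actual implementation.

```python
# Minimal sketch of an Elman-style recurrent network forward pass (NumPy only).
# Shapes, names, and inputs are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 4, 8, 3                 # illustrative sizes
W_xh = rng.normal(0, 0.1, (n_hidden, n_in))     # input-to-hidden weights
W_hh = rng.normal(0, 0.1, (n_hidden, n_hidden)) # hidden-to-hidden (recurrent)
W_hy = rng.normal(0, 0.1, (n_out, n_hidden))    # hidden-to-output weights

def rnn_forward(xs):
    """Run the RNN over a sequence xs of input vectors; return the outputs."""
    h = np.zeros(n_hidden)                      # hidden state carries the past
    ys = []
    for x in xs:
        # the new hidden state mixes the current input with the old state,
        # which is how information from the distant past can persist
        h = np.tanh(W_xh @ x + W_hh @ h)
        ys.append(W_hy @ h)
    return ys

outputs = rnn_forward([rng.normal(size=n_in) for _ in range(5)])
print(len(outputs), outputs[0].shape)           # 5 outputs, one per time step
```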

25mins

28 Sep 2015

Rank #3

LM101-059: How to Properly Introduce a Neural Network

I discuss the concept of a “neural network” by providing some examples of recent successes in neural network machine learning algorithms and providing a historical perspective on the evolution of the neural network concept from its biological origins. For more details visit us at: www.learningmachines101.com

29mins

21 Dec 2016

Rank #4

LM101-021: How to Solve Large Complex Constraint Satisfaction Problems (Monte Carlo Markov Chain)

We discuss how to solve constraint satisfaction inference problems where knowledge is represented as a large unordered collection of complicated probabilistic constraints among a collection of variables. The goal of the inference process is to infer the most probable values of the unobservable variables given the observable variables. Please visit: www.learningmachines101.com to obtain transcripts of this podcast and download free machine learning software!
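
For intuition, here is a minimal sketch of Metropolis-style Monte Carlo Markov Chain sampling over three binary variables; the toy energy function encoding the probabilistic constraints is invented for illustration.

```python
# Minimal sketch of MCMC inference over binary variables whose joint
# distribution is defined by soft (probabilistic) constraints.
import numpy as np

rng = np.random.default_rng(1)

def energy(x):
    # toy constraints: x[0] and x[1] "want" to agree; x[2] "wants" to be 1
    return 2.0 * (x[0] != x[1]) + 1.0 * (x[2] == 0)

x = rng.integers(0, 2, size=3)            # random initial assignment
counts = np.zeros(3)
n_samples = 5000
for t in range(n_samples):
    i = rng.integers(3)                   # pick a variable to flip
    proposal = x.copy()
    proposal[i] = 1 - proposal[i]
    # Metropolis rule: always accept downhill moves, sometimes uphill ones
    if rng.random() < np.exp(energy(x) - energy(proposal)):
        x = proposal
    counts += x
print(counts / n_samples)                 # approximate marginal P(x_i = 1)
```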

35mins

26 Jan 2015

Rank #5

LM101-007: How to Reason About Uncertain Events using Fuzzy Set Theory and Fuzzy Measure Theory

Learning Machines 101 - A Gentle Introduction to Artificial Intelligence and Machine Learning. Episode Summary: In real life, there is no certainty. There are always exceptions. In this episode, two methods are discussed for making inferences in uncertain environments. In fuzzy set theory, a smart machine has certain beliefs about imprecisely defined concepts. In fuzzy measure theory, a smart machine has beliefs about precisely defined concepts but some beliefs are stronger than others.
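
To make the fuzzy set idea concrete, here is a minimal sketch in Python; the "tall" and "heavy" membership functions and their ramp ranges are invented examples.

```python
# Minimal sketch of fuzzy set membership and a fuzzy AND (minimum t-norm).
def tall(height_cm):
    # degree to which a height counts as "tall", ramping from 160cm to 190cm
    return min(1.0, max(0.0, (height_cm - 160.0) / 30.0))

def heavy(weight_kg):
    # degree to which a weight counts as "heavy", ramping from 70kg to 110kg
    return min(1.0, max(0.0, (weight_kg - 70.0) / 40.0))

h, w = 178.0, 85.0
print(tall(h))                   # 0.6   -> partially a member of "tall"
print(heavy(w))                  # 0.375
print(min(tall(h), heavy(w)))    # fuzzy "tall AND heavy" via the min t-norm
```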

26mins

23 Jun 2014

Rank #6

LM101-010: How to Learn Statistical Regularities (MAP and maximum likelihood estimation)

Learning Machines 101 - A Gentle Introduction to Artificial Intelligence and Machine Learning. Episode Summary: In this podcast episode, we discuss fundamental principles of learning in statistical environments, including the design of learning machines that can use prior knowledge to facilitate and guide the learning of statistical regularities.
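
A minimal sketch of the maximum likelihood versus MAP distinction for a coin-flipping example is shown below; the counts and the Beta(5, 5) prior are invented for illustration.

```python
# Contrast maximum likelihood (ML) and MAP estimation of a coin's heads
# probability; the Beta(5, 5) prior encodes prior knowledge that the coin
# is probably close to fair (an invented example).
heads, tails = 8, 2                     # observed data
a, b = 5, 5                             # Beta prior pseudo-counts

ml_estimate = heads / (heads + tails)   # maximizes the likelihood alone
# MAP maximizes likelihood * prior; for a Beta prior this has a closed form
map_estimate = (heads + a - 1) / (heads + tails + a + b - 2)

print(ml_estimate)                      # 0.8    (data only)
print(map_estimate)                     # 0.6667 (prior pulls toward 0.5)
```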

34mins

12 Aug 2014

Rank #7

LM101-006: How to Interpret Turing Test Results

Learning Machines 101 - A Gentle Introduction to Artificial Intelligence and Machine Learning. Episode Summary: In this episode, we briefly review the concept of the Turing Test for Artificial Intelligence (AI), which states that if a computer's behavior is indistinguishable from the behavior of a thinking human being, then the computer should be called "artificially intelligent". Some objections to this definition of artificial intelligence are introduced and discussed.

31mins

9 Jun 2014

Rank #8

LM101-014: How to Build a Machine that Can Do Anything (Function Approximation)

In this episode, we discuss the problem of how to build a machine that can do anything! Or, more specifically, given a set of input patterns to the machine and a set of desired output patterns for those input patterns, we would like to build a machine that can generate the specified output pattern for a given input pattern. This problem may be interpreted as an example of solving a supervised learning problem. Check out the show notes at: www.learningmachines101.com for a transcript of this show and free machine learning software!
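
As a tiny concrete instance of supervised function approximation, here is a sketch that fits a linear machine to four invented input/output pairs by least squares; the episode's free software is more general, this is only an illustration.

```python
# Fit a machine that maps given input patterns to desired output patterns
# by minimizing squared error (invented toy training pairs).
import numpy as np

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])   # input patterns
y = np.array([0., 1., 1., 2.])                           # desired outputs

# add a bias column, then solve for the weights minimizing squared error
Xb = np.hstack([X, np.ones((4, 1))])
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)

print(Xb @ w)   # predicted outputs approximate the desired patterns
```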

32mins

13 Oct 2014

Rank #9

LM101-070: How to Identify Facial Emotion Expressions in Images Using Stochastic Neighborhood Embedding

In this 70th episode of Learning Machines 101, we discuss how to identify facial emotion expressions in images using an advanced clustering technique called Stochastic Neighborhood Embedding. We discuss the concept of recognizing facial emotions in images, including applications to problems such as: improving online communication quality, identifying suspicious individuals such as terrorists using video cameras, improving lie detector tests, improving athletic performance by providing emotion feedback, and designing smart advertising which can look at the customer's face to determine whether they are bored or interested and dynamically adapt the advertising accordingly. To address this problem we review clustering methods including K-means clustering, Linear Discriminant Analysis, Spectral Clustering, and the relatively new technique of Stochastic Neighborhood Embedding (SNE) clustering. At the end of this podcast we provide a brief review of the classic machine learning text by Christopher Bishop titled "Pattern Recognition and Machine Learning". Make sure to visit: www.learningmachines101.com to obtain free transcripts of this podcast and important supplemental reference materials!
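
As a rough illustration of such a pipeline, here is a sketch using scikit-learn's TSNE (a t-distributed variant of SNE) followed by K-means; the random "feature vectors" merely stand in for facial-emotion features and are invented for illustration.

```python
# Embed high-dimensional feature vectors with t-SNE, then cluster them.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# two invented "emotion" clusters in a 50-dimensional feature space
features = np.vstack([rng.normal(0, 1, (30, 50)),
                      rng.normal(4, 1, (30, 50))])

embedded = TSNE(n_components=2, perplexity=10,
                random_state=0).fit_transform(features)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embedded)
print(labels)   # points from the same blob should share a cluster label
```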

32mins

31 Jan 2018

Rank #10

LM101-020: How to Use Nonlinear Machine Learning Software to Make Predictions

In this episode we introduce some advanced nonlinear machine learning software which is more complex and powerful than the linear machine software introduced in Episode 13. Specifically, the software implements a multilayer nonlinear learning machine whose inputs feed into hidden units, which in turn feed into output units; this architecture has the potential to learn a much larger class of statistical environments. Download the free software from: www.learningmachines101.com now!
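
A minimal sketch of that architecture, with invented sizes and random weights rather than the episode's software, might look like this in NumPy:

```python
# Forward pass of a multilayer nonlinear machine: inputs feed into hidden
# units, which feed into output units (illustrative shapes and weights).
import numpy as np

rng = np.random.default_rng(3)
W1 = rng.normal(0, 0.5, (3, 2))       # input-to-hidden weights
b1 = np.zeros(3)
W2 = rng.normal(0, 0.5, (1, 3))       # hidden-to-output weights
b2 = np.zeros(1)

def predict(x):
    hidden = np.tanh(W1 @ x + b1)     # nonlinear hidden units
    return W2 @ hidden + b2           # output units

print(predict(np.array([0.5, -1.0])))
```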

27mins

12 Jan 2015

Rank #11

LM101-075: Can computers think? A Mathematician's Response (remix)

In this episode, we explore the question of what can computers do as well as what computers can’t do using the Turing Machine argument. Specifically, we discuss the computational limits of computers and raise the question of whether such limits pertain to biological brains and other non-standard computing machines. This episode is dedicated to the memory of my mom, Sandy Golden. To learn more about Turing Machines, SuperTuring Machines, Hypercomputation, and my Mom, check out: www.learningmachines101.com

36mins

12 Dec 2018

Rank #12

LM101-068: How to Design Automatic Learning Rate Selection for Gradient Descent Type Machine Learning Algorithms

This 68th episode of Learning Machines 101 discusses a broad class of unsupervised, supervised, and reinforcement machine learning algorithms which iteratively update their parameter vector by adding a perturbation based upon all of the training data. This process is repeated until a parameter vector is generated which exhibits improved predictive performance. The magnitude of the perturbation at each learning iteration is called the "stepsize" or "learning rate", and the identity of the perturbation vector is called the "search direction". Simple mathematical formulas are presented, based upon research from the late 1960s by Philip Wolfe and G. Zoutendijk, that ensure convergence of the generated sequence of parameter vectors. These formulas may be used as the basis for the design of artificially intelligent smart automatic learning rate selection algorithms. For more information, please visit the official website: www.learningmachines101.com
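
As a hedged illustration of automatic stepsize selection, here is a backtracking line search that enforces the sufficient-decrease (Armijo) condition, one ingredient of the Wolfe-style convergence conditions mentioned above; the quadratic objective is a toy example, not the episode's formulas.

```python
# Shrink the stepsize until the objective decreases "enough" along the
# search direction (steepest descent on an invented quadratic).
import numpy as np

def f(w):     return float(w @ w)          # toy objective
def grad(w):  return 2.0 * w

def backtracking_step(w, c=1e-4, shrink=0.5, lr=1.0):
    g = grad(w)
    d = -g                                  # search direction
    # sufficient-decrease test: reject stepsizes that barely improve f
    while f(w + lr * d) > f(w) + c * lr * (g @ d):
        lr *= shrink
    return w + lr * d

w = np.array([3.0, -2.0])
for _ in range(5):
    w = backtracking_step(w)
print(w, f(w))    # converges toward the minimum at the origin
```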

21mins

26 Sep 2017

Rank #13

LM101-008: How to Represent Beliefs Using Probability Theory

Episode Summary: This episode focuses upon how an intelligent system can represent beliefs about its environment using fuzzy measure theory. Probability theory is introduced as a special case of fuzzy measure theory which is consistent with classical laws of logical inference.
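
As a sketch of that relationship, using standard textbook definitions rather than the episode's notation: a fuzzy measure need only be monotone, and a probability measure is the additive special case.

```latex
% A fuzzy measure g requires only g(\emptyset)=0, g(\Omega)=1, and monotonicity:
%   A \subseteq B \;\Rightarrow\; g(A) \le g(B).
% A probability measure P is the special case that is also additive:
\[
P(A \cup B) = P(A) + P(B) \qquad \text{whenever } A \cap B = \emptyset .
\]
```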

30mins

3 Sep 2014

Rank #14

LM101-030: How to Improve Deep Learning Performance with Artificial Brain Damage (Dropout and Model Averaging)

Deep learning machine technology has rapidly developed over the past five years due in part to a variety of factors such as: better technology, convolutional net algorithms, rectified linear units, and a relatively new learning strategy called "dropout" in which hidden unit feature detectors are temporarily deleted during the learning process. This episode introduces and discusses the concept of "dropout" to support deep learning performance and connects the "dropout" concept to the concepts of regularization and model averaging. For more details and background references, check out: www.learningmachines101.com!
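
A minimal sketch of ("inverted") dropout on a vector of hidden unit activations, with an invented keep probability, might look like this:

```python
# Temporarily delete random feature detectors during training, rescaling
# the survivors so the expected activation is unchanged at test time.
import numpy as np

rng = np.random.default_rng(4)

def dropout(activations, keep_prob=0.5, training=True):
    if not training:
        return activations                 # no deletion at test time
    mask = rng.random(activations.shape) < keep_prob
    return activations * mask / keep_prob  # rescale surviving units

h = rng.normal(size=6)
print(dropout(h))                   # roughly half the units are zeroed out
print(dropout(h, training=False))   # unchanged at test time
```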

32mins

8 Jun 2015

Rank #15

LM101-024: How to Use Genetic Algorithms to Breed Learning Machines

In this episode we introduce the concept of learning machines that can evolve themselves into more intelligent machines through simulated natural evolution using Monte Carlo Markov Chain Genetic Algorithms. Check out: www.learningmachines101.com to obtain transcripts of this podcast and download free machine learning software!
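
For a concrete feel, here is a minimal genetic algorithm sketch that "breeds" bit strings; the count-the-ones fitness function is an invented stand-in for a learning machine's performance.

```python
# Selection, one-point crossover, and mutation on a population of genomes.
import numpy as np

rng = np.random.default_rng(5)
pop = rng.integers(0, 2, size=(20, 16))          # 20 random genomes

def fitness(genome):
    return genome.sum()                          # toy objective: all ones

for generation in range(50):
    # selection: keep the fitter half of the population as parents
    ranked = pop[np.argsort([-fitness(g) for g in pop])]
    parents = ranked[:10]
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = rng.integers(1, 16)                # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(16) < 0.02             # small mutation rate
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents] + children)

print(max(fitness(g) for g in pop))              # approaches 16
```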

29mins

10 Mar 2015

Rank #16

LM101-012: How to Evaluate the Ability to Generalize from Experience (Cross-Validation Methods)

In this episode we discuss the problem of how to evaluate the ability of a learning machine to make generalizations and construct abstractions, given that the learning machine is provided only a finite collection of experiences.
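
A minimal sketch of k-fold cross-validation on an invented linear regression problem is shown below; averaging the held-out errors estimates how well the machine generalizes beyond its finite experiences.

```python
# Hold out each fold in turn, train on the rest, average the test error.
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(40, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(0, 0.1, 40)

k = 5
folds = np.array_split(rng.permutation(40), k)
errors = []
for i in range(k):
    test = folds[i]
    train = np.concatenate([folds[j] for j in range(k) if j != i])
    w, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
    errors.append(np.mean((X[test] @ w - y[test]) ** 2))
print(np.mean(errors))       # cross-validated estimate of prediction error
```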

32mins

9 Sep 2014

Rank #17

LM101-077: How to Choose the Best Model using BIC

In this 77th episode of www.learningmachines101.com, we explain the proper semantic interpretation of the Bayesian Information Criterion (BIC) and emphasize how this semantic interpretation is fundamentally different from AIC (Akaike Information Criterion) model selection methods. Briefly, BIC is used to estimate the probability of the training data given the probability model, while AIC is used to estimate out-of-sample prediction error. The probability of the training data given the model is called the "marginal likelihood". Using the marginal likelihood, one can calculate the probability of a model given the training data and then use this analysis to support selecting the most probable model, selecting a model that minimizes expected risk, and Bayesian model averaging. The assumptions which are required for BIC to be a valid approximation for the probability of the training data given the probability model are also discussed.
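
As an illustration, here is a sketch that computes BIC (in the common -2 log-likelihood + k log n convention, under which BIC approximates -2 times the log marginal likelihood and lower is better) for two invented Gaussian models of the same data.

```python
# Compare two models of the same data by BIC; lower BIC is preferred here.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
data = rng.normal(2.0, 1.0, size=100)
n = len(data)

def bic(log_likelihood, n_params):
    return -2.0 * log_likelihood + n_params * np.log(n)

# model 1: fixed unit variance, fitted mean (1 free parameter)
ll1 = stats.norm.logpdf(data, loc=data.mean(), scale=1.0).sum()
# model 2: fitted mean and variance (2 free parameters)
ll2 = stats.norm.logpdf(data, loc=data.mean(), scale=data.std()).sum()

print(bic(ll1, 1), bic(ll2, 2))   # the model with lower BIC wins
```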

24mins

2 May 2019

Rank #18

LM101-065: How to Design Gradient Descent Learning Machines (Rerun)

In this rerun episode we introduce the concept of gradient descent, which is the fundamental principle underlying learning in the majority of deep learning and neural network learning algorithms. Check out the website: www.learningmachines101.com to obtain a transcript of this episode!
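
A minimal sketch of gradient descent on an invented quadratic objective, not the episode's learning machine:

```python
# Repeatedly step opposite the gradient with a fixed learning rate.
import numpy as np

def f(w):     return (w[0] - 3.0) ** 2 + (w[1] + 1.0) ** 2
def grad(w):  return np.array([2.0 * (w[0] - 3.0), 2.0 * (w[1] + 1.0)])

w = np.zeros(2)
lr = 0.1                      # learning rate (stepsize)
for _ in range(100):
    w = w - lr * grad(w)
print(w, f(w))                # w approaches (3, -1), f approaches 0
```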

30mins

19 Jun 2017

Rank #19

LM101-040: How to Build a Search Engine, Automatically Grade Essays, and Identify Synonyms using Latent Semantic Analysis

In this episode we introduce a very powerful approach for computing semantic similarity between documents. Here, the terminology "document" could refer to a web page, a Word document, a paragraph of text, an essay, a sentence, or even just a single word. Two semantically similar documents, therefore, will discuss many of the same topics, while two semantically different documents will not have many topics in common. Machine learning methods are described which can take as input large collections of documents and use those documents to automatically learn semantic similarity relations. This approach is called Latent Semantic Indexing (LSI) or Latent Semantic Analysis (LSA). Visit us at: www.learningmachines101.com to learn more!
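
A minimal LSA sketch on a four-document toy corpus (invented for illustration) might look like this:

```python
# Factor a term-document count matrix with a truncated SVD, then compare
# documents by cosine similarity in the low-dimensional "topic" space.
import numpy as np

docs = ["machine learning rules", "learning machine methods",
        "cooking pasta recipes", "pasta cooking tips"]
vocab = sorted({w for d in docs for w in d.split()})
# term-document matrix of raw counts
A = np.array([[d.split().count(w) for d in docs] for w in vocab], dtype=float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T     # documents in k-dim latent space

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine(doc_vecs[0], doc_vecs[1]))    # high: both about learning
print(cosine(doc_vecs[0], doc_vecs[2]))    # low: different topics
```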

28mins

24 Nov 2015

Rank #20