
Learning Machines 101

Updated 3 days ago

Technology
Science
Mathematics

Smart machines based upon the principles of artificial intelligence and machine learning are now prevalent in our everyday lives. For example, artificially intelligent systems recognize our voices, sort our pictures, make purchasing suggestions, and can automatically fly planes and drive cars. In this podcast series, we examine questions such as: How do these devices work? Where do they come from? And how can we make them even smarter and more human-like?


iTunes Ratings

77 Ratings
Average Ratings
5 stars: 59
4 stars: 8
3 stars: 4
2 stars: 1
1 star: 5

Great for those interested in machine learning.

By Carlos Leonidas - Jul 12 2018
This podcast is a great introduction to the field of Machine Learning from a statistics angle.

Super interesting show!

By kpchf - Mar 12 2016
Richard knows his stuff! What a unique and interesting show!


Learning Machines 101

Latest release on Dec 24, 2019


Rank #1: LM101-023: How to Build a Deep Learning Machine


Recently, there has been a lot of discussion and controversy over the hot topic of "deep learning"! Deep Learning technology has made real and important fundamental contributions to the development of machine learning algorithms. Learn more about the essential ideas of "Deep Learning" in Episode 23 of "Learning Machines 101". Check us out at our official website: www.learningmachines101.com!

Feb 24 2015

42mins


Rank #2: LM101-004: Can computers think? A Mathematician's Response



Episode Summary: In this episode, we explore the question of what computers can do as well as what computers can't do using the Turing Machine argument. Specifically, we discuss the computational limits of computers and raise the question of whether such limits pertain to biological brains and other non-standard computing machines.


May 12 2014

34mins


Rank #3: LM101-035: What is a Neural Network and What is a Hot Dog?


In this episode, we address the important questions of “What is a neural network?” and  “What is a hot dog?” by discussing human brains, neural networks that learn to play Atari video games, and rat brain neural networks. Check out: www.learningmachines101.com for videos of a neural network that learns to play ATARI video games and transcripts of this podcast!!! Also follow us on twitter at: @lm101talk

See you soon!! 

Sep 15 2015

28mins


Rank #4: LM101-032: How To Build a Support Vector Machine to Classify Patterns


In this 32nd episode of Learning Machines 101, we introduce the concept of a Support Vector Machine. We explain how to estimate the parameters of such machines to classify a pattern vector as a member of one of two categories, as well as how to identify special pattern vectors called "support vectors" which are important for characterizing the Support Vector Machine decision boundary. The relationship of Support Vector Machine parameter estimation to logistic regression parameter estimation is also discussed. Check out this and other episodes, as well as supplemental references, at the website: www.learningmachines101.com. Also follow us on Twitter using the handle: lm101talk.
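
As a companion to the episode's topic, here is a minimal sketch (not the episode's own material; the data and hyperparameters are invented) of estimating linear SVM parameters by subgradient descent on the regularized hinge loss. Training points whose margins fall below 1 are exactly the support-vector candidates that shape the decision boundary:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Subgradient descent on lam/2*||w||^2 + mean hinge loss."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        mask = y * (X @ w + b) < 1          # margin violators: support-vector candidates
        grad_w, grad_b = lam * w, 0.0
        if mask.any():
            grad_w = grad_w - (y[mask, None] * X[mask]).sum(axis=0) / len(y)
            grad_b = -y[mask].sum() / len(y)
        w, b = w - lr * grad_w, b - lr * grad_b
    return w, b

# Toy separable data; labels must be -1 or +1.
X = np.array([[2.0, 2.0], [1.5, 2.5], [-2.0, -1.0], [-1.0, -2.0]])
y = np.array([1, 1, -1, -1])
w, b = train_linear_svm(X, y)
print(np.sign(X @ w + b))   # should match y
```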

Jul 13 2015

35mins


Rank #5: LM101-036: How to Predict the Future from the Distant Past using Recurrent Neural Networks


In this episode, we discuss the problem of predicting the future from not only recent events but also from the distant past using Recurrent Neural Networks (RNNs). An example RNN is described which learns to label images with simple sentences. A learning machine capable of generating even simple descriptions of images could be used to help the blind interpret images, provide assistance to children and adults in language acquisition, support internet search of content in images, and enhance search engine optimization for websites containing unlabeled images. Both tutorial notes and advanced implementation notes for RNNs can be found in the show notes at: www.learningmachines101.com.
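
For readers who want to see the core mechanism, here is a minimal numpy sketch of a recurrent cell (shapes and initialization are illustrative, not the episode's captioning model): the hidden state is the only channel through which the distant past can influence the present prediction.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 3, 5
W_xh = rng.normal(scale=0.1, size=(n_hid, n_in))   # input-to-hidden weights
W_hh = rng.normal(scale=0.1, size=(n_hid, n_hid))  # hidden-to-hidden (memory) weights
b_h = np.zeros(n_hid)

def rnn_forward(inputs):
    h = np.zeros(n_hid)
    states = []
    for x in inputs:                                # one step per time point
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)
        states.append(h)
    return states                                   # h[t] depends on all x[0..t]

seq = [rng.normal(size=n_in) for _ in range(10)]
print(rnn_forward(seq)[-1])                          # final state summarizes the sequence
```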

Sep 28 2015

25mins


Rank #6: LM101-021: How to Solve Large Complex Constraint Satisfaction Problems (Monte Carlo Markov Chain)


We discuss how to solve constraint satisfaction inference problems where knowledge is represented as a large unordered collection of complicated probabilistic constraints among a collection of variables. The goal of the inference process is to infer the most probable values of the unobservable variables given the observable variables. A toy sketch of this idea appears below.

Please visit: www.learningmachines101.com to obtain transcripts of this podcast and download free machine learning software!
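
In the toy sketch below (an invented three-variable problem, not from the episode), each probabilistic constraint rewards neighboring variables for agreeing, and a Gibbs sampler, one kind of Monte Carlo Markov Chain, repeatedly resamples each variable given the others, so the chain spends most of its time in the most probable configurations:

```python
import numpy as np

rng = np.random.default_rng(0)
J = np.array([[0.0, 2.0, 0.0],
              [2.0, 0.0, 2.0],
              [0.0, 2.0, 0.0]])    # illustrative pairwise constraint strengths

def gibbs(n_sweeps=500):
    x = rng.choice([-1, 1], size=3)              # random starting configuration
    for _ in range(n_sweeps):
        for i in range(3):
            field = J[i] @ x                     # net influence of the neighbors
            p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))
            x[i] = 1 if rng.random() < p_plus else -1
    return x

print(gibbs())   # all +1 or all -1: the configurations satisfying every constraint
```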

Jan 26 2015

35mins


Rank #7: LM101-040: How to Build a Search Engine, Automatically Grade Essays, and Identify Synonyms using Latent Semantic Analysis


In this episode we introduce a very powerful approach for computing semantic similarity between documents.  Here, the terminology “document” could refer to a web-page, a word document, a paragraph of text, an essay, a sentence, or even just a single word.  Two semantically similar documents, therefore, will discuss many of the same topics while two semantically different documents will not have many topics in common.  Machine learning methods are described which can take as input large collections of documents and use those documents to automatically learn semantic similarity relations. This approach is called Latent Semantic Indexing (LSI) or Latent Semantic Analysis (LSA). Visit us at: www.learningmachines101.com to learn more!
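
Here is a minimal sketch of the core computation (the term-document counts are invented): a singular value decomposition of a term-by-document matrix projects documents into a low-dimensional "topic" space, where cosine similarity measures semantic similarity:

```python
import numpy as np

# rows = terms, columns = documents (toy counts)
A = np.array([[2, 1, 0, 0],     # "learning"
              [1, 2, 0, 0],     # "machine"
              [0, 0, 2, 1],     # "stock"
              [0, 0, 1, 2]])    # "market"
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
docs = (np.diag(s[:k]) @ Vt[:k]).T        # documents in k-dim latent space

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cos(docs[0], docs[1]))   # ~1: both about machine learning
print(cos(docs[0], docs[2]))   # ~0: different topics
```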

Nov 24 2015

28mins


Rank #8: LM101-005: How to Decide if a Machine is Artificially Intelligent (The Turing Test)



Episode Summary: In this episode we discuss the Turing Test for Artificial Intelligence, which is designed to determine whether the behavior of a computer is indistinguishable from the behavior of a thinking human being. The chatbot A.L.I.C.E. (Artificial Linguistic Internet Computer Entity) is interviewed and basic concepts of AIML (Artificial Intelligence Markup Language) are introduced.


May 27 2014

32mins


Rank #9: LM101-024: How to Use Genetic Algorithms to Breed Learning Machines


In this episode we introduce the concept of learning machines that can self-evolve into more intelligent machines through simulated natural evolution, using Monte Carlo Markov Chain Genetic Algorithms. A toy genetic algorithm sketch appears after the link below. Check out:

www.learningmachines101.com

to obtain transcripts of this podcast and download free machine learning software!
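
Below is a toy genetic algorithm sketch, with an invented fitness function (the count of ones in a bitstring) standing in for learning-machine performance; fitness-proportional selection, one-point crossover, and mutation drive the population toward fitter individuals:

```python
import numpy as np

rng = np.random.default_rng(0)

def evolve(n_bits=20, pop_size=30, generations=50, mut_rate=0.02):
    pop = rng.integers(0, 2, size=(pop_size, n_bits))
    for _ in range(generations):
        fitness = pop.sum(axis=1)                       # toy fitness: number of ones
        probs = fitness / fitness.sum()
        parents = pop[rng.choice(pop_size, size=pop_size, p=probs)]
        cuts = rng.integers(1, n_bits, size=pop_size // 2)
        children = parents.copy()
        for i, c in enumerate(cuts):                    # one-point crossover per pair
            children[2*i, c:] = parents[2*i + 1, c:]
            children[2*i + 1, c:] = parents[2*i, c:]
        flips = rng.random(children.shape) < mut_rate   # random mutation
        pop = np.where(flips, 1 - children, children)
    return pop[pop.sum(axis=1).argmax()]                # fittest individual found

print(evolve().sum(), "ones out of 20")
```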

Mar 10 2015

29mins


Rank #10: LM101-059: How to Properly Introduce a Neural Network


I discuss the concept of a “neural network” by providing some examples of recent successes in neural network machine learning algorithms and providing a historical perspective on the evolution of the neural network concept from its biological origins. For more details visit us at: www.learningmachines101.com

Dec 21 2016

29mins


Rank #11: LM101-044: What happened at the Deep Reinforcement Learning Tutorial at the 2015 Neural Information Processing Systems Conference?


This is the third of a short subsequence of podcasts providing a summary of events associated with Dr. Golden’s recent visit to the 2015 Neural Information Processing Systems Conference. This is one of the top conferences in the field of Machine Learning. This episode reviews and discusses topics associated with the Introduction to Reinforcement Learning with Function Approximation Tutorial presented by Professor Richard Sutton on the first day of the conference. Check out: www.learningmachines101.com to learn more!! Also follow us at: "lm101talk" on twitter!

Jan 26 2016

31mins


Rank #12: LM101-043: How to Learn a Monte Carlo Markov Chain to Solve Constraint Satisfaction Problems (Rerun of Episode 22)


Welcome to the 43rd episode of Learning Machines 101! We are currently presenting a subsequence of episodes covering the events of the recent Neural Information Processing Systems Conference. However, this week we digress with a rerun of Episode 22, which nicely complements our previous discussion of the Monte Carlo Markov Chain Algorithm Tutorial. Specifically, today we discuss approaches for learning, or equivalently parameter estimation, in Monte Carlo Markov Chain algorithms. The topics covered in this episode include: What is the pseudolikelihood method, and what are its advantages and disadvantages? What is Monte Carlo Expectation Maximization? And, as a bonus prize, a mathematical theory of "dreaming"! The current plan is to return to coverage of the Neural Information Processing Systems Conference in 2 weeks on January 25! Check out: www.learningmachines101.com for more details!

Jan 12 2016

27mins


Rank #13: LM101-015: How to Build a Machine that Can Learn Anything (The Perceptron)


In this 15th episode of Learning Machines 101, we discuss the problem of how to build a machine that can learn any given pattern of inputs and generate any desired pattern of outputs, when it is possible to do so! It is assumed that the input patterns consist of zeros and ones indicating, for example, the presence or absence of a feature. A minimal perceptron sketch appears below.

Check out: www.learningmachines101.com to obtain transcripts of this podcast!!!
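
This sketch shows the classic perceptron error-correction learning rule on an invented binary pattern-association task (logical AND); the perceptron convergence theorem guarantees the loop settles whenever a separating weight setting exists:

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])   # binary input patterns
y = np.array([0, 0, 0, 1])                        # desired outputs: logical AND
w, b = np.zeros(2), 0.0
for _ in range(20):                               # a few passes over the data
    for xi, ti in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        w += (ti - pred) * xi                     # error-correction update
        b += (ti - pred)
print([1 if xi @ w + b > 0 else 0 for xi in X])   # [0, 0, 0, 1]
```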

Oct 27 2014

30mins


Rank #14: LM101-010: How to Learn Statistical Regularities (MAP and maximum likelihood estimation)



Episode Summary: In this podcast episode, we discuss fundamental principles of learning in statistical environments, including the design of learning machines that can use prior knowledge to facilitate and guide the learning of statistical regularities.
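
A tiny worked sketch of the contrast between maximum likelihood and MAP estimation for a coin's heads probability; the Beta prior (its parameters are invented here) is how prior knowledge pulls the estimate:

```python
# ML vs MAP estimation of a coin's heads probability from 7 heads in 10 flips.
heads, flips = 7, 10
ml_estimate = heads / flips                           # maximum likelihood: 0.7
a, b = 2, 2                                           # illustrative Beta(a, b) prior
map_estimate = (heads + a - 1) / (flips + a + b - 2)  # posterior mode: 8/12
print(ml_estimate, map_estimate)                      # the prior pulls MAP toward 0.5
```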


Aug 12 2014

34mins


Rank #15: LM101-042: What happened at the Monte Carlo Markov Chain (MCMC) Inference Methods Tutorial at the 2015 Neural Information Processing Systems Conference?


This is the second of a short subsequence of podcasts providing a summary of events associated with Dr. Golden's recent visit to the 2015 Neural Information Processing Systems Conference. This is one of the top conferences in the field of Machine Learning. This episode reviews and discusses topics associated with the Monte Carlo Markov Chain (MCMC) Inference Methods Tutorial held on the first day of the conference. Check out: www.learningmachines101.com to listen to or download this podcast episode, or to download the transcripts! Also visit us at LINKEDIN or TWITTER. The twitter handle is: LM101TALK

Dec 29 2015

25mins


Rank #16: LM101-050: How to Use Linear Machine Learning Software to Make Predictions (Linear Regression Software)[RERUN]


In this episode we will explain how to download and use free machine learning software from the website: www.learningmachines101.com. This podcast is concerned with the very practical issues associated with downloading and installing machine learning software on your computer. If you follow these instructions, by the end of this episode you will have installed one of the simplest (yet most widely used) machine learning algorithms on your computer. You can then use the software to make virtually any kind of prediction you like. Also follow us on twitter at: lm101talk
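
The episode is about the show's own downloadable software; as a stand-in, here is a minimal sketch of the same kind of linear prediction machine using numpy's least-squares solver on invented data:

```python
import numpy as np

X = np.array([[1.0], [2.0], [3.0], [4.0]])     # toy inputs
y = np.array([2.1, 3.9, 6.2, 8.1])             # toy targets (roughly y = 2x)
X1 = np.hstack([X, np.ones((len(X), 1))])      # add an intercept column
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)  # ordinary least squares fit
print(coef)                                     # ~[2.0, 0.0]: slope and bias
print(np.array([5.0, 1.0]) @ coef)              # prediction for a new input x = 5
```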

May 04 2016

30mins


Rank #17: LM101-070: How to Identify Facial Emotion Expressions in Images Using Stochastic Neighborhood Embedding


In this 70th episode of Learning Machines 101, we discuss how to identify facial emotion expressions in images using an advanced clustering technique called Stochastic Neighborhood Embedding. We discuss the concept of recognizing facial emotions in images, including applications to problems such as: improving online communication quality, identifying suspicious individuals such as terrorists using video cameras, improving lie detector tests, improving athletic performance by providing emotion feedback, and designing smart advertising which can look at the customer's face to determine whether they are bored or interested and dynamically adapt the advertising accordingly. To address this problem, we review clustering methods including K-means clustering, Linear Discriminant Analysis, Spectral Clustering, and the relatively new technique of Stochastic Neighborhood Embedding (SNE) clustering. At the end of this podcast, we provide a brief review of the classic machine learning text by Christopher Bishop titled "Pattern Recognition and Machine Learning". A brief embedding sketch appears below.

Make sure to visit: www.learningmachines101.com to obtain free transcripts of this podcast and important supplemental reference materials!
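
Here is a brief sketch of the embedding step using scikit-learn's t-SNE, a widely used descendant of SNE (this assumes scikit-learn is installed and uses invented stand-in features rather than real facial-expression data):

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Toy stand-in for facial-expression feature vectors: two well-separated groups.
X = np.vstack([rng.normal(0, 1, size=(20, 30)),
               rng.normal(5, 1, size=(20, 30))])
emb = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(X)
print(emb.shape)    # (40, 2): nearby points stay nearby in the 2-D embedding
```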

Jan 31 2018

32mins


Rank #18: LM101-068: How to Design Automatic Learning Rate Selection for Gradient Descent Type Machine Learning Algorithms


This 68th episode of Learning Machines 101 discusses a broad class of unsupervised, supervised, and reinforcement machine learning algorithms which iteratively update their parameter vector by adding a perturbation computed from all of the training data. This process is repeated until a parameter vector is generated which exhibits improved predictive performance. The magnitude of the perturbation at each learning iteration is called the "stepsize" or "learning rate", and the identity of the perturbation vector is called the "search direction". Simple mathematical formulas are presented, based upon research from the late 1960s by Philip Wolfe and G. Zoutendijk, that ensure convergence of the generated sequence of parameter vectors. These formulas may be used as the basis for the design of artificially intelligent smart automatic learning rate selection algorithms. For more information, please visit the official website: www.learningmachines101.com
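
The conditions mentioned here motivate practical stepsize rules; below is a minimal sketch of one such rule, a backtracking line search enforcing the Armijo sufficient-decrease condition (the constants are conventional textbook choices, not from the episode):

```python
import numpy as np

def backtracking_step(f, grad, x, direction, alpha=1.0, beta=0.5, c=1e-4):
    """Shrink alpha until f decreases enough along the search direction."""
    fx, gx = f(x), grad(x)
    while f(x + alpha * direction) > fx + c * alpha * gx @ direction:
        alpha *= beta                     # reject the step; try a smaller one
    return alpha

f = lambda x: (x ** 2).sum()              # toy objective
grad = lambda x: 2 * x
x = np.array([3.0, -4.0])
d = -grad(x)                              # steepest-descent search direction
print(backtracking_step(f, grad, x, d))   # an automatically selected learning rate
```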

Sep 26 2017

21mins


Rank #19: LM101-029: How to Modernize Deep Learning with Rectilinear units, Convolutional Nets, and Max-Pooling


This podcast discusses talks, papers, and ideas presented at the recent International Conference on Learning Representations 2015, which was followed by the Artificial Intelligence and Statistics 2015 Conference in San Diego. Specifically, commonly used techniques shared by many successful deep learning algorithms, such as rectilinear units, convolutional filters, and max-pooling, are discussed. For more details please visit our website at: www.learningmachines101.com
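
Two of the discussed building blocks are simple enough to show directly; here is a minimal numpy sketch of a rectilinear (ReLU) unit followed by 2x2 max-pooling on an invented feature map:

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)               # rectilinear unit: clip negatives to zero

def max_pool_2x2(fmap):
    h, w = fmap.shape                     # keep the largest value in each 2x2 block
    return fmap[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

fmap = np.array([[ 1., -2.,  3.,  0.],
                 [-1.,  5., -3.,  2.],
                 [ 0.,  1., -1., -2.],
                 [ 4., -5.,  2.,  3.]])
print(max_pool_2x2(relu(fmap)))           # [[5., 3.], [4., 3.]]
```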

May 25 2015

35mins


Rank #20: LM101-041: What happened at the 2015 Neural Information Processing Systems Deep Learning Tutorial?


This is the first of a short subsequence of podcasts which provides a summary of events associated with Dr. Golden's recent visit to the 2015 Neural Information Processing Systems Conference. This is one of the top conferences in the field of Machine Learning. This episode introduces the Neural Information Processing Systems Conference and reviews the content of the Morning Deep Learning Tutorial which took place on the first day of the conference. Check out: www.learningmachines101.com for additional supplementary hyperlinks to the conference and conference papers!

Dec 16 2015

29mins


LM101-079: Ch1: How to View Learning as Risk Minimization


This particular podcast covers the material in Chapter 1 of my new (unpublished) book "Statistical Machine Learning: A Unified Framework", which shows how supervised, unsupervised, and reinforcement learning algorithms can be viewed as special cases of a general empirical risk minimization framework. This is useful because it provides a framework not only for understanding existing algorithms but also for suggesting new algorithms for specific applications.
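
A minimal sketch of the empirical risk minimization view (the data and loss are invented): learning is just choosing parameters that minimize the average loss over the training sample, here by gradient descent on a squared-error risk:

```python
import numpy as np

def empirical_risk(w, X, y, loss):
    """Average loss over the training sample."""
    return np.mean([loss(w, xi, yi) for xi, yi in zip(X, y)])

squared = lambda w, xi, yi: (xi @ w - yi) ** 2         # one possible loss function
X = np.array([[1.0, 1.0], [2.0, 1.0], [3.0, 1.0]])     # toy inputs (with bias term)
y = np.array([2.0, 4.0, 6.0])                          # toy targets

w = np.zeros(2)
for _ in range(500):                                   # gradient descent on the risk
    grad = 2 * X.T @ (X @ w - y) / len(y)
    w -= 0.05 * grad
print(w, empirical_risk(w, X, y, squared))             # ~[2, 0], risk near 0
```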

Dec 24 2019

26mins


LM101-078: Ch0: How to Become a Machine Learning Expert


This particular podcast (Episode 78 of Learning Machines 101) is the initial episode in a new special series of episodes designed to provide commentary on a new book that I am in the process of writing. In this episode we discuss books, software, courses, and podcasts designed to help you become a machine learning expert! For more information, check out: www.learningmachines101.com

Oct 24 2019

39mins


LM101-077: How to Choose the Best Model using BIC


In this 77th episode of www.learningmachines101.com, we explain the proper semantic interpretation of the Bayesian Information Criterion (BIC) and emphasize how this semantic interpretation is fundamentally different from that of AIC (Akaike Information Criterion) model selection methods. Briefly, BIC is used to estimate the probability of the training data given the probability model, while AIC is used to estimate out-of-sample prediction error. The probability of the training data given the model is called the "marginal likelihood". Using the marginal likelihood, one can calculate the probability of a model given the training data and then use this analysis to support selecting the most probable model, selecting a model that minimizes expected risk, and Bayesian model averaging. The assumptions required for BIC to be a valid approximation of the probability of the training data given the probability model are also discussed.
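
For concreteness, here is a minimal sketch of computing BIC = -2*log-likelihood + k*log(n) for a single-Gaussian model of invented data; under BIC's assumptions, lower values correspond to a higher approximate marginal likelihood:

```python
import numpy as np

def gaussian_bic(data):
    n = len(data)
    mu, var = data.mean(), data.var()                    # maximum likelihood estimates
    log_lik = -0.5 * n * (np.log(2 * np.pi * var) + 1)   # log-likelihood at the MLE
    k = 2                                                # free parameters: mu and var
    return -2 * log_lik + k * np.log(n)

rng = np.random.default_rng(0)
print(gaussian_bic(rng.normal(0, 1, size=100)))          # smaller BIC = preferred model
```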

May 02 2019

24mins


LM101-076: How to Choose the Best Model using AIC and GAIC


In this episode, we explain the proper semantic interpretation of the Akaike Information Criterion (AIC) and the Generalized Akaike Information Criterion (GAIC) for the purpose of picking the best model for a given set of training data. The precise semantic interpretation of these model selection criteria is given, the explicit assumptions required for the AIC and GAIC to be valid are stated, and explicit formulas are provided so the criteria can be used in practice. Briefly, AIC and GAIC provide a way of estimating the average prediction error of your learning machine on test data without using test data or cross-validation methods. The GAIC is also called the Takeuchi Information Criterion (TIC).
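
A minimal sketch of using AIC = -2*log-likelihood + 2k to compare two candidate Gaussian models of the same invented data; the model with the smaller AIC is estimated to have lower out-of-sample prediction error:

```python
import numpy as np

def gaussian_log_lik(data, mu, var):
    return np.sum(-0.5 * (np.log(2 * np.pi * var) + (data - mu) ** 2 / var))

rng = np.random.default_rng(0)
data = rng.normal(0.1, 1, size=200)

# Model A: free mean and variance (k = 2 parameters).
aic_free_mean = -2 * gaussian_log_lik(data, data.mean(), data.var()) + 2 * 2
# Model B: mean fixed at zero, free variance (k = 1 parameter).
aic_zero_mean = -2 * gaussian_log_lik(data, 0.0, (data ** 2).mean()) + 2 * 1
print(aic_free_mean, aic_zero_mean)    # smaller AIC = preferred model
```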

Jan 23 2019

28mins


LM101-075: Can computers think? A Mathematician's Response (remix)


In this episode, we explore the question of what can computers do as well as what computers can’t do using the Turing Machine argument. Specifically, we discuss the computational limits of computers and raise the question of whether such limits pertain to biological brains and other non-standard computing machines. This episode is dedicated to the memory of my mom, Sandy Golden. To learn more about Turing Machines, SuperTuring Machines, Hypercomputation, and my Mom, check out: www.learningmachines101.com

Dec 12 2018

36mins


LM101-074: How to Represent Knowledge using Logical Rules (remix)


In this episode we will learn how to use "rules" to represent knowledge. We discuss how this works in practice and explain how these ideas are implemented in a special architecture called the production system. The challenges of representing knowledge using rules are also discussed. Specifically, these challenges include: issues of feature representation, having an adequate number of rules, obtaining rules that are mutually consistent, and having rules that handle special cases and situations. A toy production system sketch appears after the link below. To learn more, visit:

www.learningmachines101.com
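
Below is a toy production system sketch (the rules are invented): forward chaining repeatedly fires if-then rules against working memory until no rule adds a new fact:

```python
# Each rule: (set of condition facts, conclusion fact). Illustrative rules only.
rules = [
    ({"has_feathers"}, "is_bird"),
    ({"is_bird", "can_fly"}, "builds_nest_high"),
]

def forward_chain(facts):
    changed = True
    while changed:                          # keep sweeping until nothing new fires
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)       # fire the rule, extend working memory
                changed = True
    return facts

print(forward_chain({"has_feathers", "can_fly"}))
```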

Jun 30 2018

19mins


LM101-073: How to Build a Machine that Learns to Play Checkers (remix)


This is a remix of the original second episode of Learning Machines 101, which describes in a little more detail how the computer program that Arthur Samuel developed in 1959 learned to play checkers by itself, without human intervention, using a mixture of classical artificial intelligence search methods and artificial neural network learning algorithms. The podcast ends with a book review of Professor Nilsson's book "The Quest for Artificial Intelligence: A History of Ideas and Achievements". For more information, check out:

www.learningmachines101.com

Apr 25 2018

24mins


LM101-072: Welcome to the Big Artificial Intelligence Magic Show! (Remix of LM101-001 and LM101-002)


This podcast is basically a remix of the first and second episodes of Learning Machines 101 and is intended to serve as the new introduction to the Learning Machines 101 podcast series. The search for common organizing principles which could support the foundations of machine learning and artificial intelligence is discussed and the concept of the Big Artificial Intelligence Magic Show is introduced. At the end of the podcast, the book  After Digital: Computation as Done by Brains and Machines by Professor James A. Anderson is briefly reviewed. For more information, please visit: www.learningmachines101.com 

Mar 31 2018

22mins


LM101-071: How to Model Common Sense Knowledge using First-Order Logic and Markov Logic Nets


In this podcast, we provide some insights into the complexity of common sense. First, we discuss the importance of building common sense into learning machines. Second, we discuss how first-order logic can be used to represent common sense knowledge. Third, we describe a large database of common sense knowledge, represented in first-order logic, which is freely available to machine learning researchers, and we provide a hyperlink to it. Fourth, we discuss some problems of first-order logic and explain how these problems can be resolved by transforming logical rules into probabilistic rules using Markov Logic Nets. Finally, we review the book "Markov Logic: An Interface Layer for Artificial Intelligence" by Pedro Domingos and Daniel Lowd, which provides further discussion of these issues and covers additional important applications of Markov Logic Nets not covered in detail in this podcast, such as object labeling, social network link analysis, information extraction, and support for robot navigation. At the end of the podcast we also provide information about a free software program which you can use to build and evaluate your own Markov Logic Net! For more information check out: www.learningmachines101.com
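
A toy sketch of the Markov Logic idea (the rules and weights are invented): each logical rule becomes a soft constraint with a weight, and a possible world's probability is proportional to the exponentiated sum of the weights of the rules it satisfies, so violating a rule lowers probability rather than forbidding the world outright:

```python
import itertools
import math

# Weighted rules; implication A => B is encoded as (not A) or B.
weighted_rules = [
    (2.0, lambda w: (not w["smokes"]) or w["cancer"]),    # smokes => cancer
    (1.0, lambda w: (not w["friends"]) or w["smokes"]),   # has_smoking_friends => smokes
]

# Enumerate every possible world over three boolean ground atoms.
worlds = [dict(zip(["smokes", "cancer", "friends"], vals))
          for vals in itertools.product([False, True], repeat=3)]
scores = [math.exp(sum(wt for wt, rule in weighted_rules if rule(w)))
          for w in worlds]
Z = sum(scores)                              # normalization constant
for w, s in zip(worlds, scores):
    print(w, round(s / Z, 3))                # soft, not hard, constraint satisfaction
```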

Feb 23 2018

31mins


LM101-070: How to Identify Facial Emotion Expressions in Images Using Stochastic Neighborhood Embedding


In this 70th episode of Learning Machines 101, we discuss how to identify facial emotion expressions in images using an advanced clustering technique called Stochastic Neighborhood Embedding. We discuss the concept of recognizing facial emotions in images, including applications to problems such as: improving online communication quality, identifying suspicious individuals such as terrorists using video cameras, improving lie detector tests, improving athletic performance by providing emotion feedback, and designing smart advertising which can look at the customer's face to determine whether they are bored or interested and dynamically adapt the advertising accordingly. To address this problem, we review clustering methods including K-means clustering, Linear Discriminant Analysis, Spectral Clustering, and the relatively new technique of Stochastic Neighborhood Embedding (SNE) clustering. At the end of this podcast, we provide a brief review of the classic machine learning text by Christopher Bishop titled "Pattern Recognition and Machine Learning".

Make sure to visit: www.learningmachines101.com to obtain free transcripts of this podcast and important supplemental reference materials!

Jan 31 2018

32mins


LM101-069: What Happened at the 2017 Neural Information Processing Systems Conference?


This 69th episode of Learning Machines 101 provides a short overview of the 2017 Neural Information Processing Systems conference with a focus on the development of methods for teaching learning machines rather than simply training them on examples. In addition, a book review of the book “Deep Learning” is provided.  #nips2017

Dec 16 2017

23mins


LM101-068: How to Design Automatic Learning Rate Selection for Gradient Descent Type Machine Learning Algorithms


This 68th episode of Learning Machines 101 discusses a broad class of unsupervised, supervised, and reinforcement machine learning algorithms which iteratively update their parameter vector by adding a perturbation computed from all of the training data. This process is repeated until a parameter vector is generated which exhibits improved predictive performance. The magnitude of the perturbation at each learning iteration is called the "stepsize" or "learning rate", and the identity of the perturbation vector is called the "search direction". Simple mathematical formulas are presented, based upon research from the late 1960s by Philip Wolfe and G. Zoutendijk, that ensure convergence of the generated sequence of parameter vectors. These formulas may be used as the basis for the design of artificially intelligent smart automatic learning rate selection algorithms. For more information, please visit the official website: www.learningmachines101.com

Sep 26 2017

21mins


LM101-067: How to use Expectation Maximization to Learn Constraint Satisfaction Solutions (Rerun)


In this episode we discuss how to learn to solve constraint satisfaction inference problems in which knowledge is represented as probabilistic constraints among a collection of variables. The goal of the inference process is to infer the most probable values for the unobservable variables, and the constraints themselves can be learned from experience. Specifically, Expectation Maximization, an important machine learning method for handling unobservable components of the data, is introduced. A toy EM sketch appears after the link below. Check it out at:

www.learningmachines101.com
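
Below is a minimal sketch of Expectation Maximization on invented data: the component label of each point is the unobservable variable, the E-step computes soft responsibilities, and the M-step re-estimates the means (variances and mixing weights are held fixed for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2, 1, 100), rng.normal(3, 1, 100)])

mu = np.array([-1.0, 1.0])                       # illustrative starting guesses
for _ in range(50):
    # E-step: responsibility of each component for each point (unit variances).
    d = np.exp(-0.5 * (data[:, None] - mu) ** 2)
    r = d / d.sum(axis=1, keepdims=True)
    # M-step: update means as responsibility-weighted averages.
    mu = (r * data[:, None]).sum(axis=0) / r.sum(axis=0)
print(mu)   # ~[-2, 3]: recovered despite the hidden labels
```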

Aug 21 2017

25mins


LM101-066: How to Solve Constraint Satisfaction Problems using MCMC Methods (Rerun)


In this episode of Learning Machines 101 (www.learningmachines101.com) we discuss how to solve constraint satisfaction inference problems where knowledge is represented as a large unordered collection of complicated probabilistic constraints among a collection of variables. The goal of the inference process is to infer the most probable values of the unobservable variables given the observable variables. Specifically, Monte Carlo Markov Chain (MCMC) methods are discussed.

Jul 17 2017

34mins


LM101-065: How to Design Gradient Descent Learning Machines (Rerun)


In this episode rerun we introduce the concept of gradient descent, which is the fundamental principle underlying learning in the majority of deep learning and neural network learning algorithms. A minimal sketch appears after the link below. Check out the website:

www.learningmachines101.com

to obtain a transcript of this episode!
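
Here is a minimal sketch of the principle itself (the objective and constants are invented): repeatedly step in the direction opposite the gradient of the error function:

```python
import numpy as np

def gradient_descent(grad, x0, lr=0.1, steps=100):
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - lr * grad(x)              # move downhill along the negative gradient
    return x

# Gradient of the toy error function ||x - target||^2.
grad = lambda x: 2 * (x - np.array([1.0, -2.0]))
print(gradient_descent(grad, [0.0, 0.0]))  # converges to ~[1, -2], the minimum
```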

Jun 19 2017

30mins


LM101-064: Stochastic Model Search and Selection with Genetic Algorithms (Rerun)


In this rerun of Episode 24, we explore the concept of evolutionary learning machines: that is, learning machines that reproduce themselves in the hope of evolving into more intelligent learning machines. This leads us to the topic of stochastic model search and evaluation. Check out the blog with additional technical references at: www.learningmachines101.com

May 15 2017

28mins


LM101-063: How to Transform a Supervised Learning Machine into a Policy Gradient Reinforcement Learning Machine


This 63rd episode of Learning Machines 101 explains how to build reinforcement learning machines whose behavior evolves as the learning machines become increasingly smarter, in contrast to machines which become smarter with experience but never use this acquired knowledge to modify their actions and behaviors. The essential idea for the construction of such reinforcement learning machines is to first develop a supervised learning machine. The supervised learning machine then "guesses" the desired response and updates its parameters using its guess for the desired response! Although the reasoning seems circular, this approach is in fact a variation of the important and widely used machine learning method of Expectation-Maximization. Some applications to learning to play video games, controlling walking robots, and developing optimal trading strategies for the stock market are briefly mentioned as well. Check us out at: www.learningmachines101.com
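
A toy sketch of the flavor of algorithm described, on an invented two-armed bandit (an illustration of the general policy-gradient idea, not the episode's exact construction): the machine samples its own action from a softmax policy, treats it as a "guessed" desired response, and reinforces it in proportion to the reward received:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(2)                          # one preference value per action
rewards = [0.2, 0.8]                         # hidden payoff probabilities per arm

for _ in range(5000):
    p = np.exp(theta) / np.exp(theta).sum()  # softmax policy over the two actions
    a = rng.choice(2, p=p)                   # the machine "guesses" an action
    r = float(rng.random() < rewards[a])     # stochastic 0/1 reward
    grad_log = -p
    grad_log[a] += 1.0                       # gradient of log pi(a) w.r.t. theta
    theta += 0.1 * r * grad_log              # reinforce rewarded guesses
print(np.exp(theta) / np.exp(theta).sum())   # the policy comes to favor the better arm
```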

Apr 20 2017

22mins


LM101-062: How to Transform a Supervised Learning Machine into a Value Function Reinforcement Learning Machine


This 62nd episode of Learning Machines 101 (www.learningmachines101.com) discusses how to design reinforcement learning machines using your knowledge of how to build supervised learning machines! Specifically, we focus on Value Function Reinforcement Learning Machines which estimate the unobservable total penalty associated with an episode when only the beginning of the episode is observable. This estimated Value Function can then be used by the learning machine to select a particular action in a given situation to minimize the total future penalties that will be received. Applications include: building your own robot, building your own automatic aircraft lander, building your own automated stock market trading system, and building your own self-driving car!!
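
A minimal sketch of value function estimation on an invented three-state chain, using the standard TD(0) bootstrapped update (an illustration of the general idea, not the episode's specific construction):

```python
import numpy as np

n_states, gamma, lr = 3, 0.9, 0.1
V = np.zeros(n_states + 1)                  # last entry is the terminal state, V = 0

for _ in range(1000):                       # many episodes of walking the chain
    for s in range(n_states):               # move left to right, reward +1 at the end
        r = 1.0 if s == n_states - 1 else 0.0
        target = r + gamma * V[s + 1]       # bootstrapped estimate of future return
        V[s] += lr * (target - V[s])        # TD(0) update toward the target
print(V[:n_states])                          # ~[0.81, 0.9, 1.0]
```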

Mar 19 2017

31mins


LM101-061: What happened at the Reinforcement Learning Tutorial? (RERUN)


This is the third of a short subsequence of podcasts providing a summary of events associated with Dr. Golden’s recent visit to the 2015 Neural Information Processing Systems Conference. This is one of the top conferences in the field of Machine Learning. This episode reviews and discusses topics associated with the Introduction to Reinforcement Learning with Function Approximation Tutorial presented by Professor Richard Sutton on the first day of the conference.

This episode is a RERUN of an episode originally presented in January 2016 and lays the groundwork for future episodes on the topic of reinforcement learning!

Check out: www.learningmachines101.com  for more info!!

Feb 23 2017

29mins


LM101-060: How to Monitor Machine Learning Algorithms using Anomaly Detection Machine Learning Algorithms


This 60th episode of Learning Machines 101 discusses how one can use novelty detection or anomaly detection machine learning algorithms to monitor the performance of other machine learning algorithms deployed in real world environments. The episode is based upon a review of a talk by Chief Data Scientist Ira Cohen of Anodot presented at the 2016 Berlin Buzzwords Data Science Conference. Check out: www.learningmachines101.com to hear the podcast or read a transcription of the podcast!
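
A toy sketch of the monitoring idea (the threshold and data are invented, not from the talk): keep a window of recent prediction errors and flag new errors whose z-score against that window is improbably large:

```python
import numpy as np

def is_anomalous(error_window, new_error, threshold=3.0):
    """Flag errors more than `threshold` standard deviations from recent behavior."""
    mu, sd = np.mean(error_window), np.std(error_window)
    return sd > 0 and abs(new_error - mu) / sd > threshold

rng = np.random.default_rng(0)
window = rng.normal(0.5, 0.05, size=200)     # typical production prediction errors
print(is_anomalous(window, 0.52))            # False: normal behavior
print(is_anomalous(window, 1.50))            # True: the deployed model may be degrading
```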

Jan 23 2017

29mins

