Learning Machines 101

Updated 5 days ago

Technology
Science
Mathematics

Smart machines based upon the principles of artificial intelligence and machine learning are now prevalent in our everyday life. For example, artificially intelligent systems recognize our voices, sort our pictures, make purchasing suggestions, and can automatically fly planes and drive cars. In this podcast series, we examine questions such as: How do these devices work? Where do they come from? And how can we make them even smarter and more human-like?


iTunes Ratings

70 Ratings
5 stars: 55
4 stars: 5
3 stars: 4
2 stars: 1
1 star: 5

Great for those interested in machine learning.

By Carlos Leonidas - Jul 12 2018
This podcast is a great introduction to the field of Machine Learning from a statistics angle.

Super interesting show!

By kpchf - Mar 12 2016
Richard knows his stuff! What a unique and interesting show!


Rank #1: LM101-036: How to Predict the Future from the Distant Past using Recurrent Neural Networks


In this episode, we discuss the problem of predicting the future from not only recent events but also from the distant past using Recurrent Neural Networks (RNNs). An example RNN is described which learns to label images with simple sentences. A learning machine capable of generating even simple descriptions of images could be used to help the blind interpret images, provide assistance to children and adults in language acquisition, support internet search of content in images, and enhance search engine optimization for websites containing unlabeled images. Both tutorial notes and advanced implementational notes for RNNs can be found in the show notes at: www.learningmachines101.com .
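The long-range dependence the episode describes can be illustrated with a tiny Elman-style recurrent cell. Everything below (dimensions, random weights, the toy sequence) is invented for illustration and is not the episode's image-captioning model:

```python
import numpy as np

# A minimal Elman-style recurrent cell: the hidden state is fed back at
# every step, so it can carry information from the distant past.
rng = np.random.default_rng(0)
n_in, n_hidden = 3, 4
W_xh = rng.normal(scale=0.5, size=(n_hidden, n_in))      # input -> hidden
W_hh = rng.normal(scale=0.5, size=(n_hidden, n_hidden))  # hidden -> hidden (recurrence)
b_h = np.zeros(n_hidden)

def rnn_forward(xs):
    """Run the cell over a sequence; the final state depends on every input."""
    h = np.zeros(n_hidden)
    for x in xs:
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)
    return h

seq = rng.normal(size=(10, n_in))
h_final = rnn_forward(seq)
print(h_final.shape)  # (4,)
```

Because the hidden state is recycled at each step, perturbing the very first input still changes the final state ten steps later, which is exactly the "predicting from the distant past" property.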

Sep 28 2015
25 mins

Rank #2: LM101-054: How to Build Search Engine and Recommender Systems using Latent Semantic Analysis (RERUN)


Welcome to the 54th Episode of Learning Machines 101 titled "How to Build a Search Engine, Automatically Grade Essays, and Identify Synonyms using Latent Semantic Analysis" (rerun of Episode 40). The principles in this episode are also applicable to the problem of "Market Basket Analysis"  and the design of Recommender Systems.

Check it out at: www.learningmachines101.com

and follow us on twitter: @lm101talk

Jul 25 2016
29 mins

Rank #3: LM101-070: How to Identify Facial Emotion Expressions in Images Using Stochastic Neighborhood Embedding


In this 70th episode of Learning Machines 101, we discuss how to identify facial emotion expressions in images using an advanced clustering technique called Stochastic Neighborhood Embedding. We discuss the concept of recognizing facial emotions in images including applications to problems such as: improving online communication quality, identifying suspicious individuals such as terrorists using video cameras, improving lie detector tests, improving athletic performance by providing emotion feedback, and designing smart advertising which can look at the customer’s face to determine if they are bored or interested and dynamically adapt the advertising accordingly. To address this problem we review clustering algorithm methods including K-means clustering, Linear Discriminant Analysis, Spectral Clustering, and the relatively new technique of Stochastic Neighborhood Embedding (SNE) clustering. At the end of this podcast we provide a brief review of the classic machine learning text by Christopher Bishop titled “Pattern Recognition and Machine Learning”.

Make sure to visit: www.learningmachines101.com to obtain free transcripts of this podcast and important supplemental reference materials!
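As a point of reference for the clustering methods reviewed in this episode, here is a minimal K-means sketch; the 2-D toy points are invented stand-ins, not facial-emotion features:

```python
import numpy as np

# A minimal K-means sketch: alternately assign points to their nearest
# center and move each center to the mean of its assigned points.
def kmeans(X, k, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assignment step: nearest center for each point.
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        # Update step: recompute each center.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels, _ = kmeans(X, k=2)
print(labels)  # the two tight pairs end up in different clusters
```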

Jan 31 2018
32 mins

Rank #4: LM101-068: How to Design Automatic Learning Rate Selection for Gradient Descent Type Machine Learning Algorithms


This 68th episode of Learning Machines 101 discusses a broad class of unsupervised, supervised, and reinforcement machine learning algorithms which iteratively update their parameter vector by adding a perturbation computed from all of the training data. This process is repeated until a parameter vector is generated which exhibits improved predictive performance. The magnitude of the perturbation at each learning iteration is called the “stepsize” or “learning rate”, and the identity of the perturbation vector is called the “search direction”. Simple mathematical formulas are presented, based upon research from the late 1960s by Philip Wolfe and G. Zoutendijk, that ensure convergence of the generated sequence of parameter vectors. These formulas may be used as the basis for the design of artificially intelligent smart automatic learning rate selection algorithms. For more information, please visit the official website: www.learningmachines101.com
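A rough sketch of this kind of automatic stepsize rule is a backtracking line search enforcing the sufficient-decrease (Armijo) condition, one ingredient of Wolfe-style convergence conditions. The quadratic objective below is a toy stand-in, not the episode's formulas:

```python
import numpy as np

# Toy objective: f(w) = 0.5*||w||^2 + w[0], minimized at w = (-1, 0).
def f(w):
    return 0.5 * w @ w + w[0]

def grad_f(w):
    g = w.copy()
    g[0] += 1.0
    return g

def armijo_step(w, d, c=1e-4, shrink=0.5, eta0=1.0):
    """Shrink the stepsize eta until f decreases sufficiently along d."""
    eta, fw, slope = eta0, f(w), grad_f(w) @ d
    while f(w + eta * d) > fw + c * eta * slope:
        eta *= shrink
    return eta

w = np.array([2.0, 3.0])
for _ in range(50):
    d = -grad_f(w)                    # steepest-descent search direction
    w = w + armijo_step(w, d) * d     # automatically chosen learning rate
print(w)  # converges to the minimizer (-1, 0)
```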

Sep 26 2017
21 mins

Rank #5: LM101-065: How to Design Gradient Descent Learning Machines (Rerun)


In this episode rerun we introduce the concept of gradient descent which is the fundamental principle underlying learning in the majority of deep learning and neural network learning algorithms. Check out the website:

www.learningmachines101.com

to obtain a transcript of this episode!
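A minimal sketch of the gradient descent principle this episode introduces, on a toy one-parameter model (data and stepsize are illustrative choices):

```python
import numpy as np

# Fit y ≈ w*x by repeatedly stepping opposite the gradient of the
# mean squared error. The true slope of the toy data is 2.
x = np.array([1.0, 2.0, 3.0])
y = 2.0 * x

w, lr = 0.0, 0.05
for _ in range(200):
    grad = 2 * np.mean((w * x - y) * x)  # d/dw of mean squared error
    w -= lr * grad                       # step downhill
print(w)  # approaches 2.0
```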

Jun 19 2017
30 mins

Rank #6: LM101-062: How to Transform a Supervised Learning Machine into a Value Function Reinforcement Learning Machine


This 62nd episode of Learning Machines 101 (www.learningmachines101.com)  discusses how to design reinforcement learning machines using your knowledge of how to build supervised learning machines! Specifically, we focus on Value Function Reinforcement Learning Machines which estimate the unobservable total penalty associated with an episode when only the beginning of the episode is observable. This estimated Value Function can then be used by the learning machine to select a particular action in a given situation to minimize the total future penalties that will be received. Applications include: building your own robot, building your own automatic aircraft lander, building your own automated stock market trading system, and building your own self-driving car!!
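The value-function idea can be sketched with tabular TD(0) on an invented two-state episode, where V learns the total future penalty that is unobservable at the episode's start:

```python
import numpy as np

# Tiny episode: state 0 -> state 1 (penalty 0), state 1 -> terminal
# (penalty 1). TD(0) learns V, the expected total future penalty, so
# both states converge to 1.0. The chain is invented for illustration.
V = np.zeros(2)
alpha, gamma = 0.1, 1.0
for _ in range(500):
    V[0] += alpha * (0.0 + gamma * V[1] - V[0])  # bootstrap from V[1]
    V[1] += alpha * (1.0 + gamma * 0.0 - V[1])   # terminal penalty is 1
print(V)  # both entries approach 1.0
```

A learning machine can then pick, in each situation, the action leading to the successor state with the smallest estimated V.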

Mar 19 2017
31 mins

Rank #7: LM101-057: How to Catch Spammers using Spectral Clustering


In this 57th episode, we explain how to use unsupervised machine learning algorithms to catch internet criminals who try to steal your money electronically!  Check it out at: www.learningmachines101.com
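A minimal spectral-clustering sketch of the kind alluded to here, on invented 1-D points rather than real transaction data: build a similarity graph, take an eigenvector of its graph Laplacian, and split on its sign.

```python
import numpy as np

# Spectral clustering in miniature: the sign pattern of the Laplacian's
# second-smallest eigenvector (the Fiedler vector) cuts the graph into
# its two weakly connected groups.
X = np.array([[0.0], [0.2], [5.0], [5.2]])   # two obvious groups
W = np.exp(-(X - X.T) ** 2)                  # Gaussian similarities
D = np.diag(W.sum(axis=1))
L = D - W                                    # unnormalized graph Laplacian
eigvals, eigvecs = np.linalg.eigh(L)
fiedler = eigvecs[:, 1]                      # second-smallest eigenvector
labels = (fiedler > 0).astype(int)
print(labels)  # one group per sign
```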

Oct 18 2016
19 mins

Rank #8: LM101-061: What happened at the Reinforcement Learning Tutorial? (RERUN)


This is the third of a short subsequence of podcasts providing a summary of events associated with Dr. Golden’s recent visit to the 2015 Neural Information Processing Systems Conference. This is one of the top conferences in the field of Machine Learning. This episode reviews and discusses topics associated with the Introduction to Reinforcement Learning with Function Approximation Tutorial presented by Professor Richard Sutton on the first day of the conference.

This episode is a RERUN of an episode originally presented in January 2016 and lays the groundwork for future episodes on the topic of reinforcement learning!

Check out: www.learningmachines101.com  for more info!!

Feb 23 2017
29 mins

Rank #9: LM101-047: How to Build a Support Vector Machine to Classify Patterns (Rerun)


We explain how to estimate the parameters of Support Vector Machines to classify a pattern vector as a member of one of two categories, as well as how to identify special pattern vectors called “support vectors” which are important for characterizing the Support Vector Machine decision boundary. The relationship of Support Vector Machine parameter estimation and logistic regression parameter estimation is also discussed. For more information, check us out at: www.learningmachines101.com

also check us out on twitter at: lm101talk
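One hedged way to sketch Support Vector Machine parameter estimation is subgradient descent on the regularized hinge loss (a Pegasos-style recipe, not necessarily the solver covered in the episode); the 1-D data and hyperparameters are invented:

```python
import numpy as np

# Linear SVM via subgradient descent on (lam/2)||w||^2 + mean hinge loss.
# Points with margin < 1 (the would-be support vectors) drive the update.
X = np.array([[-2.0], [-1.0], [1.0], [2.0]])
y = np.array([-1.0, -1.0, 1.0, 1.0])

w, b = np.zeros(1), 0.0
lam, lr = 0.01, 0.1
for _ in range(500):
    margins = y * (X @ w + b)
    viol = margins < 1.0                               # margin violators
    gw = lam * w - ((viol * y)[:, None] * X).mean(axis=0)
    gb = -(viol * y).mean()
    w -= lr * gw
    b -= lr * gb
print(np.sign(X @ w + b))  # signs match the labels
```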

Mar 14 2016
35 mins

Rank #10: LM101-045: How to Build a Deep Learning Machine for Answering Questions about Images


In this episode we discuss just one out of the 102 different posters which was presented on the first night of the 2015 Neural Information Processing Systems Conference. This presentation describes a system which can answer simple questions about images. Check out: www.learningmachines101.com for additional details!!

Feb 08 2016
21 mins

Rank #11: LM101-056: How to Build Generative Latent Probabilistic Topic Models for Search Engine and Recommender System Applications


In this NEW episode we discuss Latent Semantic Indexing type machine learning algorithms which have a PROBABILISTIC  interpretation. We explain why such a probabilistic interpretation is important and discuss how such algorithms can be used in the design of document retrieval systems, search engines, and recommender systems. Check us out at: www.learningmachines101.com

and follow us on twitter at: @lm101talk

Sep 20 2016
27 mins

Rank #12: LM101-058: How to Identify Hallucinating Learning Machines using Specification Analysis


In this 58th episode of Learning Machines 101, I’ll be discussing an important new scientific breakthrough published just last week in the journal Econometrics, in the special issue on model misspecification, titled “Generalized Information Matrix Tests for Detecting Model Misspecification”. The article provides a unified theoretical framework for the development of a wide range of methods for determining if a learning machine is capable of learning its statistical environment. The article is co-authored by myself, Steven Henley, Halbert White, and Michael Kashner. It is an open-access article, so the complete article can be downloaded for free! The download link can be found in the show notes of this episode at: www.learningmachines101.com . In 30 years everyone will be using these methods, so you might as well start using them now!

Nov 23 2016
19 mins

Rank #13: LM101-040: How to Build a Search Engine, Automatically Grade Essays, and Identify Synonyms using Latent Semantic Analysis


In this episode we introduce a very powerful approach for computing semantic similarity between documents.  Here, the terminology “document” could refer to a web-page, a word document, a paragraph of text, an essay, a sentence, or even just a single word.  Two semantically similar documents, therefore, will discuss many of the same topics while two semantically different documents will not have many topics in common.  Machine learning methods are described which can take as input large collections of documents and use those documents to automatically learn semantic similarity relations. This approach is called Latent Semantic Indexing (LSI) or Latent Semantic Analysis (LSA). Visit us at: www.learningmachines101.com to learn more!
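The LSI/LSA recipe described above can be sketched with a truncated SVD of an invented term-document count matrix; documents that share topics end up close in the latent space:

```python
import numpy as np

# Latent Semantic Analysis in miniature: factor the term-document matrix
# with a truncated SVD, then compare documents by cosine similarity in
# the k-dimensional latent space.
#            doc0  doc1  doc2
A = np.array([[2, 1, 0],    # "car"
              [1, 2, 0],    # "engine"
              [0, 0, 3]])   # "banana"
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T   # documents in latent space

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cos(doc_vecs[0], doc_vecs[1]))  # high: shared car/engine topic
print(cos(doc_vecs[0], doc_vecs[2]))  # near zero: no topics in common
```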

Nov 24 2015
28 mins

Rank #14: LM101-063: How to Transform a Supervised Learning Machine into a Policy Gradient Reinforcement Learning Machine


This 63rd episode of Learning Machines 101 discusses how to build reinforcement learning machines whose behavior evolves as the learning machines become increasingly smarter, in contrast to machines which become smarter with experience but do not directly use this acquired knowledge to modify their actions and behaviors. The essential idea for the construction of such reinforcement learning machines is based upon first developing a supervised learning machine. The supervised learning machine then “guesses” the desired response and updates its parameters using its guess for the desired response! Although the reasoning seems circular, this approach is in fact a variation of the important and widely used machine learning method of Expectation-Maximization. Some applications to learning to play video games, controlling walking robots, and developing optimal trading strategies for the stock market are briefly mentioned as well. Check us out at: www.learningmachines101.com
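A standard policy-gradient update (REINFORCE) on an invented two-armed bandit gives the flavor of behavior that evolves with experience; this is a generic sketch, not the episode's EM-style construction:

```python
import numpy as np

# REINFORCE on a two-armed bandit: sample an action from a softmax
# policy, observe a penalty, and nudge the logits so that low-penalty
# actions become more likely. Penalties are invented for illustration.
rng = np.random.default_rng(0)
theta = np.zeros(2)                  # one logit per action
penalty = np.array([1.0, 0.0])       # action 1 is the better choice

for _ in range(2000):
    p = np.exp(theta) / np.exp(theta).sum()  # softmax policy
    a = rng.choice(2, p=p)
    r = -penalty[a]                          # reward = negative penalty
    grad = -p
    grad[a] += 1.0                           # d log p(a) / d theta
    theta += 0.1 * r * grad
print(p)  # probability mass concentrates on action 1
```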

Apr 20 2017
22 mins

Rank #15: LM101-053: How to Enhance Learning Machines with Swarm Intelligence (Particle Swarm Optimization)


In this 53rd episode of Learning Machines 101, we introduce the concept of a Swarm Intelligence with respect to Particle Swarm Optimization Algorithms. The essential idea of “Swarm Intelligence” is that you have a group of individual entities which behave in a coordinated manner yet there is no master control center providing directions to all of the individuals in the group. The global group behavior is an “emergent property” of local interactions among individuals in the group! We will analyze the concept of swarm intelligence as a Markov Random Field, discuss how it can be harnessed to enhance the performance of machine learning algorithms, and comment upon relevant mathematics for analyzing and designing “swarm intelligences” so they behave in an appropriate manner by viewing the Swarm as a nonlinear optimization algorithm. For more information check out: www.learningmachines101.com

and also check us out on twitter (@lm101talk).
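A minimal particle swarm sketch, using standard illustrative coefficients on the toy sphere function, shows the "no master control center" idea: each particle reacts only to its own best point and the swarm's best point, yet the group converges:

```python
import numpy as np

# Particle swarm optimization: velocity blends inertia, a pull toward
# the particle's personal best, and a pull toward the swarm's best.
rng = np.random.default_rng(1)

def f(x):
    return (x ** 2).sum(axis=-1)     # sphere function, minimum at 0

n, dim = 20, 2
pos = rng.uniform(-5, 5, size=(n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()
gbest = pos[f(pos).argmin()].copy()

for _ in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    better = f(pos) < f(pbest)
    pbest[better] = pos[better]
    gbest = pbest[f(pbest).argmin()].copy()
print(f(gbest))  # near zero
```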

Jul 11 2016
26 mins

Rank #16: LM101-052: How to Use the Kernel Trick to Make Hidden Units Disappear


Today, we discuss a simple yet powerful idea which became popular in the machine learning literature in the 1990s which is called “The Kernel Trick”. The basic idea of the “Kernel Trick” is that you specify similarity relationships among input patterns rather than a recoding transformation to solve a nonlinear problem with a linear learning machine. It's a great magic trick...check it out at: www.learningmachines101.com

where you can obtain transcripts of this episode and download free machine learning software! Also check out the "Statistical Machine Learning Forum" on Linked In and follow us on Twitter using the twitter handle: @lm101talk
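The kernel trick can be sketched with kernel ridge regression on the XOR problem: the RBF kernel supplies the similarity relationships, so no hidden-unit recoding is needed. Kernel width and ridge penalty are illustrative choices:

```python
import numpy as np

# Kernel ridge regression on XOR: a "linear" learner in the implicit
# feature space, specified entirely through pairwise similarities.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([-1.0, 1.0, 1.0, -1.0])       # XOR labels

def rbf(A, B, gamma=2.0):
    d2 = ((A[:, None] - B[None]) ** 2).sum(-1)
    return np.exp(-gamma * d2)             # Gaussian similarity matrix

K = rbf(X, X)
alpha = np.linalg.solve(K + 1e-3 * np.eye(4), y)   # dual weights

def predict(Xq):
    return rbf(Xq, X) @ alpha

print(np.sign(predict(X)))  # signs reproduce the XOR labels
```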

Jun 13 2016
28 mins

Rank #17: LM101-038: How to Model Knowledge Skill Growth Over Time using Bayesian Nets


In this episode, we examine the problem of developing an advanced artificially intelligent technology which is capable of tracking knowledge growth in students in real-time, representing the knowledge state of a student as a skill profile, and automatically defining the concept of a skill without human intervention! The approach can be viewed as a sophisticated state-of-the-art extension of the Item Response Theory approach to Computerized Adaptive Testing Educational Technology described in Episode 37. Both tutorial notes and advanced implementational notes can be found in the show notes at: www.learningmachines101.com
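One simple Bayesian-network-style model of skill growth over time is Bayesian Knowledge Tracing; the sketch below uses invented probabilities and is not necessarily the episode's model:

```python
# Bayesian Knowledge Tracing: update the probability that a student has
# mastered a skill after each observed answer, then allow for learning.
# All probabilities here are illustrative values.
p_learn, p_guess, p_slip = 0.2, 0.2, 0.1

def bkt_update(p_know, correct):
    """Bayes posterior that the skill is known, then a learning step."""
    if correct:
        num = p_know * (1 - p_slip)
        den = num + (1 - p_know) * p_guess
    else:
        num = p_know * p_slip
        den = num + (1 - p_know) * (1 - p_guess)
    post = num / den
    return post + (1 - post) * p_learn   # chance the skill was just learned

p = 0.1
for obs in [True, True, False, True, True]:
    p = bkt_update(p, obs)
print(p)  # real-time mastery estimate after five observed answers
```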

Oct 27 2015
23 mins

Rank #18: LM101-050: How to Use Linear Machine Learning Software to Make Predictions (Linear Regression Software)[RERUN]


In this episode we will explain how to download and use free machine learning software from the website: www.learningmachines101.com. This podcast is concerned with the very practical issues associated with downloading and installing machine learning software on your computer. If you follow these instructions, by the end of this episode you will have installed one of the simplest (yet most widely used) machine learning algorithms on your computer. You can then use the software to make virtually any kind of prediction you like. Also follow us on twitter at: lm101talk
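As a sketch of what such linear regression software computes (not the show's actual package), here is ordinary least squares fit with NumPy on invented data:

```python
import numpy as np

# Fit a line y = slope*x + intercept by least squares.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([3.1, 4.9, 7.2, 8.8])          # roughly y = 2x + 1

A = np.column_stack([x, np.ones_like(x)])   # design matrix with intercept
slope, intercept = np.linalg.lstsq(A, y, rcond=None)[0]
print(slope, intercept)  # near 2 and 1

# Predict at a new input:
print(slope * 5.0 + intercept)
```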

May 04 2016
30 mins

Rank #19: LM101-037: How to Build a Smart Computerized Adaptive Testing Machine using Item Response Theory


In this episode, we discuss the problem of how to build a smart computerized adaptive testing machine using Item Response Theory (IRT). Suppose that you are teaching a student a particular target set of knowledge. Examples of such situations obviously occur in nursery school, elementary school, junior high school, high school, and college. However, such situations also occur in industry when top professionals in a particular field attend an advanced training seminar. All of these situations would benefit from a smart adaptive assessment machine which attempts to estimate a student’s knowledge in real-time. Such a machine could then use that information to optimize the choice and order of questions to be presented to the student in order to develop a customized exam for efficiently assessing the student’s knowledge level and possibly guiding instructional strategies. Both tutorial notes and advanced implementational notes can be found in the show notes at: www.learningmachines101.com .
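A hedged sketch of the IRT-based adaptive choice of questions: under a one-parameter (Rasch) model, pick the item with maximum Fisher information at the current ability estimate. Difficulties and the ability value below are invented:

```python
import numpy as np

# Rasch (1-parameter IRT) model and maximum-information item selection.
def p_correct(theta, b):
    """Probability of a correct answer for ability theta, difficulty b."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def item_information(theta, b):
    p = p_correct(theta, b)
    return p * (1 - p)        # Fisher information for a Rasch item

difficulties = np.array([-2.0, 0.0, 0.4, 3.0])
theta_hat = 0.5               # current real-time ability estimate
next_item = item_information(theta_hat, difficulties).argmax()
print(next_item)  # 2: the item whose difficulty is nearest theta_hat
```

Information peaks where the student has a 50% chance of answering correctly, so the machine keeps choosing questions matched to the student's estimated level.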

Oct 12 2015
34 mins

Rank #20: LM101-069: What Happened at the 2017 Neural Information Processing Systems Conference?


This 69th episode of Learning Machines 101 provides a short overview of the 2017 Neural Information Processing Systems conference with a focus on the development of methods for teaching learning machines rather than simply training them on examples. In addition, a book review of the book “Deep Learning” is provided.  #nips2017

Dec 16 2017
23 mins
