Learning Machines 101

Updated 13 days ago

Rank #31 in Technology category

Science & Medicine
Technology
Higher Education
Natural Sciences
Software How-To

Smart machines based upon the principles of artificial intelligence and machine learning are now prevalent in our everyday lives. For example, artificially intelligent systems recognize our voices, sort our pictures, make purchasing suggestions, and can automatically fly planes and drive cars. In this podcast series, we examine questions such as: How do these devices work? Where do they come from? And how can we make them even smarter and more human-like?


iTunes Ratings

70 Ratings
5 stars: 55, 4 stars: 5, 3 stars: 4, 2 stars: 1, 1 star: 5

Great for those interested in machine learning.

By Carlos Leonidas - Jul 12 2018
This podcast is a great introduction to the field of Machine Learning from a statistics angle.

Super interesting show!

By kpchf - Mar 12 2016
Richard knows his stuff! What a unique and interesting show!


Rank #1: LM101-047: How to Build a Support Vector Machine to Classify Patterns (Rerun)


We explain how to estimate the parameters of a Support Vector Machine to classify a pattern vector as a member of one of two categories, and how to identify the special pattern vectors called “support vectors” which characterize the Support Vector Machine decision boundary. The relationship between Support Vector Machine parameter estimation and logistic regression parameter estimation is also discussed. For more information, check us out at: www.learningmachines101.com

Also check us out on Twitter at: @lm101talk
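As a purely illustrative sketch (not the episode's derivation), a linear Support Vector Machine can be trained by subgradient descent on the hinge loss; the toy 2-D dataset, learning rate, and function names below are invented for this example. Training points whose margin is close to 1 play the role of the "support vectors" that shape the decision boundary.

```python
# A minimal sketch of linear SVM parameter estimation via subgradient
# descent on the regularized hinge loss. Points whose margin is <= 1
# act as the "support vectors" that determine the decision boundary.
def train_linear_svm(data, lam=0.01, lr=0.1, epochs=200):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:                      # y is +1 or -1
            margin = y * (w[0]*x[0] + w[1]*x[1] + b)
            if margin < 1:                     # point violates the margin
                w = [w[i] + lr*(y*x[i] - lam*w[i]) for i in range(2)]
                b += lr * y
            else:                              # only apply the regularizer
                w = [w[i] - lr*lam*w[i] for i in range(2)]
    return w, b

def support_vectors(data, w, b, tol=1.5):
    # points at or near the margin boundary
    return [x for x, y in data
            if y*(w[0]*x[0] + w[1]*x[1] + b) <= tol]

data = [([2.0, 2.0], 1), ([3.0, 3.0], 1), ([0.0, 0.0], -1), ([-1.0, 0.0], -1)]
w, b = train_linear_svm(data)
```

Swapping the hinge loss for the logistic loss in the same loop yields logistic regression, which is the relationship the episode discusses.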

Mar 14 2016
35 mins

Rank #2: LM101-045: How to Build a Deep Learning Machine for Answering Questions about Images


In this episode we discuss just one of the 102 posters presented on the first night of the 2015 Neural Information Processing Systems Conference. The presentation describes a system which can answer simple questions about images. Check out: www.learningmachines101.com for additional details!

Feb 08 2016
21 mins

Rank #3: LM101-059: How to Properly Introduce a Neural Network


I discuss the concept of a “neural network” by providing some examples of recent successes in neural network machine learning algorithms and providing a historical perspective on the evolution of the neural network concept from its biological origins. For more details visit us at: www.learningmachines101.com

Dec 21 2016
29 mins

Rank #4: LM101-023: How to Build a Deep Learning Machine


Recently, there has been a lot of discussion and controversy over the currently hot topic of “deep learning”! Deep Learning technology has made real and important fundamental contributions to the development of machine learning algorithms. Learn more about the essential ideas of "Deep Learning" in Episode 23 of "Learning Machines 101". Check us out at our official website: www.learningmachines101.com!

Feb 24 2015
42 mins

Rank #5: LM101-021: How to Solve Large Complex Constraint Satisfaction Problems (Monte Carlo Markov Chain)


We discuss how to solve constraint satisfaction inference problems where knowledge is represented as a large unordered collection of complicated probabilistic constraints among a collection of variables. The goal of the inference process is to infer the most probable values of the unobservable variables given the observable variables.

Please visit: www.learningmachines101.com to obtain transcripts of this podcast and download free machine learning software!
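As a hypothetical toy version of the inference process described above (not the episode's code), the sketch below encodes each soft constraint as an energy penalty and uses a Metropolis sampler, one member of the Monte Carlo Markov Chain family, to search for the most probable assignment of four unobserved binary variables; the specific constraints and parameters are invented for illustration.

```python
import random, math

# Each violated soft constraint adds one unit of energy; lower energy
# means higher probability under the implied Gibbs distribution.
def energy(x):
    penalties = 0
    penalties += 0 if x[0] != x[1] else 1     # constraint: x0 != x1
    penalties += 0 if x[1] == x[2] else 1     # constraint: x1 == x2
    penalties += 0 if x[2] != x[3] else 1     # constraint: x2 != x3
    return penalties

def metropolis(n_vars=4, temp=0.5, steps=2000, seed=0):
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n_vars)]
    best = list(x)
    for _ in range(steps):
        i = rng.randrange(n_vars)
        y = list(x); y[i] = 1 - y[i]          # propose a single-bit flip
        if rng.random() < math.exp((energy(x) - energy(y)) / temp):
            x = y                              # Metropolis acceptance rule
        if energy(x) < energy(best):
            best = list(x)
    return best
```

Running `metropolis()` returns an assignment that satisfies all three constraints, illustrating how a Markov chain can locate the most probable configuration without enumerating every state.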

Jan 26 2015
35 mins

Rank #6: LM101-050: How to Use Linear Machine Learning Software to Make Predictions (Linear Regression Software)[RERUN]


In this episode we will explain how to download and use free machine learning software from the website: www.learningmachines101.com. This podcast is concerned with the very practical issues associated with downloading and installing machine learning software on your computer. If you follow these instructions, by the end of this episode you will have installed one of the simplest (yet most widely used) machine learning algorithms on your computer. You can then use the software to make virtually any kind of prediction you like. Also follow us on Twitter at: @lm101talk

May 04 2016
30 mins

Rank #7: LM101-070: How to Identify Facial Emotion Expressions in Images Using Stochastic Neighborhood Embedding


In this 70th episode of Learning Machines 101, we discuss how to identify facial emotion expressions in images using an advanced clustering technique called Stochastic Neighborhood Embedding. We discuss the concept of recognizing facial emotions in images, including applications to problems such as: improving online communication quality, identifying suspicious individuals such as terrorists using video cameras, improving lie detector tests, improving athletic performance by providing emotion feedback, and designing smart advertising which can look at the customer’s face to determine whether they are bored or interested and dynamically adapt the advertising accordingly. To address this problem we review clustering methods including K-means clustering, Linear Discriminant Analysis, Spectral Clustering, and the relatively new technique of Stochastic Neighborhood Embedding (SNE) clustering. At the end of this podcast we provide a brief review of the classic machine learning text by Christopher Bishop titled “Pattern Recognition and Machine Learning”.

Make sure to visit: www.learningmachines101.com to obtain free transcripts of this podcast and important supplemental reference materials!
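Of the clustering methods the episode reviews, K-means is the simplest to sketch in a few lines; SNE itself requires gradient-based optimization and is omitted here. The toy 2-D points below are invented for illustration and are not from the episode.

```python
import random, math

# A minimal K-means sketch: alternately assign each point to its nearest
# centroid, then move each centroid to the mean of its assigned points.
def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                        # assignment step
            j = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[j].append(p)
        for j, cl in enumerate(clusters):       # update step
            if cl:
                centroids[j] = tuple(sum(v)/len(cl) for v in zip(*cl))
    return centroids, clusters

# Two well-separated toy groups of 2-D points.
pts = [(0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9)]
centroids, clusters = kmeans(pts, k=2)
```

With `k=2` the two nearby pairs end up in separate clusters, which is the grouping behavior the more sophisticated SNE-based method refines for facial-emotion data.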

Jan 31 2018
32 mins

Rank #8: LM101-068: How to Design Automatic Learning Rate Selection for Gradient Descent Type Machine Learning Algorithms


This 68th episode of Learning Machines 101 discusses a broad class of unsupervised, supervised, and reinforcement machine learning algorithms which iteratively update their parameter vector by adding a perturbation computed from all of the training data. This process is repeated until a parameter vector is generated which exhibits improved predictive performance. The magnitude of the perturbation at each learning iteration is called the “stepsize” or “learning rate”, and the identity of the perturbation vector is called the “search direction”. Simple mathematical formulas are presented, based upon research from the late 1960s by Philip Wolfe and G. Zoutendijk, that ensure convergence of the generated sequence of parameter vectors. These formulas may be used as the basis for the design of artificially intelligent smart automatic learning rate selection algorithms. For more information, please visit the official website: www.learningmachines101.com
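As one concrete (and hypothetical) instance of automatic stepsize selection, the sketch below uses backtracking line search to enforce the Armijo sufficient-decrease condition, one half of the Wolfe conditions the episode discusses; the quadratic objective and all constants are invented for illustration.

```python
# Gradient descent where the stepsize at each iteration is chosen
# automatically: start at 1.0 and halve it until the Armijo
# sufficient-decrease condition holds.
def f(x):
    return (x[0] - 3.0)**2 + 2.0*(x[1] + 1.0)**2

def grad(x):
    return [2.0*(x[0] - 3.0), 4.0*(x[1] + 1.0)]

def backtracking_gd(x, iters=50, c1=1e-4, shrink=0.5):
    for _ in range(iters):
        g = grad(x)
        step = 1.0
        # shrink the stepsize until f decreases by at least
        # c1 * step * ||g||^2 (the Armijo condition)
        while f([x[i] - step*g[i] for i in range(2)]) > \
              f(x) - c1*step*sum(gi*gi for gi in g):
            step *= shrink
        x = [x[i] - step*g[i] for i in range(2)]
    return x

x = backtracking_gd([0.0, 0.0])   # converges to the minimizer (3, -1)
```

The automatically chosen stepsizes satisfy the sufficient-decrease part of the Wolfe-Zoutendijk convergence conditions, so no learning rate needs to be tuned by hand.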

Sep 26 2017
21 mins

Rank #9: LM101-013: How to Use Linear Machine Learning Software to Make Predictions (Linear Regression Software)


Hello everyone! Welcome to the thirteenth podcast in the podcast series Learning Machines 101. In this series of podcasts my goal is to discuss important concepts of artificial intelligence and machine learning in hopefully an entertaining and educational manner.

In this episode we will explain how to download and use free machine learning software which can be downloaded from the website: www.learningmachines101.com. Although we will continue to focus on critical theoretical concepts in machine learning in future episodes, it is always useful to actually experience how these concepts work in practice. For these reasons, from time to time I will include special podcasts like this one which focus on very practical issues associated with downloading and installing machine learning software on your computer. If you follow these instructions, by the end of this episode you will have installed one of the simplest (yet most widely used) machine learning algorithms on your computer. You can then use the software to make virtually any kind of prediction you like. However, some of these predictions will be good predictions, while other predictions will be poor predictions. For this reason, following the discussion in Episode 12 which was concerned with the problem of evaluating generalization performance, we will also discuss how to evaluate what your learning machine has “memorized” and additionally evaluate the ability of your learning machine to “generalize” and make predictions about things that it has never seen before.
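The episode's downloadable software is not reproduced here, but the underlying idea can be sketched in a few lines: fit a straight line by least squares on training data, then judge generalization by the prediction error on held-out data the machine has never seen. The toy numbers below are invented for illustration.

```python
# Ordinary least squares for a single predictor, plus a held-out-set
# error to measure generalization rather than memorization.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs)/n, sum(ys)/n
    slope = sum((x-mx)*(y-my) for x, y in zip(xs, ys)) / \
            sum((x-mx)**2 for x in xs)
    return slope, my - slope*mx               # (slope, intercept)

def mse(xs, ys, slope, intercept):
    return sum((slope*x + intercept - y)**2
               for x, y in zip(xs, ys)) / len(xs)

train_x, train_y = [0.0, 1.0, 2.0, 3.0], [1.1, 2.9, 5.2, 6.8]
test_x,  test_y  = [4.0, 5.0],            [9.1, 10.9]

slope, intercept = fit_line(train_x, train_y)
test_error = mse(test_x, test_y, slope, intercept)   # generalization error
```

A low training error with a high held-out error would signal memorization without generalization, which is exactly the distinction the episode draws.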

Sep 22 2014
30 mins

Rank #10: LM101-052: How to Use the Kernel Trick to Make Hidden Units Disappear


Today, we discuss a simple yet powerful idea which became popular in the machine learning literature in the 1990s called “The Kernel Trick”. The basic idea of the “Kernel Trick” is that you specify similarity relationships among input patterns rather than a recoding transformation in order to solve a nonlinear problem with a linear learning machine. It's a great magic trick... check it out at: www.learningmachines101.com

where you can obtain transcripts of this episode and download free machine learning software! Also check out the "Statistical Machine Learning Forum" on LinkedIn and follow us on Twitter using the handle: @lm101talk
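As a hypothetical sketch of the idea (not the episode's code), a kernel perceptron can solve the nonlinear XOR problem using only pairwise similarities K(x, x'), with no hidden units or hand-designed recoding of the inputs; the RBF kernel and toy data below are standard choices invented for this example.

```python
import math

# Kernel perceptron: the "weights" are one dual coefficient per training
# example, and every computation touches inputs only through the kernel.
def rbf(a, b, gamma=1.0):
    return math.exp(-gamma * sum((ai-bi)**2 for ai, bi in zip(a, b)))

def train_kernel_perceptron(data, epochs=10):
    alpha = [0.0] * len(data)                 # one dual weight per example
    for _ in range(epochs):
        for i, (x, y) in enumerate(data):
            score = sum(a*yj*rbf(xj, x)
                        for a, (xj, yj) in zip(alpha, data))
            if y * score <= 0:                # mistake-driven dual update
                alpha[i] += 1.0
    return alpha

def predict(x, alpha, data):
    s = sum(a*yj*rbf(xj, x) for a, (xj, yj) in zip(alpha, data))
    return 1 if s > 0 else -1

# XOR is not linearly separable in the raw inputs, but the RBF
# similarities make it separable without any explicit recoding.
xor = [([0.0,0.0], -1), ([0.0,1.0], 1), ([1.0,0.0], 1), ([1.1,1.0], -1)]
alpha = train_kernel_perceptron(xor)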

Jun 13 2016
28 mins

Rank #11: LM101-054: How to Build Search Engine and Recommender Systems using Latent Semantic Analysis (RERUN)


Welcome to the 54th Episode of Learning Machines 101 titled "How to Build a Search Engine, Automatically Grade Essays, and Identify Synonyms using Latent Semantic Analysis" (rerun of Episode 40). The principles in this episode are also applicable to the problem of "Market Basket Analysis"  and the design of Recommender Systems.

Check it out at: www.learningmachines101.com

and follow us on twitter: @lm101talk
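The vector-space starting point for Latent Semantic Analysis can be sketched in a few lines: documents become term-count vectors and a query is matched by cosine similarity. Full LSA additionally applies an SVD to the term-document matrix to expose latent topics; that step is omitted here, and the toy documents are invented for illustration.

```python
import math
from collections import Counter

# A miniature vector-space search engine: represent each document as a
# bag-of-words vector and rank documents by cosine similarity to a query.
def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t]*b[t] for t in a)
    na = math.sqrt(sum(v*v for v in a.values()))
    nb = math.sqrt(sum(v*v for v in b.values()))
    return dot / (na*nb) if na and nb else 0.0

docs = ["the cat sat on the mat",
        "dogs chase cats in the yard",
        "stock prices rose sharply today"]
query = vectorize("cat on a mat")
best = max(range(len(docs)), key=lambda i: cosine(query, vectorize(docs[i])))
```

The raw count vectors miss synonyms ("cat" vs. "cats"); projecting them through the SVD's latent dimensions is what lets LSA match documents that share meaning but not vocabulary.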

Jul 25 2016
29 mins

Rank #12: LM101-055: How to Learn Statistical Regularities using MAP and Maximum Likelihood Estimation (Rerun)


In this rerun of Episode 10, we discuss fundamental principles of learning in statistical environments including the design of learning machines that can use prior knowledge to facilitate and guide the learning of statistical regularities. In particular, the episode introduces fundamental machine learning concepts such as: probability models, model misspecification, maximum likelihood estimation, and MAP estimation. Check us out at: www.learningmachines101.com
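As a toy sketch of the two estimators mentioned above (not the episode's derivation), consider estimating a coin's heads probability: maximum likelihood uses the data alone, while MAP blends the data with prior knowledge encoded here as a Beta(a, b) prior. The numbers are invented for illustration.

```python
# Maximum likelihood vs. MAP estimation of a coin's heads probability.
def ml_estimate(heads, flips):
    return heads / flips

def map_estimate(heads, flips, a=5.0, b=5.0):
    # mode of the Beta posterior: the prior acts like
    # (a - 1) + (b - 1) pseudo-flips of prior knowledge
    return (heads + a - 1) / (flips + a + b - 2)

heads, flips = 9, 10
p_ml  = ml_estimate(heads, flips)    # 0.9: trusts the small sample fully
p_map = map_estimate(heads, flips)   # ~0.72: pulled toward the fair-coin prior
```

With only ten flips, the MAP estimate hedges toward the fair-coin prior; as the data grows, the two estimates converge, which is how prior knowledge guides learning without overriding it.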

Aug 16 2016
35 mins

Rank #13: LM101-015: How to Build a Machine that Can Learn Anything (The Perceptron)


In this 15th episode of Learning Machines 101, we discuss the problem of how to build a machine that can learn any given pattern of inputs and generate any desired pattern of outputs when it is possible to do so! It is assumed that the input patterns consist of zeros and ones indicating, for example, the presence or absence of a feature.

Check out: www.learningmachines101.com to obtain transcripts of this podcast!!!
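The classic perceptron learning rule the episode covers can be sketched as follows: binary feature inputs, a thresholded weighted sum, and weights nudged after each mistake. The AND pattern used here is linearly separable, so learning succeeds; the learning rate and epoch count are illustrative choices.

```python
# The perceptron learning rule on binary input patterns.
def predict(w, b, x):
    return 1 if sum(wi*xi for wi, xi in zip(w, x)) + b > 0 else 0

def train_perceptron(data, lr=0.1, epochs=20):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in data:
            err = target - predict(w, b, x)   # -1, 0, or +1
            w = [wi + lr*err*xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Input patterns of zeros and ones; the target is the logical AND.
and_data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(and_data)
```

Replacing the targets with XOR would make the perceptron fail, which is the "when it is possible to do so" caveat in the episode description.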

Oct 27 2014
30 mins

Rank #14: LM101-010: How to Learn Statistical Regularities (MAP and maximum likelihood estimation)


Learning Machines 101 - A Gentle Introduction to Artificial Intelligence and Machine Learning

Episode Summary: In this podcast episode, we discuss fundamental principles of learning in statistical environments including the design of learning machines that can use prior knowledge to facilitate and guide the learning of statistical regularities.

Aug 12 2014
34 mins

Rank #15: LM101-012: How to Evaluate the Ability to Generalize from Experience (Cross-Validation Methods)


In this episode we discuss the problem of how to evaluate the ability of a learning machine to make generalizations and construct abstractions given the learning machine is provided a finite limited collection of experiences. 
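One standard way to make that evaluation concrete is k-fold cross-validation: repeatedly hold out one fold for testing, fit on the rest, and average the held-out error as an estimate of generalization. The sketch below is illustrative (not the episode's code); the "model" is deliberately trivial, just the training-set mean.

```python
# k-fold cross-validation with a deliberately simple model.
def k_fold_indices(n, k):
    # fold i holds out examples i, i+k, i+2k, ...
    return [list(range(i, n, k)) for i in range(k)]

def cross_validate(xs, ys, k, fit, error):
    scores = []
    for held_out in k_fold_indices(len(xs), k):
        train = [i for i in range(len(xs)) if i not in held_out]
        model = fit([xs[i] for i in train], [ys[i] for i in train])
        scores.append(sum(error(model, xs[i], ys[i]) for i in held_out)
                      / len(held_out))
    return sum(scores) / k      # average held-out error

# Illustration: the "model" is just the training-set mean of y.
fit = lambda xs, ys: sum(ys) / len(ys)
error = lambda m, x, y: (m - y)**2
xs = list(range(8))
ys = [2.0, 2.1, 1.9, 2.0, 2.2, 1.8, 2.0, 2.1]
cv_error = cross_validate(xs, ys, 4, fit, error)
```

Because every example is held out exactly once, the averaged score estimates performance on data the learning machine has never seen, rather than what it has memorized.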

Sep 09 2014
32 mins

Rank #16: LM101-075: Can computers think? A Mathematician's Response (remix)


In this episode, we explore the question of what computers can do, as well as what computers can't do, using the Turing Machine argument. Specifically, we discuss the computational limits of computers and raise the question of whether such limits pertain to biological brains and other non-standard computing machines. This episode is dedicated to the memory of my mom, Sandy Golden. To learn more about Turing Machines, SuperTuring Machines, Hypercomputation, and my Mom, check out: www.learningmachines101.com

Dec 12 2018
36 mins

Rank #17: LM101-011: How to Learn About Rare and Unseen Events (Smoothing Probabilistic Laws)


Episode Summary: Today we address a strange yet fundamentally important question: how do you predict the probability of something you have never seen? Or, in other words, how can we accurately estimate the probability of rare events?
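One classic smoothing law that answers this question is additive (Laplace) smoothing, which assigns nonzero probability to events never seen in the training data; the sketch below is illustrative, with invented counts and vocabulary size.

```python
from collections import Counter

# Additive (Laplace) smoothing: pretend every word in the vocabulary
# was seen "alpha" extra times, so unseen words get nonzero probability.
def smoothed_prob(counts, word, vocab_size, alpha=1.0):
    total = sum(counts.values())
    return (counts[word] + alpha) / (total + alpha * vocab_size)

counts = Counter({"the": 5, "cat": 3, "sat": 2})   # 10 observed tokens
vocab_size = 1000                                   # words that could occur

p_seen   = smoothed_prob(counts, "the", vocab_size)   # (5+1)/(10+1000)
p_unseen = smoothed_prob(counts, "dog", vocab_size)   # (0+1)/(10+1000)
```

The unsmoothed estimate would assign "dog" probability zero and thus declare it impossible; smoothing trades a little probability mass away from seen words to keep rare and unseen events possible.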

Aug 26 2014
40 mins

Rank #18: LM101-063: How to Transform a Supervised Learning Machine into a Policy Gradient Reinforcement Learning Machine


This 63rd episode of Learning Machines 101 discusses reinforcement learning machines which become smarter with experience but do not use this acquired knowledge to modify their actions, and then explains how to build reinforcement learning machines whose behavior evolves as the learning machines become increasingly smarter. The essential idea for the construction of such reinforcement learning machines is based upon first developing a supervised learning machine. The supervised learning machine then “guesses” the desired response and updates its parameters using its guess for the desired response! Although the reasoning seems circular, this approach is in fact a variation of the important and widely used machine learning method of Expectation-Maximization. Some applications to learning to play video games, controlling walking robots, and developing optimal trading strategies for the stock market are briefly mentioned as well. Check us out at: www.learningmachines101.com
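As a toy policy-gradient sketch in the spirit of the idea above (invented for this listing, not the episode's code), a softmax "supervised learner" over two actions treats its own sampled action as the guessed desired response and reinforces it in proportion to the observed reward.

```python
import random, math

def softmax_probs(prefs):
    exps = [math.exp(p) for p in prefs]
    s = sum(exps)
    return [e / s for e in exps]

def train_bandit(rewards, lr=0.1, steps=3000, seed=0):
    rng = random.Random(seed)
    prefs = [0.0, 0.0]
    for _ in range(steps):
        probs = softmax_probs(prefs)
        a = 0 if rng.random() < probs[0] else 1   # sample an action
        r = rewards[a]()                           # observe its reward
        for i in range(2):
            # treat the sampled action as the "guessed" desired response
            # and reinforce it in proportion to the reward (REINFORCE)
            grad = (1.0 if i == a else 0.0) - probs[i]
            prefs[i] += lr * r * grad
    return softmax_probs(prefs)

# Two actions with fixed payoffs; action 1 pays more, so the learned
# policy should come to prefer it.
rewards = [lambda: 0.2, lambda: 1.0]
probs = train_bandit(rewards)
```

The machine never receives a "correct answer" label; its own guesses, weighted by reward, play that role, which is the EM-flavored circularity the episode unpacks.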

Apr 20 2017
22 mins

Rank #19: LM101-064: Stochastic Model Search and Selection with Genetic Algorithms (Rerun)


In this rerun of episode 24 we explore the concept of evolutionary learning machines. That is, learning machines that reproduce themselves in the hopes of evolving into more intelligent and smarter learning machines. This leads us to the topic of stochastic model search and evaluation. Check out the blog with additional technical references at: www.learningmachines101.com 

May 15 2017
28 mins

Rank #20: LM101-024: How to Use Genetic Algorithms to Breed Learning Machines


In this episode we introduce the concept of learning machines that can self-evolve, using simulated natural evolution, into more intelligent machines via Monte Carlo Markov Chain Genetic Algorithms. Check out: www.learningmachines101.com to obtain transcripts of this podcast and download free machine learning software!
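As a toy genetic-algorithm sketch in the spirit of the episode (invented for this listing), bit-string "learning machines" reproduce with crossover and mutation, and fitter strings are more likely to be selected as parents; the fitness function here is the classic OneMax toy problem of maximizing the number of 1 bits.

```python
import random

# A minimal genetic algorithm: tournament selection, one-point
# crossover, and rare bit-flip mutation, evolving toward the
# all-ones string under the OneMax fitness function.
def evolve(n_bits=20, pop_size=30, generations=60, seed=0):
    rng = random.Random(seed)
    fitness = lambda s: sum(s)                  # OneMax: count the 1 bits
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            p1 = max(rng.sample(pop, 3), key=fitness)   # tournament
            p2 = max(rng.sample(pop, 3), key=fitness)   # selection
            cut = rng.randrange(1, n_bits)              # crossover point
            child = p1[:cut] + p2[cut:]
            for i in range(n_bits):                     # mutation
                if rng.random() < 0.01:
                    child[i] = 1 - child[i]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

best = evolve()
```

Replacing OneMax with the predictive performance of a learning machine encoded by the bit string turns this loop into the "breeding learning machines" idea of the episode.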

Mar 10 2015
29 mins
