Data Science at Home

Technology, machine learning and algorithms

Updated 3 days ago

Rank #195 in Technology category

Categories: Technology, News, Tech News, Science, Mathematics

iTunes Ratings

43 ratings
5 stars: 26
4 stars: 10
3 stars: 2
2 stars: 2
1 star: 3

"Yes" - By stevensforjesus, Nov 04 2017
This is so good

"Very well done" - By Charliea472, Aug 02 2016
Very clear explanations of data science concepts.


Rank #1: Episode 53: Estimating uncertainty with neural networks


Have you ever wanted to get an estimate of the uncertainty of your neural network? Clearly Bayesian modelling provides a solid framework to estimate uncertainty by design. However, there are many realistic cases in which Bayesian sampling is not really an option and ensemble models can play a role.

In this episode I describe a simple yet effective way to estimate uncertainty, without changing your neural network’s architecture or your machine learning pipeline at all.

The post with mathematical background and sample source code is published here.
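
For intuition, here is a minimal sketch of ensemble-based uncertainty estimation, assuming PyTorch. It illustrates the general idea of using ensembles and is not necessarily the exact method described in the post.

```python
import torch
import torch.nn as nn

# Hypothetical example: a small ensemble of regression networks.
# In practice each member would be trained independently on the same data.
def make_net():
    return nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1))

ensemble = [make_net() for _ in range(5)]

def predict_with_uncertainty(x):
    """Return the ensemble mean and standard deviation as a crude per-sample uncertainty."""
    with torch.no_grad():
        preds = torch.stack([net(x) for net in ensemble])  # shape: (n_models, batch, 1)
    return preds.mean(dim=0), preds.std(dim=0)

x = torch.randn(8, 4)            # dummy batch: 8 samples, 4 features
mean, std = predict_with_uncertainty(x)
print(mean.shape, std.shape)     # both torch.Size([8, 1])
```

The spread across the ensemble acts as a rough uncertainty score: it is larger where the members disagree.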

Jan 23 2019
15 mins

Rank #2: Episode 44: The predictive power of metadata


In this episode I don't talk about data. In fact, I talk about metadata.

While many machine learning models rely on certain amounts of data, e.g. text, images, audio and video, it has been shown how powerful the signal carried by metadata is, that is, all the data that remains invisible to the end user. Behind a tweet of 140 characters there are more than 140 fields of data that draw a much more detailed profile of the sender and the content she is producing... without ever considering the tweet itself.

References

You are your Metadata: Identification and Obfuscation of Social Media Users using Metadata Information https://www.ucl.ac.uk/~ucfamus/papers/icwsm18.pdf

Aug 21 2018
21 mins

Rank #3: Episode 67: Classic Computer Science Problems in Python


Today I am with David Kopec, author of Classic Computer Science Problems in Python, published by Manning Publications.

His book deepens your knowledge of problem-solving techniques from the realm of computer science by challenging you with interesting and realistic scenarios, exercises, and of course algorithms. There are examples covering the major topics any data scientist should be familiar with, such as search, clustering, graphs, and much more.

Get the book from https://www.manning.com/books/classic-computer-science-problems-in-python and use coupon code poddatascienceathome19 to get a 40% discount.

 

References

Twitter https://twitter.com/davekopec

GitHub https://github.com/davecom

classicproblems.com

Jul 02 2019
28 mins

Rank #4: Episode 43: Applied Text Analysis with Python (interview with Rebecca Bilbro)


Today’s episode is about text analysis with Python. Python is the de facto standard in machine learning: it has a large community and a generous choice of libraries, sometimes at the price of performance. But overall it is a decent language for typical data science tasks.

I am with Rebecca Bilbro, co-author of Applied Text Analysis with Python, with Benjamin Bengfort and Tony Ojeda.

We speak about the evolution of applied text analysis, tools and pipelines, and chatbots.

Aug 14 2018
36 mins

Rank #5: Episode 63: Financial time series and machine learning


In this episode I speak to Alexandr Honchar, data scientist and owner of the blog https://medium.com/@alexrachnog. Alexandr has written very interesting posts about time series analysis for financial data, and his blog is on my personal list of best tutorial blogs. We discuss financial time series and machine learning, what makes predicting the price of stocks a very challenging task, and why machine learning might not be enough. As usual, I ask Alexandr how he sees machine learning in the next 10 years. His answer, in my opinion quite futuristic, makes perfect sense.

You can contact Alexandr on

Enjoy the show!

Jun 04 2019
21 mins

Rank #6: Episode 41: How can deep neural networks reason


Today’s episode will be about deep learning and reasoning. There has been a lot of discussion about the effectiveness of deep learning models and their capability to generalize, not only across domains but also to data that such models have never seen.

But there is a research group from the Department of Computer Science at Duke University that seems to be onto something with deep learning and interpretability in computer vision.

 

References

Prediction Analysis Lab Duke University https://users.cs.duke.edu/~cynthia/lab.html

This looks like that: deep learning for interpretable image recognition https://arxiv.org/abs/1806.10574

Jul 31 2018
18 mins

Rank #7: Episode 40: Deep learning and image compression


Today’s episode will be about deep learning and the compression of data, in particular compressing images. We all know how important compressing data is: reducing the size of digital objects without affecting the quality. As a very general rule, the more one compresses an image the lower the quality, due to a number of factors like bitrate, quantization error, etcetera. I am glad to be here with Tong Chen, researcher at the School of Electronic Science and Engineering of Nanjing University, China.
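
As a toy illustration of the quantization-error point above (my own sketch, unrelated to Tong's algorithm), fewer quantization levels mean fewer bits per pixel but a larger reconstruction error:

```python
import numpy as np

# Quantize a synthetic grayscale image at different numbers of levels and
# measure the mean squared reconstruction error.
rng = np.random.default_rng(0)
image = rng.random((64, 64))              # pixel values in [0, 1]

for levels in (256, 16, 4):
    step = 1.0 / (levels - 1)
    quantized = np.round(image / step) * step
    mse = np.mean((image - quantized) ** 2)
    print(f"{levels:3d} levels (~{int(np.log2(levels))} bits/pixel): MSE = {mse:.6f}")
```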

Tong developed a deep-learning-based compression algorithm for images that seems to improve over state-of-the-art approaches like BPG, JPEG2000 and JPEG.

 

Reference

Deep Image Compression via End-to-End Learning - Haojie Liu, Tong Chen, Qiu Shen, Tao Yue, and Zhan Ma, School of Electronic Science and Engineering, Nanjing University, Jiangsu, China

Jul 24 2018
17 mins

Rank #8: Episode 52: why do machine learning models fail? [RB]


The success of a machine learning model depends on several factors and events. True generalization to data that the model has never seen before is more a chimera than a reality. But under specific conditions, a well trained machine learning model can generalize well and reach a testing accuracy similar to the one achieved during training.

In this episode I explain when and why machine learning models fail from training to testing datasets.
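
As a minimal sketch of that train/test comparison, assuming scikit-learn (my own illustration, not code from the episode):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic data: the generalization gap is the difference between training
# accuracy and accuracy on data the model has never seen.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
train_acc = model.score(X_train, y_train)
test_acc = model.score(X_test, y_test)
print(f"train: {train_acc:.3f}  test: {test_acc:.3f}  gap: {train_acc - test_acc:.3f}")
```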

Jan 17 2019
15 mins

Rank #9: Episode 66: More intelligent machines with self-supervised learning


In this episode I talk about a new learning paradigm that may seem a bit blurry and not really different from the methods we already know, such as supervised and unsupervised learning. The method I introduce here is called self-supervised learning.
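
To make the idea concrete, here is a minimal sketch of one classic pretext task, rotation prediction, assuming PyTorch. It is my own illustration of self-supervision in general, not necessarily what the episode covers.

```python
import torch
import torch.nn as nn

def rotate_batch(images):
    """Rotate each image by a random multiple of 90 degrees; the rotation index is the label.
    The labels come from the data itself, with no human annotation."""
    labels = torch.randint(0, 4, (images.size(0),))
    rotated = torch.stack(
        [torch.rot90(img, k=int(k), dims=(1, 2)) for img, k in zip(images, labels)]
    )
    return rotated, labels

# A deliberately tiny "encoder" that classifies the applied rotation (4 classes).
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU(), nn.Linear(128, 4))

images = torch.randn(16, 3, 32, 32)          # dummy batch of RGB images
x, y = rotate_batch(images)
loss = nn.CrossEntropyLoss()(encoder(x), y)  # pretext loss; the learned features can transfer downstream
loss.backward()
```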

Enjoy the show!

 

Don't forget to subscribe to our Newsletter at amethix.com and get the latest updates in AI and machine learning. We do not spam. Promise!

 

References

Deep Clustering for Unsupervised Learning of Visual Features

Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey

Jun 25 2019
18 mins

Rank #10: Episode 48: Coffee, Machine Learning and Blockchain


In this episode - which I advise you to consume at night, in a quiet place - I speak about private machine learning and blockchain, while I sip a cup of coffee in my home office. There are several reasons why I believe we should start thinking about private machine learning... It doesn't really matter which approach becomes successful and gets adopted, as long as it makes private machine learning possible. If people own their data, they should also own the by-products of such data.

Decentralized machine learning makes this scenario possible.

Oct 21 2018
28 mins

Rank #11: Episode 45: why do machine learning models fail?


The success of a machine learning model depends on several factors and events. True generalization to data that the model has never seen before is more a chimera than a reality. But under specific conditions, a well trained machine learning model can generalize well and reach a testing accuracy similar to the one achieved during training.

In this episode I explain when and why machine learning models fail from training to testing datasets.

Aug 28 2018
16 mins

Rank #12: Episode 54: Reproducible machine learning


In this episode I speak about how important reproducible machine learning pipelines are. When you collaborate with diverse teams, tasks get distributed among different individuals, and everyone has good reasons to change parts of your pipeline, which leads to confusion and a number of variants that soon explodes. In all those cases, tracking data and code is extremely helpful for building models that are reproducible anytime, anywhere. Listen to the podcast and learn how.
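
As one small, assumed illustration of what such tracking can look like (not the specific tooling discussed in the episode), pinning seeds and storing a fingerprint of the data and parameters next to each run already goes a long way:

```python
import hashlib
import json
import random

import numpy as np

def run_fingerprint(data_path, params, seed=42):
    """Pin random seeds and return a record that ties a run to its exact data and parameters."""
    random.seed(seed)
    np.random.seed(seed)
    with open(data_path, "rb") as f:
        data_sha256 = hashlib.sha256(f.read()).hexdigest()
    return {"data_sha256": data_sha256, "params": params, "seed": seed}

# "train.csv" and the parameters below are hypothetical placeholders.
info = run_fingerprint("train.csv", {"lr": 0.01, "epochs": 10})
with open("run_info.json", "w") as f:
    json.dump(info, f, indent=2)
```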

Mar 09 2019
11 mins

Rank #13: Episode 57: Neural networks with infinite layers


How are differential equations related to neural networks? What are the benefits of re-thinking a neural network as a differential equation engine? In this episode we explain all this and provide some material that is worth studying. Enjoy the show!

 

[Figure: residual block]
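
For reference, the usual way to state the connection (my summary, following [1] and [5]): a residual block is one explicit Euler step of an ordinary differential equation, and letting the step size shrink gives a network with, in effect, infinitely many layers.

```latex
% Residual block (ResNet [1]): one Euler step
h_{t+1} = h_t + f(h_t, \theta_t)

% Continuous-depth limit (Neural ODEs [5])
\frac{\mathrm{d}h(t)}{\mathrm{d}t} = f\bigl(h(t), t, \theta\bigr)
```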

 

References

[1] K. He, et al., “Deep Residual Learning for Image Recognition”, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770-778, 2016

[2] S. Hochreiter, et al., “Long short-term memory”, Neural Computation 9(8), pages 1735-1780, 1997.

[3] Q. Liao, et al.,”Bridging the gaps between residual learning, recurrent neural networks and visual cortex”, arXiv preprint, arXiv:1604.03640, 2016.

[4] Y. Lu, et al., “Beyond Finite Layer Neural Networks: Bridging Deep Architectures and Numerical Differential Equation”, Proceedings of the 35th International Conference on Machine Learning (ICML), Stockholm, Sweden, 2018.

[5] T. Q. Chen, et al., “Neural Ordinary Differential Equations”, Advances in Neural Information Processing Systems 31, pages 6571-6583, 2018.

Apr 23 2019
16 mins

Rank #14: Episode 60: Predicting your mouse click (and a crash course in deeplearning)


Deep learning is the future. Get a crash course on deep learning. Now! In this episode I speak to Oliver Zeigermann, author of Deep Learning Crash Course published by Manning Publications at https://www.manning.com/livevideo/deep-learning-crash-course

Oliver (Twitter: @DJCordhose) is a veteran of neural networks and machine learning. In addition to the course - which teaches you concepts from prototype to production - he's working on a really cool project that predicts something people do every day... clicking their mouse.

If you use promo code poddatascienceathome19 you get a 40% discount on all products on the Manning platform.

Enjoy the show!

 

References:

Deep Learning Crash Course (Manning Publications)

https://www.manning.com/livevideo/deep-learning-crash-course?a_aid=djcordhose&a_bid=e8e77cbf

Companion notebooks for the code samples of the video course "Deep Learning Crash Course"

https://github.com/DJCordhose/deep-learning-crash-course-notebooks/blob/master/README.md

Next-button-to-click predictor source code

https://github.com/DJCordhose/ux-by-tfjs

May 16 2019
39 mins

Rank #15: Episode 70: Validate neural networks without data with Dr. Charles Martin


In this episode, I am with Dr. Charles Martin from Calculation Consulting, a machine learning and data science consulting company based in San Francisco. We speak about the nuts and bolts of deep neural networks and some impressive findings about the way they work.

The questions that Charles answers in the show are essentially two:

    Why is regularisation in deep learning seemingly quite different than regularisation in other areas of ML?
    How can we dominate DNNs in a theoretically principled way?

 

References 
Jul 23 2019
44 mins
