
A Primer to learn Machine Learning


Thursday June 06, 2019,


This article has been co-written by Ravikant Bhargava and Saahil Sachdeva


We often get asked, “How do I learn Machine Learning?” or some variant of the same. So here I am, penning my thoughts as a ready reply for such queries. Maybe it will help some searching soul.

Over the years, first learning myself and then helping others groom their skills, I have observed a few patterns that seem to work well. This post is a collection of my thoughts on those patterns. But remember, everyone has a unique background, skill set and way of learning, and hence every journey is unique. So use this post only as a guide and keep optimizing your own path as you move along, just like a neural network does.


Balance theory and practical

Many people have the notion that you should not go about implementing something until you have a theoretical understanding of it. I myself have a compulsive habit of postponing the implementation till I get the ‘feel’ of things. But resist this temptation while learning ML. In my view, theory and practice should go hand in hand. You learn by trying new things, and you try new things by learning. Many aspects or results of ML are empirical in nature, i.e. based on observation and not mathematically proven. So if even the established names in this field rely on observation rather than math, there is no shame in trying a few things yourself just because your gut said so.


Here are my pointers to help you learn better:


Theory

  1. Pick a course and follow it thoroughly: lecture videos, slides, assignments and all. There are a number of MOOCs available on sites like Coursera, Udemy, Udacity, UpGrad etc. Many universities have also recorded their lectures and made them available online for free. Here is one such link. As a professional courtesy, don’t forget to star the repo if you find it useful.
  2. ML is an empirical science, but there is a ‘feel’ aspect to many of its workings. Always dig through the internet for anything that is unintuitive or that you do not understand. And trust me, this is the most beautiful part of the learning. When you are able to appreciate the numbers, the matrices, the differentials, the graphs and the alien-looking math with an intuitive feeling, you will definitely feel a surge of satisfaction. (A tiny worked example of this follows the list.)
  3. Having a book is not necessary as there is enough material online. But keeping one handy for a quick sneak peek might help.
  4. If you have graduated from basic theory and are now reading papers, then read them thoroughly. Many tend to overlook sections like related work, experiments, data analysis, ablation studies etc. because they seem boring. But in my experience, they are as important as the sections on implementation or results. Related work can point you to another paper or an approach that might be better aligned with the problem you are trying to solve. Analysing the dataset used in training will give you insight into whether that data is similar to yours, and why your results may be poorer than the reported ones. Experiments are probably the most important section in my view. They suggest ways in which you can play with the network, and they are a rich source for getting the ‘feel’ of the given architecture.
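
To make the ‘feel’ point (2 above) concrete, here is a tiny, purely illustrative sketch in plain NumPy. The data, learning rate and number of steps are made up by me for illustration; the point is only to watch gradient descent fit a straight line and connect the differentials from the lectures to something you can print and stare at.

    # Toy gradient descent in plain NumPy: fit y = w*x + b to a few noisy points.
    import numpy as np

    # made-up tiny dataset: roughly y = 2x + 1 with a little noise
    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = 2.0 * x + 1.0 + np.random.randn(5) * 0.1

    w, b = 0.0, 0.0   # arbitrary starting guess
    lr = 0.05         # learning rate (try changing it and see what happens)

    for step in range(200):
        y_hat = w * x + b                  # forward pass
        loss = np.mean((y_hat - y) ** 2)   # mean squared error
        # gradients of the loss w.r.t. w and b (the 'differentials')
        dw = np.mean(2 * (y_hat - y) * x)
        db = np.mean(2 * (y_hat - y))
        w -= lr * dw                       # one optimization step
        b -= lr * db
        if step % 50 == 0:
            print(f"step {step:3d}  loss {loss:.4f}  w {w:.3f}  b {b:.3f}")

Run it a few times with different learning rates and you will see, first-hand, why people make such a fuss about tuning them.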


Practical


  1. Course assignments are really helpful in structuring your learning and reducing the fear of the math and the code. Generally, course assignments start from the very basics, so you can follow along easily. This helps you learn difficult-looking concepts and see how they can be implemented with a few library API calls.
  2. Courses are just the first step. Real learning comes when you get your hands dirty without much supervision. For that, pick a project on a topic of your interest and start playing with it. For example, my first project was on object detection and I started with the awesome py-faster-rcnn repo. The project can be an active open source project that you build on, or a completely new project started from scratch.
  3. The benefit of an open source project is that if you get stuck somewhere, there are others who can help you. Nowadays there are several frameworks like TensorFlow, PyTorch, Detectron etc. They provide APIs to ease your development, pre-trained models and code setup to start building. Their community support is pretty wide and active, so working with them eases the learning process.
  4. If you are starting from scratch, then start with something simple, for example image classification on a standard dataset. Using standard datasets like ImageNet or MS COCO takes away the pain of collecting, preprocessing, annotating and verifying data, which, trust me, is a big pain! Collecting data manually or by scraping the web generally introduces unintended biases or too much variance. So reduce the data dependency and focus on the meat of the problem. (A minimal sketch along these lines follows the list.)
  5. If you are comfortable with the basics, then you may pick more difficult challenges like Kaggle competitions. Whether it is a Kaggle challenge or a GitHub project, make a habit of going through issues/posts by other users. You will be surprised at how much you can learn just by going through other users’ discussions!
  6. It is an empirical science, so be content with some of the observations that you cannot explain. Some things are learned over time.
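
For points 3 and 4 above (and the ‘few library API calls’ in point 1), here is a minimal sketch of what a first project can look like, assuming PyTorch and a recent torchvision are installed. The dataset (CIFAR-10), the backbone (ResNet-18) and the hyperparameters are just illustrative choices on my part, not a prescription, but the shape of the loop is roughly what most such projects end up looking like.

    # Fine-tune a pre-trained ResNet on CIFAR-10, a standard dataset that
    # torchvision downloads for you -- no data collection or annotation pain.
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, transforms, models

    transform = transforms.Compose([
        transforms.Resize(224),          # the pre-trained ResNet expects larger inputs
        transforms.ToTensor(),
    ])
    train_set = datasets.CIFAR10(root="./data", train=True, download=True, transform=transform)
    train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

    # torchvision >= 0.13 weights API; replace the head for 10 CIFAR classes
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 10)

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    criterion = nn.CrossEntropyLoss()

    model.train()
    for step, (images, labels) in enumerate(train_loader):
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        if step % 100 == 0:
            print(f"step {step}  loss {loss.item():.3f}")
        if step == 300:                  # just a taste; train longer for real results
            break

Swap the dataset or the backbone for other torchvision offerings and the rest of the loop barely changes, which is exactly why these frameworks ease the learning process.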


Some myths


  1. You don’t need a very strong background in maths for a career in ML. Having it helps, but it is also something that can be learned along the way. Of course, there are core research roles that require such skills to come up with novel ideas, but for general industry roles it is not a prerequisite!
  2. On the other hand, don’t think that by running an open source project you have become a ‘Machine Learning Expert’. The way things are changing, I doubt anyone can claim to know it all! Just be a curious learner and you will go far.
  3. Don’t do Machine Learning just because of its hype. The hype will die down. It has happened before and it will happen again. Remember that all the big guys, the Hintons, the Ngs, the Karpathys et al. of the world, were doing these things before they were cool. So if you want to be like them, be ready for the grind. The end results are generally cool; the journey, not so much.


Some other ways to learn


  1. Meet people, attend meetups. I have learnt a great deal from other ML practitioners in the past and I still do. Even if the topics presented go over your head, you will at least be inspired by the speakers or their work. The added benefit of meeting people is that you get insights into the work being done at other places. It may also open doors for you.
  2. Once you have learnt something, try to give it back to the community. Hold meetups, give talks. I cannot emphasize it more so I will rest my case with a quote from Richard P. Feynman, one of the best teachers the world has ever seen: “What I cannot create, I do not understand.”
  3. Follow people/labs/organizations whose work you like on Twitter, Facebook etc. I would really recommend Twitter, as several Deep Learning researchers are pretty active on the platform. These platforms serve as a quick way to stay updated on the latest work in your field of interest.
  4. In case you feel really lost on your learning journey, don’t hesitate to ask people for help. GitHub issues and StackOverflow are the best places to ask. Also, feel free to ask your doubts on social media if you are not afraid of being labelled a ‘nerd’. My experience has been that most people are helpful irrespective of the perceived stupidity of your question. So don’t overthink it and just go for it. A word of advice here: don’t expect that the person best placed to answer your question will be the one who answers it. Generally, that person is really busy doing great things for the greater good of the community. So have patience and someone will generally help.


I think that is all the ‘gyaan’ that I wanted to share with you all. I will try to update this piece as my own journey moves ahead. Remember, we are not so different from neural nets. We all have our own inefficiencies, our own biases. We will keep making mistakes. But that should not stop us from improving every day. Keep learning in small steps and keep optimizing yourself.

Happy Learning!




Ravikant Bhargava is Chief Research Officer at Silversparro. Silversparro provides AI-powered Video Analytics for workplace productivity. Silversparro was founded by IIT Delhi alumni Abhinav Kumar Gupta, Ankit Agarwal and Ravikant Bhargava, and works with clients such as Viacom18, Policybazaar, Aditya Birla Finance Limited, UHV Technologies etc. Silversparro is backed by the NVIDIA Inception program and marquee investors such as Anand Chandrashekaran (Facebook), Dinesh Agarwal (Indiamart), Rajesh Sawheny (Innerchef) etc.