Posted on Nov 27, 2021

Weight Decay: basics with implementations

Posted in Machine Learning

For anyone working with Machine Learning models, overfitting is a familiar challenge. We can overcome it by going out and collecting more data, but that can be costly, time-consuming, or sometimes even impossible for individuals! So, what do we do? We can turn to regularization techniques. Weight Decay is...
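As a taste of the technique, here is a minimal sketch of weight decay (L2 regularization) folded into a plain gradient-descent step; the function and parameter names (`sgd_step`, `lr`, `wd`) are illustrative, not from the post.

```python
import numpy as np

def sgd_step(w, grad, lr=0.1, wd=0.01):
    # Weight decay adds a shrinkage term to the loss gradient:
    # w <- w - lr * (grad + wd * w), pulling every weight toward zero.
    return w - lr * (grad + wd * w)

w = np.array([1.0, -2.0])
# With a zero loss gradient, only the decay term acts,
# uniformly shrinking the weights by a factor of (1 - lr * wd).
w_new = sgd_step(w, grad=np.zeros(2))
```

Even this toy update shows the core idea: the penalty acts on every step, independent of the data gradient.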

Posted on Jul 16, 2021

Language Model Integration in Encoder-Decoder Speech Recognition

Posted in Machine Learning

Currently, attention-based recurrent encoder-decoder models provide an elegant way of building end-to-end models for different tasks, like automatic speech recognition (ASR), machine translation (MT), etc. An end-to-end ASR model folds the traditional acoustic model, pronunciation model, and language model (LM) into a single network. An encoder maps the input speech to a sequence of higher-level...
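One common way to integrate an external LM into such a decoder is shallow fusion, where the decoder's token log-probabilities are interpolated with the LM's at each decoding step; this sketch assumes that approach and illustrative names (`shallow_fusion`, `lam`), since the post's exact method is not shown here.

```python
import numpy as np

def shallow_fusion(asr_logprobs, lm_logprobs, lam=0.3):
    # Per-candidate-token combined score at one decoding step:
    # log p_fused = log p_asr + lam * log p_lm
    return asr_logprobs + lam * lm_logprobs

# Two candidate tokens: the ASR model prefers the first,
# and the LM term adjusts the margin between them.
scores = shallow_fusion(np.array([-1.0, -2.0]), np.array([-0.5, -3.0]))
```

The weight `lam` trades off acoustic evidence against the LM's prior over word sequences.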

Posted on Jul 04, 2021

What is Label Smoothing in Machine Learning?

Posted in Machine Learning

Nowadays, for a lot of tasks we use deep neural networks, or deep learning. While working with deep learning, we very often face problems like overfitting and overconfidence. Overfitting is relatively well studied and can be tackled with several strategies, such as dropout, weight regularization, and early stopping. We have tools for tackling overconfidence...
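The idea behind label smoothing can be sketched in a few lines: replace the hard 0/1 targets with softened ones, giving the true class probability 1 - eps and spreading eps uniformly over all K classes (the helper name `smooth_labels` is illustrative).

```python
import numpy as np

def smooth_labels(one_hot, eps=0.1):
    # Soften a one-hot target: true class gets (1 - eps) + eps / K,
    # every other class gets eps / K, so the result still sums to 1.
    k = one_hot.shape[-1]
    return one_hot * (1.0 - eps) + eps / k

y = np.array([0.0, 0.0, 1.0, 0.0])
y_smooth = smooth_labels(y)
```

Training against the softened targets discourages the network from pushing its predicted probability for the true class toward exactly 1, which is one source of overconfidence.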
