| S.No | Topic | Sub-Topics |
| --- | --- | --- |
| 1 | Deep Learning | What is deep learning, History, DL vs ML, Applications, Challenges |
| 2 | Linear Algebra Refresher | Vectors, Matrices, Dot product, Eigenvalues, Matrix operations |
| 3 | Probability & Statistics | Random variables, Probability distributions, Mean & variance, Bayes theorem, Expectation |
| 4 | Optimization Basics | Loss functions, Cost functions, Convex vs non-convex, Gradient descent, Learning rate |
| 5 | Neural Network Basics | Perceptron, Neurons, Weights & bias, Activation functions, Forward propagation |
| 6 | Activation Functions | Sigmoid, Tanh, ReLU, Leaky ReLU, Softmax |
| 7 | Backpropagation | Chain rule, Gradient computation, Weight updates, Vanishing gradients, Exploding gradients |
| 8 | Training Deep Neural Networks | Epochs, Batch size, Initialization, Convergence, Overfitting |
| 9 | Regularization Techniques | L1, L2, Dropout, Early stopping, Data augmentation |
| 10 | Optimizers | SGD, Momentum, RMSProp, Adam, AdamW |
| 11 | Loss Functions | MSE, MAE, Cross-entropy, Hinge loss, KL divergence |
| 12 | Deep Learning Frameworks | TensorFlow basics, Keras API, PyTorch basics, Autograd, Model training loop |
| 13 | Convolutional Neural Networks (CNN) | Convolution layers, Pooling, Padding, Stride, Feature maps |
| 14 | CNN Architectures | LeNet, AlexNet, VGG, ResNet, Inception |
| 15 | Image Classification | Dataset preparation, Transfer learning, Fine-tuning, Evaluation metrics, Deployment basics |
| 16 | Sequence Modeling | Time series, Sequential data, Tokenization, Padding, Masking |
| 17 | Recurrent Neural Networks (RNN) | RNN architecture, BPTT, Vanishing gradients, Use cases, Limitations |
| 18 | LSTM & GRU | Cell states, Gates, LSTM vs GRU, Applications, Training tips |
| 19 | Attention Mechanism | Why attention, Self-attention, Encoder-decoder attention, Scaled dot-product, Benefits |
| 20 | Transformers | Transformer architecture, Positional encoding, Multi-head attention, Encoder-decoder, Training |
| 21 | Natural Language Processing with DL | Word embeddings, RNN for NLP, Transformers for NLP, Text classification, NER |
| 22 | Autoencoders | Basic autoencoder, Sparse AE, Denoising AE, Variational AE, Use cases |
| 23 | Generative Models | GAN basics, Generator & discriminator, Training instability, DCGAN, Applications |
| 24 | Advanced CNN Applications | Object detection, Image segmentation, Face recognition, Medical imaging, OCR |
| 25 | Transfer Learning & Fine-tuning | Pretrained models, Layer freezing, Domain adaptation, Benefits, Limitations |
| 26 | Model Evaluation | Accuracy, Precision, Recall, F1-score, ROC-AUC |
| 27 | Hyperparameter Tuning | Grid search, Random search, Bayesian optimization, Learning rate schedules, Batch size tuning |
| 28 | Model Optimization | Pruning, Quantization, Knowledge distillation, Mixed precision, Inference optimization |
| 29 | Deployment of DL Models | REST APIs, Model serving, Cloud deployment, Edge deployment, Monitoring |
| 30 | Advanced & Emerging Topics | Self-supervised learning, Multimodal models, Foundation models, Ethical AI, Future trends |