Deep Learning is a branch of Machine Learning that focuses on algorithms inspired by the structure and function of the human brain. These algorithms are referred to as artificial neural networks. Much like humans, neural networks learn to perform tasks by considering examples. Deep learning underpins many recent technologies, such as driverless cars.
TensorFlow is Google’s library for deep learning and artificial intelligence. TensorFlow offers APIs that facilitate Machine Learning, has a faster compilation time than other Deep Learning libraries such as Keras and Torch, and supports both CPU and GPU computing devices.
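For example, a quick way to check which devices TensorFlow can use on your machine (assuming TensorFlow 2.x is already installed) is:

```python
import tensorflow as tf

# List the CPU and GPU devices TensorFlow 2.x can see on this machine.
print("TensorFlow version:", tf.__version__)
print("CPUs:", tf.config.list_physical_devices("CPU"))
print("GPUs:", tf.config.list_physical_devices("GPU"))
```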
In this ‘Deep Learning with TensorFlow 2.0 Certification Training’ course, you will be working on various real-time projects like Emotion and Gender Detection, Auto Image Captioning using CNN and LSTM, and so much more.
Global Edulink is a leading online provider for several accrediting bodies, and gives learners the opportunity to take this exclusive course accredited by CPD. At Global Edulink, we give our fullest attention to our learners’ needs and ensure they have all the information required to proceed with the course. Learners who register will receive excellent support and discounts on future purchases, and will be eligible for a TOTUM discount card and Student ID card with amazing offers at retail stores, libraries, cinemas, gyms and their favourite restaurants.
The course will be delivered directly to you, and you will have 12 months’ access to the online learning platform from the date you join the course. The course is self-paced, so you can complete it in stages and revisit the lessons at any time.
1: Introduction to Deep Learning
What is Deep Learning?
Curse of Dimensionality
Machine Learning vs. Deep Learning
Use cases of Deep Learning
Human Brain vs. Neural Network
What is Perceptron?
Learning Rate
Epoch
Batch Size
Activation Function
Single Layer Perceptron
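To give a flavour of this module, here is a minimal single-layer perceptron sketch in TensorFlow 2.x. The tiny logical-OR dataset, learning rate, epochs and batch size are illustrative choices, not the course’s own exercise.

```python
import numpy as np
import tensorflow as tf

# Toy dataset: learn the logical OR function (illustrative only).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)
y = np.array([[0], [1], [1], [1]], dtype=np.float32)

# Single-layer perceptron: one Dense unit with a sigmoid activation function.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, activation="sigmoid", input_shape=(2,))
])
model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.1),  # learning rate
    loss="binary_crossentropy",
    metrics=["accuracy"],
)
model.fit(X, y, epochs=500, batch_size=4, verbose=0)  # epochs and batch size
print(model.predict(X))
```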
2: Getting Started with TensorFlow 2.0
Introduction to TensorFlow 2.x
Installing TensorFlow 2.x
Defining Sequence model layers
Activation Function
Layer Types
What is Convolution
Model Optimizer
Model Loss Function
Model Training
Digit Classification using Simple Neural Network in TensorFlow 2.x
Improving the model
Adding Hidden Layer
Adding Dropout
Using Adam Optimizer
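The digit-classification exercise in this module follows the kind of TensorFlow 2.x pattern sketched below: a simple network on MNIST, improved with a hidden layer, Dropout and the Adam optimizer. Layer sizes and epoch count are illustrative, not the course’s exact code.

```python
import tensorflow as tf

# Load and scale the MNIST digits.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),    # added hidden layer
    tf.keras.layers.Dropout(0.2),                     # added dropout
    tf.keras.layers.Dense(10, activation="softmax"),  # one output per digit
])
model.compile(optimizer="adam",                       # Adam optimizer
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=32)
model.evaluate(x_test, y_test)
```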
3: Convolutional Neural Network
Image Classification Example
What is Convolution
Convolutional Layer Network
Convolutional Layer
Filtering
ReLU Layer
Pooling
Data Flattening
Fully Connected Layer
Predicting a cat or a dog
Saving and Loading a Model
Face Detection using OpenCV
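A rough sketch of the kind of convolutional network used for a binary cat-vs-dog classifier, showing convolution, ReLU, pooling, flattening, a fully connected layer, and saving/loading a model. The input shape, filter counts and file name are assumptions for illustration.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu",
                           input_shape=(150, 150, 3)),   # convolution + ReLU
    tf.keras.layers.MaxPooling2D((2, 2)),                # pooling
    tf.keras.layers.Conv2D(64, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),                           # data flattening
    tf.keras.layers.Dense(64, activation="relu"),        # fully connected layer
    tf.keras.layers.Dense(1, activation="sigmoid"),      # cat (0) vs dog (1)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

model.save("cat_dog_model.h5")                              # saving a model
restored = tf.keras.models.load_model("cat_dog_model.h5")   # loading a model
```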
4: Region-based CNN (R-CNN)
Region-based CNN (R-CNN)
Selective Search Algorithm
Bounding Box Regression
SVM in R-CNN
Pre-trained Model
Model Accuracy
Model Inference Time
Model Size Comparison
Transfer Learning
Object Detection – Evaluation
mAP
IoU
R-CNN – Speed Bottleneck
Fast R-CNN
RoI Pooling
Fast R-CNN – Speed Bottleneck
Faster R-CNN
Feature Pyramid Network (FPN)
Region Proposal Network (RPN)
Mask R-CNN
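One of the evaluation ideas in this module, Intersection over Union (IoU), is easy to sketch in plain Python; the boxes below are made-up examples.

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)          # overlapping area
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)      # overlap / union

print(iou((0, 0, 100, 100), (50, 50, 150, 150)))       # roughly 0.14
```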
5: Boltzmann Machine & Autoencoder
What is a Boltzmann Machine (BM)?
Identify the issues with BM
Why did RBMs come into the picture?
Step-by-step implementation of RBM
Distribution of Boltzmann Machine
Understanding Autoencoders
Architecture of Autoencoders
Brief on types of Autoencoders
Applications of Autoencoders
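A minimal autoencoder sketch in Keras: an encoder compresses flattened 28x28 MNIST images into a 32-dimensional code and a decoder reconstructs them. The layer sizes and epoch count are illustrative choices.

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(784,))
encoded = tf.keras.layers.Dense(32, activation="relu")(inputs)        # encoder
decoded = tf.keras.layers.Dense(784, activation="sigmoid")(encoded)   # decoder

autoencoder = tf.keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

# Train the network to reproduce its own input.
(x_train, _), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784) / 255.0
autoencoder.fit(x_train, x_train, epochs=5, batch_size=256)
```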
6: Generative Adversarial Network (GAN)
Which Face is Fake?
Understanding GAN
What is a Generative Adversarial Network?
How does GAN work?
Step-by-step Generative Adversarial Network implementation
Types of GAN
Recent Advances: GAN
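A bare-bones sketch of the two halves of a GAN: a generator that maps 100-dimensional noise to a 28x28 image, and a discriminator that scores real versus fake. Layer sizes are illustrative and the adversarial training loop is omitted.

```python
import tensorflow as tf

# Generator: random noise in, fake image out.
generator = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(100,)),
    tf.keras.layers.Dense(784, activation="tanh"),
    tf.keras.layers.Reshape((28, 28)),
])

# Discriminator: image in, probability of being real out.
discriminator = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")
```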
7: Emotion and Gender Detection
Where do we use Emotion and Gender Detection?
How does it work?
Emotion Detection architecture
Face/Emotion detection using Haar Cascade
Implementation on Colab
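Face detection with a Haar cascade, as used in this project, can be sketched with OpenCV’s bundled classifier; the image file names here are placeholders.

```python
import cv2

# Load OpenCV's bundled frontal-face Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("faces.jpg")                 # placeholder input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:                      # draw a box around each face
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("faces_detected.jpg", image)
```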
8: Introduction to RNN and GRU
Issues with Feed Forward Network
Recurrent Neural Network (RNN)
Architecture of RNN
Calculation in RNN
Backpropagation and Loss calculation
Applications of RNN
Vanishing Gradient
Exploding Gradient
What is GRU?
Components of GRU
Update gate
Reset gate
Current memory content
Final memory at current time step
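In Keras the update and reset gates are computed inside the GRU layer itself; a small sequence-classification sketch looks like this (vocabulary size, embedding width and the binary task are assumptions):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=10000, output_dim=64),  # token embeddings
    tf.keras.layers.GRU(32),             # update/reset gates handled internally
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```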
9: LSTM
What is LSTM?
Structure of LSTM
Forget Gate
Input Gate
Output Gate
LSTM architecture
Types of Sequence-Based Model
Sequence Prediction
Sequence Classification
Sequence Generation
Types of LSTM
Vanilla LSTM
Stacked LSTM
CNN LSTM
Bidirectional LSTM
How to increase the efficiency of the model?
Backpropagation through time
Workflow of BPTT
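The LSTM variants listed above can be sketched in a few lines of Keras; the input shapes and layer sizes below are illustrative.

```python
import tensorflow as tf

# Vanilla LSTM: a single LSTM layer.
vanilla = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(100, 8)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Stacked LSTM: return_sequences=True passes the full sequence to the next layer.
stacked = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, return_sequences=True, input_shape=(100, 8)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Bidirectional LSTM: reads each sequence forwards and backwards.
bidirectional = tf.keras.Sequential([
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64), input_shape=(100, 8)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
```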
10: Auto Image Captioning Using CNN and LSTM
Auto Image Captioning
COCO dataset
Pre-trained model
Inception V3 model
Architecture of Inception V3
Modify last layer of pre-trained model
Freeze model
CNN for image processing
LSTM for text processing
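The image side of the captioning project relies on transfer learning: load a pre-trained Inception V3, freeze it, and replace the last layer with a new feature head whose output can feed the LSTM. The 256-unit head below is an illustrative choice, not the course’s exact architecture.

```python
import tensorflow as tf

# Pre-trained Inception V3 without its ImageNet classification top.
base = tf.keras.applications.InceptionV3(weights="imagenet",
                                         include_top=False,
                                         input_shape=(299, 299, 3))
base.trainable = False                                # freeze the pre-trained model

encoder = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),    # modified last layer
])
encoder.summary()
```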
Warren Burgess
The course was beautifully designed and explained in a very good manner. Thank you very much for your effort. This also provides a great opportunity for the people to develop their career.
Bailey Anderson
Worth the time and effort. Easy to follow and learn! Lessons were well structured, with equal emphasis on theory and on the code using the TensorFlow library.
Abel Berry
The course content is comprehensive, and my advice to students would be: plan to allocate some quality time if you want to complete this course.
Aurelia Hill
I enjoyed the pace of learning in this course. It is a bit daunting, because it makes one realise how little one knows about this topic.