Deep Neural Frameworks

PYTORCH

  1. Deep Learning with PyTorch - the book

FAST.AI

KERAS

A make-sense introduction to Keras - several videos on the topic, going through many network types, creating custom activation functions, and working through examples.

+ Two extra videos from the same author: examples and examples-2

Didn’t read:

  1. Stateful LSTM - example script showing how to use stateful RNNs to model long sequences efficiently.

  2. Conv LSTM - this script demonstrates the use of a convolutional LSTM network to predict the next frame of an artificially generated movie containing moving squares.

How to force Keras to use TensorFlow and not Theano (set the .bat file).
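A minimal sketch of the same switch without a .bat file, assuming the multi-backend Keras that reads the KERAS_BACKEND environment variable (editing the "backend" field in ~/.keras/keras.json also works):

```python
import os

# Must be set before the first `import keras`; "tensorflow" instead of "theano".
os.environ["KERAS_BACKEND"] = "tensorflow"

import keras  # should print "Using TensorFlow backend."
```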

Callbacks - how to create an ROC AUC score callback with Keras - with a code example.
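A minimal sketch of such a callback (names are illustrative, not the linked post's code), assuming binary classification and scikit-learn available:

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from tensorflow import keras

class RocAucCallback(keras.callbacks.Callback):
    def __init__(self, x_val, y_val):
        super().__init__()
        self.x_val = x_val
        self.y_val = y_val

    def on_epoch_end(self, epoch, logs=None):
        # Predicted probabilities on the held-out validation set
        y_pred = self.model.predict(self.x_val, verbose=0)
        auc = roc_auc_score(self.y_val, y_pred)
        print(f"epoch {epoch + 1}: val ROC AUC = {auc:.4f}")

# usage: model.fit(x_train, y_train, callbacks=[RocAucCallback(x_val, y_val)])
```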

Batch size vs. iterations in a Keras NN.

Keras metrics - classification, regression, and custom metrics.

Keras Metrics 2 - accuracy, ROC AUC, classification, regression R².

Introduction to regression models in Keras, using MSE and comparing baseline vs. wide vs. deep networks.
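A minimal baseline regression sketch along those lines (layer sizes and n_features are placeholders, not the tutorial's exact code):

```python
from tensorflow import keras

def build_baseline(n_features):
    model = keras.Sequential([
        keras.layers.Dense(64, activation="relu", input_shape=(n_features,)),
        keras.layers.Dense(1),  # single linear output unit for regression
    ])
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model

# the "wide" and "deep" variants mostly just change the Dense widths and depth
```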

How does Keras calculate accuracy? Formula and explanation:

For binary accuracy, it compares the label with the rounded predicted float, i.e. greater than 0.5 becomes 1, smaller becomes 0.

For categorical accuracy, we take the argmax of the label and of the prediction and compare their positions.

In both cases, we average the results.
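A NumPy sketch of those two rules (Keras implements the same logic with backend tensor ops):

```python
import numpy as np

def binary_accuracy(y_true, y_pred):
    # round the predicted probability: > 0.5 -> 1, otherwise 0, then compare to the label
    return np.mean(y_true == np.round(y_pred))

def categorical_accuracy(y_true, y_pred):
    # argmax of the one-hot label vs. argmax of the predicted distribution
    return np.mean(np.argmax(y_true, axis=-1) == np.argmax(y_pred, axis=-1))
```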

Custom metrics (precision, recall) in Keras, taken from here, including entropy and F1.
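A sketch of batch-wise precision/recall metrics in that style (the linked source's exact formulas may differ slightly):

```python
from tensorflow.keras import backend as K

def precision(y_true, y_pred):
    true_pos = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    pred_pos = K.sum(K.round(K.clip(y_pred, 0, 1)))
    return true_pos / (pred_pos + K.epsilon())

def recall(y_true, y_pred):
    true_pos = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    actual_pos = K.sum(K.round(K.clip(y_true, 0, 1)))
    return true_pos / (actual_pos + K.epsilon())

# usage: model.compile(..., metrics=[precision, recall])
```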

KERAS MULTI GPU

  1. Note: this probably does not apply to Adam - is there a reference?

  2. Pitfalls in GPU training - a very important post: be aware that you can corrupt your weights by using the wrong combination of batch size and input size in Keras/TensorFlow. When you do multi-GPU training, it is important to feed all the GPUs with data. It can happen that the very last batch of your epoch has less data than defined (because the size of your dataset cannot be divided exactly by the size of your batch). This might cause some GPUs not to receive any data during the last step. Unfortunately some Keras layers, most notably the Batch Normalization layer, can't cope with that, leading to NaN values appearing in the weights (the running mean and variance in the BN layer). One way to avoid it is shown in the sketch after this list.
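A minimal sketch of making the dataset size divisible by the global batch size so every GPU gets a full batch on every step (names are illustrative, not the post's code):

```python
def trim_to_batch(x, y, global_batch_size):
    # drop the remainder so no step ends with a partial (or empty) per-GPU batch
    n = (len(x) // global_batch_size) * global_batch_size
    return x[:n], y[:n]

# with tf.data the same idea is a single flag:
# dataset = dataset.batch(global_batch_size, drop_remainder=True)
```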

KERAS FUNCTIONAL API

What is it and how do you use it? A flexible way to declare layers in parallel, i.e. parallel paths for inputs, feature extraction, models, and outputs, as seen in the following figures:

Neural network graph with a shared feature extraction layer

Neural network graph with multiple inputs

Neural network graph with multiple outputs
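A minimal functional-API sketch combining a shared feature-extraction layer, multiple inputs, and multiple outputs (shapes and sizes are arbitrary):

```python
from tensorflow import keras
from tensorflow.keras import layers

input_a = keras.Input(shape=(16,), name="input_a")
input_b = keras.Input(shape=(16,), name="input_b")

shared = layers.Dense(32, activation="relu")  # shared feature extraction layer
features = layers.concatenate([shared(input_a), shared(input_b)])

out_main = layers.Dense(1, activation="sigmoid", name="main")(features)
out_aux = layers.Dense(1, activation="sigmoid", name="aux")(features)

model = keras.Model(inputs=[input_a, input_b], outputs=[out_main, out_aux])
model.compile(optimizer="adam", loss="binary_crossentropy")
```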

KERAS EMBEDDING LAYER

Keras: Predict vs Evaluate

Explained here:

.predict() generates output predictions based on the input you pass it (for example, the predicted characters in the MNIST example)

.evaluate() computes the loss based on the input you pass it, along with any other metrics that you requested in the metrics param when you compiled your model (such as accuracy in the MNIST example)
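Side by side, on a model already compiled with metrics=["accuracy"] (x_test/y_test stand in for your own data):

```python
preds = model.predict(x_test)               # raw output predictions, no labels needed
loss, acc = model.evaluate(x_test, y_test)  # loss plus the compiled metrics, needs labels
```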

Keras metrics

For classification methods - how does Keras calculate accuracy, all functions.

LOSS IN KERAS

Why is the training loss much higher than the testing loss? A Keras model has two modes: training and testing. Regularization mechanisms, such as Dropout and L1/L2 weight regularization, are turned off at testing time.

The training loss is the average of the losses over each batch of training data. Because your model is changing over time, the loss over the first batches of an epoch is generally higher than over the last batches. On the other hand, the testing loss for an epoch is computed using the model as it is at the end of the epoch, resulting in a lower loss.
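A quick illustration of the two modes using a Dropout layer directly:

```python
import tensorflow as tf

drop = tf.keras.layers.Dropout(0.5)
x = tf.ones((1, 10))

print(drop(x, training=True))   # about half the values zeroed, the rest scaled up
print(drop(x, training=False))  # identical to the input: dropout is off at test time
```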
