MobileNetV2 demo in PyTorch:
  https://pytorch.org/hub/pytorch_vision_mobilenet_v2/

MobileNetV2 paper:
  https://arxiv.org/pdf/1801.04381.pdf

Articles with good illustrations:
  https://predictiveprogrammer.com/famous-convolutional-neural-network-architectures-2/
  https://towardsdatascience.com/types-of-convolutions-in-deep-learning-717013397f4d
  https://towardsdatascience.com/visualizing-convolution-neural-networks-using-pytorch-3dfa8443e74e

Inverted residuals:
  https://towardsdatascience.com/mobilenetv2-inverted-residuals-and-linear-bottlenecks-8a4362f4ffd5

MobileNetV2 architecture parameters: t, c, n, s (one row per sequence in
the paper's architecture table; a sketch of how a row expands into blocks
appears at the end of this file):
  t = expansion factor for the bottleneck blocks (hidden channels per input channel)
  c = number of output channels of the sequence
  n = repetition count of the block (the sum of the n's gives the number of bottleneck blocks)
  s = stride of the first block in the sequence; the remaining n-1 blocks use stride 1

================

Files included in this folder:

labels.py holds all 1000 ImageNet class labels.

images/ folder holds the images used by mb_demo1 and mb_demo2.

mb_demo1 loads several images and gives their top-5 classifications.

mb_demo2 loads several images and gives their feature vector
representations. It doesn't display anything.

mb_demo3 displays all 32 layer-1 kernels, either as blended 3x3 grids or
as three 3x3 opponent-color grids. It then runs through all 32 kernels
and shows the response to a test image.

MobileNet.fsm runs the current camera image through the network and
prints the top 5 classifications.

================

Illustrative sketches of the pieces above follow (simplified
approximations with assumptions noted, not the demo code itself):
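
How one (t, c, n, s) row expands into blocks. This is a simplified
PyTorch sketch, not torchvision's implementation (in particular it keeps
the 1x1 expansion conv even when t = 1, which the real network omits):

    import torch
    import torch.nn as nn

    class InvertedResidual(nn.Module):
        """One bottleneck block: 1x1 expand -> 3x3 depthwise -> 1x1 linear project."""
        def __init__(self, in_ch, out_ch, stride, t):
            super().__init__()
            hidden = in_ch * t                    # t = expansion factor
            self.use_residual = stride == 1 and in_ch == out_ch
            self.block = nn.Sequential(
                nn.Conv2d(in_ch, hidden, 1, bias=False),        # 1x1 expansion
                nn.BatchNorm2d(hidden),
                nn.ReLU6(inplace=True),
                nn.Conv2d(hidden, hidden, 3, stride, 1,         # 3x3 depthwise conv
                          groups=hidden, bias=False),
                nn.BatchNorm2d(hidden),
                nn.ReLU6(inplace=True),
                nn.Conv2d(hidden, out_ch, 1, bias=False),       # 1x1 linear bottleneck
                nn.BatchNorm2d(out_ch),
            )

        def forward(self, x):
            out = self.block(x)
            return x + out if self.use_residual else out

    def make_sequence(in_ch, t, c, n, s):
        """First block uses stride s and maps in_ch -> c; the other n-1 use stride 1."""
        blocks = [InvertedResidual(in_ch, c, s, t)]
        blocks += [InvertedResidual(c, c, 1, t) for _ in range(n - 1)]
        return nn.Sequential(*blocks)

    # The paper's row t=6, c=32, n=3, s=2, applied to a 24-channel input:
    seq = make_sequence(24, t=6, c=32, n=3, s=2)
    print(seq(torch.randn(1, 24, 56, 56)).shape)    # torch.Size([1, 32, 28, 28])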
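
What mb_demo1 does, in outline. A minimal top-5 classification sketch
following the PyTorch Hub demo linked above; the image file name and the
name of the list imported from labels.py are assumptions here, not
mb_demo1's actual code:

    import torch
    import torch.nn.functional as F
    from torchvision import transforms
    from PIL import Image
    from labels import labels      # assumption: labels.py exposes a list named 'labels'

    model = torch.hub.load('pytorch/vision', 'mobilenet_v2', pretrained=True)
    model.eval()

    # Standard ImageNet preprocessing, as in the PyTorch Hub demo.
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    img = Image.open('images/dog.jpg')             # hypothetical file name
    batch = preprocess(img).unsqueeze(0)           # shape (1, 3, 224, 224)

    with torch.no_grad():
        probs = F.softmax(model(batch)[0], dim=0)  # 1000 class probabilities

    top5 = torch.topk(probs, 5)
    for p, idx in zip(top5.values, top5.indices):
        print(f'{p.item():.3f}  {labels[idx.item()]}')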
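
What mb_demo2 does, in outline. One common way to get a feature vector
from torchvision's MobileNetV2 is to run the convolutional trunk and
global-average-pool it, yielding the 1280-dimensional penultimate
representation (a sketch; mb_demo2 may extract its features differently):

    import torch
    import torch.nn.functional as F

    model = torch.hub.load('pytorch/vision', 'mobilenet_v2', pretrained=True)
    model.eval()

    def feature_vector(batch):
        """Return the 1280-d representation that feeds the classifier head."""
        with torch.no_grad():
            x = model.features(batch)         # (N, 1280, 7, 7) for 224x224 input
            x = F.adaptive_avg_pool2d(x, 1)   # global average pool -> (N, 1280, 1, 1)
            return torch.flatten(x, 1)        # (N, 1280)

    print(feature_vector(torch.randn(1, 3, 224, 224)).shape)   # torch.Size([1, 1280])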
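
What mb_demo3 starts from. The 32 layer-1 kernels live in the first
convolution's weight tensor; a minimal way to display them as blended
3x3 RGB grids (a sketch, not mb_demo3 itself, and it assumes torchvision's
layout where features[0][0] is the first Conv2d; the opponent-color view
would plot each of the three channels separately):

    import torch
    import matplotlib.pyplot as plt

    model = torch.hub.load('pytorch/vision', 'mobilenet_v2', pretrained=True)

    w = model.features[0][0].weight.detach()   # first conv weights: (32, 3, 3, 3)

    fig, axes = plt.subplots(4, 8, figsize=(8, 4))
    for k, ax in enumerate(axes.flat):
        kern = w[k].permute(1, 2, 0)           # (3ch, 3, 3) -> 3x3 RGB image
        kern = (kern - kern.min()) / (kern.max() - kern.min())  # rescale to [0, 1]
        ax.imshow(kern.numpy())
        ax.set_title(str(k), fontsize=6)       # kernel index
        ax.axis('off')
    plt.show()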