FCNN, Optimization, and Initialization

Kaiming initialization
The most useful part of this lecture, to me, is this slide; I prefer to look at Kaiming initialization from an experimental angle: it avoids exploding/vanishing gradients.
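That experimental angle can be reproduced in a few lines. The sketch below (my own illustration, not the lecture's code) pushes random data through a deep stack of linear + ReLU layers: with a naive small fixed standard deviation the activations collapse toward zero, while Kaiming's std = sqrt(2 / fan_in) keeps their scale roughly constant.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, depth = 512, 10
x0 = rng.standard_normal((100, dim))

def forward_stats(std_fn):
    """Run x0 through `depth` linear+ReLU layers, recording activation stds."""
    x = x0
    stds = []
    for _ in range(depth):
        W = rng.standard_normal((dim, dim)) * std_fn(dim)
        x = np.maximum(0.0, x @ W)  # linear layer followed by ReLU
        stds.append(x.std())
    return stds

naive = forward_stats(lambda fan_in: 0.01)                 # fixed small std
kaiming = forward_stats(lambda fan_in: np.sqrt(2.0 / fan_in))

print("naive last-layer std:  ", naive[-1])    # shrinks toward zero
print("kaiming last-layer std:", kaiming[-1])  # stays on the order of 1
```

The same collapse appears in the backward pass, which is why the naive scheme trains so poorly in deep networks.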
Linear Classifiers

Viewpoints of linear classifiers
Hard cases for linear classifiers

Hinge loss
Hinge loss is a loss function that is linear outside the margin and zero inside it. It seems like an interesting linear loss function, but not very useful for now.

SVM loss
Estimating the value of the loss function at initialization is a quick sanity check for bugs.

Regularization
From the linear-model viewpoint, scaling the weights by a constant factor leaves the predictions unchanged, but the norm of the weight matrix changes.
It follows that the weights are not unique, so the weight matrix needs to be constrained; the usual constraint is regularization.
A new way to think about the regularization term 🤔 ===> it expresses our preference, i.e. prior knowledge.
The two remaining viewpoints: preventing overfitting.
Revisiting L1 / L2 regularization:
L1 regularization: prefers concentrating the weights
L2 regularization: prefers spreading the weights evenly
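The init-time sanity check above can be made concrete. A minimal sketch (my own, not the course's exact code) of the multiclass SVM hinge loss with an L2 penalty: with near-zero weights all class scores are roughly equal, so each example contributes about (num_classes - 1) * margin to the loss.

```python
import numpy as np

def svm_loss(W, X, y, reg=1e-3, margin=1.0):
    """Multiclass SVM (hinge) loss with L2 regularization."""
    scores = X @ W                                   # (N, C) class scores
    correct = scores[np.arange(len(y)), y][:, None]  # (N, 1) true-class scores
    margins = np.maximum(0.0, scores - correct + margin)
    margins[np.arange(len(y)), y] = 0.0              # don't count the true class
    data_loss = margins.sum() / len(y)
    return data_loss + reg * np.sum(W * W)           # add the L2 penalty

rng = np.random.default_rng(0)
N, D, C = 50, 20, 10
X = rng.standard_normal((N, D))
y = rng.integers(0, C, size=N)
W = rng.standard_normal((D, C)) * 1e-4               # tiny initial weights

loss = svm_loss(W, X, y)
print(loss)  # close to (C - 1) * margin = 9 at initialization
```

If the initial loss is far from that expected value, something in the loss implementation or the data pipeline is likely buggy.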
Lecture 2: Image Classification

Introduction
Image classification is the task of assigning a label…
Image classification can be a building block for many applications.
More robust, data-driven approaches

Understanding the dataset
A brief introduction to the basic structure of datasets such as MNIST and CIFAR-100.
The Omniglot dataset is introduced in connection with few-shot learning.

Choosing a model
Nearest Neighbor
Train: memorize the training images and their corresponding labels.
Test: compute a distance metric between the test image and all the training images, then predict the label of the nearest training image.
With N examples…
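The two steps above can be sketched as a tiny classifier (class and method names here are illustrative, not from the lecture code); training is just memorization, and prediction scans all training examples under an L1 distance.

```python
import numpy as np

class NearestNeighbor:
    def train(self, X, y):
        # "Training" just memorizes the data: O(1)
        self.X_train = X
        self.y_train = y

    def predict(self, X):
        # For each test point, find the closest training point under
        # L1 distance and copy its label: O(N) per test example.
        preds = np.empty(len(X), dtype=self.y_train.dtype)
        for i, x in enumerate(X):
            dists = np.abs(self.X_train - x).sum(axis=1)
            preds[i] = self.y_train[np.argmin(dists)]
        return preds

# Tiny usage example with fake two-pixel "images"
X_train = np.array([[0.0, 0.0], [10.0, 10.0]])
y_train = np.array([0, 1])
clf = NearestNeighbor()
clf.train(X_train, y_train)
print(clf.predict(np.array([[1.0, 0.0], [9.0, 9.0]])))  # [0 1]
```

Note the inverted cost profile: training is instant but every prediction is O(N), which is exactly backwards from what we want in deployment.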