
[20] [INDEX] CS231N Summary

10 Mar 2021

Reading time ~2 minutes

Table of Contents
  • [21] Lecture 1: Introduction
  • [23] Lecture 3: Regularization and Optimization
    • Regularization
    • Optimization
  • [24] Lecture 4: Neural Networks and Backpropagation
  • [25] Lecture 5: Convolutional Neural Networks
  • [26] Lecture 6: Training Neural Networks Part I
  • [27] Lecture 7: Training Neural Networks Part II

This post is a summary of CS231N (originally written in Korean).

  • CS231N lecture overview
  • CS231N lecture video

[21] Lecture 1: Introduction

learn 21

  • What is Computer Vision?
  • History of Computer Vision
  • Development of Computer Vision
    • Block World
    • David Marr's book
    • Recognition via Parts
    • Recognition via Edge Detection
    • Limitations of the earlier approaches
    • Image Segmentation
    • Face Recognition
    • Object Recognition (SIFT features)
    • The ImageNet project
  • CNN

[23] Lecture 3: Regularization and Optimization

Regularization

learn 23

  • Loss Functions
    • Loss function formula
  • Multiclass SVM loss
    • How SVM loss works
    • Code implementation (see the sketch after this list)
    • Worked example
    • Follow-up questions
    • Improvement: Regularization
      • Types of Regularization
  • Softmax Classifier
    • How it works
    • Worked example
    • Follow-up questions
  • SVM vs Softmax
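
The "Code implementation" item above refers to the vectorized multiclass SVM loss. A minimal NumPy sketch, assuming the standard CS231N formulation (margin of 1, L2 penalty; all names are illustrative):

```python
import numpy as np

def svm_loss(W, X, y, reg):
    """Vectorized multiclass SVM loss with L2 regularization.
    W: (D, C) weights, X: (N, D) data, y: (N,) integer labels,
    reg: regularization strength."""
    N = X.shape[0]
    scores = X @ W                                   # (N, C) class scores
    correct = scores[np.arange(N), y][:, None]       # (N, 1) correct-class score
    margins = np.maximum(0, scores - correct + 1.0)  # hinge loss, margin 1
    margins[np.arange(N), y] = 0                     # correct class adds no loss
    return margins.sum() / N + reg * np.sum(W * W)   # data loss + L2 penalty

# Illustrative usage on random data:
W = 0.01 * np.random.randn(10, 3)
X = np.random.randn(5, 10)
y = np.array([0, 2, 1, 0, 2])
print(svm_loss(W, X, y, reg=0.1))
```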

Optimization

learn 23

  • Optimization
    • Random search
    • Following the gradient
    • Numerical gradient
    • Analytic gradient
      • Code (see the sketch after this list)
  • Gradient Descent
    • SGD
  • Image Features
    • Color histogram
    • Histogram of oriented gradients (HOG)
    • Bag of words
    • Image features vs ConvNets
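
Before trusting the analytic gradient, the lecture checks it against a slow numerical one. A minimal sketch of that check, assuming a centered-difference approximation (names are illustrative):

```python
import numpy as np

def numerical_gradient(f, w, h=1e-5):
    """Centered-difference approximation of the gradient of f at w.
    Slow (two evaluations per dimension); used only to verify the
    analytic gradient."""
    grad = np.zeros_like(w)
    for i in range(w.size):
        old = w.flat[i]
        w.flat[i] = old + h
        fp = f(w)
        w.flat[i] = old - h
        fm = f(w)
        w.flat[i] = old
        grad.flat[i] = (fp - fm) / (2.0 * h)
    return grad

# Check against the analytic gradient of f(w) = sum(w^2), which is 2w.
w = np.random.randn(5)
num = numerical_gradient(lambda v: np.sum(v ** 2), w)
print(np.allclose(num, 2 * w, atol=1e-4))  # True
```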

[24] Lecture 4: Neural Networks and Backpropagation

learn 24

  • Intro
  • Computational graph
  • Backpropagation
    • Computation
    • Key property: local computation
    • A more complex example
    • Convenience of the analytic gradient
    • Patterns in backward flow
  • Extending to vectors
    • Vectorized example 1
    • Vectorized example 2
    • forward / backward API (see the sketch after this list)
    • Related frameworks
  • Neural Networks
    • Differences from biological neurons
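
In the forward / backward API, each graph node is an object whose forward pass computes an output and caches its inputs, and whose backward pass multiplies the upstream gradient by the local gradients (chain rule). A minimal sketch for a multiply gate, with illustrative names:

```python
class MultiplyGate:
    """One node of a computational graph in the forward/backward style."""

    def forward(self, x, y):
        # Compute the output and cache the inputs for the backward pass.
        self.x, self.y = x, y
        return x * y

    def backward(self, dz):
        # Chain rule: local gradients (y, x) times the upstream gradient dz.
        return dz * self.y, dz * self.x

gate = MultiplyGate()
z = gate.forward(3.0, -4.0)   # z = -12.0
dx, dy = gate.backward(1.0)   # dx = -4.0, dy = 3.0
```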

[25] Lecture 5: Convolutional Neural Networks

learn 25

  • History of CNN
    • History of neural networks
    • History of CNNs
  • Convolutional Neural Network
    • Fully Connected Layer (FC Layer)
  • CNN
    • Filter operations
    • Activation map example: Convolution
  • Overview of how a CNN runs
  • Spatial dimensions (see the sketch after this list)
    • Stride
    • Zero padding
      • Worked example
      • 1 x 1 Convolution
      • Frameworks
  • The brain/neuron view of CONV Layer
  • Pooling Layer
    • MAX POOLING
    • Pooling layer design choices
  • Fully Connected Layer (FC layer)
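
The spatial-dimension items above come down to one formula from the lecture: input size W, filter size F, stride S, and zero padding P give an output of size (W - F + 2P)/S + 1. A quick sketch:

```python
def conv_output_size(W, F, S, P):
    """Spatial output size of a conv layer: (W - F + 2P) / S + 1.
    W: input width/height, F: filter size, S: stride, P: zero padding."""
    assert (W - F + 2 * P) % S == 0, "filter does not fit evenly"
    return (W - F + 2 * P) // S + 1

print(conv_output_size(W=7, F=3, S=1, P=0))   # 5
print(conv_output_size(W=7, F=3, S=1, P=1))   # 7  (padding preserves size)
print(conv_output_size(W=32, F=5, S=1, P=2))  # 32
```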

[26] Lecture 6: Training Neural Networks Part I

learn 26

  • Overview
  • Activation Functions
    • Sigmoid
    • tanh
    • ReLU
    • Leaky ReLU
    • PReLU
    • ELU
    • Maxout Neuron
    • Notes
  • Data Preprocessing
    • Limitations of zero-mean preprocessing
  • Weight Initialization
    • Small random numbers
    • Large random numbers
    • Xavier initialization (see the sketch after this list)
  • Batch Normalization
  • Babysitting the Learning Process
  • Hyperparameter Optimization
    • Cross-validation
    • Grid search
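
A minimal sketch of Xavier initialization as presented in the lecture: scale random weights by 1/sqrt(fan_in) so the variance of activations stays roughly constant across layers (the layer sizes below are illustrative):

```python
import numpy as np

def xavier_init(fan_in, fan_out):
    """Xavier initialization: random weights scaled by 1/sqrt(fan_in).
    Assumes tanh-like units; for ReLU the course instead suggests
    scaling by sqrt(2 / fan_in)."""
    return np.random.randn(fan_in, fan_out) / np.sqrt(fan_in)

W1 = xavier_init(784, 100)  # e.g. a 784 -> 100 hidden layer
print(W1.std())             # ~1/sqrt(784), about 0.036
```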

[27] Lecture 7: Training Neural Networks Part II

learn 27

  • Overview
  • Fancier Optimization
    • Problems with SGD
    • SGD + Momentum (see the sketches after this list)
    • Nesterov momentum
    • AdaGrad
    • RMSProp
    • Adam
    • Learning rate decay
    • Quasi-Newton methods
  • Model Ensembles
  • Regularization
    • Dropout
    • Data augmentation
    • DropConnect
    • Fractional max pooling
    • Stochastic depth
  • Transfer Learning
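
Written as single update steps, SGD + Momentum and Adam from the list above look like this (hyperparameter defaults are the commonly quoted values; all names are illustrative):

```python
import numpy as np

def sgd_momentum(x, dx, v, lr=1e-3, rho=0.9):
    """SGD + Momentum: accumulate a velocity, then step along it."""
    v = rho * v - lr * dx
    return x + v, v

def adam(x, dx, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """Adam: momentum-like first moment plus RMSProp-like second moment,
    with bias correction for the early steps (t starts at 1)."""
    m = beta1 * m + (1 - beta1) * dx
    v = beta2 * v + (1 - beta2) * dx * dx
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    return x - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# One illustrative momentum step on a dummy gradient:
x, v = np.zeros(3), np.zeros(3)
x, v = sgd_momentum(x, np.array([1.0, -2.0, 0.5]), v)
print(x)  # approximately [-0.001, 0.002, -0.0005]
```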


Deep Learning