
Computer Science

06. Matrices for solving systems - Matrices: Reduced Row Echelon Form (1) Matrices are just arrays of numbers that serve as shorthand for a system of equations, so we can build a coefficient matrix and check whether its columns are linearly independent. We transform this matrix to reduced row echelon form, RREF(A). The variables associated with the pivot entries are called pivot variables (x1, x3), and the variables that are .. Read more
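In practice you can compute RREF(A) and read off the pivot columns with SymPy; a minimal sketch, assuming a made-up 2x4 augmented matrix rather than the one from the post:

from sympy import Matrix

# Augmented matrix [A | b] for a hypothetical system of 2 equations in 3 unknowns.
A = Matrix([
    [1, 2, 1, 8],
    [2, 4, 0, 6],
])

# rref() returns the matrix in reduced row echelon form plus the pivot column indices.
R, pivots = A.rref()
print(R)       # Matrix([[1, 2, 0, 3], [0, 0, 1, 5]])
print(pivots)  # (0, 2): x1 and x3 are pivot variables, x2 is free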
[F-30] Multiprocessing Intro Learning objectives: understand multitasking and parallel programming with Python; write parallel programs with Python's concurrent.futures. Contents: multitasking, program profiling, scale-up vs. scale-out, multiprocess and multithread, thread/process pool example, the concurrent.futures module, finding prime numbers. Multitasking - what is multitasking? You need to know how to .. Read more
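Since the post's worked example is finding primes with concurrent.futures, here is a minimal sketch of that pattern (the search range and chunk size are arbitrary choices, not the post's):

import math
from concurrent.futures import ProcessPoolExecutor

def is_prime(n):
    # Trial division up to sqrt(n).
    if n < 2:
        return False
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

if __name__ == "__main__":
    numbers = range(2, 100_000)
    # map() fans the checks out across worker processes and preserves input order.
    with ProcessPoolExecutor() as executor:
        flags = executor.map(is_prime, numbers, chunksize=1_000)
    primes = [n for n, ok in zip(numbers, flags) if ok]
    print(len(primes))  # 9592 primes below 100,000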
09. CNN Architecture Intro In Lecture 9 we discuss some common architectures for convolutional neural networks. We discuss architectures which performed well in the ImageNet challenges, including AlexNet, VGGNet, GoogLeNet, and ResNet, as well as other interesting models. Keywords: AlexNet, VGGNet, GoogLeNet, ResNet, Network in Network, Wide ResNet, ResNeXt, Stochastic Depth, DenseNet, FractalNet, SqueezeNet [link] .. Read more
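As a taste of what these architectures are built from, here is a minimal PyTorch-style sketch of a ResNet basic block with an identity shortcut (channel count and input shape are arbitrary; this is an illustration, not the lecture's code):

import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    # Minimal ResNet-style residual block: out = relu(F(x) + x).
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # identity shortcut keeps gradients flowing

block = BasicBlock(64)
y = block(torch.randn(1, 64, 32, 32))  # shape preserved: (1, 64, 32, 32)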
07. Training Neural Networks (2) CNN Preview Lecture 7 continues our discussion of practical issues for training neural networks. We discuss different update rules commonly used to optimize neural networks during training, as well as different strategies for regularizing large neural networks including dropout. We also discuss transfer learning and finetuning. Keywords: Optimization, momentum, Nesterov momentum, AdaGrad, RMSPro.. Read more
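To make the update rules concrete, here is a minimal NumPy sketch of SGD with momentum and RMSProp on a toy quadratic loss (learning rate and decay values are arbitrary placeholders):

import numpy as np

def sgd_momentum(w, grad, v, lr=1e-2, rho=0.9):
    # Momentum: v accumulates an exponentially decaying running direction of descent.
    v = rho * v - lr * grad
    return w + v, v

def rmsprop(w, grad, cache, lr=1e-2, decay=0.99, eps=1e-8):
    # RMSProp: a leaky average of squared gradients rescales each parameter's step.
    cache = decay * cache + (1 - decay) * grad ** 2
    return w - lr * grad / (np.sqrt(cache) + eps), cache

# Toy loss L(w) = 0.5 * ||w||^2, whose gradient is simply w.
w, v = np.ones(3), np.zeros(3)
for _ in range(200):
    w, v = sgd_momentum(w, grad=w, v=v)
print(w)  # approaches the minimum at 0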
06. Training Neural Networks (1) In Lecture 6 we discuss many practical issues for training modern neural networks. We discuss different activation functions, the importance of data preprocessing and weight initialization, and batch normalization; we also cover some strategies for monitoring the learning process and choosing hyperparameters. Keywords: Activation functions, data preprocessing, weight initialization, batch normal.. Read more
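Two of these ideas in a minimal NumPy sketch: He-style weight initialization and the batch normalization forward pass (layer sizes and the random seed are arbitrary):

import numpy as np

rng = np.random.default_rng(0)

# He initialization: scale by sqrt(2 / fan_in), suited to ReLU layers.
fan_in, fan_out = 512, 256
W = rng.standard_normal((fan_in, fan_out)) * np.sqrt(2.0 / fan_in)

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    # Normalize each feature over the batch, then scale and shift.
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

x = rng.standard_normal((32, fan_out))  # a batch of 32 activation vectors
out = batchnorm_forward(x, gamma=np.ones(fan_out), beta=np.zeros(fan_out))
print(out.mean(axis=0)[:3], out.std(axis=0)[:3])  # ~0 mean, ~1 std per feature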
05. Convolutional Neural Networks In Lecture 5 we move from fully-connected neural networks to convolutional neural networks. We discuss some of the key historical milestones in the development of convolutional networks, including the perceptron, the neocognitron, LeNet, and AlexNet. We introduce convolution, pooling, and fully-connected layers which form the basis for modern convolutional networks. Keywords: Convolutional neura.. Read more
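A minimal NumPy sketch of the convolution layer's core operation, a single-channel 2D convolution with stride 1 and no padding (a naive loop for clarity, not an efficient implementation):

import numpy as np

def conv2d(x, w):
    # Slide filter w over image x; each output is a dot product of a patch with w.
    H, W = x.shape
    k, _ = w.shape
    out = np.zeros((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + k, j:j + k] * w)
    return out

x = np.arange(25, dtype=float).reshape(5, 5)
w = np.ones((3, 3)) / 9.0   # 3x3 averaging filter
print(conv2d(x, w).shape)   # (3, 3): output shrinks by k-1 without padding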
04. Introduction to Neural Networks In Lecture 4 we progress from linear classifiers to fully-connected neural networks. We introduce the backpropagation algorithm for computing gradients and briefly discuss connections between artificial neural networks and biological neural networks. Keywords: Neural networks, computational graphs, backpropagation, activation functions, biological neurons slides Backpropagation Backprop is a rec.. Read more
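A minimal sketch of backprop on the lecture's tiny computational graph f(x, y, z) = (x + y) * z, applying the chain rule node by node:

# Forward pass: build the graph f = (x + y) * z.
x, y, z = -2.0, 5.0, -4.0
q = x + y          # q = 3.0
f = q * z          # f = -12.0

# Backward pass: apply the chain rule from the output back to the inputs.
df_dq = z          # d(q*z)/dq = z = -4.0
df_dz = q          # d(q*z)/dz = q = 3.0
df_dx = df_dq * 1  # dq/dx = 1, so df/dx = -4.0
df_dy = df_dq * 1  # dq/dy = 1, so df/dy = -4.0
print(df_dx, df_dy, df_dz)  # -4.0 -4.0 3.0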
03. Loss Functions and Optimization Lecture 3 continues our discussion of linear classifiers. We introduce the idea of a loss function to quantify our unhappiness with a model's predictions, and discuss two commonly used loss functions for image classification: the multiclass SVM loss and the multinomial logistic regression loss. We introduce the idea of regularization as a mechanism to fight overfitting, with weight decay as a co.. Read more
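A minimal NumPy sketch of the two losses for a single example (the scores and label here are illustrative):

import numpy as np

scores = np.array([3.2, 5.1, -1.7])  # class scores for one example
y = 0                                # index of the correct class

# Multiclass SVM (hinge) loss: penalize classes within margin 1 of the correct score.
margins = np.maximum(0, scores - scores[y] + 1.0)
margins[y] = 0
svm_loss = margins.sum()             # 2.9 for these scores

# Softmax (multinomial logistic) loss: negative log probability of the correct class.
shifted = scores - scores.max()      # shift for numerical stability
probs = np.exp(shifted) / np.exp(shifted).sum()
softmax_loss = -np.log(probs[y])

print(svm_loss, softmax_loss)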