Friday, August 16, 2019

Logistic Regression with a Neural Network mindset, Ding Peng's Blog

You will learn to:
- Build the general architecture of a learning algorithm, including:
  - initializing parameters,
  - calculating the cost function and its gradient,
  - using an optimization algorithm (gradient descent);
- Gather all three functions above into a main model function, in the right order.

Packages: numpy is the fundamental package for scientific computing with Python; h5py is a common package for interacting with a dataset stored in an H5 file; matplotlib is a famous library for plotting graphs in Python; PIL and scipy are used here to test your model with your own picture at the end.

Overview of the Problem set

Find the values for:
- m_train (number of training examples)
- m_test (number of test examples)
- num_px (= height = width of a training image)

Then reshape the training and test data sets and standardize the dataset. What you need to remember: the common steps for pre-processing a new dataset are:
- Figure out the dimensions and shapes of the problem (m_train, m_test, num_px, ...).
- Reshape the datasets so that each example is a vector of size (num_px * num_px * 3, 1).
- "Standardize" the data.

General Architecture of the learning algorithm

Logistic regression is actually a very simple neural network: for an example x, the model computes z = w^T x + b and the activation a = ŷ = sigmoid(z) = 1 / (1 + e^(-z)); the cost J is the average over the m training examples of the cross-entropy loss L(a, y) = -(y log(a) + (1 - y) log(1 - a)).

Key steps. In this exercise, you will carry out the following:
- Initialize the parameters of the model.
- Learn the parameters for the model by minimizing the cost.
- Use the learned parameters to make predictions (on the test set).
- Analyse the results and conclude.

Building the parts of our algorithm

The main steps for building a neural network are:
1. Define the model structure (such as the number of input features).
2. Initialize the model's parameters.
3. Loop:
   - calculate the current loss (forward propagation),
   - calculate the current gradient (backward propagation),
   - update the parameters (gradient descent).

You often build 1-3 separately and then integrate them into one function we call model().

Helper functions: initializing parameters

You have to initialize w as a vector of zeros. If you don't know which numpy function to use, look up np.zeros() in the NumPy documentation. For image inputs, w will have shape (num_px * num_px * 3, 1).

Forward and backward propagation

Now that your parameters are initialized, you can do the "forward" and "backward" propagation steps for learning the parameters: implement a function propagate() that computes the cost function and its gradient.

Optimization

You have initialized your parameters and you can compute the cost function and its gradient. Now you want to update the parameters using gradient descent. This optimization function outputs the learned w and b, which we can then use to predict the labels for a dataset X. Implement the predict() function. There are two steps to computing predictions:
1. Calculate Ŷ = A = sigmoid(w^T X + b).
2. Convert each entry of A into 0 (if the activation is <= 0.5) or 1 (if the activation is > 0.5).

What to remember: you've implemented several functions that:
- initialize (w, b);
- optimize the loss iteratively to learn the parameters (w, b), by computing the cost and its gradient and updating the parameters with gradient descent;
- use the learned (w, b) to predict the labels for a given set of examples.

Merge all functions into a model

Implement the model function, using the following notation:
- Y_prediction_test for your predictions on the test set,
- Y_prediction_train for your predictions on the train set,
- w, costs, grads for the outputs of optimize().

Sketches of the pre-processing step and of the initialize()/propagate()/optimize()/predict()/model() functions follow below.
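For concreteness, here is a minimal sketch of the pre-processing step, assuming the loaded image arrays have shape (m, num_px, num_px, 3). The arrays below are random stand-ins for the actual H5 data, and the variable names (train_set_x_orig and friends) and sizes are illustrative, not taken from the dataset loader.

    import numpy as np

    # Random stand-ins for the arrays loaded from the H5 file (shapes and sizes are assumptions).
    rng = np.random.default_rng(0)
    train_set_x_orig = rng.integers(0, 256, size=(209, 64, 64, 3))
    train_set_y = rng.integers(0, 2, size=(1, 209))
    test_set_x_orig = rng.integers(0, 256, size=(50, 64, 64, 3))
    test_set_y = rng.integers(0, 2, size=(1, 50))

    m_train = train_set_x_orig.shape[0]   # number of training examples
    m_test = test_set_x_orig.shape[0]     # number of test examples
    num_px = train_set_x_orig.shape[1]    # height = width of a training image

    # Reshape: each example becomes a column of size (num_px * num_px * 3, 1),
    # so the whole set has shape (num_px * num_px * 3, m).
    train_set_x_flatten = train_set_x_orig.reshape(m_train, -1).T
    test_set_x_flatten = test_set_x_orig.reshape(m_test, -1).T

    # "Standardize": scale pixel values to [0, 1].
    train_set_x = train_set_x_flatten / 255.
    test_set_x = test_set_x_flatten / 255.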
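Next, a sketch of the initialization and propagation pieces. sigmoid() is an assumed helper, and the formulas are the standard logistic-regression forward and backward passes written with numpy, not necessarily the exact reference implementation.

    import numpy as np

    def sigmoid(z):
        """Sigmoid activation, applied elementwise to a scalar or numpy array."""
        return 1 / (1 + np.exp(-z))

    def initialize(dim):
        """Return a (dim, 1) weight vector of zeros and a zero bias."""
        w = np.zeros((dim, 1))
        b = 0.0
        return w, b

    def propagate(w, b, X, Y):
        """One pass of forward and backward propagation.

        X has shape (num_px * num_px * 3, m); Y has shape (1, m).
        Returns the gradients dw, db and the cross-entropy cost.
        """
        m = X.shape[1]

        # Forward propagation: activations and cost.
        A = sigmoid(np.dot(w.T, X) + b)                               # shape (1, m)
        cost = -np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A)) / m

        # Backward propagation: gradients of the cost w.r.t. w and b.
        dw = np.dot(X, (A - Y).T) / m
        db = np.sum(A - Y) / m

        return {"dw": dw, "db": db}, float(cost)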
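A sketch of optimize() and predict() along the same lines, continuing from the functions above; the default hyperparameter values are illustrative.

    def optimize(w, b, X, Y, num_iterations=2000, learning_rate=0.005, print_cost=False):
        """Gradient descent: repeatedly call propagate() and step against the gradient."""
        costs = []
        for i in range(num_iterations):
            grads, cost = propagate(w, b, X, Y)
            w = w - learning_rate * grads["dw"]   # parameter update
            b = b - learning_rate * grads["db"]
            if i % 100 == 0:
                costs.append(cost)                # record the cost every 100 iterations
                if print_cost:
                    print(f"Cost after iteration {i}: {cost:.6f}")
        return {"w": w, "b": b}, grads, costs

    def predict(w, b, X):
        """Two steps: compute A = sigmoid(w.T X + b), then threshold the activations at 0.5."""
        A = sigmoid(np.dot(w.T, X) + b)
        return (A > 0.5).astype(float)            # predicted labels, shape (1, m)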
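Finally, a sketch of model() tying the pieces together and reporting accuracy on both sets. It assumes the helpers defined above and label arrays of shape (1, m).

    def model(X_train, Y_train, X_test, Y_test,
              num_iterations=2000, learning_rate=0.005, print_cost=False):
        """Train logistic regression with the helpers above and report accuracy."""
        w, b = initialize(X_train.shape[0])
        params, grads, costs = optimize(w, b, X_train, Y_train,
                                        num_iterations, learning_rate, print_cost)
        w, b = params["w"], params["b"]

        Y_prediction_train = predict(w, b, X_train)
        Y_prediction_test = predict(w, b, X_test)

        # Accuracy = 100% minus the mean absolute difference between predictions and labels.
        print("train accuracy: {:.2f} %".format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100))
        print("test accuracy: {:.2f} %".format(100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100))

        return {"costs": costs, "w": w, "b": b,
                "Y_prediction_train": Y_prediction_train,
                "Y_prediction_test": Y_prediction_test,
                "learning_rate": learning_rate,
                "num_iterations": num_iterations}

    # Example usage with the pre-processed stand-in arrays from the earlier sketch:
    d = model(train_set_x, train_set_y, test_set_x, test_set_y,
              num_iterations=2000, learning_rate=0.005, print_cost=True)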
After training, the training accuracy is close to 100%. This is a good sanity check: your model is working and has high enough capacity to fit the training data. Test accuracy is 68%. That is actually not bad for this simple model, given the small dataset we used and the fact that logistic regression is a linear classifier. You can also see that the model is clearly overfitting the training data. Later in this specialization you will learn how to reduce overfitting, for example by using regularization. Let's also plot the cost function and the gradients.

Further analysis: choice of learning rate

In order for gradient descent to work, you must choose the learning rate wisely. The learning rate α determines how rapidly we update the parameters. If the learning rate is too large, we may "overshoot" the optimal value; if it is too small, we will need too many iterations to converge to the best values. That is why it is crucial to use a well-tuned learning rate.

Different learning rates give different costs and thus different prediction results. If the learning rate is too large (0.01 here), the cost may oscillate up and down, and it may even diverge (though in this example, 0.01 still eventually ends up at a good value for the cost). A lower cost does not necessarily mean a better model: you have to check whether the model is overfitting, which happens when the training accuracy is a lot higher than the test accuracy. In deep learning, we usually recommend that you:
- Choose the learning rate that better minimizes the cost function.
- If your model overfits, use other techniques to reduce overfitting. (We'll talk about this in later videos.)

A sketch of this learning-rate comparison appears after the summary below.

What to remember from this assignment
- Preprocessing the dataset is important.
- You implemented each function separately: initialize(), propagate(), optimize(); then you built model() on top of them.
- Tuning the learning rate (which is an example of a "hyperparameter") can make a big difference to the algorithm.
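Here is a sketch of the learning-rate comparison mentioned above. It reuses model() and the pre-processed stand-in arrays from the earlier sketches, and the particular values (0.01, 0.001, 0.0001) are illustrative choices.

    import matplotlib.pyplot as plt

    # Train the same model with a few different learning rates (illustrative values).
    learning_rates = [0.01, 0.001, 0.0001]
    models = {}
    for lr in learning_rates:
        models[lr] = model(train_set_x, train_set_y, test_set_x, test_set_y,
                           num_iterations=1500, learning_rate=lr)

    # Plot the recorded training costs for each learning rate on one figure.
    for lr in learning_rates:
        plt.plot(models[lr]["costs"], label=str(lr))
    plt.xlabel("iterations (per hundreds)")
    plt.ylabel("cost")
    plt.legend(title="learning rate")
    plt.show()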
