How to use backward propagation in machine learning?

I have one output layer, this is the hidden layer, and this is the input layer. The input layer is your data, okay? So far you've taken just one hidden layer. Now suppose we wanted to add one more layer to this model; the number of neurons in it can be different from the first, as in the example I've given here. We'll do that: we'll add one more layer to this and see whether the efficiency improves, whether the accuracy improves.
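The idea of stacking one more hidden layer can be sketched in plain NumPy. This is a minimal illustration with hypothetical layer sizes (4 inputs, two hidden layers of 8 and 6 neurons, one sigmoid output), not the lecture's actual model: adding the extra layer is just one more weight matrix in the stack.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(0.0, z)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Layer sizes: 4 inputs -> 8 hidden -> 6 hidden (the newly added layer) -> 1 output.
sizes = [4, 8, 6, 1]
weights = [rng.standard_normal((m, n)) * 0.1 for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    """Push a batch of records through every layer; sigmoid only on the last."""
    a = x
    for i, (w, b) in enumerate(zip(weights, biases)):
        z = a @ w + b
        a = sigmoid(z) if i == len(weights) - 1 else relu(z)
    return a

x = rng.standard_normal((5, 4))   # 5 records, 4 features each
y_hat = forward(x)                # predicted probabilities, shape (5, 1)
```

Because the layers are held in a list, trying a deeper network to compare accuracy is just a change to `sizes`.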

I just copy-pasted the layer definition here. We've decided this one is the last layer, the output layer. She's asking: how does the model know this is the last layer? It knows because you haven't defined anything beyond it; whatever layer you define last becomes the output layer.

Let's go down. Yes, make the output size one, because it's binary classification. What if somebody gives three output neurons, even though that doesn't make sense here? The interpreter will still give you numbers, so you will be able to see the outputs, but you won't be able to interpret them: it will give you three numbers, but how do you decide which class that is? For that you'd have to apply softmax. And softmax with two classes gives you two class probabilities, so it effectively becomes a match for the sigmoid.

Softmax gives you probabilities after converting the raw values. Okay, so just keep adding layers as before. Actually, I wanted to show you something else: I've given you the final version, but it doesn't matter; we'll trace back through it, and we'll create the confusion matrix out of this. Okay, we'll do that; I want you to do that. The other thing is: since this is going to be my output layer, it has to be sigmoid. In between I can use ReLU, tanh, and whatever else I want, but this last one I want to be sigmoid because I'm doing classification, binary classification.
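The sigmoid-versus-softmax point above can be shown concretely. This is a small sketch with made-up logit values: three raw outputs are just numbers until softmax turns them into probabilities, and with exactly two classes softmax collapses to the sigmoid.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - np.max(z))   # subtract max for numerical stability
    return e / e.sum()

# Three raw outputs cannot be read as a class directly; softmax makes them
# probabilities that sum to 1, and argmax picks the class.
logits = np.array([2.0, 1.0, 0.1])
probs = softmax(logits)

# With two classes, softmax reduces to sigmoid: the probability of the first
# class from softmax([z, 0]) equals sigmoid(z).
z = 1.3
two_class = softmax(np.array([z, 0.0]))
```

This is why a single sigmoid neuron is enough for binary classification: a two-way softmax would carry the same information in two numbers.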

All right, what have we defined below? An input layer, which is your data set, a hidden layer, and the output layer. This is what your neural network will look like, but we still haven't completed it. When this neural network is used for predictions (I'll explain all of these things in a bit, we'll do that), what you've built so far is a set of layers. But that is not sufficient. When I push my data through it, on the other end you get ŷ, the predicted values. How do you convert those into an error function? How do you decide how to correct your estimated parameters?

We first have to define the loss function I want to use; I'm going to use binary cross-entropy. Then, to reduce this error, what optimizer algorithm should I use? I'm going to use an optimization algorithm called Adam. I could also use SGD, if you're familiar with that. Whether SGD or Adam gives us the most optimal result is something we need to check; again, these are hyperparameters. Now, what metric do I want to keep my eye on while all these iterations run on this error function that I'm trying to minimize? We're going to use accuracy. So what we do is model.compile.
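To make the loss and metric concrete, here is a small NumPy sketch (with made-up predictions) of what binary cross-entropy and accuracy compute; in Keras the same choices are declared in one call, `model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])`.

```python
import numpy as np

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    """Mean of -[y*log(p) + (1-y)*log(1-p)]; clip p to avoid log(0)."""
    p = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

def accuracy(y_true, y_pred):
    """Fraction of predictions on the right side of the 0.5 threshold."""
    return np.mean((y_pred >= 0.5) == y_true)

y_true = np.array([1, 0, 1, 1])
y_pred = np.array([0.9, 0.2, 0.8, 0.4])   # hypothetical sigmoid outputs
loss = binary_crossentropy(y_true, y_pred)
acc = accuracy(y_true, y_pred)            # 3 of the 4 predictions are correct
```

The optimizer (Adam, SGD, ...) is the separate piece that decides how to change the weights to push `loss` down.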

All those layers that you have built, along with these definitions, are put together into one model for you. At this point the model becomes an executable object; before that it was only a blueprint. That blueprint is converted into a concrete model at this point. Put together, in Keras there are three pieces you supply here. The first is the loss, the function used to calculate the error. The second one is the optimizer: on that error surface, what logic should I use to move from the highest error to the least error? For that we know gradient descent.

There are variants of gradient descent; one of them is momentum. I'll discuss these things today because it's very important for you to know what this is, but these are generally part of our model hyperparameters. Are we adding a bias here? We don't need to; it's done automatically. The bias is just a constant in your program, so by default bias constants will be added, and some random number will be assigned when you start your neural network for the first time. All the weights are initialized using random numbers, and that initialization methodology is also there by default: Xavier, the Xavier methodology of initializing the weights. When that happens, the biases also get initialized. Okay. Now, what is this "epochs", and what is this "batch size"?
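The default initialization mentioned above can be sketched as follows. This is a NumPy rendering of Xavier (Glorot) uniform initialization for a hypothetical 8-to-6 layer; note that Keras's default actually initializes biases to zeros rather than random values, so the bias is shown that way here.

```python
import numpy as np

rng = np.random.default_rng(42)

def glorot_uniform(fan_in, fan_out):
    """Xavier/Glorot uniform: draw weights from U(-limit, limit),
    where limit = sqrt(6 / (fan_in + fan_out))."""
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

w = glorot_uniform(8, 6)   # random starting weights for an 8 -> 6 layer
b = np.zeros(6)            # bias starts as a constant (zeros)
```

The point of scaling by fan-in and fan-out is to keep signal variance roughly constant from layer to layer at the start of training.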

In this case, we have 50,000 records, which is not a large number; 50,000 records can easily sit in the RAM, in memory. But when you're dealing with other data sets, like image data sets, those run into lakhs of records, hundreds of thousands of records. They cannot all sit in memory. So what happens is this; during the forward propagation, let me explain it on the board. Suppose this is your deep neural network. During the forward propagation and then the subsequent backward propagation, the weights get adjusted.

Now let me test based on these weights: I run the data against these weights, then, using backward propagation, I adjust the weights again. Then I run against the new set of weights and see what it does. You keep doing this: every time you read the data from here to test the new set of weights, you ask what the performance of the model is with the new set of weights. Every time you read the data from here, that is called an epoch. One epoch is one full read of the training set, and the number of epochs is how many times you want to read the training set. It's basically the iterations that you go through over the data.

I think we need to be slightly careful with the word "iteration", because there are iterations within iterations, and that causes a lot of confusion. So what I'm saying is: here is your deep neural network, this is my model, a DNN. I read my entire data set here, the training data. Forward propagation gives ŷ; ŷ minus y is the error. Now, to minimize the error, I do a backprop and adjust the weights. Are these adjusted weights better? To judge that, I read the data again, do a forward prop again, and get a new ŷ.

This is ŷ₁ and this is ŷ₂; compare each against y. The first gave me E₁, the second gave me E₂. Hopefully E₂ is less than E₁, but it may still not be zero; it may still not be minimized. So let me do a backprop and adjust the weights again. With these newly adjusted weights I run the entire data set again and see what the performance is; that is another forward propagation. The number of times you read the data set like this is called the number of epochs. Be careful of the word "iteration" itself, because many things repeat inside an epoch. Epochs is the number of times you read the file, the data file, the training file, to train your model. And yes, it's a hyperparameter.
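The epoch-versus-iteration distinction above comes down to simple arithmetic, sketched here with the lecture's 50,000 records and a hypothetical batch size of 500: one epoch is one full read of the training file, but within an epoch the weights are updated once per mini-batch.

```python
import math

def training_schedule(n_records, batch_size, epochs):
    """One epoch = one full pass over the training set; each epoch is
    split into ceil(n_records / batch_size) mini-batch weight updates."""
    steps_per_epoch = math.ceil(n_records / batch_size)
    total_updates = steps_per_epoch * epochs
    return steps_per_epoch, total_updates

# 50,000 records with batch_size=500 -> 100 weight updates per epoch;
# 10 epochs means the training file is read 10 times, 1,000 updates in total.
steps, updates = training_schedule(50_000, 500, 10)
```

This is why "iteration" is ambiguous: it can mean one mini-batch update (100 per epoch here) or one full epoch, so it helps to name which one you mean.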
