diff --git a/01_pytorch_workflow.ipynb b/01_pytorch_workflow.ipynb
index b5c4e5ff..9d7ac68b 100644
--- a/01_pytorch_workflow.ipynb
+++ b/01_pytorch_workflow.ipynb
@@ -1090,7 +1090,7 @@
    "source": [
     "Wow! How cool is that?\n",
     "\n",
-    "Our model got very close to calculate the exact original values for `weight` and `bias` (and it would probably get even closer if we trained it for longer).\n",
+    "Our model got very close to calculating the exact original values for `weight` and `bias` (and it would probably get even closer if we trained it for longer).\n",
     "\n",
     "> **Exercise:** Try changing the `epochs` value above to 200, what happens to the loss curves and the weights and bias parameter values of the model?\n",
     "\n",
@@ -1212,7 +1212,7 @@
    "source": [
     "Woohoo! Those red dots are looking far closer than they were before!\n",
     "\n",
-    "Let's get onto saving an reloading a model in PyTorch."
+    "Let's get onto saving and reloading a model in PyTorch."
    ]
   },
   {
@@ -1343,7 +1343,7 @@
     "\n",
     "So instead, we're using the flexible method of saving and loading just the `state_dict()`, which again is basically a dictionary of model parameters.\n",
     "\n",
-    "Let's test it out by created another instance of `LinearRegressionModel()`, which is a subclass of `torch.nn.Module` and will hence have the in-built method `load_state_dict()`."
+    "Let's test it out by creating another instance of `LinearRegressionModel()`, which is a subclass of `torch.nn.Module` and will hence have the in-built method `load_state_dict()`."
    ]
   },
   {
@@ -1797,7 +1797,7 @@
     "    def forward(self, x: torch.Tensor) -> torch.Tensor:\n",
     "        return self.linear_layer(x)\n",
     "\n",
-    "# Set the manual seed when creating the model (this isn't always need but is used for demonstrative purposes, try commenting it out and seeing what happens)\n",
+    "# Set the manual seed when creating the model (this isn't always needed but is used for demonstrative purposes, try commenting it out and seeing what happens)\n",
     "torch.manual_seed(42)\n",
     "model_1 = LinearRegressionModelV2()\n",
     "model_1, model_1.state_dict()"
@@ -1879,7 +1879,7 @@
     }
    ],
    "source": [
-    "# Set model to GPU if it's availalble, otherwise it'll default to CPU\n",
+    "# Set model to GPU if it's available, otherwise it'll default to CPU\n",
     "model_1.to(device) # the device variable was set above to be \"cuda\" if available or \"cpu\" if not\n",
     "next(model_1.parameters()).device"
    ]
   },
   {
@@ -2434,7 +2434,7 @@
     "* Read [What is `torch.nn`, really?](https://pytorch.org/tutorials/beginner/nn_tutorial.html) by Jeremy Howard for a deeper understanding of how one of the most important modules in PyTorch works. \n",
     "* Spend 10-minutes scrolling through and checking out the [PyTorch documentation cheatsheet](https://pytorch.org/tutorials/beginner/ptcheat.html) for all of the different PyTorch modules you might come across.\n",
     "* Spend 10-minutes reading the [loading and saving documentation on the PyTorch website](https://pytorch.org/tutorials/beginner/saving_loading_models.html) to become more familiar with the different saving and loading options in PyTorch. \n",
-    "* Spend 1-2 hours read/watching the following for an overview of the internals of gradient descent and backpropagation, the two main algorithms that have been working in the background to help our model learn. \n",
+    "* Spend 1-2 hours reading/watching the following for an overview of the internals of gradient descent and backpropagation, the two main algorithms that have been working in the background to help our model learn. \n",
     "  * [Wikipedia page for gradient descent](https://en.wikipedia.org/wiki/Gradient_descent)\n",
     "  * [Gradient Descent Algorithm — a deep dive](https://towardsdatascience.com/gradient-descent-algorithm-a-deep-dive-cf04e8115f21) by Robert Kwiatkowski\n",
     "  * [Gradient descent, how neural networks learn video](https://youtu.be/IHZwWFHWa-w) by 3Blue1Brown\n",