Update docs (#879)
* Add docs config

* Fix mock

* Add huggingface_hub to reqs for docs

* Remove from mocks

* Fix

* Change theme

* Fix

* Fix

* Update emoji

* Table of content

* Links in doc

* Update content

* Update examples

* Update

* Update

* Add save load
qubvel authored May 31, 2024
1 parent b948136 commit 3d6da1d
Showing 11 changed files with 150 additions and 25 deletions.
8 changes: 4 additions & 4 deletions docs/conf.py
Original file line number Diff line number Diff line change
@@ -16,7 +16,7 @@

import sys
import datetime
import sphinx_rtd_theme
# import sphinx_rtd_theme

sys.path.append("..")

@@ -67,13 +67,13 @@ def get_version():
# a list of builtin themes.
#

html_theme = "sphinx_rtd_theme"
html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
# html_theme = "sphinx_rtd_theme"
# html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]

# import karma_sphinx_theme
# html_theme = "karma_sphinx_theme"

html_theme = "faculty_sphinx_theme"
html_theme = "sphinx_book_theme"

# import catalyst_sphinx_theme
# html_theme = "catalyst_sphinx_theme"
2 changes: 1 addition & 1 deletion docs/encoders.rst
@@ -1,4 +1,4 @@
🏔 Available Encoders
🔍 Available Encoders
=====================

ResNet
2 changes: 1 addition & 1 deletion docs/encoders_timm.rst
@@ -1,4 +1,4 @@
🪐 Timm Encoders
🎯 Timm Encoders
~~~~~~~~~~~~~~~~

PyTorch Image Models (a.k.a. timm) provides many pretrained models and an interface that allows using them as encoders in smp,
1 change: 1 addition & 0 deletions docs/index.rst
@@ -17,6 +17,7 @@ Welcome to Segmentation Models's documentation!
encoders_timm
losses
metrics
save_load
insights


2 changes: 1 addition & 1 deletion docs/insights.rst
@@ -1,4 +1,4 @@
🔧 Insights
💡 Insights
===========

1. Models architecture
2 changes: 1 addition & 1 deletion docs/install.rst
@@ -1,4 +1,4 @@
🛠 Installation
⚙️ Installation
===============

PyPI version:
2 changes: 1 addition & 1 deletion docs/metrics.rst
@@ -1,4 +1,4 @@
📈 Metrics
📏 Metrics
==========

Functional metrics
48 changes: 38 additions & 10 deletions docs/models.rst
@@ -1,40 +1,68 @@
📦 Segmentation Models
🕸️ Segmentation Models
==============================


.. contents::
:local:

.. _unet:

Unet
~~~~
.. autoclass:: segmentation_models_pytorch.Unet


.. _unetplusplus:

Unet++
~~~~~~
.. autoclass:: segmentation_models_pytorch.UnetPlusPlus

MAnet
~~~~~~
.. autoclass:: segmentation_models_pytorch.MAnet

Linknet
~~~~~~~
.. autoclass:: segmentation_models_pytorch.Linknet
.. _fpn:

FPN
~~~
.. autoclass:: segmentation_models_pytorch.FPN


.. _pspnet:

PSPNet
~~~~~~
.. autoclass:: segmentation_models_pytorch.PSPNet

PAN
~~~
.. autoclass:: segmentation_models_pytorch.PAN

.. _deeplabv3:

DeepLabV3
~~~~~~~~~
.. autoclass:: segmentation_models_pytorch.DeepLabV3


.. _deeplabv3plus:

DeepLabV3+
~~~~~~~~~~
.. autoclass:: segmentation_models_pytorch.DeepLabV3Plus


.. _linknet:

Linknet
~~~~~~~
.. autoclass:: segmentation_models_pytorch.Linknet


.. _manet:

MAnet
~~~~~~
.. autoclass:: segmentation_models_pytorch.MAnet


.. _pan:

PAN
~~~
.. autoclass:: segmentation_models_pytorch.PAN
28 changes: 24 additions & 4 deletions docs/quickstart.rst
@@ -1,4 +1,4 @@
Quick Start
🚀 Quick Start
==============

**1. Create segmentation model**
@@ -16,8 +16,9 @@ Segmentation model is just a PyTorch nn.Module, which can be created as easy as:
classes=3, # model output channels (number of classes in your dataset)
)
- see table with available model architectures
- see table with avaliable encoders and its corresponding weights
- Check the page with available :doc:`model architectures <models>`.
- Check the table with :doc:`available ported encoders and their corresponding weights <encoders>`.
- `Pytorch Image Models (timm) <https://github.com/huggingface/pytorch-image-models>`_ encoders are also supported, check them :doc:`here <encoders_timm>`.

**2. Configure data preprocessing**

@@ -33,4 +34,23 @@ All encoders have pretrained weights. Preparing your data the same way as during
**3. Congratulations!** 🎉


You are done! Now you can train your model with your favorite framework!
You are done! Now you can train your model with your favorite framework, or as simply as:

.. code-block:: python

    for images, gt_masks in dataloader:
        predicted_mask = model(images)
        loss = loss_fn(predicted_mask, gt_masks)
        loss.backward()
        optimizer.step()

Check the following examples:

.. |colab-badge| image:: https://colab.research.google.com/assets/colab-badge.svg
:target: https://colab.research.google.com/github/qubvel/segmentation_models.pytorch/blob/master/examples/binary_segmentation_intro.ipynb
:alt: Open In Colab

- Finetuning notebook on Oxford Pet dataset with `PyTorch Lightning <https://github.com/qubvel/segmentation_models.pytorch/blob/master/examples/binary_segmentation_intro.ipynb>`_ |colab-badge|
- Finetuning script for cloth segmentation with `PyTorch Lightning <https://github.com/ternaus/cloths_segmentation>`_
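The data-preprocessing step above (step 2) boils down to channel-wise normalization with the statistics the encoder's pretrained weights were trained with. A dependency-free sketch of that transform, assuming the common ImageNet statistics — ``preprocess`` here is an illustrative stand-in, not the function smp returns:

```python
# Sketch of what an encoder preprocessing function typically does:
# scale pixels to [0, 1], then normalize each channel with the
# statistics of the pretraining dataset (ImageNet values assumed here).

IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)

def preprocess(image):
    """Normalize an HWC image given as nested lists of [0, 255] ints."""
    return [
        [
            [
                (value / 255.0 - IMAGENET_MEAN[c]) / IMAGENET_STD[c]
                for c, value in enumerate(pixel)
            ]
            for pixel in row
        ]
        for row in image
    ]

# A 1x1 image whose pixel sits close to the ImageNet mean, so the
# normalized output should be near zero in every channel.
out = preprocess([[[124, 116, 104]]])
```

Feeding data normalized with different statistics than the pretrained weights expect is a common cause of poor transfer-learning results, which is why the quickstart calls this step out explicitly.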
6 changes: 4 additions & 2 deletions docs/requirements.txt
@@ -1,3 +1,5 @@
faculty-sphinx-theme==0.2.2
sphinx<7
sphinx-book-theme==1.1.2
six==1.15.0
autodocsumm
autodocsumm
huggingface_hub
74 changes: 74 additions & 0 deletions docs/save_load.rst
@@ -0,0 +1,74 @@
📂 Saving and Loading
=====================

In this section, we will discuss how to save a trained model, push it to the Hugging Face Hub, and load it back for later use.

Saving and Sharing a Model
--------------------------

Once you have trained your model, you can save it using the `.save_pretrained` method. This method saves the model configuration and weights to a directory of your choice.
Optionally, you can push the model to the Hugging Face Hub by setting the `push_to_hub` parameter to `True`.

For example:

.. code:: python

    import segmentation_models_pytorch as smp

    model = smp.Unet('resnet34', encoder_weights='imagenet')

    # After training your model, save it to a directory
    model.save_pretrained('./my_model')

    # Or save and push it to the Hub simultaneously
    model.save_pretrained('username/my-model', push_to_hub=True)

Loading a Trained Model
-----------------------

Once your model is saved and pushed to the Hub, you can load it back using the `smp.from_pretrained` method. This method allows you to load the model weights and configuration from a directory or directly from the Hub.

For example:

.. code:: python

    import segmentation_models_pytorch as smp

    # Load the model from the local directory
    model = smp.from_pretrained('./my_model')

    # Alternatively, load the model directly from the Hugging Face Hub
    model = smp.from_pretrained('username/my-model')

Saving Model Metrics and Dataset Name
-------------------------------------

You can pass the `metrics` and `dataset` parameters to the `save_pretrained` method to record the model metrics and dataset name in the model card, along with the model configuration and weights.

For example:

.. code:: python

    import segmentation_models_pytorch as smp

    model = smp.Unet('resnet34', encoder_weights='imagenet')

    # After training your model, save it to a directory
    model.save_pretrained('./my_model', metrics={'accuracy': 0.95}, dataset='my_dataset')

    # Or save and push it to the Hub simultaneously
    model.save_pretrained('username/my-model', push_to_hub=True, metrics={'accuracy': 0.95}, dataset='my_dataset')

Conclusion
----------

By following these steps, you can easily save, share, and load your models, facilitating collaboration and reproducibility in your projects. Don't forget to replace the placeholders with your actual model paths and names.

|colab-badge|

.. |colab-badge| image:: https://colab.research.google.com/assets/colab-badge.svg
:target: https://colab.research.google.com/github/qubvel/segmentation_models.pytorch/blob/master/examples/binary_segmentation_intro.ipynb
:alt: Open In Colab
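The checkpoint layout behind these calls can be pictured with a toy, dependency-free sketch: saving writes a ``config.json`` (next to the weights file, omitted in this toy) into the target directory, and loading reads it back. The function names below are illustrative stand-ins, not the library's internals:

```python
import json
import os
import tempfile

def save_pretrained_sketch(directory, config):
    """Toy analogue of save_pretrained's local case: write the model
    config as config.json into the checkpoint directory."""
    os.makedirs(directory, exist_ok=True)
    with open(os.path.join(directory, "config.json"), "w") as f:
        json.dump(config, f)

def from_pretrained_sketch(directory):
    """Toy analogue of loading from a local directory: read the
    config back so the same architecture can be rebuilt."""
    with open(os.path.join(directory, "config.json")) as f:
        return json.load(f)

config = {"arch": "Unet", "encoder_name": "resnet34", "classes": 3}
with tempfile.TemporaryDirectory() as d:
    save_pretrained_sketch(d, config)
    loaded = from_pretrained_sketch(d)
```

Storing the configuration alongside the weights is what makes the round trip reproducible: the loader can reconstruct the exact architecture before loading the state dict.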

