This is a tracker based on the CoTracker3 algorithm, with modifications for Biodiversity Lab video. It is a work in progress. It uses two transformer-based models to track objects in a video: one for appearance and one for point motion. The CoTracker3 model is run in offline mode for best performance.
To install dependencies and run the example:

```shell
poetry install
export PYTHONPATH=. && poetry run python examples/video.py
```
You should see a window showing the video with the tracked points, similar to this:
Tracked points overlaid on the video.
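Under the hood, offline tracking with CoTracker3 boils down to a single call over the whole clip. The sketch below uses the public `facebookresearch/co-tracker` Torch Hub entry point; the dummy video tensor, clip length, and grid size are illustrative assumptions, not values taken from this project.

```python
import torch

# Load the offline CoTracker3 model from Torch Hub
# (downloads the weights on first use).
cotracker = torch.hub.load("facebookresearch/co-tracker", "cotracker3_offline")

# Dummy clip standing in for a real video: shape (batch, frames, channels,
# height, width), pixel values in [0, 255]. A real clip would be loaded
# from disk instead.
video = torch.randint(0, 255, (1, 12, 3, 128, 128)).float()

# Offline mode sees the entire clip at once. grid_size=10 seeds a regular
# 10x10 grid of query points on the first frame, so N = 100 points.
pred_tracks, pred_visibility = cotracker(video, grid_size=10)

# pred_tracks:     (batch, frames, N, 2) xy coordinates per point per frame
# pred_visibility: (batch, frames, N)    per-point visibility per frame
print(pred_tracks.shape)
print(pred_visibility.shape)
```

Offline mode trades memory for accuracy: because the model attends across the full clip, it can re-acquire points after occlusions, which is why this project runs it that way.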