An open source project from Data to AI Lab at MIT.
A library for generative modeling and evaluation of synthetic household-level electricity load timeseries. This package is still under active development.
- Documentation: (tbd)
EnData is a library for generating synthetic household-level electric load and generation timeseries. It supports a variety of generative time series models that can be trained from scratch on a user-defined dataset, and it can also load pre-trained model checkpoints so that data can be generated immediately. Trained models can be evaluated using a set of metrics and visualizations implemented in this package.
These supported models include:
Feel free to look at our tutorial notebooks to get started.
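To give a flavour of what evaluating a generator against real data can look like, here is a minimal, self-contained sketch that compares the mean daily profile of a "real" and a "synthetic" batch of household load curves. It uses only numpy on randomly generated placeholder data and is not the EnData API; see the tutorial notebooks for the actual interface.

```python
# Illustrative sketch only, not the EnData API: compare the mean daily profile of
# "real" vs. "synthetic" household load curves at 15-minute resolution (96 steps/day).
import numpy as np

rng = np.random.default_rng(0)

# Placeholder data: 100 households x 96 quarter-hourly readings per day (kW).
real = rng.gamma(shape=2.0, scale=0.5, size=(100, 96))
synthetic = rng.gamma(shape=2.0, scale=0.55, size=(100, 96))

# A simple fidelity check: how far apart are the average daily load shapes?
real_profile = real.mean(axis=0)
synthetic_profile = synthetic.mean(axis=0)
mae = np.abs(real_profile - synthetic_profile).mean()
print(f"Mean absolute error between average daily profiles: {mae:.3f} kW")
```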
EnData has been developed and tested on Python 3.9, Python 3.10 and Python 3.11.
Although it is not strictly required, using a virtualenv is highly recommended to avoid interfering with other software installed on the system where EnData is run.
These are the minimum commands needed to create a virtualenv for EnData using Python 3.10 (or any other supported version):
pip install virtualenv
virtualenv -p $(which python3.10) EnData-venv
Afterwards, you have to execute this command to activate the virtualenv:
source EnData-venv/bin/activate
Remember to execute it every time you start a new console to work on EnData!
If you want to reproduce our models from scratch, you will need to download the PecanStreet DataPort dataset and place it under the path specified in your pecanstreet.yaml. Specifically, you will need the following files:
- 15minute_data_austin.csv
- 15minute_data_california.csv
- 15minute_data_newyork.csv
- metadata.csv
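Before training, it can be useful to sanity-check that these files sit where your configuration expects them. The sketch below is an assumption-laden illustration: it guesses that pecanstreet.yaml lives under config/ and exposes a top-level path key; adjust both to match your actual setup.

```python
# Minimal sketch: verify the PecanStreet CSVs are where pecanstreet.yaml expects them.
# The config location and the top-level "path" key are assumptions, not EnData's API.
from pathlib import Path

import yaml  # pip install pyyaml

with open("config/pecanstreet.yaml") as f:  # adjust to wherever your config lives
    cfg = yaml.safe_load(f)

data_dir = Path(cfg["path"])  # assumed key name
required = [
    "15minute_data_austin.csv",
    "15minute_data_california.csv",
    "15minute_data_newyork.csv",
    "metadata.csv",
]

missing = [name for name in required if not (data_dir / name).exists()]
if missing:
    raise FileNotFoundError(f"Missing PecanStreet files in {data_dir}: {missing}")
print(f"All {len(required)} PecanStreet files found in {data_dir}.")
```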
If you want to train models using the Open Power Systems dataset, you will need to download the following file and place it under the path specified in openpower.yaml:
- household_data_15min_singleindex.csv

For instructions and dataset usage terms, please refer to the data providers' websites.
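Once downloaded, you can take a quick look at the file with pandas before training. The snippet below is only a sketch: the data path and the utc_timestamp column name are assumptions based on the public Open Power Systems Data layout, not EnData's own loaders.

```python
# Minimal sketch: peek at the Open Power Systems household data at 15-minute resolution.
# The path and the "utc_timestamp" column name are assumptions; check your local copy.
import pandas as pd

df = pd.read_csv(
    "data/openpower/household_data_15min_singleindex.csv",  # path from openpower.yaml
    parse_dates=["utc_timestamp"],
    index_col="utc_timestamp",
)

print(df.shape)                               # (rows, columns) of the 15-minute table
print(df.columns[:5])                         # first few household/feed columns
print(df.index.min(), "->", df.index.max())   # covered time range
```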
New models, new evaluation functionality and new datasets coming soon!