TensorFlow implementation of FSRCNN with a quantized version. This implementation illustrates that FSRCNN with a 16-bit fixed-point representation delivers performance nearly identical to the full-precision model.
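To make the claim concrete, below is a minimal sketch of how signed 16-bit fixed-point (Q8.8-style) weight quantization can be simulated in TensorFlow. It is an illustration under assumed parameters, not the repository's exact scheme; the function name and the integer/fraction bit split are assumptions, and the actual logic lives behind the `--quantize` flag in `FSRCNN.py`.

```python
import tensorflow as tf

def quantize_fixed_point(x, integer_bits=8, fractional_bits=8):
    """Simulate signed 16-bit fixed-point (Q8.8) rounding of a float tensor.

    Illustrative sketch only; the repository's actual quantization is
    implemented behind the --quantize flag in FSRCNN.py.
    """
    scale = 2.0 ** fractional_bits
    # Largest representable integer for a signed 16-bit value.
    max_q = 2.0 ** (integer_bits + fractional_bits - 1) - 1
    q = tf.round(x * scale)                     # round to the nearest step
    q = tf.clip_by_value(q, -max_q - 1, max_q)  # saturate to the 16-bit range
    return q / scale                            # back to float for computation
```

Applying such a function to each weight tensor before the convolution mimics inference with quantized weights while keeping the graph in floating point.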
- Python 2.7
- TensorFlow > 1.2
- numpy
- SciPy > 0.18
- h5py
- PIL
- Download the Caffe training code from here, and place the `Train` and `Test` folders into `($root)`.
- Open MATLAB and run `generate_train.m` and `generate_test.m` to generate the training and test data. You can also run `data_aug.m` first for data augmentation.
- Modify the flags `data_dir` and `test_dir` in `model.py` to point to your data directories; a configuration sketch follows this list.
- Set `NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN` in `utils.py` to the number of training samples.
- To train a new model, run `FSRCNN.py --train True --gpu 0,1 --quantize True`. Simultaneously, run `FSRCNN.py --train False --gpu 0 --quantize True` for testing. Note that the `reload` flag can be set to `True` to reload a pre-trained model. More flags are available for different usages; check `FSRCNN.py` for all possible flags.
- After training, you can evaluate the performance of the model with TensorBoard: run `tensorboard --logdir=/tmp/FSRCNN_eval`. You can also extract the parameters and save them in `.mat` format by setting the `save` flag to `True`; a sketch of an equivalent export is shown after this list.
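For the two configuration steps above, the snippet below sketches what the relevant definitions might look like. The names `data_dir`, `test_dir`, and `NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN` come from this README, but the exact `DEFINE` calls and default values shown here are assumptions, so match them against the actual `model.py` and `utils.py`.

```python
# model.py (illustrative only; check the real flag definitions in the repo)
import tensorflow as tf

flags = tf.app.flags
flags.DEFINE_string("data_dir", "Train/", "Path to the generated training data")
flags.DEFINE_string("test_dir", "Test/", "Path to the generated test data")

# utils.py (illustrative only; use the count of samples you actually generated)
NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN = 20000  # placeholder value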
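The `.mat` export triggered by the `save` flag could look roughly like the following. `save_weights_as_mat` is a hypothetical helper shown only to illustrate the idea; the real export is handled inside `FSRCNN.py`.

```python
import scipy.io as sio
import tensorflow as tf

def save_weights_as_mat(session, out_path="params.mat"):
    # Hypothetical helper: dump every trainable variable to a .mat file.
    params = {}
    for var in tf.trainable_variables():
        # MATLAB variable names cannot contain '/' or ':', so sanitize them.
        key = var.name.replace("/", "_").replace(":", "_")
        params[key] = session.run(var)
    sio.savemat(out_path, params)  # loadable in MATLAB via load('params.mat')
```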