Hello, in the provided `evaluator.py`:

Of interest is the computation of the KL divergence in batches of 5000. This implicitly assumes that, having generated 50,000 images conditioned on, say, 1000 ImageNet classes, the images in the provided array are ordered randomly, so that the mean KL divergence of each batch approaches the KL divergence of the whole set.

If instead the images are ordered by class in the provided array (i.e. the first 50 images are class 0, the next 50 are class 1, and so on), each batch of 5000 contains only 100 of the 1000 classes. The batch's marginal class distribution p(y) is then concentrated on those 100 classes, so the per-batch KL divergence drops and the calculated ISC is artificially low.
This issue can be fixed by adding the following line:

```python
np.random.shuffle(activations)
```
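A hypothetical placement relative to the scoring step (variable names here are illustrative, not the script's actual ones; shuffling the softmax predictions row-wise is equivalent to shuffling the raw activations, since the classifier output is per-image):

```python
# Illustrative placement: shuffle rows in place before the batched
# score so that no batch of 5000 is class-homogeneous.
np.random.shuffle(preds)
score = batched_inception_score(preds, split_size=5000)
```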
As a demonstration: if I sort my own generated ImageNet sample set of 50K images by class, I get an ISC of ~50; the same set in random order yields an ISC of ~366.
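That gap can be reproduced synthetically. Here is a quick sketch (using the `batched_inception_score` function above; the 0.9 confidence value and the seed are arbitrary choices, not the author's data) that scores the same predictions class-sorted and then shuffled:

```python
import numpy as np

# Synthetic reproduction of the ordering effect, not real ImageNet samples.
rng = np.random.default_rng(0)
num_classes, per_class = 1000, 50
labels = np.repeat(np.arange(num_classes), per_class)  # class-sorted order

# A "confident" classifier: ~90% of the mass on the true class.
preds = np.full((len(labels), num_classes), 0.1 / (num_classes - 1))
preds[np.arange(len(labels)), labels] = 0.9

print(batched_inception_score(preds))  # sorted: each batch sees 100 classes
rng.shuffle(preds)                     # the proposed fix: shuffle rows in place
print(batched_inception_score(preds))  # shuffled: the much higher, correct score
```

On this synthetic data the class-sorted score lands several times below the shuffled one, mirroring the ~50 vs. ~366 gap reported above.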
Fortunately, this bug does not affect the academic research that uses this script for evaluations: authors save the generated images to disk as individual files and then read them back in with Python, which, by happenstance, yields an order random enough that the batch statistics stay close to the non-batched KL divergence.