
training BigGAN #35

@STomoya

Description


In the BigGAN paper, the scores were improved by using a larger batch size and a larger model.
I used a batch size of 12, which is the largest batch size my environment can train...

The model also collapsed after about 10 epochs.
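One commonly used workaround when hardware limits the batch size is gradient accumulation: compute gradients over several micro-batches and average them before stepping, so the effective batch size is larger. This is not the method from the BigGAN paper, and it does not fully reproduce large-batch behavior for GANs (BatchNorm statistics are still computed per micro-batch), but it may help stabilize training. Below is a minimal framework-free sketch, using a toy linear least-squares model as a stand-in, showing that accumulating over micro-batches of 12 recovers the full-batch gradient:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in model: loss = mean((X @ w - y)**2); gradient w.r.t. w.
def grad(w, X, y):
    return 2.0 * X.T @ (X @ w - y) / len(y)

X = rng.normal(size=(48, 4))   # hypothetical dataset of 48 samples
y = rng.normal(size=48)
w = rng.normal(size=4)

# Full-batch gradient (what a single batch of 48 would give).
g_full = grad(w, X, y)

# Accumulate over micro-batches of 12, then average over the
# number of micro-batches before taking an optimizer step.
micro = 12
g_acc = np.zeros_like(w)
for i in range(0, len(y), micro):
    g_acc += grad(w, X[i:i + micro], y[i:i + micro])
g_acc /= len(y) // micro

# The accumulated average matches the full-batch gradient.
assert np.allclose(g_full, g_acc)
```

In a deep-learning framework the same pattern is usually expressed by calling the backward pass on each micro-batch and deferring the optimizer step (and gradient reset) until the desired number of micro-batches has been processed.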
