- Prepare all types of feed data as files in (name, data) format; (name, raw_file) is required.
- Create a feed-data iterator that reads the specific data selected by args, in batch form.
  - Read each data element in key-value format.
  - Organize data elements into batches with the specified method.
  - Build the batch iterator.
- Build middle-level data-processing models that compute intermediate values used in the final model.
  - Gather several middle-level data processors.
- Create a trainer that specifies the training process.
  - Build the core model.
  - Build the other components (models, optimizer, etc.) used by the core model.
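The data-preparation steps above can be sketched as follows. This is a minimal illustration, not ESPnet's actual classes: `KeyValueDataset` and `batch_iterator` are hypothetical names standing in for the (name, data) sources, the key-value element reads, and the batch organization described in the list.

```python
# Sketch of the feed-data pipeline: several named data sources, each keyed
# by utterance id; elements are read in key-value format and grouped into
# batches by a simple fixed-size method.
from typing import Dict, Iterator, List


class KeyValueDataset:
    """Holds several named data sources, each mapping utterance key -> value."""

    def __init__(self, sources: Dict[str, Dict[str, object]]):
        self.sources = sources
        # All sources must cover the same set of utterance keys.
        key_sets = [set(s) for s in sources.values()]
        assert all(k == key_sets[0] for k in key_sets), "mismatched utterance keys"
        self.keys = sorted(key_sets[0])

    def __getitem__(self, key: str) -> Dict[str, object]:
        # Read one data element in key-value format: {name: value, ...}.
        return {name: src[key] for name, src in self.sources.items()}


def batch_iterator(
    dataset: KeyValueDataset, batch_size: int
) -> Iterator[List[Dict[str, object]]]:
    """Organize data elements into batches of at most `batch_size`."""
    batch: List[Dict[str, object]] = []
    for key in dataset.keys:
        batch.append(dataset[key])
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # last, possibly short, batch
        yield batch
```

Real batching in ESPnet is more elaborate (e.g. grouping by total element count rather than a fixed batch size), but the key-value read followed by batch grouping is the same shape.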
Training process (call flow)
->main(cmd): [TTSTask => AbsTask]   # Build the whole train/eval environment
  ->main_worker(args): [AbsTask]
    ->build_model(args): model [ESPnetTTSModel]   <= unfolded below
    ->build_optimizer(args, model=model): [AbsTask]
      ->scheduler_classes.get(name): scheduler -> schedulers.append(scheduler)
    ->build_iter_factory(args): train_iter_factory
      ->build_sequence_iter_factory(args, iter_option): SequenceIterFactory
        ->A: ESPnetDataset(iter_option.data_path_and_name_and_type, iter_option.preprocess): dataset [torch.utils.data]
        ->B: IterableESPnetDataset(iter_option.data_path_and_name_and_type, iter_option.preprocess): dataset [torch.utils.data]
        ->build_batch_sampler(iter_option.batch_type, iter_option.shape_files, batch_size): batch_sampler [samplers]
          ->NumElementsBatchSampler()
        ->batches = list(batch_sampler)
        ->SequenceIterFactory(dataset, batches, args.seeds): seq_factory
          ->self.build_iter(epoch): DataLoader
    ->build_iter_factory(args): valid_iter_factory
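The iterator-factory step above can be sketched as a small class. This is a hedged sketch, not ESPnet's real SequenceIterFactory: it assumes the pre-computed `batches` list (lists of utterance keys from the batch sampler) and rebuilds a shuffled iterator per epoch, seeded so that re-running the same epoch reproduces the same order.

```python
# Minimal per-epoch iterator factory: the batch list is fixed once, and
# build_iter(epoch) returns a fresh iterator whose shuffle is seeded by
# (seed + epoch), making each epoch's order deterministic and distinct.
import random
from typing import Iterator, List


class SequenceIterFactorySketch:
    def __init__(self, batches: List[List[str]], seed: int = 0):
        self.batches = batches
        self.seed = seed

    def build_iter(self, epoch: int) -> Iterator[List[str]]:
        order = list(range(len(self.batches)))
        random.Random(self.seed + epoch).shuffle(order)
        return (self.batches[i] for i in order)
```

The real factory wraps a `torch.utils.data.DataLoader` instead of a bare generator, but the reseed-per-epoch design is the point: calling `build_iter(iepoch)` each epoch gives reproducible shuffling without keeping a stateful loader alive.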
    # Dump all args to config.yaml
    # Load pretrained model
    ->trainer.run(model, optimizers, schedulers, train_iter_factory, valid_iter_factory, seeds)   # Training-process overview
      ->train_one_epoch(model, optimizers, schedulers, train_iter_factory.build_iter(iepoch))   # Detailed training control within each epoch
        ->model(**batch)
          ->forward(text, text_len, speech, speech_len, spkem): [Tacotron2_controllable]
      # Report results
      # Save checkpoints
      # Training error handling
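The per-epoch step of the flow above can be sketched as follows. This is an illustrative sketch, not ESPnet's Trainer: the `train_one_epoch` name comes from the outline, and the assumption that `model(**batch)` returns a scalar loss is a simplification of the real model's multi-valued return.

```python
# One training epoch: iterate batches, run forward, backpropagate, step all
# optimizers and schedulers, and report the mean loss for the epoch.
from typing import Dict, Iterable, List

import torch


def train_one_epoch(
    model: torch.nn.Module,
    optimizers: List[torch.optim.Optimizer],
    schedulers: List[object],
    iterator: Iterable[Dict[str, torch.Tensor]],
) -> float:
    model.train()
    total, steps = 0.0, 0
    for batch in iterator:
        loss = model(**batch)        # assumed to return a scalar loss
        for opt in optimizers:
            opt.zero_grad()
        loss.backward()
        for opt in optimizers:
            opt.step()
        for sch in schedulers:
            if sch is not None:
                sch.step()
        total += float(loss)
        steps += 1
    return total / max(steps, 1)     # mean loss, reported to the trainer
```

In the real trainer this loop also handles gradient clipping, accumulation, reporting, and the checkpoint/error-handling steps noted above; the control flow per batch is the same.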