Training step not executing in PyTorch Lightning

Posted: 2021-06-19 16:47:37

[Problem description]

I am working on fine-tuning a T5 model to summarize Amazon reviews. I am following this tutorial: https://towardsdatascience.com/fine-tuning-a-t5-transformer-for-any-summarization-task-82334c64c81

I noticed that the training_step in my code is never executed, since the training loss stays "NaN" throughout the epoch. The validation_step, however, computes just fine.

I have verified that there are no empty strings in the data, and I have tried several batch sizes.

Here is the error:

RuntimeError                              Traceback (most recent call last)
<ipython-input-53-45d4afebefac> in <module>()
----> 1 trainer.fit(model)

8 frames
<ipython-input-46-00fddffa2209> in training_epoch_end(self, outputs)
    134         print("OUTPUTS")
    135         print(outputs)
--> 136         avg_train_loss = torch.stack([x["loss"] for x in outputs]).mean()
    137         tensorboard_logs = {"avg_train_loss": avg_train_loss}
    138         return {"avg_train_loss": avg_train_loss, "log": tensorboard_logs, 'progress_bar': tensorboard_logs}

RuntimeError: stack expects a non-empty TensorList

By adding print statements to the training_step function, I confirmed that training_step never gets executed.

Below is the code for my T5FineTuner class (sorry, I couldn't make it any more concise):

class T5FineTuner(pl.LightningModule):
    def __init__(self, hparams):
        super(T5FineTuner, self).__init__()
        self.hparams = hparams        
        self.model = T5ForConditionalGeneration.from_pretrained(hparams.model_name_or_path)
        self.tokenizer = T5Tokenizer.from_pretrained(hparams.tokenizer_name_or_path)
        self.rouge_metric = load_metric('rouge') 
        
        if self.hparams.freeze_embeds:
            self.freeze_embeds()
        if self.hparams.freeze_encoder:
            self.freeze_params(self.model.get_encoder())
            assert_all_frozen(self.model.get_encoder())
            
            
        n_observations_per_split = {
            "train": self.hparams.n_train,
            "validation": self.hparams.n_val,
            "test": self.hparams.n_test,
        }
        self.n_obs = {k: v if v >= 0 else None for k, v in n_observations_per_split.items()}
        
    
    def freeze_params(self, model):
        for par in model.parameters():
            par.requires_grad = False
            
            
    def freeze_embeds(self):
        """Freeze token embeddings and positional embeddings for bart, just token embeddings for t5."""
        try:
            self.freeze_params(self.model.model.shared)
            for d in [self.model.model.encoder, self.model.model.decoder]:
                freeze_params(d.embed_positions)
                freeze_params(d.embed_tokens)
        except AttributeError:
            self.freeze_params(self.model.shared)
            for d in [self.model.encoder, self.model.decoder]:
                self.freeze_params(d.embed_tokens)
    
    def lmap(self, f, x):
        """list(map(f, x))"""
        return list(map(f, x))
    

    def is_logger(self):
        return True
    
    
    def parse_score(self, result):
        return {k: round(v.mid.fmeasure * 100, 4) for k, v in result.items()}
        
    def forward(
      self, input_ids, attention_mask=None, decoder_input_ids=None, decoder_attention_mask=None, labels=None
  ):
        return self.model(
            input_ids,
            attention_mask=attention_mask,
            decoder_input_ids=decoder_input_ids,
            decoder_attention_mask=decoder_attention_mask,
            labels=labels,
    )

    def _step(self, batch):
        labels = batch["target_ids"]
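        # tokens equal to pad_token_id are set to -100 so the cross-entropy loss ignores them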
        labels[labels[:, :] == self.tokenizer.pad_token_id] = -100
        # print(labels)
        outputs = self(
            input_ids=batch["source_ids"],
            attention_mask=batch["source_mask"],
            labels=labels,
            decoder_attention_mask=batch['target_mask']
        )
        # print(outputs)

        loss = outputs[0]
        return loss
    
    
    def ids_to_clean_text(self, generated_ids):
        gen_text = self.tokenizer.batch_decode(
            generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True
        )
        return self.lmap(str.strip, gen_text)
    
    
    def _generative_step(self, batch) :
        
        t0 = time.time()
        
        generated_ids = self.model.generate(
            batch["source_ids"],
            attention_mask=batch["source_mask"],
            use_cache=True,
            decoder_attention_mask=batch['target_mask'],
            max_length=150, 
            num_beams=2,
            repetition_penalty=2.5, 
            length_penalty=1.0, 
            early_stopping=False,
        )
        preds = self.ids_to_clean_text(generated_ids)
        target = self.ids_to_clean_text(batch["target_ids"])
            
        gen_time = (time.time() - t0) / batch["source_ids"].shape[0]  
    
        loss = self._step(batch)
        # print("LOSS _generative_step")
        # print(loss)
        base_metrics = {'val_loss': loss}
#         rouge: Dict = self.calc_generative_metrics(preds, target)
        summ_len = np.mean(self.lmap(len, generated_ids))
        base_metrics.update(gen_time=gen_time, gen_len=summ_len, preds=preds, target=target)
        self.rouge_metric.add_batch(preds, target)
        
        # rouge_results = self.rouge_metric.compute() 
        # rouge_dict = self.parse_score(rouge_results)
        # base_metrics.update(rouge1=rouge_dict['rouge1'], rougeL=rouge_dict['rougeL'])
        
        return base_metrics
    

    def training_step(self, batch, batch_idx):
        print("training_step")
        print(batch)
        loss = self._step(batch)

        tensorboard_logs = {"train_loss": loss}
        print("LOSS")
        print(loss)
        return "loss": loss, "log": tensorboard_logs
  
    def training_epoch_end(self, outputs):
        print("OUTPUTS")
        print(outputs)
        avg_train_loss = torch.stack([x["loss"] for x in outputs]).mean()
        tensorboard_logs = {"avg_train_loss": avg_train_loss}
        return {"avg_train_loss": avg_train_loss, "log": tensorboard_logs, 'progress_bar': tensorboard_logs}

    def validation_step(self, batch, batch_idx):
        print("validation_step")
        return self._generative_step(batch)
    
  
    def validation_epoch_end(self, outputs):
        
        avg_loss = torch.stack([x["val_loss"] for x in outputs]).mean()
        tensorboard_logs = {"val_loss": avg_loss}
        
        rouge_results = self.rouge_metric.compute() 
        rouge_dict = self.parse_score(rouge_results)
    
        tensorboard_logs.update(rouge1=rouge_dict['rouge1'], rougeL=rouge_dict['rougeL'])
        
        ## Clear out the lists for next epoch
        self.target_gen= []
        self.prediction_gen=[]
        return "avg_val_loss": avg_loss, 
                "rouge1" : rouge_results['rouge1'],
                "rougeL" : rouge_results['rougeL'],
                "log": tensorboard_logs, 'progress_bar': tensorboard_logs

    def configure_optimizers(self):
        "Prepare optimizer and schedule (linear warmup and decay)"

        model = self.model
        no_decay = ["bias", "LayerNorm.weight"]
        optimizer_grouped_parameters = [
            {
                "params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
                "weight_decay": self.hparams.weight_decay,
            },
            {
                "params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)],
                "weight_decay": 0.0,
            },
        ]
        optimizer = AdamW(optimizer_grouped_parameters, lr=self.hparams.learning_rate, eps=self.hparams.adam_epsilon)
        self.opt = optimizer
        return [optimizer]
  
    def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx, second_order_closure=None, using_native_amp=False, optimizer_closure=None, on_tpu=None, using_lbfgs=None):
        # if self.trainer.use_tpu:
        #     xm.optimizer_step(optimizer)
        # else:
        optimizer.step()
        optimizer.zero_grad()
        self.lr_scheduler.step()
  
    def get_tqdm_dict(self):
        tqdm_dict = {"loss": "{:.3f}".format(self.trainer.avg_loss), "lr": self.lr_scheduler.get_last_lr()[-1]}

        return tqdm_dict
    

    def train_dataloader(self):
        print("train_dataloader")   
        n_samples = self.n_obs['train']
        print(n_samples)
        dataloader = DataLoader(train_dataset, batch_size=self.hparams.train_batch_size, num_workers=4)
        print(len(dataloader.dataset))
        print(self.hparams.train_batch_size * max(1, self.hparams.n_gpu))
        print(self.hparams.gradient_accumulation_steps)
        print(float(self.hparams.num_train_epochs))
        t_total = (
            (len(dataloader.dataset) // (self.hparams.train_batch_size * max(1, self.hparams.n_gpu)))
            # // self.hparams.gradient_accumulation_steps
            * float(self.hparams.num_train_epochs)
        )
        print(t_total)
        scheduler = get_linear_schedule_with_warmup(
            self.opt, num_warmup_steps=self.hparams.warmup_steps, num_training_steps=t_total
        )
        self.lr_scheduler = scheduler
        return dataloader

    def val_dataloader(self):
        n_samples = self.n_obs['validation']
        # validation_dataset = get_dataset(tokenizer=self.tokenizer, type_path="validation", num_samples=n_samples, args=self.hparams)
        
        return DataLoader(validation_dataset, batch_size=self.hparams.eval_batch_size, num_workers=4)
    
    
    def test_dataloader(self):
        n_samples = self.n_obs['test']
        # test_dataset = get_dataset(tokenizer=self.tokenizer, type_path="test", num_samples=n_samples, args=self.hparams)
        
        return DataLoader(test_dataset, batch_size=self.hparams.test_batch_size, num_workers=4)

Here are my arguments:

args_dict = dict(
    output_dir="", # path to save the checkpoints
    model_name_or_path='t5-small',
    tokenizer_name_or_path='t5-small',
    max_input_length=512,
    max_output_length=150,
    freeze_encoder=False,
    freeze_embeds=False,
    learning_rate=3e-4,
    weight_decay=0.0,
    adam_epsilon=1e-8,
    warmup_steps=0,
    train_batch_size=20,
    eval_batch_size=20,
    num_train_epochs=2,
    gradient_accumulation_steps=8,
    n_gpu=1,
    resume_from_checkpoint=None, 
    val_check_interval = 0.05, 
    n_val=1000,
    n_train=-1,
    n_test=-1,
    early_stop_callback=False,
    fp_16=False, # if you want to enable 16-bit training then install apex and set this to true
    opt_level='O1', # you can find out more on optimisation levels here https://nvidia.github.io/apex/amp.html#opt-levels-and-properties
    max_grad_norm=1.0, # if you enable 16-bit training then set this to a sensible value, 0.5 is a good default
    seed=42,
)
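
For context, these arguments are plugged into the model and trainer essentially the way the tutorial does it. The sketch below is an assumption based on the tutorial rather than a copy of my notebook; train_params and its exact contents may differ:

# Sketch (assumption based on the tutorial, not my exact notebook code) of how
# args_dict feeds the T5FineTuner model and the Lightning Trainer.
import argparse
import pytorch_lightning as pl

args = argparse.Namespace(**args_dict)
model = T5FineTuner(args)

train_params = dict(
    accumulate_grad_batches=args.gradient_accumulation_steps,
    gpus=args.n_gpu,
    max_epochs=args.num_train_epochs,
    precision=16 if args.fp_16 else 32,
    gradient_clip_val=args.max_grad_norm,
    val_check_interval=args.val_check_interval,
)

trainer = pl.Trainer(**train_params)
trainer.fit(model)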

[Comments]

Hey, any updates on this? I'm running into the same problem.

[Solution 1]

It looks like this code is outdated. The optimizer_step() method is what causes the conflict. I just commented out that entire section, shown below, and it worked for me. If you want to put any custom logic into this function, it's best to refer to the latest code on GitHub.

def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx, second_order_closure=None, using_native_amp=False, on_tpu=None, using_lbfgs=None, optimizer_closure=None):
    if self.trainer.use_tpu:
        xm.optimizer_step(optimizer)
    else:
        optimizer.step(closure=optimizer_closure)
    optimizer.zero_grad()
    self.lr_scheduler.step()
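
If you don't need any custom optimizer logic at all, a simpler route is to delete the optimizer_step override entirely and return the scheduler from configure_optimizers, so Lightning calls optimizer.step(), zero_grad() and the per-step LR schedule for you. The sketch below assumes a reasonably recent pytorch-lightning release (estimated_stepping_batches only exists in newer versions):

# Sketch, assuming a recent pytorch-lightning version: remove optimizer_step entirely
# and let Lightning drive the optimizer and the per-step LR scheduler.
def configure_optimizers(self):
    optimizer = AdamW(
        self.model.parameters(), lr=self.hparams.learning_rate, eps=self.hparams.adam_epsilon
    )
    scheduler = get_linear_schedule_with_warmup(
        optimizer,
        num_warmup_steps=self.hparams.warmup_steps,
        num_training_steps=self.trainer.estimated_stepping_batches,  # newer Lightning releases only
    )
    return {
        "optimizer": optimizer,
        "lr_scheduler": {"scheduler": scheduler, "interval": "step"},
    }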
  

[Comments]
