Hung-yi Lee Deep Learning 2021 Homework 4: Self-Attention Experiment Notes

The amount of material in this homework jumps sharply. To genuinely reach the strong baseline, there is no way around reading the Transformer and Conformer papers and implementing the hints from the slides: self-attention pooling, additive margin softmax, and switching the model to a Conformer.
This post therefore collects the training tricks and the library-usage workflow I picked up from this assignment.

Task description

Homework PPT
Kaggle

Given speech recordings, predict the speaker. Honestly, the data for this homework is a bit confusing at first: a pile of JSON files, and the training data is pre-processed mel-spectrogram features stored as .pt files, which is far less intuitive to work with than text or image data.
The upside is that no feature engineering is needed. The feature sequences do have different lengths, though, so they still need padding (a sketch of that step follows below).
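As a concrete illustration of the padding, here is a minimal sketch of a collate function, assuming each dataset item is a (length, 40) mel-spectrogram tensor paired with a speaker label; the function name and the padding value are illustrative, not taken from the sample code.

import torch
from torch.nn.utils.rnn import pad_sequence

def collate_batch(batch):
    # batch: list of (mel, speaker) pairs, mel of shape (length, 40)
    mels, speakers = zip(*batch)
    # pad every utterance in the batch to the length of the longest one
    mels = pad_sequence(mels, batch_first=True, padding_value=-20.0)  # (batch, max_len, 40)
    return mels, torch.LongTensor(speakers)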

Final Kaggle score:

conformer

For this homework I only changed the model, not the hyperparameters; one training run takes so long that I did not want to iterate further. Increasing the d_model dimension would likely improve the results.

Data description

The dataset is large: 7.6 GB of mel-spectrogram features split across more than 75,000 small files, so disk read speed becomes the bottleneck during training. Roughly every 2,000 steps take 30 minutes, the full run is 70,000 steps, and the final Conformer took 7–8 hours to train.
Each id denotes one speaker, who has multiple utterances of varying length; feature_path gives the path to an utterance's feature file, and mel_len is the length of that feature sequence.

"n_mels": 40,
"speakers": {
        一个人的特征
        "id10473": [
                      {
                        "feature_path": "uttr-5c88b2f1803449789c36f14fb4d3c1eb.pt",
                        "mel_len": 652
                      },
                      {
                        "feature_path": "uttr-022a67baccc54bfda3567a7ac282a7b8.pt",
                        "mel_len": 564
                      },
                      ...
                      ...
                      ...
                      ],
                `````
                `````
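To get a feel for the data, a small sketch of loading the metadata and one feature file; the directory and metadata file names here are assumptions about how the data is laid out:

import json
import torch
from pathlib import Path

data_dir = Path("./Dataset")                                 # hypothetical data directory
meta = json.loads((data_dir / "metadata.json").read_text())  # metadata file name assumed
first_speaker = next(iter(meta["speakers"]))
utterance = meta["speakers"][first_speaker][0]
mel = torch.load(data_dir / utterance["feature_path"])       # tensor of shape (mel_len, n_mels)
print(first_speaker, mel.shape, utterance["mel_len"])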

Transformer

The Transformer was originally proposed for NLP problems. It mainly addresses shortcomings of traditional RNN/LSTM models, such as short effective memory and the inability to parallelize, which makes training inefficient.

self-attention

Intuitively, self-attention lets every word in a sentence look at every other word and measure how much influence each one has on it. The q, k, v in self-attention:
The notions of Query, Key, and Value come from information-retrieval systems. As a simple search example: when you search for a product on an e-commerce site (say, a lightweight red down jacket for young women in winter), the text you type is the Query; the search engine matches it against Keys (product category, color, description, and so on), and the content returned according to the Query-Key similarity is the Value.
Q, K, V in self-attention play a similar role. In the matrix computation, the dot product is one way to measure similarity between two matrices; the output is then assembled as a weighted sum, where the weights are the query-key similarities.
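Written out, that matching is just a scaled dot product between queries and keys followed by a softmax-weighted sum of the values; a minimal single-head sketch, without masking or multi-head projections:

import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    # q, k, v: (batch, seq_len, d_k)
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5  # query-key similarity, (batch, seq_len, seq_len)
    weights = F.softmax(scores, dim=-1)            # each position's weights over all positions
    return weights @ v                             # weighted sum of the values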

Since the paper alone was hard to digest, a few good articles I found:

The Illustrated Transformer
详解Transformer (Attention Is All You Need) — a detailed Chinese walkthrough of the Transformer

TransformerEncoderLayer() in PyTorch

The sample code uses a Transformer encoder layer with the following parameters (a small usage sketch follows the list):

  • d_model: dimensionality of the input features
  • nhead: number of heads in the multi-head attention
  • dim_feedforward: hidden dimension of the feed-forward network
  • dropout: dropout applied to the output of the two sub-layers; default 0.1
  • activation: activation function of the feed-forward network; default relu
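A quick shape check of the layer with the sample code's settings (d_model = 80, nhead = 2, dim_feedforward = 256); by default the layer expects input of shape (seq_len, batch, d_model):

import torch
import torch.nn as nn

layer = nn.TransformerEncoderLayer(d_model=80, nhead=2, dim_feedforward=256, dropout=0.1)
x = torch.randn(100, 32, 80)  # (seq_len, batch, d_model)
y = layer(x)                  # output keeps the same shape
print(y.shape)                # torch.Size([100, 32, 80])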

Conformer

The rough idea is to add a convolution module alongside the attention layer of the Transformer. Because the Transformer discards RNNs/LSTMs, it focuses on long-range information and can overlook equally important local features; adding a CNN layer lets the model learn those local patterns. A ready-made package can be used directly: insert a Conformer block into the model, keeping the default parameters from the paper.

pip install conformer

from conformer import ConformerBlock

self.conformer_block = ConformerBlock(
    dim=d_model,                # model dimension
    dim_head=64,                # dimension per attention head
    heads=8,                    # number of attention heads
    ff_mult=4,                  # feed-forward expansion factor
    conv_expansion_factor=2,    # expansion factor of the convolution module
    conv_kernel_size=31,        # depthwise convolution kernel size
    attn_dropout=dropout,
    ff_dropout=dropout,
    conv_dropout=dropout
)
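As a sanity check, a dummy forward pass through the block; this assumes the installed package is lucidrains' conformer, whose ConformerBlock, as far as I understand, preserves the shape of a (batch, seq_len, dim) input:

import torch
from conformer import ConformerBlock

block = ConformerBlock(dim=256, dim_head=64, heads=8, ff_mult=4,
                       conv_expansion_factor=2, conv_kernel_size=31)
x = torch.randn(32, 100, 256)  # (batch, seq_len, dim)
y = block(x)                   # same shape as the input
print(y.shape)                 # torch.Size([32, 100, 256])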

Cosine annealing schedule + warmup

The sample code implements a practical cosine-annealing learning-rate schedule with warmup; it is worth saving, since it may come in handy in future experiments.
In this homework the first 1000 steps are warmup, after which the learning rate is updated by cosine annealing.

import math

from torch.optim import Optimizer
from torch.optim.lr_scheduler import LambdaLR


def get_cosine_schedule_with_warmup(optimizer: Optimizer, num_warmup_steps: int, num_training_steps: int,
                                    num_cycles: float = 0.5, last_epoch: int = -1):
    def lr_lambda(current_step):
        # linear warmup: scale the lr from 0 up to its base value
        if current_step < num_warmup_steps:
            return float(current_step) / float(max(1, num_warmup_steps))
        # cosine decay from the base lr down to 0 over the remaining steps
        progress = float(current_step - num_warmup_steps) / float(max(1, num_training_steps - num_warmup_steps))
        return max(0.0, 0.5 * (1.0 + math.cos(math.pi * float(num_cycles) * 2.0 * progress)))

    return LambdaLR(optimizer, lr_lambda, last_epoch)
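How it is wired into training, as a hedged sketch: the scheduler is stepped once per training step, with 1000 warmup steps and 70000 total steps as used in this homework (the model and learning rate below are placeholders):

import torch

model = torch.nn.Linear(40, 600)  # stand-in model, for illustration only
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=1000,     # warmup over the first 1000 steps
    num_training_steps=70000,  # total number of training steps
)

for step in range(70000):
    # ... forward pass, loss.backward(), optimizer.step(), optimizer.zero_grad() ...
    scheduler.step()  # update the learning rate once per step, not per epoch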

Experiment results

Results of the sample code

Kaggle score

Sample code

Model structure of the sample code:

One Transformer encoder layer (d_model = 80, feed-forward hidden size 256, 2 attention heads), followed by mean pooling over the time dimension and two linear layers for the prediction.

import torch.nn as nn


class Classifier(nn.Module):
    def __init__(self, d_model=80, n_spks=600, dropout=0.1):
        super().__init__()
        # project the 40-dim mel feature to d_model
        self.prenet = nn.Linear(40, d_model)
        self.encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, dim_feedforward=256, nhead=2)

        self.pred_layer = nn.Sequential(
            nn.Linear(d_model, d_model),
            nn.ReLU(),
            nn.Linear(d_model, n_spks),
        )

    def forward(self, mels):
        out = self.prenet(mels)       # (batch, length, d_model)
        out = out.permute(1, 0, 2)    # (length, batch, d_model), as the encoder layer expects
        out = self.encoder_layer(out)
        out = out.transpose(0, 1)     # (batch, length, d_model)
        stats = out.mean(dim=1)       # mean pooling over the time dimension

        out = self.pred_layer(stats)  # (batch, n_spks)
        return out
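The classifier is trained as an ordinary 600-class classification problem; the following is only a sketch of a single training step with made-up batch shapes, not the exact training loop of the sample code:

import torch
import torch.nn as nn

model = Classifier()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

mels = torch.randn(32, 128, 40)          # (batch, padded length, n_mels)
speakers = torch.randint(0, 600, (32,))  # speaker ids

logits = model(mels)                     # (batch, n_spks)
loss = criterion(logits, speakers)
loss.backward()
optimizer.step()
optimizer.zero_grad()
accuracy = (logits.argmax(dim=-1) == speakers).float().mean()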

Loss/accuracy log of the sample code

[Info]: Finished loading the dataloader!
[Info]: Finished instantiating the model!
Train: 100% 2000/2000 [1:09:28<00:00,  2.08s/ step, accuracy=0.19, loss=3.65, step=2000]
Valid:   0% 32/6944 [00:00<01:26, 79.76 uttr/s, accuracy=0.19, loss=3.57]
Train: 100% 2000/2000 [15:18<00:00,  2.18 step/s, accuracy=0.28, loss=2.91, step=4000]
Valid:   0% 32/6944 [00:00<00:03, 2070.49 uttr/s, accuracy=0.28, loss=3.07]
Train: 100% 2000/2000 [17:28<00:00,  1.91 step/s, accuracy=0.34, loss=2.65, step=6000]
Valid:   0% 32/6944 [00:02<09:51, 11.69 uttr/s, accuracy=0.38, loss=2.73]
Train: 100% 2000/2000 [27:55<00:00,  1.19 step/s, accuracy=0.47, loss=2.56, step=8000]
Valid:   0% 32/6944 [00:00<00:03, 1933.00 uttr/s, accuracy=0.22, loss=3.16]
Train: 100% 2000/2000 [07:09<00:00,  4.65 step/s, accuracy=0.44, loss=2.10, step=1e+4]
Valid:   0% 32/6944 [00:00<00:03, 2005.37 uttr/s, accuracy=0.41, loss=2.37]
Train:   0% 1/2000 [00:03<1:47:05,  3.21s/ step, accuracy=0.47, loss=2.55, step=1e+4]Step 10000, best model saved. (accuracy=0.0019)
Train: 100% 2000/2000 [02:14<00:00, 14.88 step/s, accuracy=0.38, loss=2.81, step=12000]
Valid:   0% 32/6944 [00:00<00:03, 2005.34 uttr/s, accuracy=0.38, loss=2.68]
Train: 100% 2000/2000 [01:01<00:00, 32.62 step/s, accuracy=0.44, loss=2.13, step=14000]
Valid:   0% 32/6944 [00:00<00:03, 1887.39 uttr/s, accuracy=0.38, loss=2.44]
Train: 100% 2000/2000 [00:48<00:00, 41.39 step/s, accuracy=0.38, loss=2.70, step=16000]
Valid:   0% 32/6944 [00:00<00:03, 2066.73 uttr/s, accuracy=0.44, loss=2.42]
Train: 100% 2000/2000 [00:41<00:00, 48.06 step/s, accuracy=0.41, loss=2.29, step=18000]
Valid:   0% 32/6944 [00:00<00:03, 2066.96 uttr/s, accuracy=0.53, loss=2.09]
Train: 100% 2000/2000 [00:41<00:00, 48.29 step/s, accuracy=0.66, loss=1.11, step=2e+4]
Valid:   0% 32/6944 [00:00<00:03, 2005.37 uttr/s, accuracy=0.47, loss=2.02]
Train:   0% 2/2000 [00:00<29:05,  1.14 step/s, accuracy=0.50, loss=2.08, step=2e+4]Step 20000, best model saved. (accuracy=0.0024)
Train: 100% 2000/2000 [00:40<00:00, 49.51 step/s, accuracy=0.44, loss=2.23, step=22000]
Valid:   0% 32/6944 [00:00<00:03, 2139.06 uttr/s, accuracy=0.47, loss=1.95]
Train: 100% 2000/2000 [00:39<00:00, 50.86 step/s, accuracy=0.66, loss=1.52, step=24000]
Valid:   0% 32/6944 [00:00<00:03, 2108.29 uttr/s, accuracy=0.50, loss=1.93]
Train: 100% 2000/2000 [00:39<00:00, 50.81 step/s, accuracy=0.47, loss=2.46, step=26000]
Valid:   0% 32/6944 [00:00<00:03, 1977.60 uttr/s, accuracy=0.56, loss=1.75]
Train: 100% 2000/2000 [00:42<00:00, 46.99 step/s, accuracy=0.72, loss=1.28, step=28000]
Valid:   0% 32/6944 [00:00<00:03, 2139.03 uttr/s, accuracy=0.53, loss=1.80]
Train: 100% 2000/2000 [00:45<00:00, 43.99 step/s, accuracy=0.59, loss=1.35, step=3e+4]
Valid:   0% 32/6944 [00:00<00:03, 1970.83 uttr/s, accuracy=0.47, loss=1.65]
Train:   0% 1/2000 [00:00<00:49, 40.56 step/s, accuracy=0.59, loss=1.62, step=3e+4]Step 30000, best model saved. (accuracy=0.0026)
Train: 100% 2000/2000 [00:49<00:00, 40.64 step/s, accuracy=0.81, loss=1.01, step=32000]
Valid:   0% 32/6944 [00:00<00:03, 2080.41 uttr/s, accuracy=0.50, loss=1.97]
Train: 100% 2000/2000 [00:51<00:00, 38.74 step/s, accuracy=0.66, loss=1.36, step=34000]
Valid:   0% 32/6944 [00:00<00:03, 2015.40 uttr/s, accuracy=0.56, loss=1.66]
Train: 100% 2000/2000 [00:48<00:00, 40.82 step/s, accuracy=0.72, loss=1.35, step=36000]
Valid:   0% 32/6944 [00:00<00:03, 2004.78 uttr/s, accuracy=0.56, loss=1.58]
Train: 100% 2000/2000 [00:39<00:00, 50.19 step/s, accuracy=0.62, loss=1.44, step=38000]
Valid:   0% 32/6944 [00:00<00:03, 1953.65 uttr/s, accuracy=0.69, loss=1.46]
Train: 100% 2000/2000 [00:40<00:00, 48.93 step/s, accuracy=0.72, loss=0.85, step=4e+4]
Valid:   0% 32/6944 [00:00<00:03, 2083.25 uttr/s, accuracy=0.62, loss=1.54]
Train:   0% 2/2000 [00:00<00:41, 48.21 step/s, accuracy=0.62, loss=1.48, step=4e+4]Step 40000, best model saved. (accuracy=0.0032)
Train: 100% 2000/2000 [00:40<00:00, 49.06 step/s, accuracy=0.66, loss=1.08, step=42000]
Valid:   0% 32/6944 [00:00<00:03, 1932.08 uttr/s, accuracy=0.66, loss=1.53]
Train: 100% 2000/2000 [00:39<00:00, 50.19 step/s, accuracy=0.75, loss=1.28, step=44000]
Valid:   0% 32/6944 [00:00<00:03, 2139.03 uttr/s, accuracy=0.72, loss=1.55]
Train: 100% 2000/2000 [00:39<00:00, 50.51 step/s, accuracy=0.88, loss=0.69, step=46000]
Valid:   0% 32/6944 [00:00<00:03, 2050.41 uttr/s, accuracy=0.59, loss=1.69]
Train: 100% 2000/2000 [00:39<00:00, 50.33 step/s, accuracy=0.56, loss=1.59, step=48000]
Valid:   0% 32/6944 [00:00<00:03, 2005.37 uttr/s, accuracy=0.66, loss=1.63]
Train: 100% 2000/2000 [00:40<00:00, 49.88 step/s, accuracy=0.81, loss=0.71, step=5e+4]
Valid:   0% 32/6944 [00:00<00:03, 2139.06 uttr/s, accuracy=0.72, loss=1.00]
Train:   0% 1/2000 [00:00<00:45, 43.59 step/s, accuracy=0.84, loss=0.81, step=5e+4]Step 50000, best model saved. (accuracy=0.0033)
Train: 100% 2000/2000 [00:39<00:00, 50.31 step/s, accuracy=0.72, loss=1.03, step=52000]
Valid:   0% 32/6944 [00:00<00:03, 2069.22 uttr/s, accuracy=0.56, loss=1.87]
Train: 100% 2000/2000 [00:40<00:00, 49.31 step/s, accuracy=0.69, loss=1.34, step=54000]
Valid:   0% 32/6944 [00:00<00:03, 2032.00 uttr/s, accuracy=0.62, loss=1.32]
Train: 100% 2000/2000 [00:40<00:00, 49.07 step/s, accuracy=0.72, loss=1.00, step=56000]
Valid:   0% 32/6944 [00:00<00:03, 1942.68 uttr/s, accuracy=0.56, loss=1.72]
Train: 100% 2000/2000 [00:43<00:00, 46.15 step/s, accuracy=0.75, loss=1.02, step=58000]
Valid:   0% 32/6944 [00:00<00:03, 2008.98 uttr/s, accuracy=0.59, loss=1.21]
Train: 100% 2000/2000 [00:41<00:00, 48.48 step/s, accuracy=0.91, loss=0.55, step=6e+4]
Valid:   0% 32/6944 [00:00<00:03, 1967.54 uttr/s, accuracy=0.72, loss=1.43]
Train:   0% 1/2000 [00:00<00:45, 44.27 step/s, accuracy=0.72, loss=1.06, step=6e+4]Step 60000, best model saved. (accuracy=0.0033)
Train: 100% 2000/2000 [00:40<00:00, 49.98 step/s, accuracy=0.53, loss=1.31, step=62000]
Valid:   0% 32/6944 [00:00<00:03, 2139.06 uttr/s, accuracy=0.66, loss=1.09]
Train: 100% 2000/2000 [00:40<00:00, 48.99 step/s, accuracy=0.72, loss=1.47, step=64000]
Valid:   0% 32/6944 [00:00<00:03, 2033.36 uttr/s, accuracy=0.84, loss=0.77]
Train: 100% 2000/2000 [00:39<00:00, 50.88 step/s, accuracy=0.78, loss=0.72, step=66000]
Valid:   0% 32/6944 [00:00<00:03, 1999.43 uttr/s, accuracy=0.66, loss=1.32]
Train: 100% 2000/2000 [00:38<00:00, 51.39 step/s, accuracy=0.75, loss=0.94, step=68000]
Valid:   0% 32/6944 [00:00<00:03, 2148.07 uttr/s, accuracy=0.62, loss=1.55]
Train: 100% 2000/2000 [00:39<00:00, 51.03 step/s, accuracy=0.75, loss=1.02, step=7e+4]
Valid:   0% 32/6944 [00:00<00:03, 1934.64 uttr/s, accuracy=0.62, loss=1.48]
Train:   0% 0/2000 [00:00<?, ? step/s]
Step 70000, best model saved. (accuracy=0.0039)

Model changed to Conformer + SA pooling + AMSoftmax

Kaggle score

conformer

Self Attentive Pooling

A Structured Self-Attentive Sentence Embedding — the SA pooling code was taken straight from GitHub.

import torch
import torch.nn as nn
import torch.nn.functional as F


class Self_Attentive_Pooling(nn.Module):
    def __init__(self, dim):
        """SAP
        Paper: Self-Attentive Speaker Embeddings for Text-Independent Speaker Verification
        Link: https://danielpovey.com/files/2018_interspeech_xvector_attention.pdf
        Args:
            dim (pair): the size of attention weights
        """
        super(Self_Attentive_Pooling, self).__init__()
        self.sap_linear = nn.Linear(dim, dim)
        self.attention = nn.Parameter(torch.FloatTensor(dim, 1))

    def forward(self, x):
        """Computes Self-Attentive Pooling Module
        Args:
            x (torch.Tensor): Input tensor (#batch, dim, frames).
        Returns:
            torch.Tensor: Output tensor (#batch, dim)
        """
        x = x.permute(0, 2, 1)
        h = torch.tanh(self.sap_linear(x))
        w = torch.matmul(h, self.attention).squeeze(dim=2)
        w = F.softmax(w, dim=1).view(x.size(0), x.size(1), 1)
        x = torch.sum(x * w, dim=1)
        return x
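Usage is straightforward: the module turns a variable-length sequence of frame-level features into a single utterance-level embedding. A small sketch:

import torch

pooling = Self_Attentive_Pooling(dim=256)
x = torch.randn(32, 256, 100)  # (batch, dim, frames), as the docstring expects
embedding = pooling(x)         # (batch, dim): one vector per utterance
print(embedding.shape)         # torch.Size([32, 256])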

AMSoftmax

从Softmax到AMSoftmax (From Softmax to AMSoftmax, with visualization and implementation code) — likewise, the AMSoftmax implementation was taken from GitHub and adapted.

class AMSoftmax(nn.Module):
    '''
    Additve Margin Softmax as proposed in:
    https://arxiv.org/pdf/1801.05599.pdf
    '''

    def __init__(self, in_features, n_classes, s=30, m=0.4):
        super(AMSoftmax, self).__init__()
        self.linear = nn.Linear(in_features, n_classes, bias=False)
        self.m = m
        self.s = s

    def _am_logsumexp(self, logits):
        max_x = torch.max(logits, dim=-1)[0].unsqueeze(-1)
        term1 = (self.s * (logits - (max_x + self.m))).exp()
        term2 = (self.s * (logits - max_x)).exp().sum(-1).unsqueeze(-1) - (self.s * (logits - max_x)).exp()
        return self.s * max_x + (term1 + term2).log()

    def forward(self, *inputs):
        x_vector = F.normalize(inputs[0], p=2, dim=-1)
        self.linear.weight.data = F.normalize(self.linear.weight.data, p=2, dim=-1)
        logits = self.linear(x_vector)
        scaled_logits = (logits - self.m) * self.s
        return scaled_logits - self._am_logsumexp(logits)
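Since forward subtracts the AM log-sum-exp, each entry of the output behaves like a log-probability with the margin applied to that class, so the AM-softmax loss for a batch can be taken as the negative log-likelihood of the target class. A sketch of how I read this implementation (not necessarily how the original repository trains with it):

import torch
import torch.nn.functional as F

head = AMSoftmax(in_features=256, n_classes=600)
embeddings = torch.randn(32, 256)        # utterance embeddings from the pooling layer
speakers = torch.randint(0, 600, (32,))

log_probs = head(embeddings)             # (batch, n_classes), margin already folded in
loss = F.nll_loss(log_probs, speakers)   # AM-softmax loss for the batch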

Final model structure

class Classifier(nn.Module):
    def __init__(self, d_model=256, n_spks=600, dropout=0.1):
        super().__init__()
        self.prenet = nn.Linear(40, d_model)

        self.conformer_block = ConformerBlock(
            dim=d_model,
            dim_head=64,
            heads=8,
            ff_mult=4,
            conv_expansion_factor=2,
            conv_kernel_size=31,
            attn_dropout=dropout,
            ff_dropout=dropout,
            conv_dropout=dropout
        )

        self.pooling = Self_Attentive_Pooling(d_model)

        self.pred_layer = AMSoftmax(in_features=d_model, n_classes=n_spks)

    def forward(self, mels):
        out = self.prenet(mels)
        out = out.permute(1, 0, 2)
        out = self.conformer_block(out)

        out = out.permute(1, 2, 0)
        # out: (batch size, d_model, length), the layout Self_Attentive_Pooling expects
        stats = self.pooling(out)
        out = self.pred_layer(stats)
        return out
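A dummy forward pass to confirm the shapes line up (the batch size and sequence length are arbitrary):

import torch

model = Classifier(d_model=256, n_spks=600)
mels = torch.randn(4, 128, 40)  # (batch, length, n_mels)
out = model(mels)
print(out.shape)                # expected (batch, n_spks): torch.Size([4, 600])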

Accuracy/loss log of the Conformer model

[Info]: Finished loading the dataloader!
[Info]: Finished instantiating the model!
Train: 100% 2000/2000 [1:23:07<00:00,  2.49s/ step, accuracy=0.41, loss=2.59, step=2000]
Valid:   0% 32/6944 [00:03<12:13,  9.43 uttr/s, accuracy=0.28, loss=3.30]
Train: 100% 2000/2000 [38:02<00:00,  1.14s/ step, accuracy=0.53, loss=1.81, step=4000]
Valid:   0% 32/6944 [00:06<24:28,  4.71 uttr/s, accuracy=0.50, loss=1.90]
Train: 100% 2000/2000 [33:57<00:00,  1.02s/ step, accuracy=0.59, loss=1.62, step=6000]
Valid:   0% 32/6944 [00:02<09:55, 11.61 uttr/s, accuracy=0.50, loss=1.76]
Train: 100% 2000/2000 [31:58<00:00,  1.04 step/s, accuracy=0.62, loss=1.28, step=8000]
Valid:   0% 32/6944 [00:00<00:07, 878.07 uttr/s, accuracy=0.56, loss=1.71]
Train: 100% 2000/2000 [21:29<00:00,  1.55 step/s, accuracy=0.56, loss=2.00, step=1e+4]
Valid:   0% 32/6944 [00:00<00:07, 898.87 uttr/s, accuracy=0.62, loss=1.45]
Train:   0% 1/2000 [00:01<35:27,  1.06s/ step, accuracy=0.56, loss=1.67, step=1e+4]Step 10000, best model saved. (accuracy=0.0029)
Train: 100% 2000/2000 [25:02<00:00,  1.33 step/s, accuracy=0.75, loss=1.28, step=12000]
Valid:   0% 32/6944 [00:00<00:09, 747.23 uttr/s, accuracy=0.53, loss=1.76]
Train: 100% 2000/2000 [36:18<00:00,  1.09s/ step, accuracy=0.72, loss=1.21, step=14000]
Valid:   0% 32/6944 [00:00<00:09, 713.02 uttr/s, accuracy=0.56, loss=1.66]
Train: 100% 2000/2000 [35:01<00:00,  1.05s/ step, accuracy=0.69, loss=1.29, step=16000]
Valid:   0% 32/6944 [00:00<00:07, 891.27 uttr/s, accuracy=0.78, loss=1.04]
Train: 100% 2000/2000 [28:01<00:00,  1.19 step/s, accuracy=0.66, loss=1.38, step=18000]
Valid:   0% 32/6944 [00:00<00:32, 209.70 uttr/s, accuracy=0.75, loss=1.06]
Train: 100% 2000/2000 [21:14<00:00,  1.57 step/s, accuracy=0.91, loss=0.40, step=2e+4]
Valid:   0% 32/6944 [00:00<00:07, 916.74 uttr/s, accuracy=0.72, loss=1.39]
Train:   0% 0/2000 [00:00<?, ? step/s]Step 20000, best model saved. (accuracy=0.0036)
Train: 100% 2000/2000 [07:46<00:00,  4.28 step/s, accuracy=0.75, loss=0.76, step=22000]
Valid:   0% 32/6944 [00:00<00:07, 868.11 uttr/s, accuracy=0.75, loss=1.03]
Train: 100% 2000/2000 [03:14<00:00, 10.28 step/s, accuracy=0.81, loss=0.65, step=24000]
Valid:   0% 32/6944 [00:00<00:07, 918.44 uttr/s, accuracy=0.81, loss=0.81]
Train: 100% 2000/2000 [03:04<00:00, 10.85 step/s, accuracy=0.84, loss=0.43, step=26000]
Valid:   0% 32/6944 [00:00<00:07, 867.15 uttr/s, accuracy=0.72, loss=1.18]
Train: 100% 2000/2000 [03:36<00:00,  9.25 step/s, accuracy=0.97, loss=0.31, step=28000]
Valid:   0% 32/6944 [00:00<00:08, 844.34 uttr/s, accuracy=0.81, loss=0.84]
Train: 100% 2000/2000 [02:53<00:00, 11.55 step/s, accuracy=0.84, loss=0.48, step=3e+4]
Valid:   0% 32/6944 [00:00<00:08, 844.36 uttr/s, accuracy=0.81, loss=0.86]
Train:   0% 0/2000 [00:00<?, ? step/s]Step 30000, best model saved. (accuracy=0.0037)
Train: 100% 2000/2000 [02:13<00:00, 14.95 step/s, accuracy=0.91, loss=0.44, step=32000]
Valid:   0% 32/6944 [00:00<00:08, 854.84 uttr/s, accuracy=0.88, loss=0.48]
Train: 100% 2000/2000 [01:37<00:00, 20.43 step/s, accuracy=0.84, loss=0.47, step=34000]
Valid:   0% 32/6944 [00:00<00:08, 841.03 uttr/s, accuracy=0.81, loss=0.60]
Train: 100% 2000/2000 [01:36<00:00, 20.67 step/s, accuracy=0.91, loss=0.32, step=36000]
Valid:   0% 32/6944 [00:00<00:08, 844.35 uttr/s, accuracy=0.84, loss=0.62]
Train: 100% 2000/2000 [01:35<00:00, 20.98 step/s, accuracy=0.88, loss=0.42, step=38000]
Valid:   0% 32/6944 [00:00<00:07, 867.17 uttr/s, accuracy=0.72, loss=0.83]
Train: 100% 2000/2000 [01:43<00:00, 19.28 step/s, accuracy=0.97, loss=0.29, step=4e+4]
Valid:   0% 32/6944 [00:00<00:07, 867.89 uttr/s, accuracy=0.81, loss=0.57]
Train:   0% 0/2000 [00:00<?, ? step/s]Step 40000, best model saved. (accuracy=0.0040)
Train: 100% 2000/2000 [01:34<00:00, 21.24 step/s, accuracy=0.97, loss=0.14, step=42000]
Valid:   0% 32/6944 [00:00<00:07, 867.19 uttr/s, accuracy=0.88, loss=0.42]
Train: 100% 2000/2000 [01:33<00:00, 21.31 step/s, accuracy=0.91, loss=0.46, step=44000]
Valid:   0% 32/6944 [00:00<00:08, 845.64 uttr/s, accuracy=0.78, loss=0.70]
Train: 100% 2000/2000 [01:34<00:00, 21.25 step/s, accuracy=0.94, loss=0.28, step=46000]
Valid:   0% 32/6944 [00:00<00:08, 844.34 uttr/s, accuracy=0.84, loss=0.54]
Train: 100% 2000/2000 [01:34<00:00, 21.24 step/s, accuracy=0.88, loss=0.32, step=48000]
Valid:   0% 32/6944 [00:00<00:08, 844.35 uttr/s, accuracy=0.88, loss=0.40]
Train: 100% 2000/2000 [01:40<00:00, 19.88 step/s, accuracy=0.94, loss=0.31, step=5e+4]
Valid:   0% 32/6944 [00:00<00:07, 867.18 uttr/s, accuracy=0.91, loss=0.39]
Train:   0% 0/2000 [00:00<?, ? step/s]Step 50000, best model saved. (accuracy=0.0042)
Train: 100% 2000/2000 [01:36<00:00, 20.80 step/s, accuracy=0.88, loss=0.39, step=52000]
Valid:   0% 32/6944 [00:00<00:08, 835.28 uttr/s, accuracy=0.88, loss=0.41]
Train: 100% 2000/2000 [01:38<00:00, 20.33 step/s, accuracy=0.94, loss=0.20, step=54000]
Valid:   0% 32/6944 [00:00<00:07, 868.06 uttr/s, accuracy=0.88, loss=0.39]
Train: 100% 2000/2000 [01:39<00:00, 20.06 step/s, accuracy=1.00, loss=0.06, step=56000]
Valid:   0% 32/6944 [00:00<00:07, 866.80 uttr/s, accuracy=0.91, loss=0.27]
Train: 100% 2000/2000 [01:35<00:00, 20.83 step/s, accuracy=1.00, loss=0.05, step=58000]
Valid:   0% 32/6944 [00:00<00:07, 867.18 uttr/s, accuracy=0.81, loss=0.69]
Train: 100% 2000/2000 [01:35<00:00, 21.05 step/s, accuracy=0.97, loss=0.16, step=6e+4]
Valid:   0% 32/6944 [00:00<00:08, 847.94 uttr/s, accuracy=0.84, loss=0.53]
Train:   0% 0/2000 [00:00<?, ? step/s]Step 60000, best model saved. (accuracy=0.0042)
Train: 100% 2000/2000 [01:34<00:00, 21.11 step/s, accuracy=1.00, loss=0.04, step=62000]
Valid:   0% 32/6944 [00:00<00:08, 849.05 uttr/s, accuracy=0.88, loss=0.47]
Train: 100% 2000/2000 [01:41<00:00, 19.79 step/s, accuracy=1.00, loss=0.01, step=64000]
Valid:   0% 32/6944 [00:00<00:07, 867.18 uttr/s, accuracy=0.91, loss=0.35]
Train: 100% 2000/2000 [01:39<00:00, 20.07 step/s, accuracy=1.00, loss=0.07, step=66000]
Valid:   0% 32/6944 [00:00<00:14, 486.15 uttr/s, accuracy=0.91, loss=0.35]
Train: 100% 2000/2000 [01:41<00:00, 19.68 step/s, accuracy=0.94, loss=0.20, step=68000]
Valid:   0% 32/6944 [00:00<00:07, 867.18 uttr/s, accuracy=0.88, loss=0.57]
Train: 100% 2000/2000 [01:37<00:00, 20.56 step/s, accuracy=1.00, loss=0.03, step=7e+4]
Valid:   0% 32/6944 [00:00<00:07, 867.18 uttr/s, accuracy=0.91, loss=0.26]
Train:   0% 0/2000 [00:00<?, ? step/s]
Step 70000, best model saved. (accuracy=0.0042)

Experiment code

Homework code repository: https://github.com/nikuleo/lhy_ML2021Spring/blob/main/hw4_Self_Attention/self_attention_classify.py