
Check-in

Contents

Check-in

Environment Setup

Preparation

Data Loading and Preprocessing

BertTokenizer

Partial Output

Model Construction

GPT-2 Model Structure Output

Training

Partial Output

Partial Output 2 (Reduced Training Data)

Inference


Environment Setup

pip install -i https://pypi.mirrors.ustc.edu.cn/simple mindspore==2.2.14
pip install tokenizers==0.15.0 -i https://pypi.tuna.tsinghua.edu.cn/simple
# This case was validated against mindnlp 0.3.1; if it fails to run, pin the version with `pip install mindnlp==0.3.1`
pip install mindnlp

Preparation

The NLPCC 2017 summarization dataset consists of news articles paired with their summaries, 50,000 samples in total.

Source: NLPCC 2017 summarization dataset
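So that `path` in the loading code below is defined, here is a minimal sketch of fetching the data. The `http_get` helper and the download URL follow the companion MindSpore tutorial and are assumptions on my part, not shown in the original post:

from mindnlp.utils import http_get

# download the NLPCC 2017 summarization data (URL assumed from the MindSpore tutorial)
url = 'https://download.mindspore.cn/toolkits/mindnlp/dataset/text_generation/nlpcc2017/train_with_summ.txt'
path = http_get(url, './')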

Data Loading and Preprocessing

  • Raw data format:
article: [CLS] article_context [SEP]
summary: [CLS] summary_context [SEP]
  • Data format after preprocessing (see the sketch after this list):
[CLS] article_context [SEP] summary_context [SEP]
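As a quick illustration of the merged format (a sketch I added; the actual preprocessing is done in process_dataset below), BertTokenizer's text_pair argument produces exactly this concatenation:

from mindnlp.transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')
merged = tokenizer(text='新闻正文', text_pair='摘要')
print(tokenizer.convert_ids_to_tokens(merged['input_ids']))
# ['[CLS]', '新', '闻', '正', '文', '[SEP]', '摘', '要', '[SEP]']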

BertTokenizer

Since GPT-2 ships no Chinese tokenizer, BertTokenizer (bert-base-chinese) is used instead. The code is as follows:

import json
import numpy as np
from mindspore.dataset import TextFileDataset
from mindnlp.transformers import BertTokenizer

# preprocess dataset
def process_dataset(dataset, tokenizer, batch_size=6, max_seq_len=1024, shuffle=False):
    def read_map(text):
        data = json.loads(text.tobytes())
        return np.array(data['article']), np.array(data['summarization'])

    def merge_and_pad(article, summary):
        # tokenization
        # pad to max_seq_length, only truncate the article
        tokenized = tokenizer(text=article, text_pair=summary,
                              padding='max_length', truncation='only_first', max_length=max_seq_len)
        return tokenized['input_ids'], tokenized['input_ids']

    dataset = dataset.map(read_map, 'text', ['article', 'summary'])
    # change column names to input_ids and labels for the following training
    dataset = dataset.map(merge_and_pad, ['article', 'summary'], ['input_ids', 'labels'])
    dataset = dataset.batch(batch_size)
    if shuffle:
        dataset = dataset.shuffle(batch_size)
    return dataset

# load dataset
dataset = TextFileDataset(str(path), shuffle=False)
print(dataset.get_dataset_size())   # 50000

# split into training and testing dataset
train_dataset, test_dataset = dataset.split([0.9, 0.1], randomize=False)
print(len(train_dataset))  # 45000

# We use BertTokenizer for tokenizing chinese context.
tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')
len(tokenizer)

train_dataset = process_dataset(train_dataset, tokenizer, batch_size=4)
# next(train_dataset.create_tuple_iterator())

Partial Output

Model Construction

As shown below, this is implemented with two classes:

  1. The GPT2ForSummarization model. Note the shift-right operation: since each position predicts the next token, the logits are truncated by one at the end and the labels are shifted left by one, so that the logit at position t is scored against the label at position t+1.
  2. A dynamic learning rate (linear warmup followed by linear decay).
from mindspore import nn, ops
from mindspore.nn.learning_rate_schedule import LearningRateSchedule
from mindnlp.transformers import GPT2Config, GPT2LMHeadModel


class GPT2ForSummarization(GPT2LMHeadModel):
    def construct(self, input_ids=None, attention_mask=None, labels=None):
        outputs = super().construct(input_ids=input_ids, attention_mask=attention_mask)
        # shift right: logit t is compared against label t+1
        shift_logits = outputs.logits[..., :-1, :]
        shift_labels = labels[..., 1:]
        # Flatten the tokens
        loss = ops.cross_entropy(shift_logits.view(-1, shift_logits.shape[-1]),
                                 shift_labels.view(-1), ignore_index=tokenizer.pad_token_id)
        return loss


class LinearWithWarmUp(LearningRateSchedule):
    """Warmup-decay learning rate."""
    def __init__(self, learning_rate, num_warmup_steps, num_training_steps):
        super().__init__()
        self.learning_rate = learning_rate
        self.num_warmup_steps = num_warmup_steps
        self.num_training_steps = num_training_steps

    def construct(self, global_step):
        if global_step < self.num_warmup_steps:
            return global_step / float(max(1, self.num_warmup_steps)) * self.learning_rate
        return ops.maximum(
            0.0,
            (self.num_training_steps - global_step)
            / (max(1, self.num_training_steps - self.num_warmup_steps))
        ) * self.learning_rate


# training hyperparameters
num_epochs = 1
warmup_steps = 2000
learning_rate = 1.5e-4

num_training_steps = num_epochs * train_dataset.get_dataset_size()

config = GPT2Config(vocab_size=len(tokenizer))
model = GPT2ForSummarization(config)

lr_scheduler = LinearWithWarmUp(learning_rate=learning_rate, num_warmup_steps=warmup_steps,
                                num_training_steps=num_training_steps)
optimizer = nn.AdamWeightDecay(model.trainable_params(), learning_rate=lr_scheduler)

# print the number of model parameters
print('number of model parameters: {}'.format(model.num_parameters()))

GPT-2 Model Structure Output

1. Level-1 class: GPT2ForSummarization.

2. Level-2 class: the GPT2Model layer, the transformer structure and the core of the model.

3. Level-2 class: the lm_head Dense (fully connected) layer, with dim[in, out] = [768, 21128].

4. The level-3 components under GPT2Model, in five parts:

        >> wte embedding layer: dim[in, out] = [21128, 768], i.e. 21,128 vocabulary entries, each mapped to a 768-dimensional vector.

        >> wpe embedding layer: dim[in, out] = [1024, 768], one 768-dimensional vector per position, for up to 1024 positions.

        >> drop layer (dropout).

        >> h, the hidden-layer stack: the body of the Transformer, containing 12 GPT2Blocks.

        >> ln_f: the final LayerNorm.

5. Structure of a GPT2Block (a rough parameter count follows this list):

        >> ln_1 LayerNorm layer: normalizes the input before the attention mechanism.

        >> attn GPT2Attention layer: self-attention, computing attention weights across positions of the input sequence. It contains four sub-modules: Conv1D, Conv1D, CustomDropout, CustomDropout.

        >> ln_2 LayerNorm layer: normalization after self-attention.

        >> mlp GPT2MLP layer: a multilayer perceptron applying a further nonlinear transform to the attention output. Its ops are Conv1D, Conv1D, GELU, CustomDropout.
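To make these sizes concrete, here is a rough back-of-envelope parameter count for this configuration. It is my own sketch, not from the original post, and it assumes lm_head shares weights with wte (the usual GPT-2 weight tying), so the Dense layer adds no new parameters:

# rough parameter count for GPT-2 small with a 21128-token vocabulary
n_embd, n_layer, n_pos, vocab = 768, 12, 1024, 21128

wte = vocab * n_embd                        # token embedding: 16,226,304
wpe = n_pos * n_embd                        # position embedding: 786,432
per_block = (
    2 * (2 * n_embd)                        # ln_1 + ln_2 (weight and bias each)
    + n_embd * 3 * n_embd + 3 * n_embd      # c_attn: fused QKV projection + bias
    + n_embd * n_embd + n_embd              # attention c_proj + bias
    + n_embd * 4 * n_embd + 4 * n_embd      # mlp c_fc + bias
    + 4 * n_embd * n_embd + n_embd          # mlp c_proj + bias
)                                           # 7,087,872 per block
ln_f = 2 * n_embd
total = wte + wpe + n_layer * per_block + ln_f
print(f'approx. {total:,} parameters')      # approx. 102,068,736 parameters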
 

print(model)

GPT2ForSummarization<
  (transformer): GPT2Model<
    (wte): Embedding<vocab_size=21128, embedding_size=768, use_one_hot=False, weight=Parameter (Tensor(shape=[21128, 768], dtype=Float32, value=[...], name=transformer.wte.weight), requires_grad=True), dtype=Float32, padding_idx=None>
    (wpe): Embedding<vocab_size=1024, embedding_size=768, use_one_hot=False, weight=Parameter (Tensor(shape=[1024, 768], dtype=Float32, value=[...], name=transformer.wpe.weight), requires_grad=True), dtype=Float32, padding_idx=None>
    (drop): CustomDropout<>
    (h): CellList<
      (0): GPT2Block<
        (ln_1): LayerNorm<normalized_shape=[768], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[768], dtype=Float32, value=[...], name=transformer.h.0.ln_1.weight), requires_grad=True), bias=Parameter (Tensor(shape=[768], dtype=Float32, value=[...], name=transformer.h.0.ln_1.bias), requires_grad=True)>
        (attn): GPT2Attention<
          (c_attn): Conv1D<(matmul): Matmul<>>
          (c_proj): Conv1D<(matmul): Matmul<>>
          (attn_dropout): CustomDropout<>
          (resid_dropout): CustomDropout<>
        >
        (ln_2): LayerNorm<normalized_shape=[768], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[768], dtype=Float32, value=[...], name=transformer.h.0.ln_2.weight), requires_grad=True), bias=Parameter (Tensor(shape=[768], dtype=Float32, value=[...], name=transformer.h.0.ln_2.bias), requires_grad=True)>
        (mlp): GPT2MLP<
          (c_fc): Conv1D<(matmul): Matmul<>>
          (c_proj): Conv1D<(matmul): Matmul<>>
          (act): GELU<>
          (dropout): CustomDropout<>
        >
      >
      (1)-(11): eleven more GPT2Block cells, identical in structure to block (0)
    >
    (ln_f): LayerNorm<normalized_shape=[768], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter (Tensor(shape=[768], dtype=Float32, value=[...], name=transformer.ln_f.weight), requires_grad=True), bias=Parameter (Tensor(shape=[768], dtype=Float32, value=[...], name=transformer.ln_f.bias), requires_grad=True)>
  >
  (lm_head): Dense<input_channels=768, output_channels=21128>
>

Training

from mindnlp._legacy.engine import Trainer
from mindnlp._legacy.engine.callbacks import CheckpointCallback

ckpoint_cb = CheckpointCallback(save_path='checkpoint', ckpt_name='gpt2_summarization',
                                epochs=1, keep_checkpoint_max=2)

trainer = Trainer(network=model, train_dataset=train_dataset,
                  epochs=1, optimizer=optimizer, callbacks=ckpoint_cb)
trainer.set_amp(level='O1')  # enable mixed precision

trainer.run(tgt_columns="labels")

Partial Output

Note: higher-spec compute is recommended, as training takes a long time.

Partial Output 2 (Reduced Training Data)

The notebook for this activity can only run for 8 hours continuously, and the goal here is not performance tuning, so I reduced the training data to 1/10; the partial output in that setting is shown below. One way to do the reduction is sketched after this paragraph.
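A minimal sketch of the reduction (my assumption of what "reduced to 1/10" means here; the post does not show the exact code) is to take the first tenth of the batched dataset before handing it to the Trainer:

# keep only the first 1/10 of the (batched) training set
reduced_train_dataset = train_dataset.take(train_dataset.get_dataset_size() // 10)

trainer = Trainer(network=model, train_dataset=reduced_train_dataset,
                  epochs=1, optimizer=optimizer, callbacks=ckpoint_cb)
trainer.run(tgt_columns="labels")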

Inference

# convert token-id tensors back to Chinese text
def process_test_dataset(dataset, tokenizer, batch_size=1, max_seq_len=1024, max_summary_len=100):
    def read_map(text):
        data = json.loads(text.tobytes())
        return np.array(data['article']), np.array(data['summarization'])

    def pad(article):
        tokenized = tokenizer(text=article, truncation=True, max_length=max_seq_len - max_summary_len)
        return tokenized['input_ids']

    dataset = dataset.map(read_map, 'text', ['article', 'summary'])
    dataset = dataset.map(pad, 'article', ['input_ids'])
    dataset = dataset.batch(batch_size)
    return dataset

test_dataset = process_test_dataset(test_dataset, tokenizer, batch_size=1)
print(next(test_dataset.create_tuple_iterator(output_numpy=True)))

model = GPT2LMHeadModel.from_pretrained('./checkpoint/gpt2_summarization_epoch_0.ckpt', config=config)
model.set_train(False)
# use [SEP] as the end-of-sequence token, since BertTokenizer has no dedicated EOS token
model.config.eos_token_id = model.config.sep_token_id

i = 0
for (input_ids, raw_summary) in test_dataset.create_tuple_iterator():
    output_ids = model.generate(input_ids, max_new_tokens=50, num_beams=5, no_repeat_ngram_size=2)
    output_text = tokenizer.decode(output_ids[0].tolist())
    print(output_text)
    i += 1
    if i == 1:
        break

Shown here are the model's inference results after the training data was reduced.
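Since the generated sequence echoes the article before producing the summary, a small post-processing step (my addition, not in the original post) keeps only the text after the first [SEP] and strips the spaces BertTokenizer inserts between Chinese characters:

# keep only the generated summary, dropping the echoed article
summary_part = output_text.split('[SEP]', 1)[-1].replace(' ', '')
print(summary_part)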

