02 Pruning Criteria and Methods
2.1 Pruning criteria
2.1.1 Pruning by weight magnitude
As in the previous section, weights can be pruned by the magnitude of their absolute values, or ranked and pruned by their L1/L2 norms.
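As a small illustration (a sketch of my own, assuming a 50% keep ratio), the per-filter L1 and L2 norms of a convolution layer can be computed and the lowest-norm filters marked as pruning candidates:

import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=8, out_channels=16, kernel_size=3)

# one norm per output filter, computed over that filter's weights
l1 = conv.weight.detach().abs().sum(dim=(1, 2, 3))           # L1 norm per filter
l2 = conv.weight.detach().pow(2).sum(dim=(1, 2, 3)).sqrt()   # L2 norm per filter

# keep the half of the filters with the largest L1 norm (assumed ratio)
k = l1.numel() // 2
keep_idxs = torch.topk(l1, k).indices
mask = torch.zeros_like(l1, dtype=torch.bool)
mask[keep_idxs] = True
print(mask)  # True = keep the filter, False = pruning candidate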
2.1.2 Pruning by gradient magnitude
In the previous criterion we pruned by magnitude alone and cut away the small-valued weights; intuitively, a weight that barely changes during training also seems unimportant. That intuition can be misleading: viewed from the gradient side, a weight may simply have started close to a suitable value, so its updates are tiny, and a small change does not mean the weight is unimportant. We should therefore combine weight magnitude and gradient magnitude when deciding what to prune; the simplest way is to use their product.
The code below can serve as a starting point (for example, in a master's thesis project); adapt the pruning criterion to your own ideas.
import numpy as np
import torch

def prune_by_gradient_weight_product(model, pruning_rate):
    grad_weight_product_list = []
    for name, param in model.named_parameters():
        if "weight" in name:
            # product of gradient magnitude and weight magnitude
            grad_weight_product = torch.abs(param.grad * param.data)
            grad_weight_product_list.append(grad_weight_product)
    # collect all product values into a single tensor
    all_product_values = torch.cat([torch.flatten(x) for x in grad_weight_product_list])
    # compute the pruning threshold
    threshold = np.percentile(all_product_values.cpu().detach().numpy(), pruning_rate)
    # prune the weights
    for name, param in model.named_parameters():
        if "weight" in name:
            # build a mask marking which weights to keep
            mask = torch.where(torch.abs(param.grad * param.data) >= threshold, 1, 0)
            # apply the mask
            param.data *= mask.float()

pruning_rate = 50
# a small MLP: Linear(10 -> 5), ReLU, Linear(5 -> 1)
model = torch.nn.Sequential(torch.nn.Linear(10, 5), torch.nn.ReLU(), torch.nn.Linear(5, 1))
input_tensor = torch.randn(1, 10)    # a random input tensor
# run a minimal forward and backward pass so that gradients exist
output_tensor = model(input_tensor)  # forward pass
loss = torch.sum(output_tensor)      # dummy loss
loss.backward()                      # backward pass to compute gradients
prune_by_gradient_weight_product(model, pruning_rate)  # prune the model
2.2 Pruning methods
2.2.1 Pruning frameworks
The classic framework, proposed in 2015, is train → prune → finetune.
import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import datasets, transforms
import numpy as np

# 1. Train the large base network
# BigModel is just three fully connected layers
class BigModel(nn.Module):
    def __init__(self):
        super(BigModel, self).__init__()
        self.fc1 = nn.Linear(784, 512)
        self.fc2 = nn.Linear(512, 256)
        self.fc3 = nn.Linear(256, 10)

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        x = torch.relu(self.fc2(x))
        x = self.fc3(x)
        return x

# Prepare the MNIST dataset
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])
train_dataset = datasets.MNIST("./data", train=True, download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=64, shuffle=True)

def train(model, dataloader, criterion, optimizer, device="cpu", num_epochs=10):
    model.train()
    model.to(device)
    for epoch in range(num_epochs):
        running_loss = 0.0
        for batch_idx, (inputs, targets) in enumerate(dataloader):
            inputs, targets = inputs.to(device), targets.to(device)
            # forward pass
            outputs = model(inputs.view(inputs.size(0), -1))
            loss = criterion(outputs, targets)
            # backward pass
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            running_loss += loss.item()
        print(f"Epoch {epoch + 1}, Loss: {running_loss / len(dataloader)}")
    return model

big_model = BigModel()
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(big_model.parameters(), lr=1e-3)
big_model = train(big_model, train_loader, criterion, optimizer, device="cuda", num_epochs=2)

# Save the trained big network
torch.save(big_model.state_dict(), "big_model.pth")

# 2. Prune the big network down to a small one  <==================================
def prune_network(model, pruning_rate=0.5, method="global"):
    for name, param in model.named_parameters():
        if "weight" in name:
            tensor = param.data.cpu().numpy()
            if method == "global":
                threshold = np.percentile(abs(tensor), pruning_rate * 100)
            else:  # local pruning
                threshold = np.percentile(abs(tensor), pruning_rate * 100, axis=1, keepdims=True)
            mask = abs(tensor) > threshold
            param.data = torch.FloatTensor(tensor * mask.astype(float)).to(param.device)

big_model.load_state_dict(torch.load("big_model.pth"))
prune_network(big_model, pruning_rate=0.5, method="global")  # <==================================

# Save the pruned model
torch.save(big_model.state_dict(), "pruned_model.pth")

# 3. Finetune with a low learning rate
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(big_model.parameters(), lr=1e-4)  # <==================================
finetuned_model = train(big_model, train_loader, criterion, optimizer, device="cuda", num_epochs=10)

# Save the finetuned model
torch.save(finetuned_model.state_dict(), "finetuned_pruned_model.pth")

# Epoch 1, Loss: 0.2022465198550985
# Epoch 2, Loss: 0.08503768096334421
# Epoch 1, Loss: 0.03288614955859935
# Epoch 2, Loss: 0.021574671817958347
# Epoch 3, Loss: 0.015933904873507806
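As a quick sanity check (a sketch of my own, not part of the original script; count_sparsity is a hypothetical helper), the achieved sparsity can be measured by counting exact zeros in the weight tensors after prune_network has run; with pruning_rate=0.5 the result should sit near 50%.

def count_sparsity(model):
    # fraction of exactly-zero entries across all weight tensors
    zeros, total = 0, 0
    for name, param in model.named_parameters():
        if "weight" in name:
            zeros += (param.data == 0).sum().item()
            total += param.data.numel()
    return zeros / total

print(f"sparsity after pruning: {count_sparsity(big_model):.2%}")  # roughly 50% for pruning_rate=0.5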
In 2018, pruning during training was proposed: the network is pruned after every epoch. The pruning only sets weights to zero, so they can still be updated in subsequent epochs.
import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import datasets, transforms
import numpy as np

class BigModel(nn.Module):
    def __init__(self):
        super(BigModel, self).__init__()
        self.fc1 = nn.Linear(784, 512)
        self.fc2 = nn.Linear(512, 256)
        self.fc3 = nn.Linear(256, 10)

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        x = torch.relu(self.fc2(x))
        x = self.fc3(x)
        return x

transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])
train_dataset = datasets.MNIST("./data", train=True, download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=64, shuffle=True)

def prune_network(model, pruning_rate=0.5, method="global"):
    for name, param in model.named_parameters():
        if "weight" in name:
            tensor = param.data.cpu().numpy()
            if method == "global":
                threshold = np.percentile(abs(tensor), pruning_rate * 100)
            else:  # local pruning
                threshold = np.percentile(abs(tensor), pruning_rate * 100, axis=1, keepdims=True)
            mask = abs(tensor) > threshold
            param.data = torch.FloatTensor(tensor * mask.astype(float)).to(param.device)

def train_with_pruning(model, dataloader, criterion, optimizer, device="cpu", num_epochs=10, pruning_rate=0.5):
    model.train()
    model.to(device)
    for epoch in range(num_epochs):
        running_loss = 0.0
        for batch_idx, (inputs, targets) in enumerate(dataloader):
            inputs, targets = inputs.to(device), targets.to(device)
            # forward pass
            outputs = model(inputs.view(inputs.size(0), -1))
            loss = criterion(outputs, targets)
            # backward pass
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            running_loss += loss.item()
        print(f"Epoch {epoch + 1}, Loss: {running_loss / len(dataloader)}")
        # prune at the end of every epoch
        prune_network(model, pruning_rate, method="global")  # <== just prunes the weights to 0 but still allows them to grow back via optimizer.step()
    return model

big_model = BigModel()
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(big_model.parameters(), lr=1e-3)
big_model = train_with_pruning(big_model, train_loader, criterion, optimizer, device="cuda", num_epochs=10, pruning_rate=0.1)

# Save the trained model
torch.save(big_model.state_dict(), "trained_with_pruning_model.pth")
Pruning by directly removing structures (rather than merely zeroing weights) has the advantage of reducing the model's computation and memory use, and shrinking the network's capacity can also help prevent overfitting.
The drawbacks are that it may weaken the network's representational power and degrade accuracy, and it requires changing the network structure, which adds implementation and finetuning complexity.
# train phase
import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import datasets, transforms
import numpy as np

# 1. Train a large base network
class BigModel(nn.Module):
    def __init__(self):
        super(BigModel, self).__init__()
        self.conv1 = nn.Conv2d(1, 32, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(32, 16, kernel_size=3, padding=1)
        self.fc = nn.Linear(16 * 28 * 28, 10)

        # Initialize the per-filter L1 norms as parameters and register them as buffers
        self.conv1_l1norm = nn.Parameter(torch.Tensor(32), requires_grad=False)
        self.conv2_l1norm = nn.Parameter(torch.Tensor(16), requires_grad=False)
        self.register_buffer("conv1_l1norm_buffer", self.conv1_l1norm)
        self.register_buffer("conv2_l1norm_buffer", self.conv2_l1norm)

    def forward(self, x):
        x = torch.relu(self.conv1(x))
        self.conv1_l1norm.data = torch.sum(torch.abs(self.conv1.weight.data), dim=(1, 2, 3))
        x = torch.relu(self.conv2(x))
        self.conv2_l1norm.data = torch.sum(torch.abs(self.conv2.weight.data), dim=(1, 2, 3))
        x = x.view(x.size(0), -1)
        x = self.fc(x)
        return x

# Training function
def train(model, dataloader, criterion, optimizer, device="cpu", num_epochs=10):
    model.train()
    model.to(device)
    for epoch in range(num_epochs):
        running_loss = 0.0
        for batch_idx, (inputs, targets) in enumerate(dataloader):
            inputs, targets = inputs.to(device), targets.to(device)
            # Forward propagation
            outputs = model(inputs)
            loss = criterion(outputs, targets)
            # Backpropagation
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            running_loss += loss.item()
            # print(f"Loss: {running_loss / len(dataloader)}")
        print(f"Epoch {epoch + 1}, Loss: {running_loss / len(dataloader)}")
    return model

if __name__ == "__main__":
    # Prepare the MNIST dataset
    transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])
    train_dataset = datasets.MNIST("./data", train=True, download=True, transform=transform)
    train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=64, shuffle=True)

    big_model = BigModel()
    criterion = nn.CrossEntropyLoss()
    optimizer = optim.Adam(big_model.parameters(), lr=1e-3)
    big_model = train(big_model, train_loader, criterion, optimizer, device="cuda", num_epochs=3)

    # Save the trained big network
    torch.save(big_model.state_dict(), "big_model.pth")

    # Set the input shape of the model
    dummy_input = torch.randn(1, 1, 28, 28).to("cuda")

    # Export the model to ONNX format
    torch.onnx.export(big_model, dummy_input, "big_model.onnx")

# (/home/coeyliang/miniconda3) coeyliang_pruning> python train.py
# Epoch 1, Loss: 0.14482067066501939
# Epoch 2, Loss: 0.05070804020739657
# Epoch 3, Loss: 0.03378467213614771
Pruning stage: the code below prunes conv1 along its output-channel dimension (dim 0) and removes the matching input channels of conv2 (dim 1) so the two layers stay consistent. It is written assuming the training script above was saved as train.py, so that BigModel can be imported and big_model.pth reloaded.
# prune phase
import torch
import torch.nn as nn

from train import BigModel  # assumes the training script above is saved as train.py

# Rebuild the model and load the weights trained above
model = BigModel()
model.load_state_dict(torch.load("big_model.pth", map_location="cpu"))

# Collect the per-filter L1 norms of every Conv2d layer
l1norms_for_local_threshold = []
for name, m in model.named_modules():
    if isinstance(m, nn.Conv2d):
        # build the buffer name that stores this layer's per-filter L1 norms
        l1norm_buffer_name = f"{name}_l1norm_buffer"
        # fetch that buffer from the model
        l1norm = getattr(model, l1norm_buffer_name)
        l1norms_for_local_threshold.append(l1norm)

# Sort the norms and take the value at the 50% position as the threshold
# (the 0.5 here sets the pruning ratio)
T_conv1 = torch.sort(l1norms_for_local_threshold[0])[0][int(len(l1norms_for_local_threshold[0]) * 0.5)]

# Pull out the layers and buffers for convenience
conv1 = model.conv1                            # weight shape [32, 1, 3, 3]
conv2 = model.conv2                            # weight shape [16, 32, 3, 3]
conv1_l1norm_buffer = model.conv1_l1norm_buffer

# Keep the conv1 filters whose L1 norm is at least T_conv1
keep_idxs = torch.where(conv1_l1norm_buffer >= T_conv1)[0]
k = len(keep_idxs)

conv1.weight.data = conv1.weight.data[keep_idxs]
conv1.bias.data = conv1.bias.data[keep_idxs]
conv1_l1norm_buffer.data = conv1_l1norm_buffer.data[keep_idxs]
conv1.out_channels = k

# conv2 consumes conv1's outputs, so drop the same channels from its
# input dimension (dim 1), hence the [:, keep_idxs] indexing
conv2.weight.data = conv2.weight.data[:, keep_idxs]
conv2.in_channels = k

torch.save(model.state_dict(), "pruned_model.pth")

dummy_input = torch.randn(1, 1, 28, 28)
torch.onnx.export(model, dummy_input, "pruned_model.onnx")

# finetuning follows (omitted)
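To confirm that the structured pruning actually shrank the layers, a quick look at the slimmed shapes is enough; this is a small sketch of my own on top of the script above, assuming `model` still holds the pruned network and half of conv1's 32 filters survived.

# after running the pruning script above, `model` holds the slimmed layers
print(model.conv1.weight.shape)  # expected: torch.Size([16, 1, 3, 3]) if half the filters were kept
print(model.conv2.weight.shape)  # expected: torch.Size([16, 16, 3, 3])
print(sum(p.numel() for p in model.parameters()))  # total count drops accordingly (the fc layer dominates in this toy model)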
2.3.1 Sparse training
Proposed in 2018, the procedure is:
- Initialize a network with a random mask
- Train this pruned network for one epoch
- Remove some of the weights with small magnitude (or weights that fail a custom criterion)
- Regrow the same number of random weights
Starting from the plain network below, we consider how to turn its training into sparse training.
# raw net
import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import datasets, transforms
from torch.utils.data import DataLoader

# Define the network architecture
class SparseNet(nn.Module):
    def __init__(self):
        super(SparseNet, self).__init__()
        self.fc1 = nn.Linear(784, 128)
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        x = x.view(-1, 784)
        x = torch.relu(self.fc1(x))
        x = self.fc2(x)
        return x

# Load MNIST dataset
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])
train_dataset = datasets.MNIST("./data", train=True, download=True, transform=transform)
train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True)

# Initialize the network, loss function, and optimizer
sparsity_rate = 0.5
model = SparseNet()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

# Training loop
n_epochs = 10
for epoch in range(n_epochs):
    running_loss = 0.0
    for batch_idx, (inputs, targets) in enumerate(train_loader):
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, targets)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        # print(f"Loss: {running_loss / (batch_idx+1)}")
    print(f"Epoch {epoch+1}/{n_epochs}, Loss: {running_loss / (batch_idx+1)}")
The original figure (not reproduced here) illustrates sparse training: a subset of weights is selected and set to zero (shown in red), while the remaining weights (green) are kept. This is just the design of that early work; later methods improve on it. Below is an implementation of the 2018 paper's approach.
# sparse net
import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import datasets, transforms
from torch.utils.data import DataLoader

# Define the network architecture
class SparseNet(nn.Module):
    def __init__(self, sparsity_rate, mutation_rate=0.5):
        super(SparseNet, self).__init__()
        self.fc1 = nn.Linear(784, 128)
        self.fc2 = nn.Linear(128, 10)
        self.sparsity_rate = sparsity_rate
        self.mutation_rate = mutation_rate
        self.initialize_masks()  # <== 1. initialize a network with a random mask

    def forward(self, x):
        x = x.view(-1, 784)
        x = x @ (self.fc1.weight * self.mask1.to(x.device)).T + self.fc1.bias
        x = torch.relu(x)
        x = x @ (self.fc2.weight * self.mask2.to(x.device)).T + self.fc2.bias
        return x

    def initialize_masks(self):
        self.mask1 = self.create_mask(self.fc1.weight, self.sparsity_rate)
        self.mask2 = self.create_mask(self.fc2.weight, self.sparsity_rate)

    def create_mask(self, weight, sparsity_rate):
        k = int(sparsity_rate * weight.numel())
        _, indices = torch.topk(weight.abs().view(-1), k, largest=False)
        mask = torch.ones_like(weight, dtype=bool)
        mask.view(-1)[indices] = False
        return mask  # <== 1. initialize a network with a random mask

    def update_masks(self):
        self.mask1 = self.mutate_mask(self.fc1.weight, self.mask1, self.mutation_rate)
        self.mask2 = self.mutate_mask(self.fc2.weight, self.mask2, self.mutation_rate)

    def mutate_mask(self, weight, mask, mutation_rate=0.5):
        # weight and mask: 2d shape
        # Find the number of elements in the mask that are True
        num_true = torch.count_nonzero(mask)
        # Compute the number of elements to mutate
        mutate_num = int(mutation_rate * num_true)

        # 3) prune a certain amount of the weights with lower magnitude
        true_indices_2d = torch.where(mask == True)  # indices of the 2d mask where it is True
        true_element_1d_idx_prune = torch.topk(weight[true_indices_2d], mutate_num, largest=False)[1]
        for i in true_element_1d_idx_prune:
            mask[true_indices_2d[0][i], true_indices_2d[1][i]] = False

        # 4) regrow the same amount of random weights
        # Get the indices of the False elements in the mask
        false_indices = torch.nonzero(~mask)
        # Randomly select n indices from the false_indices tensor
        random_indices = torch.randperm(false_indices.shape[0])[:mutate_num]
        # the elements to be regrown
        regrow_indices = false_indices[random_indices]
        for regrow_idx in regrow_indices:
            mask[tuple(regrow_idx)] = True
        return mask

# Set the device to CUDA if available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the MNIST dataset and move batches to the device
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,))
])
train_dataset = datasets.MNIST("./data", train=True, download=True, transform=transform)
train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True)

sparsity_rate = 0.5
model = SparseNet(sparsity_rate).to(device)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

n_epochs = 10
for epoch in range(n_epochs):
    running_loss = 0.0
    for batch_idx, (inputs, targets) in enumerate(train_loader):
        # Move the data to the device
        inputs, targets = inputs.to(device), targets.to(device)
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, targets)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        # print(f"Loss: {running_loss / (batch_idx+1)}")
    # Update masks: generate new masks based on the updated weights
    model.update_masks()
    print(f"Epoch {epoch+1}/{n_epochs}, Loss: {running_loss / (batch_idx+1)}")
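A small follow-up check (my own addition, not part of the original script): because mutate_mask prunes and regrows the same number of connections, the fraction of active entries in each mask should stay near 1 - sparsity_rate = 0.5 throughout training.

for name, mask in [("fc1", model.mask1), ("fc2", model.mask2)]:
    # a mask entry of True means the corresponding weight is active
    print(f"{name}: {mask.float().mean().item():.2%} of weights active")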