
# A Summary of the Loss Functions Directly Available in PyTorch


Here we give a quick overview of the loss functions that can be called directly in PyTorch. For each loss function we note how it is used, and for the less common ones we link to blog posts that introduce them, so that these loss functions are easier to learn and use.

Table of Contents

  • A Summary of the Loss Functions Directly Available in PyTorch
    • L1Loss
    • MSELoss
    • CrossEntropyLoss
    • CTCLoss
    • NLLLoss
    • PoissonNLLLoss
    • GaussianNLLLoss
    • KLDivLoss
    • BCELoss
    • BCEWithLogitsLoss
    • MarginRankingLoss
    • HingeEmbeddingLoss
    • CosineEmbeddingLoss
    • MultiLabelMarginLoss
    • HuberLoss
    • SmoothL1Loss
    • SoftMarginLoss
    • MultiLabelSoftMarginLoss
    • MultiMarginLoss
    • TripletMarginLoss
    • TripletMarginWithDistanceLoss
    • Difference between nn.Xxx and nn.functional.xxx

Documentation link: Loss Functions

L1Loss

Creates a criterion that measures the mean absolute error (MAE) between each element in the input x and the target y.

torch.nn.L1Loss(size_average=None, reduce=None, reduction='mean')

Parameters:

  • size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
  • reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
  • reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'

Usage:

loss = nn.L1Loss()
input = torch.randn(3, 5, requires_grad=True)
target = torch.randn(3, 5)
output = loss(input, target)
output.backward()

MSELoss

Measures the mean squared error (squared L2 norm) between each element in the input x and the target y.

torch.nn.MSELoss(size_average=None, reduce=None, reduction='mean')

Parameters:

  • size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
  • reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
  • reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'

Usage:

loss = nn.MSELoss()
input = torch.randn(3, 5, requires_grad=True)
target = torch.randn(3, 5)
output = loss(input, target)
output.backward()

CrossEntropyLoss

This criterion computes the cross entropy loss between the input and the target.

torch.nn.CrossEntropyLoss(weight=None, size_average=None, ignore_index=- 100, reduce=None, reduction='mean', label_smoothing=0.0)

The input is expected to contain raw, unnormalized scores for each class. input has to be a Tensor of size (C) for unbatched input, (minibatch, C), or (minibatch, C, d_1, d_2, ..., d_K) with K ≥ 1 for the K-dimensional case. The latter is useful for higher-dimensional inputs, such as computing cross entropy loss per-pixel for 2D images.

Parameters:

  • weight (Tensor, optional) – a manual rescaling weight given to each class. If given, has to be a Tensor of size C
  • size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
  • ignore_index (int, optional) – Specifies a target value that is ignored and does not contribute to the input gradient. When size_average is True, the loss is averaged over non-ignored targets. Note that ignore_index is only applicable when the target contains class indices.
  • reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
  • reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the weighted mean of the output is taken, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
  • label_smoothing (float, optional) – A float in [0.0, 1.0]. Specifies the amount of smoothing when computing the loss, where 0.0 means no smoothing. The targets become a mixture of the original ground truth and a uniform distribution as described in Rethinking the Inception Architecture for Computer Vision. Default: 0.0.

Usage:

# Example of target with class indices

loss = nn.CrossEntropyLoss()
input = torch.randn(3, 5, requires_grad=True)
target = torch.empty(3, dtype=torch.long).random_(5)
output = loss(input, target)
output.backward()

# Example of target with class probabilities

input = torch.randn(3, 5, requires_grad=True)
target = torch.randn(3, 5).softmax(dim=1)
output = loss(input, target)
output.backward()

CTCLoss

CTC loss 理解_代码款款的博客-CSDN博客_ctc loss

CTC Loss原理 - 知乎 (zhihu.com)

Computes the loss between a continuous (unsegmented) time series and a target sequence. CTCLoss sums over the probability of possible alignments between input and target, producing a loss value that is differentiable with respect to each input node. The alignment of input to target is assumed to be "many-to-one".

torch.nn.CTCLoss(blank=0, reduction='mean', zero_infinity=False)

Parameters:

  • Log_probs: Tensor of size (T, N, C) or (T, C), where T = input length, N = batch size, and C = number of classes (including blank). The logarithmized probabilities of the outputs (e.g. obtained with torch.nn.functional.log_softmax()).
  • Targets: Tensor of size (N, S) or (sum(target_lengths)), where N = batch size and S = max target length if the shape is (N, S). It represents the target sequences. Each element in the target sequence is a class index, and the target index cannot be blank (default = 0). In the (N, S) form, targets are padded to the length of the longest sequence and stacked. In the (sum(target_lengths)) form, the targets are assumed to be un-padded and concatenated within 1 dimension.
  • Input_lengths: Tuple or tensor of size (N) or (), where N = batch size. It represents the lengths of the inputs (each must be ≤ T). Lengths are specified for each sequence to achieve masking under the assumption that sequences are padded to equal lengths.
  • Target_lengths: Tuple or tensor of size (N) or (), where N = batch size. It represents the lengths of the targets. Lengths are specified for each sequence to achieve masking under the assumption that sequences are padded to equal lengths. If the target shape is (N, S), target_lengths are effectively the stop index s_n for each target sequence, such that target_n = targets[n, 0:s_n] for each target in a batch. Each length must be ≤ S. If the targets are given as a 1d tensor that is the concatenation of individual targets, the target_lengths must add up to the total length of the tensor.
  • Output: scalar. If reduction is 'none', then (N) if input is batched or () if input is unbatched, where N = batch size.

Usage:

# Target are to be padded
T = 50      # Input sequence length
C = 20      # Number of classes (including blank)
N = 16      # Batch size
S = 30      # Target sequence length of longest target in batch (padding length)
S_min = 10  # Minimum target length, for demonstration purposes
# Initialize random batch of input vectors, for *size = (T,N,C)
input = torch.randn(T, N, C).log_softmax(2).detach().requires_grad_()
# Initialize random batch of targets (0 = blank, 1:C = classes)
target = torch.randint(low=1, high=C, size=(N, S), dtype=torch.long)
input_lengths = torch.full(size=(N,), fill_value=T, dtype=torch.long)
target_lengths = torch.randint(low=S_min, high=S, size=(N,), dtype=torch.long)
ctc_loss = nn.CTCLoss()
loss = ctc_loss(input, target, input_lengths, target_lengths)
loss.backward()
# Target are to be un-padded
T = 50      # Input sequence length
C = 20      # Number of classes (including blank)
N = 16      # Batch size
# Initialize random batch of input vectors, for *size = (T,N,C)
input = torch.randn(T, N, C).log_softmax(2).detach().requires_grad_()
input_lengths = torch.full(size=(N,), fill_value=T, dtype=torch.long)
# Initialize random batch of targets (0 = blank, 1:C = classes)
target_lengths = torch.randint(low=1, high=T, size=(N,), dtype=torch.long)
target = torch.randint(low=1, high=C, size=(sum(target_lengths),), dtype=torch.long)
ctc_loss = nn.CTCLoss()
loss = ctc_loss(input, target, input_lengths, target_lengths)
loss.backward()
# Target are to be un-padded and unbatched (effectively N=1)
T = 50      # Input sequence length
C = 20      # Number of classes (including blank)
# Initialize random batch of input vectors, for *size = (T,C)
input = torch.randn(T, C).log_softmax(1).detach().requires_grad_()
input_lengths = torch.tensor(T, dtype=torch.long)
# Initialize random batch of targets (0 = blank, 1:C = classes)
target_lengths = torch.randint(low=1, high=T, size=(), dtype=torch.long)
target = torch.randint(low=1, high=C, size=(target_lengths,), dtype=torch.long)
ctc_loss = nn.CTCLoss()
loss = ctc_loss(input, target, input_lengths, target_lengths)
loss.backward()

NLLLoss

详解torch.nn.NLLLOSS - 知乎 (zhihu.com)

log_softmax与softmax的区别在哪里? - 知乎 (zhihu.com)

Usage:

m = nn.LogSoftmax(dim=1)
loss = nn.NLLLoss()
# input is of size N x C = 3 x 5
input = torch.randn(3, 5, requires_grad=True)
# each element in target has to have 0 <= value < C
target = torch.tensor([1, 0, 4])
output = loss(m(input), target)
output.backward()
# 2D loss example (used, for example, with image inputs)
N, C = 5, 4
loss = nn.NLLLoss()
# input is of size N x C x height x width
data = torch.randn(N, 16, 10, 10)
conv = nn.Conv2d(16, C, (3, 3))
m = nn.LogSoftmax(dim=1)
# each element in target has to have 0 <= value < C
target = torch.empty(N, 8, 8, dtype=torch.long).random_(0, C)
output = loss(m(conv(data)), target)
output.backward()

PoissonNLLLoss

Negative log likelihood loss with a Poisson distribution of the target.

Usage:

loss = nn.PoissonNLLLoss()
log_input = torch.randn(5, 2, requires_grad=True)
target = torch.randn(5, 2)
output = loss(log_input, target)
output.backward()

GaussianNLLLoss

Negative log likelihood loss for targets assumed to follow a Gaussian distribution, with the mean and variance of the Gaussian given by the outputs of the neural network.

For a batch of N samples D(x, var, y): x is the output of the network, used as the mean of the Gaussian; var is the output of the network, used as the variance of the Gaussian; and y is the label of the sample, assumed to follow a Gaussian distribution. x and y have the same shape; var either has the same shape as x, or differs only in its last dimension, which is 1 so that it can be broadcast.

Reference:

loss函数之PoissonNLLLoss,GaussianNLLLoss - 简书 (jianshu.com)

Usage:

loss = nn.GaussianNLLLoss()
input = torch.randn(5, 2, requires_grad=True)
target = torch.randn(5, 2)
var = torch.ones(5, 2, requires_grad=True) #heteroscedastic
output = loss(input, target, var)
output.backward()

KLDivLoss

The KL divergence, also called relative entropy, measures the distance between two distributions (discrete or continuous).

Let p(x) and q(x) be two probability distributions of a discrete random variable X. The KL divergence of p from q is:
$$ D_{KL}(p \,\|\, q) = E_{p(x)} \log \frac{p(x)}{q(x)} = \sum_{i=1}^{N} p(x_i) \cdot \bigl(\log p(x_i) - \log q(x_i)\bigr) $$
Reference:

loss函数之KLDivLoss - 简书 (jianshu.com)

Usage:

kl_loss = nn.KLDivLoss(reduction="batchmean")
# input should be a distribution in the log space
input = F.log_softmax(torch.randn(3, 5, requires_grad=True), dim=1)
# Sample a batch of distributions. Usually this would come from the dataset
target = F.softmax(torch.rand(3, 5), dim=1)
output = kl_loss(input, target)

kl_loss = nn.KLDivLoss(reduction="batchmean", log_target=True)
log_target = F.log_softmax(torch.rand(3, 5), dim=1)
output = kl_loss(input, log_target)

BCELoss

loss函数之BCELoss - 简书 (jianshu.com)

Usage:

m = nn.Sigmoid()
loss = nn.BCELoss()
input = torch.randn(3, requires_grad=True)
target = torch.empty(3).random_(2)
output = loss(m(input), target)
output.backward()

BCEWithLogitsLoss

This is essentially the same as nn.BCELoss(), except that it combines a Sigmoid layer with the BCE loss in a single class: you pass raw logits and the sigmoid is applied internally, which is also more numerically stable than using a separate Sigmoid followed by BCELoss.

Usage:

loss = nn.BCEWithLogitsLoss()
input = torch.randn(3, requires_grad=True)
target = torch.empty(3).random_(2)
output = loss(input, target)
output.backward()

MarginRankingLoss

loss函数之MarginRankingLoss - 简书 (jianshu.com)

Usage:

loss = nn.MarginRankingLoss()
input1 = torch.randn(3, requires_grad=True)
input2 = torch.randn(3, requires_grad=True)
target = torch.randn(3).sign()
output = loss(input1, input2, target)
output.backward()

HingeEmbeddingLoss

Used to judge whether two vectors are similar; the input to the loss is the distance between the two vectors. Commonly used for learning nonlinear embeddings and for semi-supervised learning.

loss函数之CosineEmbeddingLoss,HingeEmbeddingLoss_ltochange的博客-CSDN博客_余弦相似度损失函数
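
Usage (a minimal sketch, since this section has no example of its own; it assumes torch and nn are imported as in the examples above, and the shapes and margin are illustrative — the input is a distance and the target is 1 for similar pairs, -1 for dissimilar pairs):

loss = nn.HingeEmbeddingLoss(margin=1.0)
# the input is interpreted as a distance between two embeddings
x1 = torch.randn(3, 5, requires_grad=True)
x2 = torch.randn(3, 5)
distance = nn.functional.pairwise_distance(x1, x2)
# target: 1 = similar pair, -1 = dissimilar pair
target = torch.tensor([1., -1., 1.])
output = loss(distance, target)
output.backward()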

CosineEmbeddingLoss

A cosine-similarity-based loss, used to judge whether two input vectors are similar. Commonly used for learning nonlinear embeddings and for semi-supervised learning.

loss函数之CosineEmbeddingLoss,HingeEmbeddingLoss_ltochange的博客-CSDN博客_余弦相似度损失函数
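
Usage (a minimal sketch, assuming torch and nn are imported as in the examples above; the embedding size of 5 and the default margin are illustrative):

loss = nn.CosineEmbeddingLoss()
input1 = torch.randn(3, 5, requires_grad=True)
input2 = torch.randn(3, 5, requires_grad=True)
# target: 1 = the pair should be similar, -1 = dissimilar
target = torch.tensor([1., -1., 1.])
output = loss(input1, input2, target)
output.backward()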

MultiLabelMarginLoss

A multi-class hinge loss for multi-label targets: for a given sample, rather than measuring the error between the model output and the true classes directly, it measures the margin between the true classes and the other classes.

loss函数之MultiMarginLoss, MultiLabelMarginLoss_ltochange的博客-CSDN博客

Usage:

loss = nn.MultiLabelMarginLoss()
x = torch.FloatTensor([[0.1, 0.2, 0.4, 0.8]])
# for target y, only consider labels 3 and 0, not after label -1
y = torch.LongTensor([[3, 0, -1, 1]])
loss(x, y)
# 0.25 * ((1-(0.1-0.2)) + (1-(0.1-0.4)) + (1-(0.8-0.2)) + (1-(0.8-0.4)))

HuberLoss

回归损失函数:Huber Loss_Peanut_范的博客-CSDN博客_huber loss

A loss function where y is the ground truth, f(x) is the prediction, and δ is the HuberLoss parameter: when the prediction error is smaller than δ it uses a squared error, and when the error is larger than δ it uses a linear error. Compared with least-squares linear regression, Huber Loss penalizes outliers less heavily, making it a commonly used loss for robust regression.
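
In formula form (using the notation above; this is the standard piecewise definition, with δ = 1.0 being PyTorch's default):

$$ L_{\delta}(y, f(x)) = \begin{cases} \frac{1}{2}\,(y - f(x))^{2}, & |y - f(x)| \le \delta \\ \delta \cdot |y - f(x)| - \frac{1}{2}\,\delta^{2}, & \text{otherwise} \end{cases} $$

Usage (a minimal sketch, assuming torch and nn are imported as in the examples above; the shapes are illustrative):

loss = nn.HuberLoss(delta=1.0)
input = torch.randn(3, 5, requires_grad=True)
target = torch.randn(3, 5)
output = loss(input, target)
output.backward()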

SmoothL1Loss

Creates a criterion that uses a squared term if the absolute element-wise error falls below beta and an L1 term otherwise. It is less sensitive to outliers than torch.nn.MSELoss and in some cases prevents exploding gradients (see e.g. Ross Girshick's paper Fast R-CNN).
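
Usage (a minimal sketch, assuming torch and nn are imported as in the examples above; beta=1.0 is the default threshold, at which SmoothL1Loss coincides with HuberLoss with delta=1.0):

loss = nn.SmoothL1Loss(beta=1.0)
input = torch.randn(3, 5, requires_grad=True)
target = torch.randn(3, 5)
output = loss(input, target)
output.backward()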

SoftMarginLoss

loss函数之SoftMarginLoss - 简书 (jianshu.com)
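
Usage (a minimal sketch, assuming torch and nn are imported as in the examples above): SoftMarginLoss is a two-class logistic loss between the input and a target containing 1 or -1; the shapes here are illustrative.

loss = nn.SoftMarginLoss()
input = torch.randn(3, requires_grad=True)
# target values must be 1 or -1
target = torch.tensor([1., -1., 1.])
output = loss(input, target)
output.backward()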

MultiLabelSoftMarginLoss

MultiLabelSoftMarginLoss函数_Coding-Prince的博客-CSDN博客_multilabelsoftmarginloss
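
Usage (a minimal sketch, assuming torch and nn are imported as in the examples above): the input holds raw scores per class and the target is a multi-hot matrix of the same shape (1 if the class is present, 0 otherwise); 3 samples and 4 classes are illustrative.

loss = nn.MultiLabelSoftMarginLoss()
input = torch.randn(3, 4, requires_grad=True)
# multi-hot targets: each sample can belong to several classes
target = torch.tensor([[1., 0., 1., 0.],
                       [0., 1., 0., 0.],
                       [1., 1., 0., 1.]])
output = loss(input, target)
output.backward()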

MultiMarginLoss

A multi-class hinge loss: for a given sample, rather than measuring the error between the model output and the true class directly, it measures the margin between the true class and the other classes.

loss函数之MultiMarginLoss, MultiLabelMarginLoss_旺旺棒棒冰的博客-CSDN博客

Usage:

loss = nn.MultiMarginLoss()
x = torch.tensor([[0.1, 0.2, 0.4, 0.8]])
y = torch.tensor([3])
loss(x, y)
# 0.25 * ((1-(0.8-0.1)) + (1-(0.8-0.2)) + (1-(0.8-0.4)))

TripletMarginLoss

PyTorch TripletMarginLoss(三元损失)_zj134_的博客-CSDN博客_pytorch 三元组损失
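
Usage (a minimal sketch, assuming torch and nn are imported as in the examples above; the batch size of 100 and embedding dimension of 128 are illustrative): the loss pulls the anchor towards the positive and pushes it away from the negative by at least the margin.

triplet_loss = nn.TripletMarginLoss(margin=1.0, p=2)
anchor = torch.randn(100, 128, requires_grad=True)
positive = torch.randn(100, 128, requires_grad=True)
negative = torch.randn(100, 128, requires_grad=True)
output = triplet_loss(anchor, positive, negative)
output.backward()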

TripletMarginWithDistanceLoss

loss函数之TripletMarginLoss与TripletMarginWithDistanceLoss_ltochange的博客-CSDN博客
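
Usage (a minimal sketch, assuming torch and nn are imported as in the examples above): the difference from TripletMarginLoss is that you can plug in your own distance function; the cosine-based distance here is purely illustrative.

import torch.nn.functional as F

# 1 - cosine similarity as a custom distance function
triplet_loss = nn.TripletMarginWithDistanceLoss(
    distance_function=lambda x, y: 1.0 - F.cosine_similarity(x, y),
    margin=1.0)
anchor = torch.randn(100, 128, requires_grad=True)
positive = torch.randn(100, 128, requires_grad=True)
negative = torch.randn(100, 128, requires_grad=True)
output = triplet_loss(anchor, positive, negative)
output.backward()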

Difference between nn.Xxx and nn.functional.xxx:

Reference answer:

Author: 肥波喇齐
Link: https://www.zhihu.com/question/66782101/answer/579393790

We often see that the two namespaces provide many of the same loss functions. What is the difference when using them?

What the two have in common:

  • nn.Xxx and nn.functional.xxx have the same actual functionality: nn.Conv2d and nn.functional.conv2d both perform convolution, nn.Dropout and nn.functional.dropout both perform dropout, and so on;
  • Their runtime efficiency is also nearly identical.

nn.functional.xxx is the functional interface, while nn.Xxx is a class wrapper around nn.functional.xxx, and every nn.Xxx inherits from the common ancestor nn.Module. As a result, besides the functionality of nn.functional.xxx, nn.Xxx also carries the attributes and methods of nn.Module, such as train(), eval(), load_state_dict, state_dict, etc.

When should you use nn.functional.xxx, and when nn.Xxx?

It depends on the complexity of the problem you are solving and on personal style. When nn.Xxx cannot meet your needs, nn.functional.xxx is the better choice, because it is more flexible (closer to the low-level API) and you can build the functionality you want on top of it.

Personally, I prefer to use nn.Xxx wherever possible and fall back to nn.functional.xxx only when it cannot do the job. This makes the layered structure of the network clearer and feels more uniform, since every layer and the model itself are Modules.
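
A short sketch contrasting the two interfaces (nn.CrossEntropyLoss is used here only as an illustration; the loss values agree because the underlying computation is the same):

import torch
import torch.nn as nn
import torch.nn.functional as F

input = torch.randn(3, 5, requires_grad=True)
target = torch.empty(3, dtype=torch.long).random_(5)

# Class interface: instantiate once, then call like a layer
criterion = nn.CrossEntropyLoss()
loss_module = criterion(input, target)

# Functional interface: a plain, stateless function call
loss_functional = F.cross_entropy(input, target)

print(torch.allclose(loss_module, loss_functional))  # True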

