GitHub address: https://github.com/ICEORY/LearningVB/tree/master/quadratic-curves
Description:
Draws quadratic curves based on user-specified parameters.
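The original project is written in Visual Basic; as an illustration only, a minimal Python/matplotlib sketch of plotting $y = ax^2 + bx + c$ from given coefficients could look like the following (function and parameter names are made up, not taken from the repository):

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_quadratic(a, b, c, x_min=-10.0, x_max=10.0):
    """Plot y = a*x^2 + b*x + c over [x_min, x_max]."""
    x = np.linspace(x_min, x_max, 400)
    y = a * x ** 2 + b * x + c
    plt.plot(x, y)
    plt.title(f"y = {a}x^2 + {b}x + {c}")
    plt.xlabel("x")
    plt.ylabel("y")
    plt.grid(True)
    plt.show()

plot_quadratic(1.0, -2.0, 1.0)  # example: y = (x - 1)^2
```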
GitHub address: https://github.com/ICEORY/Minesweeping
A simple Minesweeper game. Its main feature is the indexing of blank cells: when a blank cell is opened, all connected blank cells and the surrounding number cells are revealed.
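That revealing logic is essentially a flood fill: opening a blank cell (zero adjacent mines) recursively opens its connected blank neighbours and stops at number cells. A minimal Python sketch of the idea (the board representation and names are assumptions, not taken from the repository):

```python
def reveal(board, visible, row, col):
    """Flood-fill reveal starting at (row, col).

    board[r][c] holds the number of adjacent mines (0-8);
    visible[r][c] marks cells that have already been opened.
    """
    rows, cols = len(board), len(board[0])
    stack = [(row, col)]
    while stack:
        r, c = stack.pop()
        if not (0 <= r < rows and 0 <= c < cols) or visible[r][c]:
            continue
        visible[r][c] = True          # show this cell
        if board[r][c] != 0:
            continue                  # a numbered cell stops the expansion
        # blank cell: also open all 8 neighbours
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr or dc:
                    stack.append((r + dr, c + dc))
```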
GitHub address: https://github.com/ICEORY/Learning-C-Plus-Plus/tree/master/multifunction-table-upgrade-again
GitHub address: https://github.com/ICEORY/Learning-C-Plus-Plus/tree/master/multifunction-table-upgrade
GitHub address: https://github.com/ICEORY/Learning-C-Plus-Plus/tree/master/multifunction-table-diamond
GitHub address: https://github.com/ICEORY/Learning-C-Plus-Plus/tree/master/multifunction-table
Creates a table with multiple columns, where the data type and name of each column can be defined; once the table is built, its data can be sorted automatically.
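The original projects are written in C++; a rough Python sketch of the same idea (typed columns plus sorting by a chosen column), with all class and method names invented for illustration:

```python
class Table:
    def __init__(self, columns):
        # columns: list of (name, type) pairs, e.g. [("id", int), ("name", str)]
        self.columns = columns
        self.rows = []

    def add_row(self, *values):
        # enforce the declared type of each column
        typed = [col_type(v) for (_, col_type), v in zip(self.columns, values)]
        self.rows.append(typed)

    def sort_by(self, column_name, reverse=False):
        idx = [name for name, _ in self.columns].index(column_name)
        self.rows.sort(key=lambda row: row[idx], reverse=reverse)

table = Table([("id", int), ("score", float)])
table.add_row("2", "7.5")
table.add_row("1", "9.0")
table.sort_by("score", reverse=True)   # rows now ordered by score, descending
```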
The ternary quantization method proposed in this paper is threshold-based and quantizes weights to 0 and {-1, +1} with two different scaling factors. The authors also propose a scaled-gradient rule to update the weights of the different groups. The quantized ternary weight $w_l^t$ of the network is calculated as:
$$
w_l^t =
\begin{cases}
W_l^p, & \tilde{w}_l > \Delta_l \\
0, & |\tilde{w}_l| \le \Delta_l \\
-W_l^n, & \tilde{w}_l < -\Delta_l
\end{cases} \tag{1}
$$
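A minimal NumPy sketch of this thresholding step, with the two scaling factors $W_l^p$, $W_l^n$ and the threshold $\Delta_l$ passed in as plain arguments (the paper's rules for learning the scales and choosing the threshold are not reproduced here):

```python
import numpy as np

def ternarize(w, w_p, w_n, delta):
    """Quantize full-precision weights w following Eq. (1).

    w_p, w_n : positive scaling factors for the +1 / -1 groups
    delta    : layer-wise threshold
    """
    w_t = np.zeros_like(w)
    w_t[w > delta] = w_p
    w_t[w < -delta] = -w_n
    return w_t

w = np.array([0.8, -0.3, 0.05, -0.9, 0.4])
print(ternarize(w, w_p=1.0, w_n=1.2, delta=0.2))  # -> [ 1.  -1.2  0.  -1.2  1. ]
```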
The authors introduce ternary weight networks (TWNs) to address the issues of limited storage and computational resources in hardware. The quantization problem is formulated as follows:
$$
\begin{cases}
\alpha^*, W^{t*} = \arg\min_{\alpha, W^t} J(\alpha, W^t) = \|W - \alpha W^t\|_2^2 \\
\text{s.t.} \quad \alpha \ge 0, \; W_i^t \in \{-1, 0, 1\}, \; i = 1, 2, \dots, n.
\end{cases} \tag{1}
$$
Here $n$ is the size of the filter and $W$ represents the weights of the network. With $W \approx \alpha W^t$ and assuming the convolutional layers have no bias term, the forward propagation of a ternary weight network is as follows:
$$
\begin{cases}
Z = X * W \approx X * (\alpha W^t) = (\alpha X) \oplus W^t \\
X^{next} = g(Z)
\end{cases} \tag{2}
$$
where $X$ denotes the inputs, $*$ denotes the convolution operation, $g$ is the non-linear activation function, $\oplus$ denotes an inner product or convolution without any multiplication, and $X^{next}$ denotes the outputs.
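A NumPy sketch of both steps: a commonly cited approximate solution of Eq. (1) (threshold $\Delta \approx 0.7 \cdot E(|W|)$, with $\alpha$ set to the mean magnitude of the weights above the threshold) and the scaled forward pass of Eq. (2), using a fully connected layer with a ReLU activation in place of convolution. Treat the threshold rule and the layer shapes as assumptions for illustration:

```python
import numpy as np

def twn_quantize(w):
    """Approximate solution of Eq. (1): pick a threshold, then fit the scale."""
    delta = 0.7 * np.mean(np.abs(w))            # layer-wise threshold (assumed rule)
    mask = np.abs(w) > delta                    # weights mapped to +1 / -1
    alpha = np.abs(w[mask]).mean() if mask.any() else 0.0
    w_t = np.where(mask, np.sign(w), 0.0)       # ternary codes in {-1, 0, +1}
    return alpha, w_t

def twn_forward(x, alpha, w_t):
    """Eq. (2): scale the input once, then multiply by the ternary weights,
    so the heavy operation involves no real-valued multiplications."""
    z = (alpha * x) @ w_t                       # (alpha * X) "oplus" W^t
    return np.maximum(z, 0.0)                   # g = ReLU

w = np.random.randn(8, 16) * 0.1                # toy full-precision weights
alpha, w_t = twn_quantize(w)
out = twn_forward(np.random.randn(4, 8), alpha, w_t)
```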
Paper: https://arxiv.org/abs/1712.05877
Code: refer to TensorFlowLite.quantize
$$
\text{clamp}(r; a, b) := \min(\max(r, a), b)
$$
$$
s(a, b, n) := \frac{b - a}{n - 1}
$$
$$
q(r; a, b, n) := \left\lfloor \frac{\text{clamp}(r; a, b) - a}{s(a, b, n)} \right\rceil s(a, b, n) + a
$$
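Here $[a, b]$ is the clamping range, $n$ is the number of quantization levels (e.g. $n = 256$ for 8 bits), and $\lfloor\cdot\rceil$ denotes rounding to the nearest integer. A minimal NumPy sketch of these three functions (following the formulas above, not the actual TensorFlow Lite implementation):

```python
import numpy as np

def clamp(r, a, b):
    return np.minimum(np.maximum(r, a), b)

def step(a, b, n):
    # quantization step size for n levels over [a, b]
    return (b - a) / (n - 1)

def quantize(r, a, b, n):
    # round to the nearest of the n levels, then map back to real values
    s = step(a, b, n)
    return np.round((clamp(r, a, b) - a) / s) * s + a

r = np.array([-1.3, 0.0, 0.41, 2.7])
print(quantize(r, a=-1.0, b=1.0, n=256))   # 8-bit quantization of r
```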
![INQ overview](incremental-network-quantization/INQ overview.png)
As most existing methods suffer from a significant drop in model performance and require many training epochs, the authors propose a lossless quantization method to overcome these problems. The proposed method consists of three main steps: weight partition, group-wise quantization, and re-training. Given a trained model, the first step of INQ is to divide the weights of the model into two groups, one for quantization and the other for re-training. Second, weight quantization is applied to the first group, converting 32-bit floating-point values to low-precision values. Third, the quantized weights are frozen and the network is re-trained using SGD, updating only the remaining full-precision weights. These three steps are repeated until all weights are quantized, yielding a low-precision model without significant accuracy loss. Since binary shift operations are more efficient in hardware, the authors quantize the weights of the convolutional and fully connected layers to powers of two or zero.
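A rough NumPy sketch of the power-of-two quantization step alone (not the full INQ pipeline with weight partition and re-training); the way the exponent range is derived from the largest weight magnitude below is an assumption for illustration, not the exact rule from the paper:

```python
import numpy as np

def quantize_pow2(w, num_exponents=7):
    """Map each weight to the nearest value in {0, +/-2^n2, ..., +/-2^n1}."""
    n1 = int(np.floor(np.log2(np.max(np.abs(w)))))      # largest exponent (assumed rule)
    exps = np.arange(n1 - num_exponents + 1, n1 + 1)    # allowed exponents
    levels = np.concatenate(([0.0], 2.0 ** exps))       # candidate magnitudes
    # pick, element-wise, the candidate magnitude closest to |w|
    idx = np.argmin(np.abs(np.abs(w)[..., None] - levels), axis=-1)
    return np.sign(w) * levels[idx]

w = np.random.randn(3, 3) * 0.1
print(quantize_pow2(w))   # every entry is now zero or a signed power of two
```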