
GitHub地址:https://github.com/ICEORY/Learning-C-Plus-Plus/tree/master/multifunction-table-upgrade-again

Features

  1. Create a table: enter the table name, the number of columns, and the column names
  2. Enter the data type and the data contents of each column
  3. Choose to sort the data in ascending or descending order, or to filter it
  4. Exit the table
  5. Choose whether to continue creating tables
  6. Repeat the steps above, or choose to display all the tables created so far, or exit
  7. Exit the whole program
Read more »

GitHub地址:https://github.com/ICEORY/Learning-C-Plus-Plus/tree/master/multifunction-table-diamond

Features

  1. Enter the table name and define the number of columns and the column names
  2. Define the data type of each column and enter the column data
  3. New feature: automatic calculation on the data
  4. Sort the data in ascending or descending order and filter it
  5. After exiting, choose to continue creating tables or to select from all the tables created
  6. After exiting again, the exit screen is displayed
Read more »

The ternary quantization method proposed in this paper is threshold based: weights are quantized to 0 and to {-1, +1} scaled by two different scaling factors. The authors also suggest a scaled-gradient rule for updating the weights in the different groups. The quantized ternary weights $w_l^t$ of the network are calculated by:
$$
w_l^t =
\begin{cases}
W_l^p &:& \tilde{w}_l \gt \Delta_l \\
0 &:& |\tilde{w}_l| \le \Delta_l \\
-W_l^n &:& \tilde{w}_l \lt -\Delta_l
\end{cases} \tag{1}
$$
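
As a quick illustration (not the authors' code), the NumPy sketch below applies the rule in Eq. (1) to a small weight vector. The threshold $\Delta_l$ and the two scaling factors $W_l^p$, $W_l^n$ are simply passed in here, whereas in the paper they are obtained during training, so the values used below are made up.

```python
import numpy as np

def ternarize_ttq(w_tilde, delta, w_p, w_n):
    """Eq. (1): map full-precision weights to {+W_p, 0, -W_n} by threshold."""
    w_t = np.zeros_like(w_tilde)
    w_t[w_tilde > delta] = w_p       # positive group  -> +W_l^p
    w_t[w_tilde < -delta] = -w_n     # negative group  -> -W_l^n
    return w_t                       # |w_tilde| <= delta stays at 0

# toy usage with made-up values
w = np.array([0.8, -0.03, 0.02, -0.9, 0.4])
print(ternarize_ttq(w, delta=0.05, w_p=1.2, w_n=0.7))
# [ 1.2  0.   0.  -0.7  1.2]
```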

Read more »

The authors introduced ternary weight networks (TWNs) to address the problem of limited storage and computational resources in hardware. The quantization problem can be formulated as follows:
$$
\begin{cases}
\alpha^*, W^{t*} = \arg\min_{\alpha, W^t} J(\alpha, W^t) = \|W-\alpha W^t\|_2^2 \\
\text{s.t.} \quad \alpha \ge 0, \; W_i^t \in \{-1, 0, 1\}, \; i=1,2,\dots,n.
\end{cases} \tag{1}
$$
Here $n$ is the size of the filter and $W$ represents the weights of the network. With $W\approx \alpha W^t$ and assuming the convolutional layers have no bias term, the forward propagation of a ternary weight network is as follows:
$$
\begin{cases}
Z & = &X*W \approx X*(\alpha W^t) = (\alpha X)\oplus W^t \\
X^{next} & = & g(Z)
\end{cases} \tag{2}
$$
where $X$ denotes the inputs, $*$ denotes the convolution operation, $g$ is the non-linear activation function, $\oplus$ denotes the inner product or convolution computed without any multiplications, and $X^{next}$ denotes the outputs.
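
The closed-form solution of Eq. (1) is not reproduced in this excerpt, but for any fixed ternary assignment $W^t$ the objective $J$ is quadratic in $\alpha$, so the best scaling factor is $\max(0, \langle W, W^t\rangle / \|W^t\|_2^2)$. The NumPy sketch below (not the authors' code) ternarizes a filter with an assumed threshold, computes this least-squares $\alpha$, and evaluates Eq. (2) with a plain inner product standing in for the convolution; the threshold value and helper names are illustrative assumptions.

```python
import numpy as np

def ternarize(w, delta=0.1):
    """Assumed threshold rule: map |w| <= delta to 0, the rest to +/-1."""
    return np.sign(w) * (np.abs(w) > delta)

def optimal_alpha(w, w_t):
    """For fixed W^t, J(alpha, W^t) = ||W - alpha*W^t||_2^2 is quadratic in
    alpha, so the constrained minimizer is max(0, <W, W^t> / ||W^t||^2)."""
    denom = float(np.sum(w_t * w_t))
    return 0.0 if denom == 0 else max(0.0, float(np.dot(w, w_t)) / denom)

def ternary_forward(x, w, g=np.tanh, delta=0.1):
    """Eq. (2) with an inner product standing in for the convolution *."""
    w_t = ternarize(w, delta)
    alpha = optimal_alpha(w, w_t)
    # (alpha*X) (+) W^t: since w_t is in {-1, 0, +1}, the "product" only
    # needs additions and subtractions of the (scaled) inputs.
    z = np.sum((alpha * x)[w_t == 1]) - np.sum((alpha * x)[w_t == -1])
    return g(z)

x = np.array([0.5, -1.0, 2.0, 0.3])
w = np.array([0.31, -0.02, -0.27, 0.40])
print(ternary_forward(x, w))   # output after the non-linear activation g
```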

Read more »

![INQ overview](incremental-network-quantization/INQ overview.png)

As most existing methods suffer from a significant drop in model performance and require many training epochs, the authors propose a lossless quantization method to overcome these problems. The proposed method mainly consists of three steps: weight partition, group-wise quantization, and re-training. Given a trained model, the first step of INQ is to divide the weights of the model into two groups, one for quantization and the other for re-training. Second, weight quantization is applied to the first group, converting the 32-bit floating-point values to low-precision ones. Third, the quantized weights are frozen and the network is re-trained with SGD, updating only the remaining weights. These three steps are repeated until all weights are quantized, which yields a low-precision model without significant accuracy loss. Since binary shift operations are more efficient in hardware, the authors quantize the weights of the convolutional and fully connected layers to powers of two.
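
A rough sketch of this loop is given below, assuming NumPy weights, a magnitude-based partition schedule, and a hypothetical `retrain_unquantized` callback standing in for the SGD re-training step; it is only meant to make the three steps concrete, not to reproduce the authors' implementation.

```python
import numpy as np

def quantize_pow2(w):
    """Round non-zero weights to the nearest signed power of two (simplified
    stand-in for the paper's power-of-two codebook)."""
    out = np.zeros_like(w)
    nz = w != 0
    out[nz] = np.sign(w[nz]) * 2.0 ** np.round(np.log2(np.abs(w[nz])))
    return out

def inq(weights, retrain_unquantized, ratios=(0.5, 0.75, 0.875, 1.0)):
    """Iteratively partition, quantize, and re-train until all weights are
    quantized. `ratios` is an assumed schedule of accumulated quantized
    portions; the partition here simply picks the largest-magnitude weights."""
    w = weights.copy()
    frozen = np.zeros(w.shape, dtype=bool)        # already-quantized weights
    for r in ratios:
        # 1) weight partition: select the top-r fraction by magnitude
        k = int(round(r * w.size))
        idx = np.argsort(-np.abs(w).ravel())[:k]
        mask = np.zeros(w.size, dtype=bool)
        mask[idx] = True
        mask = mask.reshape(w.shape)
        # 2) group-wise quantization of the newly selected weights
        newly = mask & ~frozen
        w[newly] = quantize_pow2(w[newly])
        frozen |= mask
        # 3) re-train with SGD, updating only the remaining float weights
        w = retrain_unquantized(w, frozen)
    return w

# toy usage: a do-nothing "re-training" step so the sketch runs end to end
dummy_retrain = lambda w, frozen: w
print(inq(np.random.randn(3, 3), dummy_retrain))
```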

Read more »