
![Overview of DNS](dynamic-network-surgery/Overview of DNS.png)

In this paper, the authors propose dynamic network surgery to prune unimportant connections of a network. Unlike previous methods, the proposed method comprises two operations: pruning and splicing. Given the complexity of deep neural networks, it is difficult to decide which connections are important and which should be pruned; the splicing operation therefore recovers pruned weights that are found to be important during the training phase. In contrast, previous methods prune weights with no chance of coming back, which may lead to a severe loss of accuracy.
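As a minimal sketch of the prune-and-splice idea (assuming a magnitude criterion with two thresholds $a<b$; the names and values are illustrative, not the paper's exact hyper-parameters):

```python
import numpy as np

def update_mask(W, T, a, b):
    """One prune/splice step on a weight tensor W with binary mask T.

    Magnitude criterion with two thresholds a < b (illustrative):
      |W| <  a : prune  (mask entry -> 0)
      |W| >= b : splice (mask entry -> 1; a pruned weight comes back)
      otherwise: keep the current mask entry.
    """
    T = T.copy()
    T[np.abs(W) < a] = 0.0   # pruning
    T[np.abs(W) >= b] = 1.0  # splicing
    return T

# During training, the loss is computed on the masked weights W * T,
# but the dense W keeps receiving gradient updates, so a pruned weight
# can grow back above the threshold and be spliced in again.
W = np.random.randn(4, 4)
T = np.ones_like(W)
T = update_mask(W, T, a=0.5, b=1.0)
print(T)
```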

Read more »

![Pipeline of Deep Compression](deep-compression/Pipeline of Deep Compression.png)

In this paper, the authors introduce “deep compression” to reduce the model size of deep convolutional neural networks. The proposed method consists of three stages: pruning, trained quantization, and Huffman coding. The authors first prune weights by learning the important connections, then quantize the weights to enforce weight sharing, and finally apply Huffman coding to reduce storage further.

Proposed Methods

Network Pruning: First, learn the connectivity via normal network training. Second, remove the weights below a threshold. Third, retrain the network to learn the final weights for the remaining sparse connections. After pruning, the sparse structure is stored in compressed sparse row (CSR) or compressed sparse column (CSC) format.
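A toy illustration of magnitude pruning followed by CSR storage, sketched with NumPy/SciPy (the threshold value is made up):

```python
import numpy as np
from scipy.sparse import csr_matrix

# Toy "trained" weight matrix.
W = np.random.randn(4, 6).astype(np.float32)

# Pruning: zero out weights whose magnitude is below the threshold
# (the value 0.8 is made up for the example).
threshold = 0.8
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)

# Store the remaining sparse connections in CSR format.
W_csr = csr_matrix(W_pruned)
print("nonzeros kept:", W_csr.nnz, "of", W.size)
# In the actual pipeline, the network is then retrained with this
# sparsity pattern fixed.
```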

Read more »

For easier management, files and drafts submitted to the advisor should from now on be named according to the following rules (a small filename helper is sketched after the list):

  • Review Comments: R-YourGivenName-Journal/ConferenceAbbreviationName-PaperTitle
  • Draft: D-YourGivenName-Title-Time(DD-MM-YY)
  • Talk (conversation notes, etc.): T-YourGivenName-Title-Time(DD-MM-YY)
  • Brain Storm (brainstorming notes, etc.): B-YourGivenName-Title-Time(DD-MM-YY)
  • Manuscript: M-YourGivenName-Title-Time(DD-MM-YY)
  • Others: O-YourGivenName-Title-Time(DD-MM-YY)
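A minimal, hypothetical Python helper for this rule (the function name and example values are my own, not part of the rule):

```python
from datetime import date

def make_filename(kind, given_name, title, when=None):
    """Build a filename following the naming rule above.

    kind is one of R, D, T, B, M, O; the date part uses DD-MM-YY.
    """
    parts = [kind, given_name, title]
    if when is not None:  # Review Comments (R) carry no date part
        parts.append(when.strftime("%d-%m-%y"))
    return "-".join(parts)

# e.g. a draft from 5 March 2017: D-Ming-PruningNotes-05-03-17
print(make_filename("D", "Ming", "PruningNotes", date(2017, 3, 5)))
```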

Although deep neural networks have shown great potential in several application domains, including computer vision and speech recognition, it is hard to deploy DNNs on hardware with limited storage, compute capability, and battery power. In this paper, the authors propose two efficient approximations of neural networks: binary weight networks (BWN) and XNOR-Networks. In binary weight networks, all the weights are approximated with binary values, while in XNOR-Networks both the weights and the inputs to the convolutional and fully connected layers are approximated with binary values. The authors also evaluate their methods on large-scale datasets such as ImageNet and show that they outperform the baseline by about 16.3%. Source code is available on GitHub.

Binary Weight Networks

Represent an $L$-layer DNN model by a triplet $\langle \mathcal{I}, \mathcal{W}, * \rangle$. Each element $I=\mathcal{I}_{l\,(l=1,\cdots,L)}$ of $\mathcal{I}$ is the input tensor of the $l^{th}$ layer, $W=\mathcal{W}_{lk\,(k=1,\cdots,K^l)}$ is the $k^{th}$ weight filter in the $l^{th}$ layer of the DNN, and $*$ represents the convolution operation with $I$ and $W$. Note that the authors assume the convolutional layers in the network do not have bias terms. The convolution can then be approximated by $I*W\approx(I\oplus B)\alpha$, where $\oplus$ indicates a convolution without multiplication, $B=\mathcal{B}_{lk}$ is a binary filter, $\alpha=\mathcal{A}_{lk}$ is a scale factor, and $\mathcal{W}_{lk}\approx\mathcal{A}_{lk}\mathcal{B}_{lk}$.
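The summary above leaves the choice of $B$ and $\alpha$ implicit; the XNOR-Net paper derives the closed-form solution $B=\operatorname{sign}(W)$ and $\alpha=\frac{1}{n}\|W\|_{\ell 1}$ (the mean absolute weight). A minimal NumPy sketch:

```python
import numpy as np

def binarize_filter(W):
    """Approximate a real-valued filter W by alpha * B with B in {-1, +1}.

    B = sign(W) and alpha = mean(|W|) minimize ||W - alpha * B||^2.
    """
    B = np.sign(W)
    B[B == 0] = 1             # map sign(0) to +1 so B stays binary
    alpha = np.abs(W).mean()  # optimal scale factor
    return alpha, B

# Toy example on a single 3x3 filter.
W = np.random.randn(3, 3).astype(np.float32)
alpha, B = binarize_filter(W)
print("reconstruction error:", np.linalg.norm(W - alpha * B))
```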

Read more »

In this paper, the authors propose a method to train Binarized Neural Networks (BNNs), networks with binary weights and activations. The proposed BNNs drastically reduce memory consumption (size and number of accesses) and have higher power efficiency, as they replace most arithmetic operations with bit-wise operations. The code, implemented in Theano and Torch, is available on GitHub.

Proposed Method

Binarization Strategies

Constraining both weights and activations to either +1 or −1 is highly efficient in hardware. The authors discuss two binarization functions, one deterministic and one stochastic. The deterministic binarization function is:
$$
x^b = \operatorname{sign}(x)=
\begin{cases}
+1 & \text{if } x\ge 0, \\
-1 & \text{otherwise,}
\end{cases}
\tag{1}
$$
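The stochastic variant from the same paper sets $x^b=+1$ with probability $\sigma(x)=\operatorname{clip}\!\left(\frac{x+1}{2},0,1\right)$, the “hard sigmoid”. A minimal NumPy sketch of both functions, for illustration:

```python
import numpy as np

def binarize_deterministic(x):
    """Eq. (1): +1 where x >= 0, -1 otherwise."""
    return np.where(x >= 0, 1.0, -1.0)

def binarize_stochastic(x, rng=None):
    """Stochastic variant: x^b = +1 with probability sigma(x),
    where sigma(x) = clip((x + 1) / 2, 0, 1) is the hard sigmoid."""
    rng = rng or np.random.default_rng()
    p = np.clip((x + 1.0) / 2.0, 0.0, 1.0)
    return np.where(rng.random(x.shape) < p, 1.0, -1.0)

x = np.array([-1.5, -0.2, 0.0, 0.7])
print(binarize_deterministic(x))  # [-1. -1.  1.  1.]
print(binarize_stochastic(x))
```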

Read more »

After mulling it over for a while and digging through quite a lot of material, I never found the web-based note-taking setup I wanted. This is mostly my own laziness: I have not managed these files very well, so I hoped to lean on existing tools. Since nothing suitable turned up in the end, I am using gitbook to build a space of my own.

With less than a year left before graduation, I want to use this last stretch of time to leave some traces behind. Even just building the habit of writing a blog would be worthwhile, and I hope I can keep it up; looking back at what I have written gives me a real sense of accomplishment. Considering that this space is public, and my original intentions, I will try to make the posts presentable. As for language, I will mix Chinese and English, choosing whichever fits the content, and not fuss over such details.

No grand pledges here; I will just write as the mood takes me.