diff --git a/README.md b/README.md
index f8c9464920a9893e7efd89e366925462c959f387..44d2c1053eab314deb0e8ce1770479b1eb814671 100644
--- a/README.md
+++ b/README.md
@@ -1,5 +1,5 @@
 # Progressive Layer-based Compression for Convolutional Spiking Neural Network
-Here you find the code for the paper `Progressive Layer-based Compression for Convolutional Spiking Neural Network` <!--([link](https://hal.archives-ouvertes.fr/hal-03826823))-->
+Here you find the code for the paper `Progressive Layer-based Compression for Convolutional Spiking Neural Network`. <!--([link](https://hal.archives-ouvertes.fr/hal-03826823))-->
 
 ## CSNN
 
@@ -11,16 +11,16 @@ Here you find the code for the paper `Progressive Layer-based Compression for Co
 
 ### Building the binaries
 
-Run the following commands inside CSNN folder:
+Run the following commands inside the CSNN folder:
 
     mkdir build
     cd build
     cmake ../ -G"Unix Makefiles" -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS='-std=c++11'
     make
 
-Don't forget to build again if you change the source code.
+Remember to build again if you change the source code.
 
 ### How to use CSNN
 
-Once the `make` command is finished, you should see binary files which represent each simulation
+Once the `make` command is finished, you should see binary files which represent each simulation.
 
 Run a simulation:
 
@@ -30,11 +30,11 @@ Run a simulation:
 
     x = enable PP (pruning) [0 or 1]
     y = enable DSWR (reinforcement) [0 or 1]
 
-To run MNIST simulation without compression and reinforcement:
+For example, to run the MNIST simulation without compression and reinforcement:
 
     ./Mnist 0 0
 
-In `apps` folder you find the source code for each simulation where you can change the architecture, the network parameters, or activate the layerwise compression.
+In the `apps` folder, you find the source code for each simulation where you can change the architecture, the network parameters, or activate the [layerwise compression](https://gitlab.univ-lille.fr/hammouda.elbez/progressive-layer-based-compression-for-convolutional-spiking-neural-network/-/blob/main/CSNN-Simulator/apps/Mnist.cpp#L21).
 
 ## Going from CSNN to SpiNNaker
 
 To transfer the learned weights from CSNN to SpiNNaker, we use the following command:
 
@@ -46,7 +46,7 @@ For example:
 
     ./Weight_extractor mnist_params conv1
 
 > `weights_conv1` is generated
 
-This will generate another binary file (named weights_[name_layer]) which contains only the weights of the selected layer .
+This will generate another binary file (named weights_[name_layer]) which contains only the weights of the selected layer.
 
 ## How to use SpiNNaker scripts
 
 To setup the SpiNNaker board, please check the following link:
 
@@ -57,11 +57,11 @@ http://spinnakermanchester.github.io/
 
 in SpiNNaker folder:
 
-run the `ConvertTheWeights.ipynb` notebook to adapt the extracted weights from CSNN to a text format readable by PyNN.
+Run the `ConvertTheWeights.ipynb` notebook to adapt the extracted weights from CSNN to a text format readable by PyNN.
 
 ### Using the extracted weights with SpiNNaker
 
-Run the `SpiNNakerRun.ipynb` notebook to deploy the weights on the board and run simulation.
+Run the `SpiNNakerRun.ipynb` notebook to deploy the weights on the board and run the simulation.
 
 ## Folder structure