# Progressive Layer-based Compression for Convolutional Spiking Neural Network
Here you find the code for the paper `Progressive Layer-based Compression for Convolutional Spiking Neural Network` ([link](https://hal.archives-ouvertes.fr/hal-03826823))
## Requirements
Don't forget to build again if you change the source code.
## How to use CSNN
Once the `make` command finishes, you should see binary files, one per simulation.
Run a simulation:
```
./[sim_name] x y
```
Arguments:
- `x`: enable PP (pruning) [0 or 1]
- `y`: enable DSWR (reinforcement) [0 or 1]
To run the MNIST simulation without compression or reinforcement:
```
./Mnist 0 0
```
In the `apps` folder you will find the source code for each simulation, where you can change the architecture, the network parameters, or activate the layer-wise compression.
## Going from CSNN to SpiNNaker
To transfer the learned weights from CSNN to SpiNNaker, use the following command:
```
./Weight_extractor [binary file generated from a simulation] [name_layer]
```
For example:
```
./Weight_extractor mnist_params conv1
```
> `weights_conv1` is generated

This generates another binary file, named `weights_[name_layer]`, which contains only the weights of the selected layer.
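The exact binary layout of the extracted file depends on the simulator build. As a purely hypothetical illustration, assuming the file is a flat array of little-endian 32-bit floats, it could be inspected from Python like this:

```python
import struct

def read_weights(path):
    """Read a flat array of little-endian 32-bit floats (assumed layout)."""
    with open(path, "rb") as f:
        data = f.read()
    n = len(data) // 4  # number of complete float32 values in the file
    return list(struct.unpack("<%df" % n, data[:n * 4]))

# Demo: write a tiny file in this assumed format, then read it back.
with open("weights_conv1", "wb") as f:
    f.write(struct.pack("<4f", 0.5, -0.25, 1.0, 0.0))

print(read_weights("weights_conv1"))  # -> [0.5, -0.25, 1.0, 0.0]
```

If the real file carries a header (layer shape, filter count, etc.), the offset and count would have to be adjusted accordingly.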
## How to use SpiNNaker scripts
To run the SpiNNaker scripts, first set up the SpiNNaker board by following the instructions at:
http://spinnakermanchester.github.io/
### Weights adaptation for PyNN and SpiNNaker
In the `SpiNNaker` folder, run the `ConvertTheWeights.ipynb` notebook to adapt the weights extracted from CSNN into a text format readable by PyNN.
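The conversion performed by the notebook can be pictured as follows. This is a hypothetical sketch, not the notebook's actual code: the function name, the `(pre, post, weight)` triples, and the fixed 1 ms delay are all assumptions, chosen to match the `pre post weight delay` connection-list shape that PyNN's `FromListConnector` can consume.

```python
def connections_to_text(conns, delay=1.0):
    """Render (pre, post, weight) triples as 'pre post weight delay' lines."""
    lines = []
    for pre, post, weight in conns:
        lines.append("%d %d %f %f" % (pre, post, weight, delay))
    return "\n".join(lines)

# Demo with made-up connections between neuron indices.
conns = [(0, 0, 0.5), (0, 1, -0.25), (1, 0, 1.0)]
text = connections_to_text(conns)
with open("weights_conv1.txt", "w") as f:
    f.write(text)
print(text.splitlines()[0])  # -> "0 0 0.500000 1.000000"
```

The resulting text file can then be parsed back into a list of tuples on the SpiNNaker side.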
### Using the extracted weights with SpiNNaker
Run the `SpiNNakerRun.ipynb` notebook to deploy the weights on the board and run the simulation.
## Folder structure
```
...
CSNN        # The C++ simulator of the Convolutional Spiking Neural Network
SpiNNaker   # The Python scripts used for running on the SpiNNaker board
```
# Citation
If you find our work useful, please don't forget to cite: