diff --git a/arborescence_protocol.pdf b/arborescence_protocol.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..b92b8e207250d0b0d1bb9d4022ec4d1c0da20934
Binary files /dev/null and b/arborescence_protocol.pdf differ
diff --git a/main.py b/main.py
index 454d089baef00c2f4c2781fb70f62a573ad98415..9d56d5c18afcd3aefb18d06990bada81cbe28b65 100644
--- a/main.py
+++ b/main.py
@@ -119,10 +119,7 @@ def p_error(params, iteration_f, model):
         PE_test.append(pe_test)
     PE_valid = np.asarray(PE_valid)
     mini = np.argmin(PE_valid)
-    dir_log = params.data_dir_prot+'train_'+model+'_'+str(iteration_f)
-    #np.save(dir_log+'p_error/train.npy',PE_train[mini])
-    #np.save(dir_log+'p_error/valid.npy',PE_valid[mini])
-    #np.save(dir_log+'p_error/test.npy',PE_test[mini])
+    dir_log = params.data_dir_prot+'train_'+model+'_'+str(iteration_f)
     return(PE_train[mini], PE_valid[mini], PE_test[mini])
 
 def run_job(params, mode, command, iteration, gpu=True, batch_adv=None): # Index if generation of adversarial embedding
diff --git a/readme.md b/readme.md
index 5cbbcca8704fb7bb75d52dd96d397450eb247bae..2bece6f913a758051a83761bd08c9337a0d59883 100644
--- a/readme.md
+++ b/readme.md
@@ -1,18 +1,5 @@
 The protocol is made to run on a multi-GPU platform, with orders in .slurm files.
-
-### How are trained the classifiers
-The classifiers are trained between cover images, and new stegos are generated in each batch with the corresponding cost map. It allows to train the classifier, if desired (depending on parameters --CL and --start_emb_rate) to use curriculum learning during the training, such as new stegos embedding any size of payload can be generated during the training.
-
-
-
-### Structure of the results of a run of the protocol
-A run of a protocol is an experiment for given values of QF, emb_rate, intial cost_maps, different steganalysts...
-A run of the protocol will save all values in a folder, which is defined in the parameter --data_dir_prot.
-The organization of this folder is described by the illustration, and in the following.
-At the beginning, it creates file description.txt which resumes all parameters parsed at the beggining of run of main.py.
-
-Adversarial images are saved in "data_adv_$i/adv_final/" and optimized cost maps in "data_adv_$i/adv_cost/".
-Evalution of classifier with architecture $model trained at iteration $j on adversarial images generated at iteration $i are saved in "data_adv_$i/eval_$model_$j/". There are two files: "logits.npy" of size (10000,2) containing the raw logits given by the classifier, and "probas.npy" of size (10000,) which are the stego class probability given by the softmax of the logits. The images are ordred are in the file --permutation_files.npy given in input of the protocol.
+This is the main.py to launch. It creates .slurm files with CLI commands for each task of the protocol, so that it can run on a cluster.
 
 ### Here are the steps of the protocol:
 If --begin_step=0, the run of the protocol will begin with initialization which contains the following steps:
@@ -28,6 +15,18 @@ It produces files at iteration $k are:
 
 
 
+
+### Structure of the results of a run of the protocol
+A run of the protocol is an experiment for given values of QF, emb_rate, initial cost maps, different steganalysts, etc.
+A run of the protocol saves all its outputs in a folder defined by the parameter --data_dir_prot.
+The organization of this folder is described in the illustration and in the following.
+At the beginning, it creates the file description.txt, which summarizes all parameters parsed at the beginning of the run of main.py.
+
+Adversarial images are saved in "data_adv_$i/adv_final/" and optimized cost maps in "data_adv_$i/adv_cost/".
+The evaluation of the classifier with architecture $model, trained at iteration $j, on adversarial images generated at iteration $i, is saved in "data_adv_$i/eval_$model_$j/". There are two files: "logits.npy" of size (10000,2), containing the raw logits given by the classifier, and "probas.npy" of size (10000,), containing the stego-class probability given by the softmax of the logits. The images are ordered as in the file --permutation_files.npy given as input to the protocol.
+
+
 # Parameters to pass in main.py:
 * begin_step: first iteration of the protocol. Should be equals to 0 if you never launched it.
 * number_step: for how many further iteration to lauchn the protocol
@@ -60,6 +59,8 @@ It produces files at iteration $k are:
 
 
 * lr: float value for the value of the learning rate to use in ADAM optimizer for the gradient descent. Advices: use 0.5 for QF 75 and 0.05 for QF 100.
+### How the classifiers are trained
+The classifiers are trained to distinguish cover images from stegos, and new stegos are generated in each batch from the corresponding cost map. This makes it possible, if desired (depending on the parameters --CL and --start_emb_rate), to use curriculum learning during training, since new stegos embedding any payload size can be generated on the fly.
 
 
 # In this folder:
@@ -85,6 +86,9 @@ It produces files at iteration $k are:
 
 
 * train.py: definition of the class Fitter useful for training a classifier.
+
+Format:
+
 
 
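
Note appended after the patch (not part of the diff): the readme text above describes "probas.npy" as the stego-class probability obtained by a softmax of the raw logits stored in "logits.npy". A minimal sketch of that relationship, with made-up toy logits and a hypothetical helper name:

```python
import numpy as np

def stego_probabilities(logits):
    """Softmax over the two columns (cover, stego); return the stego column."""
    # Subtract the row-wise max before exponentiating, for numerical stability.
    shifted = logits - logits.max(axis=1, keepdims=True)
    exp = np.exp(shifted)
    probas = exp / exp.sum(axis=1, keepdims=True)
    return probas[:, 1]

# Toy stand-in for logits.npy: shape (N, 2), one row per image.
logits = np.array([[2.0, 0.0],   # classifier leans "cover"
                   [0.0, 3.0]])  # classifier leans "stego"
probas = stego_probabilities(logits)  # shape (N,), as probas.npy is described
```

In the actual run the arrays would be loaded with `np.load` from the "data_adv_$i/eval_$model_$j/" folders described above; the file names are from the readme, the helper function is illustrative.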