Runner module

The optional parameters of the runner module are documented in the evaluation module for simplicity.

class mdgru.runner.Runner(evaluationinstance, **kw)[source]

Bases: object

Parameters:
  • kw (dict containing the following options) –
    • test_each [default: 2500] validate after # training iterations
    • save_each [default: None] save after # training iterations
    • plot_each [default: 2500]
    • test_size [default: 1]
    • test_iters [default: 1] number of validations to perform on random samples. Only meaningful when full_image_validation is not set
    • test_first [default: False] Perform validation on the untrained model as well.
    • perform_full_image_validation [default: True] Validate on the complete validation images; the inverted command-line flag (dont_perform_full_image_validation) uses random samples instead
    • save_validation_results [default: True] Save validation results to disk; the inverted command-line flag (dont_save_validation_results) disables saving
    • notifyme [default: None] Experimental feature: if something goes amiss, the given Telegram chat id is used to inform the respective user of the error. This requires a file called config.json in the same folder as this file, containing a simple dict structure as follows: {"chat_id": CHATID, "token": TOKEN}, where CHATID and TOKEN have to be created with Telegram's BotFather. The chat id from the config can be overridden by passing a value together with this option.
    • results_to_csv [default: True] Create a CSV file with the validation results; the corresponding command-line flag disables it
    • checkpointfiles [default: None] provide a checkpoint file for this template. If no model name is provided, one will be inferred from this file. Multiple files are only allowed if only_test is set. If the same number of option names is provided, they are used to name the different results. If only one option name is provided, the results are numbered in order and the checkpoint filename is included in the result file.
    • epochs [default: 1] Number of passes through the training dataset. Cannot be used together with "iterations"
    • iterations [default: None] Number of iterations to perform. Only meaningful when epochs is 1
    • only_test [default: False] Only perform testing. Requires at least one ckpt
    • only_train [default: False] Only perform training and validation.
    • experimentloc [default: /home/docs/experiments]
    • optionname [default: None] name for the chosen set of options; if multiple checkpoints are provided, either one name or the same number of names must be given
    • fullparameters [default: None]
  • evaluationinstance (instance of an evaluation class) – Will be used to call train and test routines on.
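The option entries above follow a common pattern: each key in kw either overrides a plain default (e.g. experimentloc) or the 'value' field of a spec dict (e.g. test_each). A minimal sketch of how such defaults could be merged with caller-supplied keyword arguments; the helper name resolve_options is hypothetical and not part of mdgru:

```python
# Hypothetical sketch: merge caller keyword arguments over a
# _defaults-style spec, where each entry is either a plain default
# or a dict carrying the default under its 'value' key.
def resolve_options(defaults, **kw):
    resolved = {}
    for key, spec in defaults.items():
        default = spec['value'] if isinstance(spec, dict) else spec
        resolved[key] = kw.get(key, default)
    return resolved

defaults = {
    'test_each': {'help': 'validate after # training iterations', 'value': 2500},
    'epochs': {'help': 'Number of times through the training dataset', 'value': 1},
    'experimentloc': '/home/docs/experiments',
}
opts = resolve_options(defaults, epochs=5)
```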
_defaults = {
    'checkpointfiles': {'help': 'provide checkpointfile for this template. If no modelname is provided, we will infer one from this file. Multiple files are only allowed if only_test is set. If the same number of optionnames are provided, they will be used to name the different results. If only one optionname is provided, they will be numbered in order and the checkpoint filename will be included in the result file.', 'name': 'ckpt', 'nargs': '+', 'value': None},
    'epochs': {'help': 'Number of times through the training dataset. Cant be used together with "iterations"', 'value': 1},
    'experimentloc': '/home/docs/experiments',
    'fullparameters': None,
    'iterations': {'help': 'Number of iterations to perform. Can only be set and makes sense if epochs is 1', 'type': <class 'int'>, 'value': None},
    'notifyme': {'help': 'Experimental feature that when something goes amiss, this telegram chat id will be used to inform the respective user of the error. This requires a file called config.json in the same folder as this file, containing a simple dict structure as follows: {"chat_id": CHATID, "token": TOKEN}, where CHATID and TOKEN have to be created with Telegrams BotFather. The chatid from config can be overriden using a parameter together with this option.', 'nargs': '?', 'type': <class 'str'>, 'value': None},
    'only_test': {'help': 'Only perform testing. Requires at least one ckpt', 'value': False},
    'only_train': {'help': 'Only perform training and validation.', 'value': False},
    'optionname': {'help': 'name for chosen set of options, if multiple checkpoints provided, there needs to be 1 or the same number of names here', 'nargs': '+', 'value': None},
    'perform_full_image_validation': {'help': 'Use random samples instead of the complete validation images', 'invert_meaning': 'dont_', 'value': True},
    'plot_each': {'type': <class 'int'>, 'value': 2500},
    'results_to_csv': {'help': 'Do not create csv with validation results', 'value': True},
    'save_each': {'help': 'save after # training iterations', 'type': <class 'int'>, 'value': None},
    'save_validation_results': {'help': 'Do not save validation results on the disk', 'invert_meaning': 'dont_', 'value': True},
    'test_each': {'help': 'validate after # training iterations', 'name': 'validate_each', 'type': <class 'int'>, 'value': 2500},
    'test_first': {'help': 'Perform validation on the untrained model as well.', 'name': 'validate_first', 'value': False},
    'test_iters': {'help': 'number of validations to perform on random samples. Only makes sense if full_image_validation is not set', 'name': 'validation_iterations', 'type': <class 'int'>, 'value': 1},
    'test_size': {'value': 1},
}
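Entries such as perform_full_image_validation carry invert_meaning='dont_', i.e. the command-line flag negates the stored option. A rough sketch of how a True-by-default option could be exposed as an inverted store_false flag; the helper name add_invertible_flag is an assumption, not mdgru's actual CLI code:

```python
import argparse

# Hypothetical sketch: expose a True-by-default option as an inverted
# "dont_..." flag, mirroring the invert_meaning='dont_' entries above.
def add_invertible_flag(parser, name, spec):
    if spec.get('invert_meaning') and spec['value'] is True:
        # Passing the flag sets the option to False; absence keeps True.
        parser.add_argument('--' + spec['invert_meaning'] + name,
                            dest=name, action='store_false',
                            help=spec.get('help'))
    else:
        parser.add_argument('--' + name, default=spec['value'],
                            help=spec.get('help'))

parser = argparse.ArgumentParser()
add_invertible_flag(parser, 'perform_full_image_validation',
                    {'help': 'Use random samples instead of the complete validation images',
                     'invert_meaning': 'dont_', 'value': True})
args = parser.parse_args(['--dont_perform_full_image_validation'])
```

With store_false, argparse sets the default to True automatically, so omitting the flag keeps the documented default.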
_finish(signal)[source]
calc_min_mean_median_max_errors(errors)[source]
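The name calc_min_mean_median_max_errors suggests reducing a collection of per-sample error values to summary statistics. A minimal standalone sketch of that kind of reduction (the helper summarize_errors is hypothetical, not mdgru's implementation):

```python
import statistics

# Hypothetical sketch: reduce a list of per-sample error values to the
# min/mean/median/max summary the method name suggests.
def summarize_errors(errors):
    return {
        'min': min(errors),
        'mean': statistics.mean(errors),
        'median': statistics.median(errors),
        'max': max(errors),
    }

summary = summarize_errors([0.12, 0.30, 0.18, 0.40])
```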
ignore_signal = True
run(**kw)[source]
save(filename)[source]
test()[source]
train()[source]
validation(showIt=True, name=1574669904.3189964)[source]
write_error_to_csv(errors, filename, minerrors, avgerrors, medianerrors, maxerrors)[source]
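The signature of write_error_to_csv takes the raw errors alongside the four summaries produced above. A minimal sketch of writing such summaries with the csv module; the column layout and the helper name write_error_summary are assumptions, not mdgru's actual output format:

```python
import csv
import io

# Hypothetical sketch: one CSV row per summary statistic. The actual
# column layout of mdgru's write_error_to_csv is assumed, not documented.
def write_error_summary(fileobj, minerrors, avgerrors, medianerrors, maxerrors):
    writer = csv.writer(fileobj)
    writer.writerow(['statistic', 'value'])
    for name, value in [('min', minerrors), ('mean', avgerrors),
                        ('median', medianerrors), ('max', maxerrors)]:
        writer.writerow([name, value])

buf = io.StringIO()
write_error_summary(buf, 0.12, 0.25, 0.24, 0.40)
```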