Command-line interface#

CLI reference documentation.

hpcflow#

hpcflow [OPTIONS] COMMAND [ARGS]...

Options

--version#

Show the version of hpcFlow and exit.

--hpcflow-version#

Show the version of hpcflow and exit.

--help#

Show this message and exit.

--run-time-info#

Print run-time information.

--config-dir <config_dir>#

Set the configuration directory.

--config-key <config_key>#

Set the configuration invocation key.

--with-config <with_config>#

Override a config item in the config file.

--timeit#

Time function pathways as the code executes and write out a summary at the end. Only functions decorated by TimeIt.decorator are included.

--timeit-file <timeit_file>#

Time function pathways as the code executes and write out a summary at the end to a text file given by this file path. Only functions decorated by TimeIt.decorator are included.

cancel#

Stop all running jobscripts of the specified workflow.

WORKFLOW_REF is the local ID (as provided by the show command) or the workflow path.

hpcflow cancel [OPTIONS] WORKFLOW_REF

Options

-r, --ref-type <ref_type>#

How to interpret the reference: as an ID, as a path, or by guessing.

Options:

assume-id | id | path

Arguments

WORKFLOW_REF#

Required argument

config#

Configuration sub-command for getting and setting data in the configuration file(s).

hpcflow config [OPTIONS] COMMAND [ARGS]...

Options

--no-callback <no_callback>#

Exclude a named get/set callback function during execution of the command.

add-scheduler#

hpcflow config add-scheduler [OPTIONS] NAME

Options

--defaults <defaults>#

Arguments

NAME#

Required argument

add-shell#

hpcflow config add-shell [OPTIONS] NAME

Options

--defaults <defaults>#

Arguments

NAME#

Required argument

add-shell-wsl#

hpcflow config add-shell-wsl [OPTIONS]

Options

--defaults <defaults>#

append#

Append a new value to the specified configuration item.

NAME is the dot-delimited path to the list to be appended to.

hpcflow config append [OPTIONS] NAME VALUE

Options

--json#

Interpret VALUE as a JSON string.

Arguments

NAME#

Required argument

VALUE#

Required argument
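As an illustrative sketch (the dot-delimited config path and the value shown here are hypothetical), appending a JSON-parsed value to a list-like item might look like:

```shell
# Hypothetical sketch: append a JSON-parsed value to a list-like config item.
name="schedulers.slurm.defaults.options"
value='{"--partition": "compute"}'
# Without --json the value would be stored as a plain string; with --json
# it is parsed into a data structure first.
echo "hpcflow config append $name '$value' --json"
```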

get#

Show the value of the specified configuration item.

hpcflow config get [OPTIONS] NAME

Options

--all#

Show all configuration items.

--metadata#

Show all metadata items.

--file#

Show the contents of the configuration file.

Arguments

NAME#

Required argument

import#

Update the config file with keys from a YAML file.

hpcflow config import [OPTIONS] FILE_PATH

Options

--rename, --no-rename#

Rename the currently loaded config file according to the name of the file that is being imported (default is to rename). Ignored if --new is specified.

--new#

If True, generate a new default config, and import the file into this config. If False, modify the currently loaded config.

Arguments

FILE_PATH#

Required argument

init#

hpcflow config init [OPTIONS] KNOWN_NAME

Options

--path <path>#

An fsspec-compatible path in which to look for configuration-import files.

Arguments

KNOWN_NAME#

Required argument

list#

Show a list of all configurable keys.

hpcflow config list [OPTIONS]

load-data-files#

Check that the data files (e.g. task schema files) specified in the configuration can be loaded.

hpcflow config load-data-files [OPTIONS]

open#

Alias for hpcflow open config: open the configuration file, or retrieve its path.

hpcflow config open [OPTIONS]

Options

--path#

pop#

Remove a value from a list-like configuration item.

NAME is the dot-delimited path to the list to be modified.

hpcflow config pop [OPTIONS] NAME INDEX

Arguments

NAME#

Required argument

INDEX#

Required argument

prepend#

Prepend a new value to the specified configuration item.

NAME is the dot-delimited path to the list to be prepended to.

hpcflow config prepend [OPTIONS] NAME VALUE

Options

--json#

Interpret VALUE as a JSON string.

Arguments

NAME#

Required argument

VALUE#

Required argument

set#

Set and save the value of the specified configuration item.

hpcflow config set [OPTIONS] NAME VALUE

Options

--json#

Interpret VALUE as a JSON string.

Arguments

NAME#

Required argument

VALUE#

Required argument

set-github-demo-data-dir#

hpcflow config set-github-demo-data-dir [OPTIONS] SHA

Arguments

SHA#

Required argument

unset#

Unset and save the value of the specified configuration item.

hpcflow config unset [OPTIONS] NAME

Arguments

NAME#

Required argument

update#

Update a map-like value in the configuration.

NAME is the dot-delimited path to the map to be updated.

hpcflow config update [OPTIONS] NAME VALUE

Options

--json#

Interpret VALUE as a JSON string.

Arguments

NAME#

Required argument

VALUE#

Required argument

configure-env#

Configure an app environment, using, for example, the currently activated Python environment.

hpcflow configure-env [OPTIONS] NAME

Options

--use-current-env#
--setup <setup>#
--env-source-file <env_source_file>#

Arguments

NAME#

Required argument

demo-data#

Interact with builtin demo data files.

hpcflow demo-data [OPTIONS] COMMAND [ARGS]...

Options

-l, --list#

Print available example data files.

cache#

Ensure a demo data file is in the demo data cache.

hpcflow demo-data cache [OPTIONS] FILE_NAME

Options

--all#

Cache all demo data files.

Arguments

FILE_NAME#

Required argument

copy#

Copy a demo data file to the specified location.

hpcflow demo-data copy [OPTIONS] FILE_NAME DESTINATION

Arguments

FILE_NAME#

Required argument

DESTINATION#

Required argument

demo-software#

hpcflow demo-software [OPTIONS] COMMAND [ARGS]...

doSomething#

hpcflow demo-software doSomething [OPTIONS]

Options

-i1, --infile1 <infile1>#

Required

-i2, --infile2 <infile2>#

Required

-v, --value <value>#
-o, --out <out>#

demo-workflow#

Interact with builtin demo workflows.

hpcflow demo-workflow [OPTIONS] COMMAND [ARGS]...

Options

-l, --list#

Print available builtin demo workflows.

copy#

hpcflow demo-workflow copy [OPTIONS] WORKFLOW_NAME DESTINATION

Options

--doc, --no-doc#

Arguments

WORKFLOW_NAME#

Required argument

DESTINATION#

Required argument

go#

hpcflow demo-workflow go [OPTIONS] WORKFLOW_NAME

Options

--format <format>#

If specified, one of “json” or “yaml”. This forces parsing from a particular format.

Options:

yaml | json

--path <path>#

The directory path into which the new workflow will be generated.

--name <name>#

The name of the workflow. If specified, the workflow directory will be the path joined with this name. If not specified, the workflow template name will be used, in combination with a date-timestamp.

--overwrite#

If True and the workflow directory (path + name) already exists, the existing directory will be overwritten.

--store <store>#

The persistent store type to use.

Options:

zarr | zip | json

--ts-fmt <ts_fmt>#

The datetime format to use for storing datetimes. Datetimes are always stored in UTC (because NumPy does not store time zone info), so this should not include a time zone name.

--ts-name-fmt <ts_name_fmt>#

The datetime format to use when generating the workflow name, where it includes a timestamp.

-v, --var <variables>#

Workflow template variable value to be substituted into the template file or string. Multiple variable values can be specified.

--js-parallelism <js_parallelism>#

If True, allow multiple jobscripts to execute simultaneously. Raises if set to True but the store type does not support the jobscript_parallelism feature. If not set, jobscript parallelism will be used if the store type supports it.

--wait#

If True, this command will block until the workflow execution is complete.

--add-to-known, --no-add-to-known#

If True, add this submission to the known-submissions file.

--print-idx#

If True, print the submitted jobscript indices for each submission index.

--tasks <tasks>#

List of comma-separated task indices to include in this submission. By default all tasks are included.

--cancel#

Immediately cancel the submission. Useful for testing and benchmarking.

--status, --no-status#

If True, display a live status to track submission progress.

Arguments

WORKFLOW_NAME#

Required argument

make#

hpcflow demo-workflow make [OPTIONS] WORKFLOW_NAME

Options

--format <format>#

If specified, one of “json” or “yaml”. This forces parsing from a particular format.

Options:

yaml | json

--path <path>#

The directory path into which the new workflow will be generated.

--name <name>#

The name of the workflow. If specified, the workflow directory will be the path joined with this name. If not specified, the workflow template name will be used, in combination with a date-timestamp.

--overwrite#

If True and the workflow directory (path + name) already exists, the existing directory will be overwritten.

--store <store>#

The persistent store type to use.

Options:

zarr | zip | json

--ts-fmt <ts_fmt>#

The datetime format to use for storing datetimes. Datetimes are always stored in UTC (because NumPy does not store time zone info), so this should not include a time zone name.

--ts-name-fmt <ts_name_fmt>#

The datetime format to use when generating the workflow name, where it includes a timestamp.

-v, --var <variables>#

Workflow template variable value to be substituted into the template file or string. Multiple variable values can be specified.

--status, --no-status#

If True, display a live status to track workflow creation progress.

Arguments

WORKFLOW_NAME#

Required argument

show#

hpcflow demo-workflow show [OPTIONS] WORKFLOW_NAME

Options

--syntax, --no-syntax#
--doc, --no-doc#

Arguments

WORKFLOW_NAME#

Required argument

go#

Generate and submit a new hpcFlow workflow.

TEMPLATE_FILE_OR_STR is either a path to a template file in YAML or JSON format, or a YAML/JSON string.

hpcflow go [OPTIONS] TEMPLATE_FILE_OR_STR

Options

--string#

If set, TEMPLATE_FILE_OR_STR is interpreted as a template string rather than a file path.

--format <format>#

If specified, one of “json” or “yaml”. This forces parsing from a particular format.

Options:

yaml | json

--path <path>#

The directory path into which the new workflow will be generated.

--name <name>#

The name of the workflow. If specified, the workflow directory will be the path joined with this name. If not specified, the workflow template name will be used, in combination with a date-timestamp.

--overwrite#

If True and the workflow directory (path + name) already exists, the existing directory will be overwritten.

--store <store>#

The persistent store type to use.

Options:

zarr | zip | json

--ts-fmt <ts_fmt>#

The datetime format to use for storing datetimes. Datetimes are always stored in UTC (because NumPy does not store time zone info), so this should not include a time zone name.

--ts-name-fmt <ts_name_fmt>#

The datetime format to use when generating the workflow name, where it includes a timestamp.

-v, --var <variables>#

Workflow template variable value to be substituted into the template file or string. Multiple variable values can be specified.

--js-parallelism <js_parallelism>#

If True, allow multiple jobscripts to execute simultaneously. Raises if set to True but the store type does not support the jobscript_parallelism feature. If not set, jobscript parallelism will be used if the store type supports it.

--wait#

If True, this command will block until the workflow execution is complete.

--add-to-known, --no-add-to-known#

If True, add this submission to the known-submissions file.

--print-idx#

If True, print the submitted jobscript indices for each submission index.

--tasks <tasks>#

List of comma-separated task indices to include in this submission. By default all tasks are included.

--cancel#

Immediately cancel the submission. Useful for testing and benchmarking.

--status, --no-status#

If True, display a live status to track submission progress.

Arguments

TEMPLATE_FILE_OR_STR#

Required argument
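As a usage sketch (the template file name, the variable name N, and its value are hypothetical), a workflow might be generated and submitted from a YAML file with a template variable supplied via --var, and only some tasks included:

```shell
# Hypothetical sketch: generate and submit from a YAML template file,
# substituting a template variable and restricting the submitted tasks.
cmd="hpcflow go workflow.yaml --var N 10 --tasks 0,1 --add-to-known"
echo "$cmd"
# Passing the template as a YAML string instead of a file path:
cmd2='hpcflow go --string --format yaml "name: my_wf"'
echo "$cmd2"
```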

helper#

hpcflow helper [OPTIONS] COMMAND [ARGS]...

clear#

Remove the PID file (and kill the helper process if it exists). This should not normally be needed.

hpcflow helper clear [OPTIONS]

log-path#

Get the path to the helper log file (may not exist).

hpcflow helper log-path [OPTIONS]

pid#

Get the process ID of the running helper, if running.

hpcflow helper pid [OPTIONS]

Options

-f, --file#

restart#

Restart (or start) the helper process.

hpcflow helper restart [OPTIONS]

Options

--timeout <timeout>#

Helper timeout in seconds.

Default:

3600

--timeout-check-interval <timeout_check_interval>#

Interval between testing if the timeout has been exceeded in seconds.

Default:

60

--watch-interval <watch_interval>#

Polling interval for watching workflows (and the workflow watch list) in seconds.

Default:

10

run#

Run the helper functionality.

hpcflow helper run [OPTIONS]

Options

--timeout <timeout>#

Helper timeout in seconds.

Default:

3600

--timeout-check-interval <timeout_check_interval>#

Interval between testing if the timeout has been exceeded in seconds.

Default:

60

--watch-interval <watch_interval>#

Polling interval for watching workflows (and the workflow watch list) in seconds.

Default:

10

start#

Start the helper process.

hpcflow helper start [OPTIONS]

Options

--timeout <timeout>#

Helper timeout in seconds.

Default:

3600

--timeout-check-interval <timeout_check_interval>#

Interval between testing if the timeout has been exceeded in seconds.

Default:

60

--watch-interval <watch_interval>#

Polling interval for watching workflows (and the workflow watch list) in seconds.

Default:

10

stop#

Stop the helper process, if it is running.

hpcflow helper stop [OPTIONS]

uptime#

Get the uptime of the helper process, if it is running.

hpcflow helper uptime [OPTIONS]

watch-list#

Get the list of workflows currently being watched.

hpcflow helper watch-list [OPTIONS]

watch-list-path#

Get the path to the workflow watch list file (may not exist).

hpcflow helper watch-list-path [OPTIONS]

internal#

Internal CLI to be invoked by scripts generated by the app.

hpcflow internal [OPTIONS] COMMAND [ARGS]...

get-invoc-cmd#

Get the invocation command for this app instance.

hpcflow internal get-invoc-cmd [OPTIONS]

workflow#

hpcflow internal workflow [OPTIONS] PATH COMMAND [ARGS]...

Arguments

PATH#

Required argument

check-loop#

Check if an iteration has met its loop’s termination condition.

hpcflow internal workflow PATH check-loop [OPTIONS] LOOP_NAME EAR_ID

Arguments

LOOP_NAME#

Required argument

EAR_ID#

Required argument

get-ear-skipped#

Return 1 if the given EAR is to be skipped, else return 0.

hpcflow internal workflow PATH get-ear-skipped [OPTIONS] EAR_ID

Arguments

EAR_ID#

Required argument

save-parameter#

hpcflow internal workflow PATH save-parameter [OPTIONS] NAME VALUE EAR_ID CMD_IDX

Options

--stderr#

Arguments

NAME#

Required argument

VALUE#

Required argument

EAR_ID#

Required argument

CMD_IDX#

Required argument

set-ear-end#

hpcflow internal workflow PATH set-ear-end [OPTIONS] JS_IDX JS_ACT_IDX EAR_ID EXIT_CODE

Arguments

JS_IDX#

Required argument

JS_ACT_IDX#

Required argument

EAR_ID#

Required argument

EXIT_CODE#

Required argument

set-ear-skip#

hpcflow internal workflow PATH set-ear-skip [OPTIONS] EAR_ID

Arguments

EAR_ID#

Required argument

set-ear-start#

hpcflow internal workflow PATH set-ear-start [OPTIONS] EAR_ID

Arguments

EAR_ID#

Required argument

write-commands#

hpcflow internal workflow PATH write-commands [OPTIONS] SUBMISSION_IDX JOBSCRIPT_IDX JS_ACTION_IDX EAR_ID

Arguments

SUBMISSION_IDX#

Required argument

JOBSCRIPT_IDX#

Required argument

JS_ACTION_IDX#

Required argument

EAR_ID#

Required argument

make#

Generate a new hpcFlow workflow.

TEMPLATE_FILE_OR_STR is either a path to a template file in YAML or JSON format, or a YAML/JSON string.

hpcflow make [OPTIONS] TEMPLATE_FILE_OR_STR

Options

--string#

If set, TEMPLATE_FILE_OR_STR is interpreted as a template string rather than a file path.

--format <format>#

If specified, one of “json” or “yaml”. This forces parsing from a particular format.

Options:

yaml | json

--path <path>#

The directory path into which the new workflow will be generated.

--name <name>#

The name of the workflow. If specified, the workflow directory will be the path joined with this name. If not specified, the workflow template name will be used, in combination with a date-timestamp.

--overwrite#

If True and the workflow directory (path + name) already exists, the existing directory will be overwritten.

--store <store>#

The persistent store type to use.

Options:

zarr | zip | json

--ts-fmt <ts_fmt>#

The datetime format to use for storing datetimes. Datetimes are always stored in UTC (because NumPy does not store time zone info), so this should not include a time zone name.

--ts-name-fmt <ts_name_fmt>#

The datetime format to use when generating the workflow name, where it includes a timestamp.

-v, --var <variables>#

Workflow template variable value to be substituted into the template file or string. Multiple variable values can be specified.

--status, --no-status#

If True, display a live status to track workflow creation progress.

Arguments

TEMPLATE_FILE_OR_STR#

Required argument

manage#

Infrequent app management tasks.

App config is not loaded.

hpcflow manage [OPTIONS] COMMAND [ARGS]...

clear-cache#

Delete the app cache directory.

hpcflow manage clear-cache [OPTIONS]

Options

--hostname#

clear-demo-data-cache#

Delete the app demo data cache directory.

hpcflow manage clear-demo-data-cache [OPTIONS]

clear-known-subs#

Delete the contents of the known-submissions file.

hpcflow manage clear-known-subs [OPTIONS]

clear-temp-dir#

Delete all files in the user runtime directory.

hpcflow manage clear-temp-dir [OPTIONS]

get-config-path#

Print the config file path without loading the config.

This can be used instead of hpcflow open config --path if the config file is invalid, because this command does not load the config.

hpcflow manage get-config-path [OPTIONS]

Options

--config-dir <config_dir>#

The directory containing the config file whose path is to be returned.

reset-config#

Reset the configuration file to defaults.

This can be used if the current configuration file is invalid.

hpcflow manage reset-config [OPTIONS]

Options

--config-dir <config_dir>#

The directory containing the config file to be reset.

open#

Open a file (for example hpcFlow’s log file) using the default application.

hpcflow open [OPTIONS] COMMAND [ARGS]...

config#

Open the hpcFlow config file, or retrieve its path.

hpcflow open config [OPTIONS]

Options

--path#

demo-data-cache-dir#

hpcflow open demo-data-cache-dir [OPTIONS]

Options

--path#

env-source#

Open a named environment sources file, or the first one.

hpcflow open env-source [OPTIONS]

Options

--name <name>#
--path#

known-subs#

Open the known-submissions text file.

hpcflow open known-subs [OPTIONS]

Options

--path#

log#

Open the hpcFlow log file.

hpcflow open log [OPTIONS]

Options

--path#

user-cache-dir#

hpcflow open user-cache-dir [OPTIONS]

Options

--path#

user-cache-hostname-dir#

hpcflow open user-cache-hostname-dir [OPTIONS]

Options

--path#

user-data-dir#

hpcflow open user-data-dir [OPTIONS]

Options

--path#

user-data-hostname-dir#

hpcflow open user-data-hostname-dir [OPTIONS]

Options

--path#

user-runtime-dir#

hpcflow open user-runtime-dir [OPTIONS]

Options

--path#

workflow#

Open a workflow directory using, for example, File Explorer on Windows.

hpcflow open workflow [OPTIONS] WORKFLOW_REF

Options

--path#
-r, --ref-type <ref_type>#

How to interpret the reference: as an ID, as a path, or by guessing.

Options:

assume-id | id | path

Arguments

WORKFLOW_REF#

Required argument

rechunk#

Rechunk metadata/runs and parameters/base arrays.

WORKFLOW_REF is the local ID (as provided by the show command) or the workflow path.

hpcflow rechunk [OPTIONS] WORKFLOW_REF

Options

-r, --ref-type <ref_type>#

How to interpret the reference: as an ID, as a path, or by guessing.

Options:

assume-id | id | path

--backup, --no-backup#

First copy a backup of the array to a directory ending in .bak.

--chunk-size <chunk_size>#

New chunk size (array items per chunk). If unset (as by default), the array will be rechunked to a single-chunk array (i.e. with a chunk size equal to the array’s shape).

--status, --no-status#

If True, display a live status to track rechunking progress.

Arguments

WORKFLOW_REF#

Required argument

show#

Show information about running and recently active workflows.

hpcflow show [OPTIONS]

Options

-r, --max-recent <max_recent>#

The maximum number of inactive submissions to show.

--no-update#

If True, do not update the known-submissions file to remove workflows that are no longer running.

-f, --full#

Allow multiple lines per workflow submission.

--legend#

Display the legend for the show command output.

submission#

Submission-related queries.

hpcflow submission [OPTIONS] COMMAND [ARGS]...

Options

--os-info#

Print information about the operating system.

get-known#

Print known-submissions information as a formatted Python object.

hpcflow submission get-known [OPTIONS]

Options

--json#

Do not format the output; only show JSON-compatible information.

scheduler#

hpcflow submission scheduler [OPTIONS] SCHEDULER_NAME COMMAND [ARGS]...

Arguments

SCHEDULER_NAME#

Required argument

get-login-nodes#

hpcflow submission scheduler SCHEDULER_NAME get-login-nodes [OPTIONS]

shell-info#

Show information about the specified shell, such as the version.

hpcflow submission shell-info [OPTIONS] {bash|powershell|wsl+bash|wsl}

Options

--exclude-os#

Arguments

SHELL_NAME#

Required argument
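For example, to report details (such as the version) of the bash shell, omitting the operating-system portion of the report:

```shell
# Query shell information for bash; --exclude-os omits OS details.
shell="bash"
echo "hpcflow submission shell-info $shell --exclude-os"
```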

tc#

For showing template component data.

hpcflow tc [OPTIONS]

test#

Run hpcFlow test suite.

PY_TEST_ARGS are arguments passed on to Pytest.

hpcflow test [OPTIONS] [PY_TEST_ARGS]...

Arguments

PY_TEST_ARGS#

Optional argument(s)

unzip#

Generate a copy of the specified zipped workflow in the submittable Zarr format in the current working directory.

WORKFLOW_PATH is the path of the zip file to unzip.

hpcflow unzip [OPTIONS] WORKFLOW_PATH

Options

--path <path>#

Path at which to create the new unzipped workflow. If this is an existing directory, the new workflow directory will be created within this directory. Otherwise, this path will represent the new workflow directory path.

--log <log>#

Path to a log file to use during unzipping.

Arguments

WORKFLOW_PATH#

Required argument

workflow#

Interact with existing hpcFlow workflows.

WORKFLOW_REF is the path to, or local ID of, an existing workflow.

hpcflow workflow [OPTIONS] WORKFLOW_REF COMMAND [ARGS]...

Options

-r, --ref-type <ref_type>#

How to interpret the reference: as an ID, as a path, or by guessing.

Options:

assume-id | id | path

Arguments

WORKFLOW_REF#

Required argument

abort-run#

Abort the specified run.

hpcflow workflow WORKFLOW_REF abort-run [OPTIONS]

Options

--submission <submission>#
--task <task>#
--element <element>#

get-all-params#

Get all parameter values.

hpcflow workflow WORKFLOW_REF get-all-params [OPTIONS]

get-param#

Get a parameter value by data index.

hpcflow workflow WORKFLOW_REF get-param [OPTIONS] INDEX

Arguments

INDEX#

Required argument

get-param-source#

Get a parameter source by data index.

hpcflow workflow WORKFLOW_REF get-param-source [OPTIONS] INDEX

Arguments

INDEX#

Required argument

is-param-set#

Check if a parameter specified by data index is set.

hpcflow workflow WORKFLOW_REF is-param-set [OPTIONS] INDEX

Arguments

INDEX#

Required argument

rechunk#

Rechunk metadata/runs and parameters/base arrays.

hpcflow workflow WORKFLOW_REF rechunk [OPTIONS]

Options

--backup, --no-backup#

First copy a backup of the array to a directory ending in .bak.

--chunk-size <chunk_size>#

New chunk size (array items per chunk). If unset (as by default), the array will be rechunked to a single-chunk array (i.e. with a chunk size equal to the array’s shape).

--status, --no-status#

If True, display a live status to track rechunking progress.

rechunk-parameter-base#

Rechunk the parameters/base array.

hpcflow workflow WORKFLOW_REF rechunk-parameter-base [OPTIONS]

Options

--backup, --no-backup#

First copy a backup of the array to a directory ending in .bak.

--chunk-size <chunk_size>#

New chunk size (array items per chunk). If unset (as by default), the array will be rechunked to a single-chunk array (i.e. with a chunk size equal to the array’s shape).

--status, --no-status#

If True, display a live status to track rechunking progress.

rechunk-runs#

Rechunk the metadata/runs array.

hpcflow workflow WORKFLOW_REF rechunk-runs [OPTIONS]

Options

--backup, --no-backup#

First copy a backup of the array to a directory ending in .bak.

--chunk-size <chunk_size>#

New chunk size (array items per chunk). If unset (as by default), the array will be rechunked to a single-chunk array (i.e. with a chunk size equal to the array’s shape).

--status, --no-status#

If True, display a live status to track rechunking progress.

show-all-status#

Show the submission status of all workflow EARs.

hpcflow workflow WORKFLOW_REF show-all-status [OPTIONS]

sub#

Interact with existing hpcFlow workflow submissions.

SUB_IDX is the submission index.

hpcflow workflow WORKFLOW_REF sub [OPTIONS] SUB_IDX COMMAND [ARGS]...

Arguments

SUB_IDX#

Required argument

get-active-jobscripts#

Show active jobscripts and their jobscript-element states.

hpcflow workflow WORKFLOW_REF sub SUB_IDX get-active-jobscripts [OPTIONS]

js#

Interact with existing hpcFlow workflow submission jobscripts.

JS_IDX is the jobscript index within the submission object.

hpcflow workflow WORKFLOW_REF sub SUB_IDX js [OPTIONS] JS_IDX COMMAND [ARGS]...

Arguments

JS_IDX#

Required argument

deps#

Get jobscript dependencies.

hpcflow workflow WORKFLOW_REF sub SUB_IDX js JS_IDX deps [OPTIONS]

path#

Get the file path to the jobscript.

hpcflow workflow WORKFLOW_REF sub SUB_IDX js JS_IDX path [OPTIONS]

res#

Get resources associated with this jobscript.

hpcflow workflow WORKFLOW_REF sub SUB_IDX js JS_IDX res [OPTIONS]

show#

Show the jobscript file.

hpcflow workflow WORKFLOW_REF sub SUB_IDX js JS_IDX show [OPTIONS]

needs-submit#

Check if this submission needs submitting.

hpcflow workflow WORKFLOW_REF sub SUB_IDX needs-submit [OPTIONS]

outstanding-js#

Get a list of jobscript indices that have not yet been submitted.

hpcflow workflow WORKFLOW_REF sub SUB_IDX outstanding-js [OPTIONS]

status#

Get the submission status.

hpcflow workflow WORKFLOW_REF sub SUB_IDX status [OPTIONS]

submitted-js#

Get a list of jobscript indices that have been submitted.

hpcflow workflow WORKFLOW_REF sub SUB_IDX submitted-js [OPTIONS]

submit#

Submit the workflow.

hpcflow workflow WORKFLOW_REF submit [OPTIONS]

Options

--js-parallelism <js_parallelism>#

If True, allow multiple jobscripts to execute simultaneously. Raises if set to True but the store type does not support the jobscript_parallelism feature. If not set, jobscript parallelism will be used if the store type supports it.

--wait#

If True, this command will block until the workflow execution is complete.

--add-to-known, --no-add-to-known#

If True, add this submission to the known-submissions file.

--print-idx#

If True, print the submitted jobscript indices for each submission index.

--tasks <tasks>#

List of comma-separated task indices to include in this submission. By default all tasks are included.

--cancel#

Immediately cancel the submission. Useful for testing and benchmarking.

--status, --no-status#

If True, display a live status to track submission progress.

unzip#

Generate a copy of the zipped workflow in the submittable Zarr format in the current working directory.

hpcflow workflow WORKFLOW_REF unzip [OPTIONS]

Options

--path <path>#

Path at which to create the new unzipped workflow. If this is an existing directory, the new workflow directory will be created within this directory. Otherwise, this path will represent the new workflow directory path.

--log <log>#

Path to a log file to use during unzipping.

wait#

hpcflow workflow WORKFLOW_REF wait [OPTIONS]

Options

-j, --jobscripts <jobscripts>#

Wait for only these jobscripts to finish. Jobscripts should be specified by their submission index, followed by a colon, followed by a comma-separated list of jobscript indices within that submission (no spaces are allowed). To specify jobscripts across multiple submissions, use a semicolon to separate patterns like these.
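The pattern syntax described above can be sketched as follows (the workflow reference and indices are hypothetical):

```shell
# Wait for jobscripts 0 and 2 of submission 0, plus jobscript 1 of
# submission 1. Each pattern is SUB_IDX:JS_IDX[,JS_IDX...]; patterns for
# different submissions are separated by semicolons, with no spaces.
pattern="0:0,2;1:1"
echo "hpcflow workflow my_wf wait --jobscripts $pattern"
```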

zip#

Generate a copy of the workflow in the zip file format in the current working directory.

hpcflow workflow WORKFLOW_REF zip [OPTIONS]

Options

--path <path>#

Path at which to create the new zipped workflow. If this is an existing directory, the zip file will be created within this directory. Otherwise, this path is assumed to be the full file path to the new zip file.

--overwrite#

If set, any existing file will be overwritten.

--log <log>#

Path to a log file to use during zipping.

--include-execute#
--include-rechunk-backups#

zip#

Generate a copy of the specified workflow in the zip file format in the current working directory.

WORKFLOW_REF is the local ID (as provided by the show command) or the workflow path.

hpcflow zip [OPTIONS] WORKFLOW_REF

Options

--path <path>#

Path at which to create the new zipped workflow. If this is an existing directory, the zip file will be created within this directory. Otherwise, this path is assumed to be the full file path to the new zip file.

--overwrite#

If set, any existing file will be overwritten.

--log <log>#

Path to a log file to use during zipping.

--include-execute#
--include-rechunk-backups#
-r, --ref-type <ref_type>#

How to interpret the reference: as an ID, as a path, or by guessing.

Options:

assume-id | id | path

Arguments

WORKFLOW_REF#

Required argument
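To close, a sketch of the two ways a workflow can be referenced when zipping (the ID, path, and destination shown are hypothetical):

```shell
# Zip by local ID (as listed by `hpcflow show`), writing into an existing
# directory; and zip by explicit directory path.
byid="hpcflow zip 0 --ref-type id --path backups --overwrite"
bypath="hpcflow zip ./my_wf --ref-type path"
echo "$byid"
echo "$bypath"
```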