Command-line interface#

CLI reference documentation.

hpcflow#

Usage

hpcflow [OPTIONS] COMMAND [ARGS]...

Options

--version#

Show the version of hpcFlow and exit.

--hpcflow-version#

Show the version of hpcflow and exit.

--help#

Show this message and exit.

--run-time-info#

Print run-time information.

--config-dir <config_dir>#

Set the configuration directory.

--config-key <config_key>#

Set the configuration invocation key.

--with-config <with_config>#

Override a config item in the config file.

--timeit#

Time function pathways as the code executes and write out a summary at the end. Only functions decorated by TimeIt.decorator are included.

--timeit-file <timeit_file>#

Time function pathways as the code executes and write out a summary at the end to a text file given by this file path. Only functions decorated by TimeIt.decorator are included.

--std-stream <std_stream>#

File to redirect standard output and error to, and to print exceptions to.

cancel#

Stop all running jobscripts of the specified workflow.

WORKFLOW_REF is the local ID (as provided by the show command) or the workflow path.

Usage

hpcflow cancel [OPTIONS] WORKFLOW_REF

Options

-r, --ref-type <ref_type>#

How to interpret a reference, as an ID, a path, or to guess.

Options:

assume-id | id | path

--status, --no-status#

If True, display a live status to track cancel progress.

--quiet <quiet>#

If True, do not print anything (e.g. which jobscripts were cancelled).

Arguments

WORKFLOW_REF#

Required argument
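
For example, a workflow can be cancelled by its local ID (12 here is a hypothetical ID as reported by the show command) or by its path:

hpcflow cancel 12
hpcflow cancel --ref-type path ./my_workflow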

config#

Configuration sub-command for getting and setting data in the configuration file(s).

Usage

hpcflow config [OPTIONS] COMMAND [ARGS]...

Options

--no-callback <no_callback>#

Exclude a named get/set callback function during execution of the command.

add-scheduler#

Usage

hpcflow config add-scheduler [OPTIONS] NAME

Options

--defaults <defaults>#

Arguments

NAME#

Required argument

add-shell#

Usage

hpcflow config add-shell [OPTIONS] NAME

Options

--defaults <defaults>#

Arguments

NAME#

Required argument

add-shell-wsl#

Usage

hpcflow config add-shell-wsl [OPTIONS]

Options

--defaults <defaults>#

append#

Append a new value to the specified configuration item.

NAME is the dot-delimited path to the list to be appended to.

Usage

hpcflow config append [OPTIONS] NAME VALUE

Options

--json#

Interpret VALUE as a JSON string.

Arguments

NAME#

Required argument

VALUE#

Required argument

get#

Show the value of the specified configuration item.

Usage

hpcflow config get [OPTIONS] NAME

Options

--all#

Show all configuration items.

--metadata#

Show all metadata items.

--file#

Show the contents of the configuration file.

Arguments

NAME#

Required argument
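
For example, assuming a configuration item named machine exists, its value can be shown with:

hpcflow config get machine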

import#

Update the config file with keys from a YAML file.

Usage

hpcflow config import [OPTIONS] FILE_PATH

Options

--rename, --no-rename#

Rename the currently loaded config file according to the name of the file that is being imported (default is to rename). Ignored if --new is specified.

--new#

If True, generate a new default config, and import the file into this config. If False, modify the currently loaded config.

Arguments

FILE_PATH#

Required argument
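
For example, to import keys from a local YAML file into a freshly generated default config (cluster_config.yaml is a hypothetical file name):

hpcflow config import cluster_config.yaml --new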

init#

Usage

hpcflow config init [OPTIONS] KNOWN_NAME

Options

--path <path>#

An fsspec-compatible path in which to look for configuration-import files.

Arguments

KNOWN_NAME#

Required argument

list#

Show a list of all configurable keys.

Usage

hpcflow config list [OPTIONS]

load-data-files#

Check that we can load the data files (e.g. task schema files) as specified in the configuration.

Usage

hpcflow config load-data-files [OPTIONS]

open#

Alias for hpcflow open config: open the configuration file, or retrieve its path.

Usage

hpcflow config open [OPTIONS]

Options

--path#

pop#

Remove a value from a list-like configuration item.

NAME is the dot-delimited path to the list to be modified.

Usage

hpcflow config pop [OPTIONS] NAME INDEX

Arguments

NAME#

Required argument

INDEX#

Required argument

prepend#

Prepend a new value to the specified configuration item.

NAME is the dot-delimited path to the list to be prepended to.

Usage

hpcflow config prepend [OPTIONS] NAME VALUE

Options

--json#

Interpret VALUE as a JSON string.

Arguments

NAME#

Required argument

VALUE#

Required argument

set#

Set and save the value of the specified configuration item.

Usage

hpcflow config set [OPTIONS] NAME VALUE

Options

--json#

Interpret VALUE as a JSON string.

Arguments

NAME#

Required argument

VALUE#

Required argument
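
For example, setting a simple string value, and setting a structured value via a JSON string (machine and some_item are hypothetical item names):

hpcflow config set machine my-cluster
hpcflow config set --json some_item '{"key": 1}'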

unset#

Unset and save the value of the specified configuration item.

Usage

hpcflow config unset [OPTIONS] NAME

Arguments

NAME#

Required argument

update#

Update a map-like value in the configuration.

NAME is the dot-delimited path to the map to be updated.

Usage

hpcflow config update [OPTIONS] NAME VALUE

Options

--json#

Interpret VALUE as a JSON string.

Arguments

NAME#

Required argument

VALUE#

Required argument

data#

Interact with builtin demo data files.

Usage

hpcflow data [OPTIONS] COMMAND [ARGS]...

Options

-l, --list#

Print available example data files, and whether they are cached.

cache#

Ensure a demo data file is in the demo data cache.

Usage

hpcflow data cache [OPTIONS] FILE_NAME

Options

--all#

Cache all demo data files.

--exist-ok, --exist-not-ok#

Whether to raise an exception if the file is already cached.

Arguments

FILE_NAME#

Required argument

copy#

Copy a demo data file to the specified location.

Usage

hpcflow data copy [OPTIONS] FILE_NAME DESTINATION

Arguments

FILE_NAME#

Required argument

DESTINATION#

Required argument
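
For example, to copy a demo data file into the current directory (example_file.txt is a hypothetical file name; use the -l/--list flag of the data command to see what is available):

hpcflow data copy example_file.txt .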

install-cache#

Copy pre-existing cached data to the correct location.

Usage

hpcflow data install-cache [OPTIONS] SOURCE

Options

--overwrite#

If True, overwrite existing items in the cache directory; otherwise, raise an error if any item to be copied already exists. Default is False.

Arguments

SOURCE#

Required argument

purge#

Delete the cache of a demo data file.

Usage

hpcflow data purge [OPTIONS] FILE_NAME

Options

--all#

Delete all demo data files.

--exist-ok, --not-exist-ok#

Whether to raise an exception if the file does not exist. False by default (i.e. do not raise if the file does not exist).

Arguments

FILE_NAME#

Required argument

recache#

Purge and then re-cache a demo data file.

Usage

hpcflow data recache [OPTIONS] FILE_NAME

Options

--all#

Recache all demo data files.

--exist-ok, --not-exist-ok#

Whether to raise an exception if the file does not exist. False by default (i.e. do not raise if the file does not exist).

Arguments

FILE_NAME#

Required argument

demo-software#

Usage

hpcflow demo-software [OPTIONS] COMMAND [ARGS]...

doSomething#

Usage

hpcflow demo-software doSomething [OPTIONS]

Options

-i1, --infile1 <infile1>#

Required

-i2, --infile2 <infile2>#

Required

-v, --value <value>#
-o, --out <out>#

demo-workflow#

Interact with builtin demo workflows.

Usage

hpcflow demo-workflow [OPTIONS] COMMAND [ARGS]...

Options

-l, --list#

Print available builtin demo workflows.

copy#

Usage

hpcflow demo-workflow copy [OPTIONS] WORKFLOW_NAME DESTINATION

Options

--doc, --no-doc#

Arguments

WORKFLOW_NAME#

Required argument

DESTINATION#

Required argument

go#

Usage

hpcflow demo-workflow go [OPTIONS] WORKFLOW_NAME

Options

--format <format>#

If specified, one of “json” or “yaml”. This forces parsing from a particular format.

Options:

yaml | json

--path <path>#

The directory in which the workflow will be generated. If not specified, the config item default_workflow_path will be used; if that is not set, the current directory is used.

--name <name>#

The name of the workflow. If specified, the workflow directory will be path joined with name. If not specified, the workflow template name will be used, in combination with a date-timestamp.

--name-timestamp, --name-no-timestamp#

If True, suffix the workflow name with a date-timestamp. A default value can be set with the config item workflow_name_add_timestamp; if that is not set, the default is True.

--name-dir, --name-no-dir#

If True, and --name-timestamp is also True, the workflow directory name will be just the date-timestamp, and will be contained within a parent directory corresponding to the workflow name. A default value can be set with the config item workflow_name_use_dir; if that is not set, the default is False.

--overwrite#

If True and the workflow directory (path + name) already exists, the existing directory will be overwritten.

--store <store>#

The persistent store type to use.

Options:

zarr | zip | json

--ts-fmt <ts_fmt>#

The datetime format to use for storing datetimes. Datetimes are always stored in UTC (because Numpy does not store time zone info), so this should not include a time zone name.

--ts-name-fmt <ts_name_fmt>#

The datetime format to use when generating the workflow name, where it includes a timestamp.

-v, --var <variables>#

Workflow template variable value to be substituted into the template file or string. Multiple variable values can be specified.

--js-parallelism <js_parallelism>#

If True, allow multiple jobscripts to execute simultaneously. If ‘scheduled’/’direct’, only allow simultaneous execution of scheduled/direct jobscripts. An error is raised if this is set to True, ‘scheduled’, or ‘direct’ but the store type does not support the jobscript_parallelism feature. If not set, jobscript parallelism is used if the store type supports it, for scheduled jobscripts only.

--wait#

If True, this command will block until the workflow execution is complete.

--add-to-known, --no-add-to-known#

If True, add this submission to the known-submissions file.

--print-idx#

If True, print the submitted jobscript indices for each submission index.

--tasks <tasks>#

Comma-separated list of task indices to include in this submission. By default, all tasks are included.

--cancel#

Immediately cancel the submission. Useful for testing and benchmarking.

--status, --no-status#

If True, display a live status to track submission progress.

--quiet <quiet>#

If True, do not print anything about workflow submission.

Arguments

WORKFLOW_NAME#

Required argument

make#

Usage

hpcflow demo-workflow make [OPTIONS] WORKFLOW_NAME

Options

--format <format>#

If specified, one of “json” or “yaml”. This forces parsing from a particular format.

Options:

yaml | json

--path <path>#

The directory in which the workflow will be generated. If not specified, the config item default_workflow_path will be used; if that is not set, the current directory is used.

--name <name>#

The name of the workflow. If specified, the workflow directory will be path joined with name. If not specified, the workflow template name will be used, in combination with a date-timestamp.

--name-timestamp, --name-no-timestamp#

If True, suffix the workflow name with a date-timestamp. A default value can be set with the config item workflow_name_add_timestamp; if that is not set, the default is True.

--name-dir, --name-no-dir#

If True, and --name-timestamp is also True, the workflow directory name will be just the date-timestamp, and will be contained within a parent directory corresponding to the workflow name. A default value can be set with the config item workflow_name_use_dir; if that is not set, the default is False.

--overwrite#

If True and the workflow directory (path + name) already exists, the existing directory will be overwritten.

--store <store>#

The persistent store type to use.

Options:

zarr | zip | json

--ts-fmt <ts_fmt>#

The datetime format to use for storing datetimes. Datetimes are always stored in UTC (because Numpy does not store time zone info), so this should not include a time zone name.

--ts-name-fmt <ts_name_fmt>#

The datetime format to use when generating the workflow name, where it includes a timestamp.

-v, --var <variables>#

Workflow template variable value to be substituted into the template file or string. Multiple variable values can be specified.

--status, --no-status#

If True, display a live status to track workflow creation progress.

--add-submission#

If True, add a submission to the workflow (but do not submit).

Arguments

WORKFLOW_NAME#

Required argument

show#

Usage

hpcflow demo-workflow show [OPTIONS] WORKFLOW_NAME

Options

--syntax, --no-syntax#
--doc, --no-doc#

Arguments

WORKFLOW_NAME#

Required argument

env#

Configure execution environments.

Usage

hpcflow env [OPTIONS] COMMAND [ARGS]...

add#

Add a simple environment definition.

Usage

hpcflow env add [OPTIONS] NAME

Options

--use-current#
--setup <setup>#
--env-source-file <env_source_file>#

The environment source file to save the environment to, if specified.

--file-name <file_name>#

The file name of the environment source file within the app config directory to save the environment to, if --env-source-file is not provided.

--replace, --no-replace#

If True, replace an existing environment with the same name and specifiers.

Arguments

NAME#

Required argument

info#

Retrieve the value of an environment attribute. If multiple environments match, the attribute value of each will appear on a separate line.

Usage

hpcflow env info [OPTIONS] ATTRIBUTE [ID]

Options

-n, --name <name>#
-l, --label <label>#
-s, --specifier <specifier>#

Arguments

ATTRIBUTE#

Required argument

ID#

Optional argument

list#

List available environments.

Usage

hpcflow env list [OPTIONS]

remove#

Remove an environment definition.

Usage

hpcflow env remove [OPTIONS] [ID]

Options

-n, --name <name>#
-l, --label <label>#
-s, --specifier <specifier>#

Arguments

ID#

Optional argument

setup#

Set up one or more environments according to some sensible grouping.

Usage

hpcflow env setup [OPTIONS] COMMAND [ARGS]...

Options

--env-source-file <env_source_file>#

python#

Configure environments with python_script executables.

Usage

hpcflow env setup python [OPTIONS]

Options

-n, --name <name>#

In addition to python_env, set up these other named environments (each suffixed by “_env”), also with a python_script executable.

--use-current, --no-use-current#

Use the currently active conda-like or Python virtual environment to add a python_script executable to the environment.

--env-source-file <env_source_file>#

The environment source file to save the environment to, if specified.

--file-name <file_name>#

The file name of the environment source file within the app config directory to save the environment to, if --env-source-file is not provided.

--replace, --no-replace#

If True, replace an existing environment with the same name and specifiers.

show#

Show an environment definition.

Usage

hpcflow env show [OPTIONS] [ID]

Options

-n, --name <name>#
-l, --label <label>#
-s, --specifier <specifier>#

Arguments

ID#

Optional argument

go#

Generate and submit a new hpcFlow workflow.

TEMPLATE_FILE_OR_STR is either a path to a template file in YAML or JSON format, or a YAML/JSON string.

Usage

hpcflow go [OPTIONS] TEMPLATE_FILE_OR_STR

Options

--string#

If set, interpret TEMPLATE_FILE_OR_STR as a YAML/JSON string rather than a file path.

--format <format>#

If specified, one of “json” or “yaml”. This forces parsing from a particular format.

Options:

yaml | json

--path <path>#

The directory in which the workflow will be generated. If not specified, the config item default_workflow_path will be used; if that is not set, the current directory is used.

--name <name>#

The name of the workflow. If specified, the workflow directory will be path joined with name. If not specified, the workflow template name will be used, in combination with a date-timestamp.

--name-timestamp, --name-no-timestamp#

If True, suffix the workflow name with a date-timestamp. A default value can be set with the config item workflow_name_add_timestamp; if that is not set, the default is True.

--name-dir, --name-no-dir#

If True, and --name-timestamp is also True, the workflow directory name will be just the date-timestamp, and will be contained within a parent directory corresponding to the workflow name. A default value can be set with the config item workflow_name_use_dir; if that is not set, the default is False.

--overwrite#

If True and the workflow directory (path + name) already exists, the existing directory will be overwritten.

--store <store>#

The persistent store type to use.

Options:

zarr | zip | json

--ts-fmt <ts_fmt>#

The datetime format to use for storing datetimes. Datetimes are always stored in UTC (because Numpy does not store time zone info), so this should not include a time zone name.

--ts-name-fmt <ts_name_fmt>#

The datetime format to use when generating the workflow name, where it includes a timestamp.

-v, --var <variables>#

Workflow template variable value to be substituted into the template file or string. Multiple variable values can be specified.

--js-parallelism <js_parallelism>#

If True, allow multiple jobscripts to execute simultaneously. If ‘scheduled’/’direct’, only allow simultaneous execution of scheduled/direct jobscripts. An error is raised if this is set to True, ‘scheduled’, or ‘direct’ but the store type does not support the jobscript_parallelism feature. If not set, jobscript parallelism is used if the store type supports it, for scheduled jobscripts only.

--wait#

If True, this command will block until the workflow execution is complete.

--add-to-known, --no-add-to-known#

If True, add this submission to the known-submissions file.

--print-idx#

If True, print the submitted jobscript indices for each submission index.

--tasks <tasks>#

Comma-separated list of task indices to include in this submission. By default, all tasks are included.

--cancel#

Immediately cancel the submission. Useful for testing and benchmarking.

--status, --no-status#

If True, display a live status to track submission progress.

--quiet <quiet>#

If True, do not print anything about workflow submission.

Arguments

TEMPLATE_FILE_OR_STR#

Required argument
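
For example, the following sketches some common invocations, assuming a hypothetical template file workflow.yaml, a hypothetical template variable named N, and that -v/--var accepts a variable name followed by its value:

hpcflow go workflow.yaml --wait
hpcflow go workflow.yaml --name my_analysis --overwrite
hpcflow go workflow.yaml -v N 10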

helper#

Usage

hpcflow helper [OPTIONS] COMMAND [ARGS]...

clear#

Remove the PID file (and kill the helper process if it exists). This should not normally be needed.

Usage

hpcflow helper clear [OPTIONS]

log-path#

Get the path to the helper log file (may not exist).

Usage

hpcflow helper log-path [OPTIONS]

pid#

Get the process ID of the running helper, if running.

Usage

hpcflow helper pid [OPTIONS]

Options

-f, --file#

restart#

Restart (or start) the helper process.

Usage

hpcflow helper restart [OPTIONS]

Options

--timeout <timeout>#

Helper timeout in seconds.

Default:

3600

--timeout-check-interval <timeout_check_interval>#

Interval between testing if the timeout has been exceeded in seconds.

Default:

60

--watch-interval <watch_interval>#

Polling interval for watching workflows (and the workflow watch list) in seconds.

Default:

10

run#

Run the helper functionality.

Usage

hpcflow helper run [OPTIONS]

Options

--timeout <timeout>#

Helper timeout in seconds.

Default:

3600

--timeout-check-interval <timeout_check_interval>#

Interval between testing if the timeout has been exceeded in seconds.

Default:

60

--watch-interval <watch_interval>#

Polling interval for watching workflows (and the workflow watch list) in seconds.

Default:

10

start#

Start the helper process.

Usage

hpcflow helper start [OPTIONS]

Options

--timeout <timeout>#

Helper timeout in seconds.

Default:

3600

--timeout-check-interval <timeout_check_interval>#

Interval between testing if the timeout has been exceeded in seconds.

Default:

60

--watch-interval <watch_interval>#

Polling interval for watching workflows (and the workflow watch list) in seconds.

Default:

10

stop#

Stop the helper process, if it is running.

Usage

hpcflow helper stop [OPTIONS]

uptime#

Get the uptime of the helper process, if it is running.

Usage

hpcflow helper uptime [OPTIONS]

watch-list#

Get the list of workflows currently being watched.

Usage

hpcflow helper watch-list [OPTIONS]

watch-list-path#

Get the path to the workflow watch list file (may not exist).

Usage

hpcflow helper watch-list-path [OPTIONS]

internal#

Internal CLI to be invoked by scripts generated by the app.

Usage

hpcflow internal [OPTIONS] COMMAND [ARGS]...

get-invoc-cmd#

Get the invocation command for this app instance.

Usage

hpcflow internal get-invoc-cmd [OPTIONS]

noop#

Used only in CLI tests.

Usage

hpcflow internal noop [OPTIONS]

Options

--raise#
--click-exit-code <click_exit_code>#
--sleep <sleep>#

workflow#

Usage

hpcflow internal workflow [OPTIONS] PATH COMMAND [ARGS]...

Arguments

PATH#

Required argument

execute-combined-runs#

Usage

hpcflow internal workflow PATH execute-combined-runs [OPTIONS] SUBMISSION_IDX
                                                     JOBSCRIPT_IDX

Arguments

SUBMISSION_IDX#

Required argument

JOBSCRIPT_IDX#

Required argument

execute-run#

Usage

hpcflow internal workflow PATH execute-run [OPTIONS] SUBMISSION_IDX
                                           JOBSCRIPT_IDX BLOCK_IDX
                                           BLOCK_ACTION_IDX RUN_ID

Arguments

SUBMISSION_IDX#

Required argument

JOBSCRIPT_IDX#

Required argument

BLOCK_IDX#

Required argument

BLOCK_ACTION_IDX#

Required argument

RUN_ID#

Required argument

save-parameter#

Usage

hpcflow internal workflow PATH save-parameter [OPTIONS] NAME VALUE EAR_ID
                                              CMD_IDX

Options

--stderr#

Arguments

NAME#

Required argument

VALUE#

Required argument

EAR_ID#

Required argument

CMD_IDX#

Required argument

make#

Generate a new hpcFlow workflow.

TEMPLATE_FILE_OR_STR is either a path to a template file in YAML or JSON format, or a YAML/JSON string.

Usage

hpcflow make [OPTIONS] TEMPLATE_FILE_OR_STR

Options

--string#

If set, interpret TEMPLATE_FILE_OR_STR as a YAML/JSON string rather than a file path.

--format <format>#

If specified, one of “json” or “yaml”. This forces parsing from a particular format.

Options:

yaml | json

--path <path>#

The directory in which the workflow will be generated. If not specified, the config item default_workflow_path will be used; if that is not set, the current directory is used.

--name <name>#

The name of the workflow. If specified, the workflow directory will be path joined with name. If not specified, the workflow template name will be used, in combination with a date-timestamp.

--name-timestamp, --name-no-timestamp#

If True, suffix the workflow name with a date-timestamp. A default value can be set with the config item workflow_name_add_timestamp; if that is not set, the default is True.

--name-dir, --name-no-dir#

If True, and --name-timestamp is also True, the workflow directory name will be just the date-timestamp, and will be contained within a parent directory corresponding to the workflow name. A default value can be set with the config item workflow_name_use_dir; if that is not set, the default is False.

--overwrite#

If True and the workflow directory (path + name) already exists, the existing directory will be overwritten.

--store <store>#

The persistent store type to use.

Options:

zarr | zip | json

--ts-fmt <ts_fmt>#

The datetime format to use for storing datetimes. Datetimes are always stored in UTC (because Numpy does not store time zone info), so this should not include a time zone name.

--ts-name-fmt <ts_name_fmt>#

The datetime format to use when generating the workflow name, where it includes a timestamp.

-v, --var <variables>#

Workflow template variable value to be substituted into the template file or string. Multiple variable values can be specified.

--status, --no-status#

If True, display a live status to track workflow creation progress.

--add-submission#

If True, add a submission to the workflow (but do not submit).

Arguments

TEMPLATE_FILE_OR_STR#

Required argument
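
For example, to generate (but not submit) a workflow from a hypothetical template file, using the zip store type and adding an unsubmitted submission:

hpcflow make workflow.yaml --store zip --add-submission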

manage#

Infrequent app management tasks.

App config is not loaded.

Usage

hpcflow manage [OPTIONS] COMMAND [ARGS]...

cache-all#

Cache all cacheable files: data files and programs.

Usage

hpcflow manage cache-all [OPTIONS]

Options

--exist-ok, --exist-not-ok#

Whether to raise an exception if the file is already cached.

clear-cache#

Delete the app cache directory.

Usage

hpcflow manage clear-cache [OPTIONS]

Options

--hostname#

clear-data-cache#

Delete the app demo data cache directory.

Usage

hpcflow manage clear-data-cache [OPTIONS]

clear-known-subs#

Delete the contents of the known-submissions file.

Usage

hpcflow manage clear-known-subs [OPTIONS]

clear-program-cache#

Delete the app program cache directory.

Usage

hpcflow manage clear-program-cache [OPTIONS]

clear-temp-dir#

Delete all files in the user runtime directory.

Usage

hpcflow manage clear-temp-dir [OPTIONS]

get-config-path#

Print the config file path without loading the config.

This can be used instead of hpcflow open config --path if the config file is invalid, because this command does not load the config.

Usage

hpcflow manage get-config-path [OPTIONS]

Options

--config-dir <config_dir>#

The directory containing the config file whose path is to be returned.

purge-all#

Delete all cacheable files from the cache: data files and programs.

Usage

hpcflow manage purge-all [OPTIONS]

Options

--exist-ok, --not-exist-ok#

Whether to raise an exception if the file does not exist. False by default (i.e. do not raise if the file does not exist).

recache-all#

Purge and then re-cache all cacheable files: data files and programs.

Usage

hpcflow manage recache-all [OPTIONS]

Options

--exist-ok, --not-exist-ok#

Whether to raise an exception if the file does not exist. False by default (i.e. do not raise if the file does not exist).

reset-config#

Reset the configuration file to defaults.

This can be used if the current configuration file is invalid.

Usage

hpcflow manage reset-config [OPTIONS]

Options

--config-dir <config_dir>#

The directory containing the config file to be reset.

open#

Open a file (for example hpcFlow’s log file) using the default application.

Usage

hpcflow open [OPTIONS] COMMAND [ARGS]...

config#

Open the hpcFlow config file, or retrieve its path.

Usage

hpcflow open config [OPTIONS]

Options

--path#

data-cache-dir#

Usage

hpcflow open data-cache-dir [OPTIONS]

Options

--path#

env-source#

Open a named environment sources file, or the first one.

Usage

hpcflow open env-source [OPTIONS]

Options

--name <name>#
--path#

known-subs#

Open the known-submissions text file.

Usage

hpcflow open known-subs [OPTIONS]

Options

--path#

log#

Open the hpcFlow log file.

Usage

hpcflow open log [OPTIONS]

Options

--path#

program-cache-dir#

Usage

hpcflow open program-cache-dir [OPTIONS]

Options

--path#

user-cache-dir#

Usage

hpcflow open user-cache-dir [OPTIONS]

Options

--path#

user-cache-hostname-dir#

Usage

hpcflow open user-cache-hostname-dir [OPTIONS]

Options

--path#

user-data-dir#

Usage

hpcflow open user-data-dir [OPTIONS]

Options

--path#

user-data-hostname-dir#

Usage

hpcflow open user-data-hostname-dir [OPTIONS]

Options

--path#

user-runtime-dir#

Usage

hpcflow open user-runtime-dir [OPTIONS]

Options

--path#

workflow#

Open a workflow directory using, for example, File Explorer on Windows.

Usage

hpcflow open workflow [OPTIONS] WORKFLOW_REF

Options

--path#
-r, --ref-type <ref_type>#

How to interpret a reference, as an ID, a path, or to guess.

Options:

assume-id | id | path

Arguments

WORKFLOW_REF#

Required argument

program#

Interact with builtin programs.

Usage

hpcflow program [OPTIONS] COMMAND [ARGS]...

Options

-l, --list#

Print available built-in programs, and whether they are cached.

cache#

Ensure a program file is in the program cache.

Usage

hpcflow program cache [OPTIONS] FILE_NAME

Options

--all#

Cache all built-in programs.

--exist-ok, --exist-not-ok#

Whether to raise an exception if the file is already cached.

Arguments

FILE_NAME#

Required argument

copy#

Copy a builtin program to the specified location.

Usage

hpcflow program copy [OPTIONS] FILE_NAME DESTINATION

Arguments

FILE_NAME#

Required argument

DESTINATION#

Required argument

install-cache#

Copy pre-existing cached programs to the correct location.

Usage

hpcflow program install-cache [OPTIONS] SOURCE

Options

--overwrite#

If True, overwrite existing items in the cache directory; otherwise, raise an error if any item to be copied already exists. Default is False.

Arguments

SOURCE#

Required argument

purge#

Delete the cache of a program file.

Usage

hpcflow program purge [OPTIONS] FILE_NAME

Options

--all#

Delete all program files.

--exist-ok, --not-exist-ok#

Whether to raise an exception if the file does not exist. False by default (i.e. do not raise if the file does not exist).

Arguments

FILE_NAME#

Required argument

recache#

Purge and then re-cache a program.

Usage

hpcflow program recache [OPTIONS] FILE_NAME

Options

--all#

Recache all program files.

--exist-ok, --not-exist-ok#

Whether to raise an exception if the file does not exist. False by default (i.e. do not raise if the file does not exist).

Arguments

FILE_NAME#

Required argument

rechunk#

Rechunk metadata/runs and parameters/base arrays.

WORKFLOW_REF is the local ID (as provided by the show command) or the workflow path.

Usage

hpcflow rechunk [OPTIONS] WORKFLOW_REF

Options

-r, --ref-type <ref_type>#

How to interpret a reference, as an ID, a path, or to guess.

Options:

assume-id | id | path

--backup, --no-backup#

First copy a backup of the array to a directory ending in .bak.

--chunk-size <chunk_size>#

New chunk size (array items per chunk). If unset (the default), the array will be rechunked to a single-chunk array (i.e. with a chunk size equal to the array’s shape).

--status, --no-status#

If True, display a live status to track rechunking progress.

Arguments

WORKFLOW_REF#

Required argument
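
For example, to rechunk the workflow with hypothetical local ID 12 to 1000 items per chunk, skipping the backup:

hpcflow rechunk 12 --chunk-size 1000 --no-backup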

show#

Show information about running and recently active workflows.

Usage

hpcflow show [OPTIONS]

Options

-r, --max-recent <max_recent>#

The maximum number of inactive submissions to show.

--no-update#

If True, do not update the known-submissions file to remove workflows that are no longer running.

-f, --full#

Allow multiple lines per workflow submission.

--legend#

Display the legend for the show command output.
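
For example, to show up to 20 inactive submissions with full (multi-line) output, or to display the legend for the output:

hpcflow show --max-recent 20 --full
hpcflow show --legend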

submission#

Submission-related queries.

Usage

hpcflow submission [OPTIONS] COMMAND [ARGS]...

Options

--os-info#

Print information about the operating system.

get-known#

Print known-submissions information as a formatted Python object.

Usage

hpcflow submission get-known [OPTIONS]

Options

--json#

Show only JSON-compatible information, without formatting.

scheduler#

Usage

hpcflow submission scheduler [OPTIONS] SCHEDULER_NAME COMMAND [ARGS]...

Arguments

SCHEDULER_NAME#

Required argument

get-login-nodes#

Usage

hpcflow submission scheduler SCHEDULER_NAME get-login-nodes [OPTIONS]

shell-info#

Show information about the specified shell, such as the version.

Usage

hpcflow submission shell-info [OPTIONS] {bash|powershell|wsl+bash|wsl}

Options

--exclude-os#

Arguments

SHELL_NAME#

Required argument
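
For example, to show version information for the bash shell:

hpcflow submission shell-info bash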

tc#

For showing template component data.

Usage

hpcflow tc [OPTIONS]

test#

Run hpcFlow test suite.

PYTEST_ARGS are arguments passed on to Pytest.

Usage

hpcflow test [OPTIONS] [PYTEST_ARGS]...

Options

--file <file>#

Paths to test files or directories to include in the Pytest run. If relative paths are provided, they are assumed to be relative to the root ‘tests’ directory (so that passing --file ‘.’ runs all tests). If not provided, all tests are run. Multiple are allowed.

Arguments

PYTEST_ARGS#

Optional argument(s)
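
For example, to run all tests (as noted above, a relative --file path is resolved against the root ‘tests’ directory):

hpcflow test --file .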

unzip#

Generate a copy of the specified zipped workflow in the submittable Zarr format in the current working directory.

WORKFLOW_PATH is the path of the zip file to unzip.

Usage

hpcflow unzip [OPTIONS] WORKFLOW_PATH

Options

--path <path>#

Path at which to create the new unzipped workflow. If this is an existing directory, the new workflow directory will be created within this directory. Otherwise, this path will represent the new workflow directory path.

--log <log>#

Path to a log file to use during unzipping.

Arguments

WORKFLOW_PATH#

Required argument
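
For example, to unzip a zipped workflow into an existing directory (my_workflow.zip is a hypothetical file name):

hpcflow unzip my_workflow.zip --path ./workflows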

workflow#

Interact with existing hpcFlow workflows.

WORKFLOW_REF is the path to, or local ID of, an existing workflow.

Usage

hpcflow workflow [OPTIONS] WORKFLOW_REF COMMAND [ARGS]...

Options

-r, --ref-type <ref_type>#

How to interpret a reference, as an ID, a path, or to guess.

Options:

assume-id | id | path

Arguments

WORKFLOW_REF#

Required argument

abort-run#

Abort the specified run.

Usage

hpcflow workflow WORKFLOW_REF abort-run [OPTIONS]

Options

--submission <submission>#
--task <task>#
--element <element>#

add-submission#

Add a new submission to the workflow, but do not submit.

Usage

hpcflow workflow WORKFLOW_REF add-submission [OPTIONS]

Options

--js-parallelism <js_parallelism>#

If True, allow multiple jobscripts to execute simultaneously. If ‘scheduled’/’direct’, only allow simultaneous execution of scheduled/direct jobscripts. An error is raised if this is set to True, ‘scheduled’, or ‘direct’ but the store type does not support the jobscript_parallelism feature. If not set, jobscript parallelism is used if the store type supports it, for scheduled jobscripts only.

--tasks <tasks>#

Comma-separated list of task indices to include in this submission. By default, all tasks are included.

--force-array#

Used to force the use of job arrays, even if the scheduler does not support it. This is provided for testing purposes only.

--status, --no-status#

If True, display a live status to track submission progress.

get-all-params#

Get all parameter values.

Usage

hpcflow workflow WORKFLOW_REF get-all-params [OPTIONS]

get-param#

Get a parameter value by data index.

Usage

hpcflow workflow WORKFLOW_REF get-param [OPTIONS] INDEX

Arguments

INDEX#

Required argument

get-param-source#

Get a parameter source by data index.

Usage

hpcflow workflow WORKFLOW_REF get-param-source [OPTIONS] INDEX

Arguments

INDEX#

Required argument

get-process-ids#

Print jobscript process IDs from all submissions of this workflow.

Usage

hpcflow workflow WORKFLOW_REF get-process-ids [OPTIONS]

get-scheduler-job-ids#

Print jobscript scheduler job IDs from all submissions of this workflow.

Usage

hpcflow workflow WORKFLOW_REF get-scheduler-job-ids [OPTIONS]

is-param-set#

Check if a parameter specified by data index is set.

Usage

hpcflow workflow WORKFLOW_REF is-param-set [OPTIONS] INDEX

Arguments

INDEX#

Required argument

list-jobscripts#

Print a table listing jobscripts and associated information from the specified submission.

Usage

hpcflow workflow WORKFLOW_REF list-jobscripts [OPTIONS]

Options

--sub-idx <sub_idx>#

Submission index whose jobscripts are to be shown.

--max-js <max_js>#

Display up to this jobscript only.

--jobscripts <jobscripts>#

Comma-separated list of jobscript indices to show.

--width <width>#

Width in characters of the table to print.

list-task-jobscripts#

Print a table listing tasks and their associated jobscripts from the specified submission.

Usage

hpcflow workflow WORKFLOW_REF list-task-jobscripts [OPTIONS]

Options

--sub-idx <sub_idx>#

Submission index whose tasks are to be shown.

--max-js <max_js>#

Include jobscripts up to this jobscript only.

--task-names <task_names>#

Comma-separated list of task name sub-strings to show.

--width <width>#

Width in characters of the table to print.

rechunk#

Rechunk metadata/runs and parameters/base arrays.

Usage

hpcflow workflow WORKFLOW_REF rechunk [OPTIONS]

Options

--backup, --no-backup#

First copy a backup of the array to a directory ending in .bak.

--chunk-size <chunk_size>#

New chunk size (array items per chunk). If unset (the default), the array will be rechunked to a single-chunk array (i.e. with a chunk size equal to the array’s shape).

--status, --no-status#

If True, display a live status to track rechunking progress.

rechunk-parameter-base#

Rechunk the parameters/base array.

Usage

hpcflow workflow WORKFLOW_REF rechunk-parameter-base [OPTIONS]

Options

--backup, --no-backup#

First copy a backup of the array to a directory ending in .bak.

--chunk-size <chunk_size>#

New chunk size (array items per chunk). If unset (the default), the array will be rechunked to a single-chunk array (i.e. with a chunk size equal to the array’s shape).

--status, --no-status#

If True, display a live status to track rechunking progress.

rechunk-runs#

Rechunk the metadata/runs array.

Usage

hpcflow workflow WORKFLOW_REF rechunk-runs [OPTIONS]

Options

--backup, --no-backup#

First copy a backup of the array to a directory ending in .bak.

--chunk-size <chunk_size>#

New chunk size (array items per chunk). If unset (the default), the array will be rechunked to a single-chunk array (i.e. with a chunk size equal to the array’s shape).

--status, --no-status#

If True, display a live status to track rechunking progress.

show-all-status#

Show the submission status of all workflow EARs.

Usage

hpcflow workflow WORKFLOW_REF show-all-status [OPTIONS]

sub#

Interact with existing hpcFlow workflow submissions.

SUB_IDX is the submission index.

Usage

hpcflow workflow WORKFLOW_REF sub [OPTIONS] SUB_IDX COMMAND [ARGS]...

Arguments

SUB_IDX#

Required argument

get-active-jobscripts#

Show active jobscripts and their jobscript-element states.

Usage

hpcflow workflow WORKFLOW_REF sub SUB_IDX get-active-jobscripts [OPTIONS]

get-process-ids#

Print jobscript process IDs.

Usage

hpcflow workflow WORKFLOW_REF sub SUB_IDX get-process-ids [OPTIONS]

get-scheduler-job-ids#

Print jobscript scheduler job IDs.

Usage

hpcflow workflow WORKFLOW_REF sub SUB_IDX get-scheduler-job-ids [OPTIONS]

js#

Interact with existing hpcFlow workflow submission jobscripts.

JS_IDX is the jobscript index within the submission object.

Usage

hpcflow workflow WORKFLOW_REF sub SUB_IDX js [OPTIONS] JS_IDX COMMAND
                                             [ARGS]...

Arguments

JS_IDX#

Required argument

deps#

Get jobscript dependencies.

Usage

hpcflow workflow WORKFLOW_REF sub SUB_IDX js JS_IDX deps [OPTIONS]

path#

Get the file path to the jobscript.

Usage

hpcflow workflow WORKFLOW_REF sub SUB_IDX js JS_IDX path [OPTIONS]

res#

Get resources associated with this jobscript.

Usage

hpcflow workflow WORKFLOW_REF sub SUB_IDX js JS_IDX res [OPTIONS]

show#

Show the jobscript file.

Usage

hpcflow workflow WORKFLOW_REF sub SUB_IDX js JS_IDX show [OPTIONS]

stderr#

Print the contents of the standard error stream file.

Usage

hpcflow workflow WORKFLOW_REF sub SUB_IDX js JS_IDX stderr [OPTIONS]

Options

--array-idx <array_idx>#

For array jobs only, the job array index whose standard stream is to be printed.

stdout#

Print the contents of the standard output stream file.

Usage

hpcflow workflow WORKFLOW_REF sub SUB_IDX js JS_IDX stdout [OPTIONS]

Options

--array-idx <array_idx>#

For array jobs only, the job array index whose standard stream is to be printed.

list-jobscripts#

Print a table listing jobscripts and associated information.

Usage

hpcflow workflow WORKFLOW_REF sub SUB_IDX list-jobscripts [OPTIONS]

Options

--max-js <max_js>#

Display up to this jobscript only.

--jobscripts <jobscripts>#

Comma-separated list of jobscript indices to show.

--width <width>#

Width in characters of the table to print.

list-task-jobscripts#

Print a table listing tasks and their associated jobscripts.

Usage

hpcflow workflow WORKFLOW_REF sub SUB_IDX list-task-jobscripts [OPTIONS]

Options

--max-js <max_js>#

Include jobscripts up to this jobscript only.

--task-names <task_names>#

Comma-separated list of task name sub-strings to show.

--width <width>#

Width in characters of the table to print.

needs-submit#

Check if this submission needs submitting.

Usage

hpcflow workflow WORKFLOW_REF sub SUB_IDX needs-submit [OPTIONS]

outstanding-js#

Get a list of jobscript indices that have not yet been submitted.

Usage

hpcflow workflow WORKFLOW_REF sub SUB_IDX outstanding-js [OPTIONS]

status#

Get the submission status.

Usage

hpcflow workflow WORKFLOW_REF sub SUB_IDX status [OPTIONS]

submitted-js#

Get a list of jobscript indices that have been submitted.

Usage

hpcflow workflow WORKFLOW_REF sub SUB_IDX submitted-js [OPTIONS]

submit#

Submit the workflow.

Usage

hpcflow workflow WORKFLOW_REF submit [OPTIONS]

Options

--js-parallelism <js_parallelism>#

If True, allow multiple jobscripts to execute simultaneously. If ‘scheduled’/’direct’, only allow simultaneous execution of scheduled/direct jobscripts. An error is raised if this is set to True, ‘scheduled’, or ‘direct’ but the store type does not support the jobscript_parallelism feature. If not set, jobscript parallelism is used if the store type supports it, for scheduled jobscripts only.

--wait#

If True, this command will block until the workflow execution is complete.

--add-to-known, --no-add-to-known#

If True, add this submission to the known-submissions file.

--print-idx#

If True, print the submitted jobscript indices for each submission index.

--tasks <tasks>#

Comma-separated list of task indices to include in this submission. By default, all tasks are included.

--cancel#

Immediately cancel the submission. Useful for testing and benchmarking.

--status, --no-status#

If True, display a live status to track submission progress.

--quiet <quiet>#

If True, do not print anything about workflow submission.

unzip#

Generate a copy of the zipped workflow in the submittable Zarr format in the current working directory.

Usage

hpcflow workflow WORKFLOW_REF unzip [OPTIONS]

Options

--path <path>#

Path at which to create the new unzipped workflow. If this is an existing directory, the new workflow directory will be created within this directory. Otherwise, this path will represent the new workflow directory path.

--log <log>#

Path to a log file to use during unzipping.

wait#

Usage

hpcflow workflow WORKFLOW_REF wait [OPTIONS]

Options

-j, --jobscripts <jobscripts>#

Wait for only these jobscripts to finish. Jobscripts should be specified by their submission index, followed by a colon, followed by a comma-separated list of jobscript indices within that submission (no spaces are allowed). To specify jobscripts across multiple submissions, use a semicolon to separate patterns like these, as shown in the example after these options.

--quiet <quiet>#

If True, do not print anything (e.g. when jobscripts have completed).
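
For example, given the pattern format described above, the following waits for jobscripts 1 and 2 of submission 0, and jobscript 0 of submission 1 (my_workflow is a hypothetical workflow reference; the pattern is quoted so the shell does not interpret the semicolon):

hpcflow workflow my_workflow wait -j "0:1,2;1:0"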

zip#

Generate a copy of the workflow in the zip file format in the current working directory.

Usage

hpcflow workflow WORKFLOW_REF zip [OPTIONS]

Options

--path <path>#

Path at which to create the new zipped workflow. If this is an existing directory, the zip file will be created within this directory. Otherwise, this path is assumed to be the full file path to the new zip file.

--overwrite#

If set, any existing file will be overwritten.

--log <log>#

Path to a log file to use during zipping.

--include-execute#
--include-rechunk-backups#

zip#

Generate a copy of the specified workflow in the zip file format in the current working directory.

WORKFLOW_REF is the local ID (as provided by the show command) or the workflow path.

Usage

hpcflow zip [OPTIONS] WORKFLOW_REF

Options

--path <path>#

Path at which to create the new zipped workflow. If this is an existing directory, the zip file will be created within this directory. Otherwise, this path is assumed to be the full file path to the new zip file.

--overwrite#

If set, any existing file will be overwritten.

--log <log>#

Path to a log file to use during zipping.

--include-execute#
--include-rechunk-backups#
-r, --ref-type <ref_type>#

How to interpret a reference, as an ID, a path, or to guess.

Options:

assume-id | id | path

Arguments

WORKFLOW_REF#

Required argument
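
For example, to zip the workflow with hypothetical local ID 12 to a specific file path, overwriting any existing file:

hpcflow zip 12 --path ./archives/my_workflow.zip --overwrite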