hpcflow.app.ResourceSpec#
- class hpcflow.app.ResourceSpec(scope=None, scratch=None, parallel_mode=None, num_cores=None, num_cores_per_node=None, num_threads=None, num_nodes=None, scheduler=None, shell=None, use_job_array=None, max_array_items=None, time_limit=None, scheduler_args=None, shell_args=None, os_name=None, environments=None, SGE_parallel_env=None, SLURM_partition=None, SLURM_num_tasks=None, SLURM_num_tasks_per_node=None, SLURM_num_nodes=None, SLURM_num_cpus_per_task=None)#
Bases: ResourceSpec
Class representing the specification of resource requirements for an action or set of actions.
Notes
os_name is used for retrieving a default shell name and for retrieving the correct Shell class; when using WSL, it should still be nt (i.e. Windows).
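The note above can be sketched as a simple lookup: the default shell name is derived from os_name, so a run under WSL still needs the Windows value "nt". The mapping and helper below are illustrative assumptions for demonstration, not hpcflow's actual Shell lookup.

```python
# Assumed default-shell mapping; hpcflow's real lookup lives in its Shell
# classes and may differ.
DEFAULT_SHELLS = {
    "nt": "powershell",  # Windows -- also the correct os_name when using WSL
    "posix": "bash",
}

def default_shell(os_name: str) -> str:
    """Return an assumed default shell name for the given os_name."""
    try:
        return DEFAULT_SHELLS[os_name]
    except KeyError:
        raise ValueError(f"unsupported os_name: {os_name!r}")

print(default_shell("nt"))  # under WSL, os_name should still be "nt"
```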
- Parameters:
scope (app.ActionScope) – Which scope this applies to.
scratch (str) – Which scratch space to use.
parallel_mode (ParallelMode) – Which parallel mode to use.
num_cores (int) – How many cores to request.
num_cores_per_node (int) – How many cores per compute node to request.
num_threads (int) – How many threads to request.
num_nodes (int) – How many compute nodes to request.
scheduler (str) – Which scheduler to use.
shell (str) – Which system shell to use.
use_job_array (bool) – Whether to use array jobs.
max_array_items (int) – If using array jobs, up to how many items should be in the job array.
time_limit (str) – How long to run for.
scheduler_args (dict[str, Any]) – Additional arguments to pass to the scheduler.
shell_args (dict[str, Any]) – Additional arguments to pass to the shell.
os_name (str) – Which OS to use.
environments (dict) – Which execution environments to use.
SGE_parallel_env (str) – Which SGE parallel environment to request.
SLURM_partition (str) – Which SLURM partition to request.
SLURM_num_tasks (str) – How many SLURM tasks to request.
SLURM_num_tasks_per_node (str) – How many SLURM tasks per compute node to request.
SLURM_num_nodes (str) – How many compute nodes to request.
SLURM_num_cpus_per_task (str) – How many CPU cores to ask for per SLURM task.
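To illustrate where parameters such as SLURM_partition and time_limit end up, the hypothetical helper below renders a plain dict of these fields as SLURM batch directives. This is a sketch of the idea only; hpcflow's real submission logic differs, and the helper name is an assumption.

```python
# Hypothetical helper: map resource-spec fields to SLURM #SBATCH directives.
def slurm_directives(spec: dict) -> list[str]:
    mapping = {
        "SLURM_partition": "--partition",
        "SLURM_num_tasks": "--ntasks",
        "SLURM_num_tasks_per_node": "--ntasks-per-node",
        "SLURM_num_nodes": "--nodes",
        "SLURM_num_cpus_per_task": "--cpus-per-task",
        "time_limit": "--time",
    }
    # Emit one directive per field present in the spec, in mapping order.
    return [f"#SBATCH {flag}={spec[key]}"
            for key, flag in mapping.items() if key in spec]

print(slurm_directives({"SLURM_partition": "compute", "time_limit": "01:00:00"}))
```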
Methods
copy_non_persistent() – Make a non-persistent copy.
from_json_like(json_like, shared_data=None) – Make an instance of this class from JSON (or YAML) data.
make_persistent(workflow, source) – Save to a persistent workflow.
to_dict() – Serialize this object as a dictionary.
to_json_like(dct=None, shared_data=None, exclude=None, path=None) – Serialize this object as an object structure that can be trivially converted to JSON.
Attributes
ALLOWED_PARAMETERS – The names of parameters that may be used when making an instance of this class.
SGE_parallel_env – Which SGE parallel environment to request.
SLURM_num_cpus_per_task – How many CPU cores to ask for per SLURM task.
SLURM_num_nodes – How many compute nodes to request.
SLURM_num_tasks – How many SLURM tasks to request.
SLURM_num_tasks_per_node – How many SLURM tasks per compute node to request.
SLURM_partition – Which SLURM partition to request.
element_set – The element set that will use this resource spec.
environments – Which execution environments to use.
max_array_items – If using array jobs, up to how many items should be in the job array.
normalised_path – Full name of this resource spec.
normalised_resources_path – Standard name of this resource spec.
num_cores – How many cores to request.
num_cores_per_node – How many cores per compute node to request.
num_nodes – How many compute nodes to request.
num_threads – How many threads to request.
os_name – Which OS to use.
parallel_mode – Which parallel mode to use.
scheduler – Which scheduler to use.
scheduler_args – Additional arguments to pass to the scheduler.
scratch – Which scratch space to use.
shell – Which system shell to use.
shell_args – Additional arguments to pass to the shell.
time_limit – How long to run for.
use_job_array – Whether to use array jobs.
workflow – The workflow owning this resource spec.
workflow_template – The workflow template that will use this resource spec.
scope – Which scope this applies to.
- ALLOWED_PARAMETERS = {'SGE_parallel_env', 'SLURM_num_cpus_per_task', 'SLURM_num_nodes', 'SLURM_num_tasks', 'SLURM_num_tasks_per_node', 'SLURM_partition', 'environments', 'max_array_items', 'num_cores', 'num_cores_per_node', 'num_nodes', 'num_threads', 'os_name', 'parallel_mode', 'scheduler', 'scheduler_args', 'scratch', 'shell', 'shell_args', 'time_limit', 'use_job_array'}#
The names of parameters that may be used when making an instance of this class.
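A minimal sketch of the validation this set enables. The set contents are copied from ALLOWED_PARAMETERS above, but check_resource_kwargs is a hypothetical helper, not part of hpcflow's API.

```python
# Copied from ResourceSpec.ALLOWED_PARAMETERS above.
ALLOWED_PARAMETERS = {
    "SGE_parallel_env", "SLURM_num_cpus_per_task", "SLURM_num_nodes",
    "SLURM_num_tasks", "SLURM_num_tasks_per_node", "SLURM_partition",
    "environments", "max_array_items", "num_cores", "num_cores_per_node",
    "num_nodes", "num_threads", "os_name", "parallel_mode", "scheduler",
    "scheduler_args", "scratch", "shell", "shell_args", "time_limit",
    "use_job_array",
}

def check_resource_kwargs(kwargs: dict) -> None:
    """Reject any keyword argument that is not an allowed resource parameter."""
    unknown = set(kwargs) - ALLOWED_PARAMETERS
    if unknown:
        raise ValueError(f"unknown resource parameters: {sorted(unknown)}")

check_resource_kwargs({"num_cores": 4, "scheduler": "slurm"})  # passes silently
```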
- property SGE_parallel_env#
Which SGE parallel environment to request.
- property SLURM_num_cpus_per_task#
How many CPU cores to ask for per SLURM task.
- property SLURM_num_nodes#
How many compute nodes to request.
- property SLURM_num_tasks#
How many SLURM tasks to request.
- property SLURM_num_tasks_per_node#
How many SLURM tasks per compute node to request.
- property SLURM_partition#
Which SLURM partition to request.
- app = BaseApp(name='hpcFlow', version='0.2.0a181')#
- copy_non_persistent()#
Make a non-persistent copy.
- property element_set#
The element set that will use this resource spec.
- property environments#
Which execution environments to use.
- classmethod from_json_like(json_like, shared_data=None)#
Make an instance of this class from JSON (or YAML) data.
- Parameters:
json_like (Union[Dict, List]) – The data to deserialise.
shared_data (Optional[Dict[str, ObjectList]]) – Shared context data.
- Returns:
The deserialised object.
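The json_like argument is ordinary parsed JSON (or YAML) data whose keys follow the constructor parameters above. A minimal sketch of preparing such data; the specific key values shown are assumptions for illustration:

```python
import json

# JSON text of the shape from_json_like consumes, once parsed.
raw = '{"num_cores": 8, "scheduler": "slurm", "time_limit": "01:00:00"}'
json_like = json.loads(raw)

# With hpcflow installed one might then write (not executed here):
#   spec = hpcflow.app.ResourceSpec.from_json_like(json_like)
print(json_like["num_cores"])
```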
- make_persistent(workflow, source)#
Save to a persistent workflow.
- property max_array_items#
If using array jobs, up to how many items should be in the job array.
- property normalised_path#
Full name of this resource spec.
- property normalised_resources_path#
Standard name of this resource spec.
- property num_cores#
How many cores to request.
- property num_cores_per_node#
How many cores per compute node to request.
- property num_nodes#
How many compute nodes to request.
- property num_threads#
How many threads to request.
- property os_name#
Which OS to use.
- property parallel_mode#
Which parallel mode to use.
- property scheduler#
Which scheduler to use.
- property scheduler_args#
Additional arguments to pass to the scheduler.
- scope#
Which scope this applies to.
- property scratch#
Which scratch space to use.
- property shell#
Which system shell to use.
- property shell_args#
Additional arguments to pass to the shell.
- property time_limit#
How long to run for.
- to_dict()#
Serialize this object as a dictionary.
- to_json_like(dct=None, shared_data=None, exclude=None, path=None)#
Serialize this object as an object structure that can be trivially converted to JSON. Note that YAML can also be produced from the result of this method; it just requires a different final serialization step.
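Because the returned structure is plain dicts and lists, the final serialization step is interchangeable. A sketch with the stdlib json module; producing YAML instead would only require swapping in a YAML serializer such as PyYAML's safe_dump (an assumption, as PyYAML is a third-party dependency):

```python
import json

# Stand-in for the plain-data structure to_json_like() would return.
data = {"num_cores": 4, "use_job_array": False, "scheduler": "slurm"}

# JSON is one possible final step; yaml.safe_dump(data) would be another.
text = json.dumps(data)
print(text)
```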
- property use_job_array#
Whether to use array jobs.
- property workflow#
The workflow owning this resource spec.
- property workflow_template#
The workflow template that will use this resource spec.