hpcflow.sdk.core.parameters.ResourceSpec#

class hpcflow.sdk.core.parameters.ResourceSpec(scope=None, scratch=None, parallel_mode=None, num_cores=None, num_cores_per_node=None, num_threads=None, num_nodes=None, scheduler=None, shell=None, use_job_array=None, max_array_items=None, time_limit=None, scheduler_args=None, shell_args=None, os_name=None, environments=None, SGE_parallel_env=None, SLURM_partition=None, SLURM_num_tasks=None, SLURM_num_tasks_per_node=None, SLURM_num_nodes=None, SLURM_num_cpus_per_task=None)#

Bases: JSONLike

Class to represent a specification of resource requirements for an action or set of actions.

Notes

os_name is used for retrieving a default shell name and for retrieving the correct Shell class; when using WSL, it should still be nt (i.e. Windows).

Parameters:
  • scope (ActionScope | str | None) – Which scope this applies to.

  • scratch (str) – Which scratch space to use.

  • parallel_mode (ParallelMode) – Which parallel mode to use.

  • num_cores (int) – How many cores to request.

  • num_cores_per_node (int) – How many cores per compute node to request.

  • num_threads (int) – How many threads to request.

  • num_nodes (int) – How many compute nodes to request.

  • scheduler (str) – Which scheduler to use.

  • shell (str) – Which system shell to use.

  • use_job_array (bool) – Whether to use array jobs.

  • max_array_items (int) – If using array jobs, up to how many items should be in the job array.

  • time_limit (str) – How long to run for.

  • scheduler_args (dict[str, Any]) – Additional arguments to pass to the scheduler.

  • shell_args (dict[str, Any]) – Additional arguments to pass to the shell.

  • os_name (str) – Which OS to use.

  • environments (dict) – Which execution environments to use.

  • SGE_parallel_env (str) – Which SGE parallel environment to request.

  • SLURM_partition (str) – Which SLURM partition to request.

  • SLURM_num_tasks (int) – How many SLURM tasks to request.

  • SLURM_num_tasks_per_node (int) – How many SLURM tasks per compute node to request.

  • SLURM_num_nodes (int) – How many compute nodes to request.

  • SLURM_num_cpus_per_task (int) – How many CPU cores to ask for per SLURM task.
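
A minimal construction sketch follows. The direct import mirrors this page's module path; in practice, resource requirements are more commonly supplied through a workflow template's resources entries, and every value below (scheduler name, partition, time limit) is an illustrative assumption rather than a recommended setting.

    from hpcflow.sdk.core.parameters import ResourceSpec

    # Illustrative values only: ask a SLURM scheduler for 16 cores with a
    # two-hour limit on an assumed partition name.
    spec = ResourceSpec(
        num_cores=16,
        scheduler="slurm",
        time_limit="2:00:00",
        SLURM_partition="compute",  # assumed partition name
        os_name="posix",            # per the note above, keep "nt" even under WSL
    )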

Methods

copy_non_persistent

Make a non-persistent copy.

from_json_like

Make an instance of this class from JSON (or YAML) data.

make_persistent

Save to a persistent workflow.

to_dict

Serialize this object as a dictionary.

to_json_like

Serialize this object as an object structure that can be trivially converted to JSON.

Attributes

ALLOWED_PARAMETERS

The names of parameters that may be used when making an instance of this class.

SGE_parallel_env

Which SGE parallel environment to request.

SLURM_num_cpus_per_task

How many CPU cores to ask for per SLURM task.

SLURM_num_nodes

How many compute nodes to request.

SLURM_num_tasks

How many SLURM tasks to request.

SLURM_num_tasks_per_node

How many SLURM tasks per compute node to request.

SLURM_partition

Which SLURM partition to request.

element_set

The element set that will use this resource spec.

environments

Which execution environments to use.

max_array_items

If using array jobs, up to how many items should be in the job array.

normalised_path

Full name of this resource spec.

normalised_resources_path

Standard name of this resource spec.

num_cores

How many cores to request.

num_cores_per_node

How many cores per compute node to request.

num_nodes

How many compute nodes to request.

num_threads

How many threads to request.

os_name

Which OS to use.

parallel_mode

Which parallel mode to use.

scheduler

Which scheduler to use.

scheduler_args

Additional arguments to pass to the scheduler.

scratch

Which scratch space to use.

shell

Which system shell to use.

shell_args

Additional arguments to pass to the shell.

time_limit

How long to run for.

use_job_array

Whether to use array jobs.

workflow

The workflow owning this resource spec.

workflow_template

The workflow template that will use this resource spec.

scope

Which scope this applies to.

ALLOWED_PARAMETERS: ClassVar[set[str]] = {'SGE_parallel_env', 'SLURM_num_cpus_per_task', 'SLURM_num_nodes', 'SLURM_num_tasks', 'SLURM_num_tasks_per_node', 'SLURM_partition', 'environments', 'max_array_items', 'num_cores', 'num_cores_per_node', 'num_nodes', 'num_threads', 'os_name', 'parallel_mode', 'scheduler', 'scheduler_args', 'scratch', 'shell', 'shell_args', 'time_limit', 'use_job_array'}#

The names of parameters that may be used when making an instance of this class.
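
Because ALLOWED_PARAMETERS is a plain set of names, it can be used to pre-check a user-supplied resources mapping before construction. The snippet below is a hypothetical validation sketch, not part of the hpcflow API:

    from hpcflow.sdk.core.parameters import ResourceSpec

    requested = {"num_cores": 8, "scheduler": "slurm", "walltime": "1:00:00"}
    unknown = set(requested) - ResourceSpec.ALLOWED_PARAMETERS
    if unknown:
        # "walltime" is not an allowed name (the spec uses "time_limit" instead)
        raise ValueError(f"Unrecognised resource parameters: {sorted(unknown)}")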

property SGE_parallel_env: str | None#

Which SGE parallel environment to request.

property SLURM_num_cpus_per_task: int | None#

How many CPU cores to ask for per SLURM task.

property SLURM_num_nodes: int | None#

How many compute nodes to request.

property SLURM_num_tasks: int | None#

How many SLURM tasks to request.

property SLURM_num_tasks_per_node: int | None#

How many SLURM tasks per compute node to request.

property SLURM_partition: str | None#

Which SLURM partition to request.

copy_non_persistent()#

Make a non-persistent copy.

property element_set: ElementSet | None#

The element set that will use this resource spec.

property environments: Mapping | None#

Which execution environments to use.

classmethod from_json_like(json_like, shared_data=None)#

Make an instance of this class from JSON (or YAML) data.

Parameters:
  • json_like (str | Mapping[str, JSONed] | Sequence[Mapping[str, JSONed]] | None) – The data to deserialise.

  • shared_data (Mapping[str, ObjectList[JSONable]] | None) – Shared context data.

Returns:

The deserialised object.
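
A deserialisation sketch, assuming the class is constructed directly and using illustrative values whose keys follow the constructor parameters documented above:

    from hpcflow.sdk.core.parameters import ResourceSpec

    # Build a spec from a JSON/YAML-style mapping rather than keyword arguments.
    json_data = {"num_cores": 4, "scheduler": "slurm", "time_limit": "0:30:00"}
    spec = ResourceSpec.from_json_like(json_data)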

make_persistent(workflow, source)#

Save to a persistent workflow.

Returns:

The string is the data path for this task input, and the integer list contains the indices of the parameter data Zarr groups where the data is stored.

Return type:

tuple[str, list[int | list[int]], bool]

Note

May modify the internal state of this object.

property max_array_items: int | None#

If using array jobs, up to how many items should be in the job array.

property normalised_path: str#

Full name of this resource spec.

property normalised_resources_path: str#

Standard name of this resource spec.

property num_cores: int | None#

How many cores to request.

property num_cores_per_node: int | None#

How many cores per compute node to request.

property num_nodes: int | None#

How many compute nodes to request.

property num_threads: int | None#

How many threads to request.

property os_name: str#

Which OS to use.

property parallel_mode: ParallelMode | None#

Which parallel mode to use.

property scheduler: str | None#

Which scheduler to use.

property scheduler_args: Mapping#

Additional arguments to pass to the scheduler.

scope#

Which scope this applies to.

property scratch: str | None#

Which scratch space to use.

property shell: str | None#

Which system shell to use.

property shell_args: Mapping | None#

Additional arguments to pass to the shell.

property time_limit: str | None#

How long to run for.

to_dict()#

Serialize this object as a dictionary.

Return type:

dict[str, Any]
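
For instance, a quick way to inspect a spec's stored values (a sketch; the exact keys of the returned dictionary are determined by the serialisation machinery):

    from hpcflow.sdk.core.parameters import ResourceSpec

    spec = ResourceSpec(num_cores=2, scheduler="slurm")
    print(spec.to_dict())  # plain dict of the spec's serialisable fields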

to_json_like(dct=None, shared_data=None, exclude=(), path=None)#

Serialize this object as an object structure that can be trivially converted to JSON. Note that YAML can also be produced from the result of this method; it just requires a different final serialization step.

Parameters:
  • dct (dict[str, JSONable] | None) –

  • shared_data (_JSONDeserState) –

  • exclude (Container[str | None]) –

  • path (list | None) –

Return type:

tuple[JSONDocument, _JSONDeserState]
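
A serialisation sketch: the first tuple element is the JSON-ready document and the second is the shared-data state; passing the document to the standard json module is assumed to work because the result is documented as trivially JSON-convertible.

    import json

    from hpcflow.sdk.core.parameters import ResourceSpec

    spec = ResourceSpec(num_cores=4, scheduler="slurm")
    js_doc, shared = spec.to_json_like()
    print(json.dumps(js_doc, indent=2))  # a YAML dump of js_doc would also be valid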

property use_job_array: bool#

Whether to use array jobs.

property workflow: Workflow | None#

The workflow owning this resource spec.

property workflow_template: WorkflowTemplate | None#

The workflow template that will use this resource spec.