ecs_composex.ecs.task_compute package

Submodules

ecs_composex.ecs.task_compute.helpers module
ecs_composex.ecs.task_compute.helpers.handle_multi_services(sidecar_used_memory, family) [source]

Identifies the essential containers. If there is only one, we assume that is the one that should make use of the left-over CPU/RAM from the Fargate profile (see the sketch below).

Return type: None
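A minimal sketch of the selection logic described above, assuming a hypothetical `family` object exposing an `ordered_services` list whose items carry an `is_essential` flag; this is an illustration of the idea, not the library's implementation.

```python
# Hypothetical sketch only: `family.ordered_services` and `svc.is_essential`
# are assumed names, not necessarily the library's real attributes.
def pick_main_container(family):
    """Return the single essential container if there is exactly one, else None."""
    essential = [svc for svc in family.ordered_services if svc.is_essential]
    if len(essential) == 1:
        # The lone essential container is assumed to be the one that should
        # absorb the left-over CPU/RAM of the Fargate profile.
        return essential[0]
    return None
```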
ecs_composex.ecs.task_compute.helpers.reset_for_single_main_container(sidecar_used_memory, family, container_definition) [source]

When we have managed containers and a single application container, it is safe to assume that our managed sidecars have limits on CPU and RAM (because we defined them). If we are then left with memory or CPU to spare, we want to allow the final container of the task definition to access all of the task's remaining CPU and RAM (see the sketch after this entry).

If the container had a memory limit and it is smaller than the memory left available to the Fargate task, we set that limit as the new reservation.

Parameters:

- sidecar_used_memory (int) – The amount of RAM used by the sidecars
- family (ComposeFamily) – The family to update the settings for.

Return type: None
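A minimal sketch of the memory reallocation rule described above. `Memory` and `MemoryReservation` mirror the ECS container definition's hard and soft limits, but `task_memory` and the way the container definition is passed in are assumptions made for illustration, not the library's exact signature.

```python
# Illustrative sketch, not the library implementation. `task_memory` is the
# memory of the selected Fargate profile; `container_definition` is assumed
# to expose ECS-style `Memory` (hard limit) and `MemoryReservation` (soft
# limit) attributes.
def reallocate_remaining_memory(task_memory, sidecar_used_memory, container_definition):
    remaining = task_memory - sidecar_used_memory
    current_limit = getattr(container_definition, "Memory", None)
    if current_limit is not None and current_limit < remaining:
        # The old hard limit becomes the soft reservation, and the main
        # container may now use everything the sidecars leave untouched.
        container_definition.MemoryReservation = current_limit
        container_definition.Memory = remaining
```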
ecs_composex.ecs.task_compute.helpers.unlock_compute_for_main_container(family) [source]

When adding new containers (e.g. AppMesh/XRay sidecars), the task definition CPU and RAM are increased to accommodate these resources, which can bump the task CPU/Memory to the next Fargate profile. This results in wasted resources for a given service.

This aims to identify the main service running in the family and grant it all the unreserved CPU/RAM of the task (see the sketch below).

Return type: None
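A minimal end-to-end sketch of the behaviour described above, reusing the hypothetical helper from the first entry; the `task_cpu`/`task_memory` and `container_definition` attributes used here are assumptions, not the library's actual API.

```python
# Hypothetical sketch combining the helpers above; attribute names on
# `family` and its services are assumptions made for the example.
def unlock_main_container(family):
    main = pick_main_container(family)
    if main is None:
        return
    sidecars = [svc for svc in family.ordered_services if svc is not main]
    used_cpu = sum(getattr(s.container_definition, "Cpu", 0) for s in sidecars)
    used_mem = sum(getattr(s.container_definition, "Memory", 0) for s in sidecars)
    # Whatever the Fargate profile provides beyond the sidecars' reservations
    # is handed to the main container instead of being wasted.
    main.container_definition.Cpu = family.task_cpu - used_cpu
    main.container_definition.Memory = family.task_memory - used_mem
```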