| author | Dante Niewenhuis <d.niewenhuis@hotmail.com> | 2025-05-19 13:31:34 +0200 |
|---|---|---|
| committer | GitHub <noreply@github.com> | 2025-05-19 13:31:34 +0200 |
| commit | e9a1b6078e366a8ee071f5d423a1874608618e4d (patch) | |
| tree | ef539af46703cd25fb66775b4580c3460c72be91 /site/docs/documentation/Input | |
| parent | d70312f122d9ef7c31b05757239ffc66af832dee (diff) | |
Removing gh-pages site from master branch (#338)
* Removing site from master branch
* Updated README.md
Diffstat (limited to 'site/docs/documentation/Input')
| -rw-r--r-- | site/docs/documentation/Input/AllocationPolicy.md | 265 | ||||
| -rw-r--r-- | site/docs/documentation/Input/CheckpointModel.md | 25 | ||||
| -rw-r--r-- | site/docs/documentation/Input/Experiment.md | 107 | ||||
| -rw-r--r-- | site/docs/documentation/Input/ExportModel.md | 50 | ||||
| -rw-r--r-- | site/docs/documentation/Input/FailureModel.md | 224 | ||||
| -rw-r--r-- | site/docs/documentation/Input/Topology/Battery.md | 37 | ||||
| -rw-r--r-- | site/docs/documentation/Input/Topology/Host.md | 55 | ||||
| -rw-r--r-- | site/docs/documentation/Input/Topology/PowerModel.md | 31 | ||||
| -rw-r--r-- | site/docs/documentation/Input/Topology/PowerSource.md | 20 | ||||
| -rw-r--r-- | site/docs/documentation/Input/Topology/Topology.md | 183 | ||||
| -rw-r--r-- | site/docs/documentation/Input/Workload.md | 31 | ||||
| -rw-r--r-- | site/docs/documentation/Input/_category_.json | 7 |
12 files changed, 0 insertions, 1035 deletions
diff --git a/site/docs/documentation/Input/AllocationPolicy.md b/site/docs/documentation/Input/AllocationPolicy.md deleted file mode 100644 index 96aacc9c..00000000 --- a/site/docs/documentation/Input/AllocationPolicy.md +++ /dev/null @@ -1,265 +0,0 @@ -Allocation policies define how, when, and where a task is executed. - -There are two types of allocation policies: -1. **[Filter](#filter-policy)** - The basic allocation policy that selects a host for each task based on filters and weighters -2. **[TimeShift](#timeshift-policy)** - Extends the Filter scheduler by allowing tasks to be delayed to better align with the availability of low-carbon power. - -In the following sections, we discuss the different allocation policies and how to define them in an Experiment file. - -## Filter policy -To use a filter scheduler, the user has to set the type of the policy to "filter". -A filter policy requires a list of filters and weighters which characterize the policy. - -A filter policy consists of two main components: -1. **[Filters](#filters)** - Filters select all hosts that are eligible to execute the given task. -2. **[Weighters](#weighters)** - Weighters are used to rank the eligible hosts. The host with the highest weight is selected to execute the task. - -:::info Code -All code related to reading Allocation policies can be found [here](https://github.com/atlarge-research/opendc/blob/master/opendc-experiments/opendc-experiments-base/src/main/kotlin/org/opendc/experiments/base/experiment/specs/allocation/AllocationPolicySpec.kt) -::: - -### Filters -Filters select all hosts that are eligible to execute the given task. -Filters are defined as JSON objects in the experiment file. - -The user defines which filter to use by setting the "type". -OpenDC currently supports the following 7 filters: - -#### ComputeFilter -Returns the host if it is running. -Does not require any additional parameters.
- -```json -{ - "type": "Compute" -} -``` - -#### SameHostHostFilter -Ensures that after a failure, a task is executed on the same host again. -Does not require any additional parameters. - -```json -{ - "type": "SameHost" -} -``` - -#### DifferentHostFilter -Ensures that after a failure, a task is *not* executed on the same host again. -Does not require any additional parameters. - -```json -{ - "type": "DifferentHost" -} -``` - -#### InstanceCountHostFilter -Returns the host if the number of instances running on the host is less than the maximum number of instances allowed. -The user needs to provide the maximum number of instances that can be run on a host. -```json -{ - "type": "InstanceCount", - "limit": 1 -} -``` - -#### RamHostFilter -Returns hosts if the amount of RAM available on the host is greater than the amount of RAM required by the task. -The user can provide an allocationRatio which is multiplied with the amount of RAM available on the host. -This can be used to allow for oversubscription. -```json -{ - "type": "Ram", - "allocationRatio": 2.5 -} -``` - -#### VCpuCapacityHostFilter -Returns hosts if the CPU capacity available on the host is greater than the CPU capacity required by the task. - -```json -{ - "type": "VCpuCapacity" -} -``` - -#### VCpuHostFilter -Returns the host if the number of cores available on the host is greater than the number of cores required by the task. -The user can provide an allocationRatio which is multiplied with the number of cores available on the host. -This can be used to allow for oversubscription. - -```json -{ - "type": "VCpu", - "allocationRatio": 2.5 -} -``` - -:::info Code -All code related to reading Filters can be found [here](https://github.com/atlarge-research/opendc/blob/master/opendc-experiments/opendc-experiments-base/src/main/kotlin/org/opendc/experiments/base/experiment/specs/allocation/HostFilterSpec.kt) -::: - -### Weighters -Weighters are used to rank the eligible hosts.
The host with the highest weight is selected to execute the task. -Weighters are defined as JSON objects in the experiment file. - -The user defines which weighter to use by setting the "type". -The user can also provide a multiplier that is multiplied with the weight of the host. -This can be used to increase or decrease the importance of the host. -Negative multipliers are also allowed, and can be used to invert the ranking of the hosts. -OpenDC currently supports the following 5 weighters: - -#### RamWeigher -Orders the hosts by the amount of RAM available on the host. - -```json -{ - "type": "Ram", - "multiplier": 2.0 -} -``` - -#### CoreRamWeigher -Orders the hosts by the amount of RAM available per core on the host. - -```json -{ - "type": "CoreRam", - "multiplier": 0.5 -} -``` - -#### InstanceCountWeigher -Orders the hosts by the number of instances running on the host. - -```json -{ - "type": "InstanceCount", - "multiplier": -1.0 -} -``` - -#### VCpuCapacityWeigher -Orders the hosts by the capacity per core on the host. - -```json -{ - "type": "VCpuCapacity", - "multiplier": 0.5 -} -``` - -#### VCpuWeigher -Orders the hosts by the number of cores available on the host.
- -```json -{ - "type": "VCpu", - "multiplier": 2.5 -} -``` - -:::info Code -All code related to reading Weighters can be found [here](https://github.com/atlarge-research/opendc/blob/master/opendc-experiments/opendc-experiments-base/src/main/kotlin/org/opendc/experiments/base/experiment/specs/allocation/HostWeigherSpec.kt) -::: - -### Examples -Following is an example of a Filter policy: -```json -{ - "type": "filter", - "filters": [ - { - "type": "Compute" - }, - { - "type": "VCpu", - "allocationRatio": 1.0 - }, - { - "type": "Ram", - "allocationRatio": 1.5 - } - ], - "weighers": [ - { - "type": "Ram", - "multiplier": 1.0 - } - ] -} -``` - -## TimeShift policy -TimeShift extends the Filter policy by allowing tasks to be delayed to better align with the availability of low-carbon power. -A user can define a TimeShift policy by setting the type to "timeshift". - -A task is scheduled when the current carbon intensity is below the carbon threshold; otherwise, it is delayed. The carbon threshold is determined by taking the 35th percentile of next week's carbon forecast. When a [TaskStopper](#taskstopper) is used, tasks can also be interrupted when the carbon intensity exceeds the threshold during execution. All tasks have a maximum delay time defined in the workload; when the maximum delay is reached, a task cannot be delayed any further. - -Similar to the filter policy, the user can define a list of filters and weighters. -In addition, the user can provide parameters that influence how tasks are delayed: - -| Variable | Type | Required? | Default | Description | -|---|---|---|---|---| -| filters | List[Filter] | no | [ComputeFilter] | Filters used to select eligible hosts. | -| weighters | List[Weighter] | no | [] | Weighters used to rank hosts. | -| windowSize | integer | no | 168 | How far back the scheduler looks to determine the carbon intensity threshold. | -| forecast | boolean | no | true | Whether the policy uses carbon forecasts. | -| shortForecastThreshold | double | no | 0.2 | Threshold used for short tasks (< 2 hours). | -| longForecastThreshold | double | no | 0.35 | Threshold used for long tasks (> 2 hours). | -| forecastSize | integer | no | 24 | The number of hours of forecast that is taken into account. | -| taskStopper | [TaskStopper](#taskstopper) | no | null | Policy for interrupting tasks. If not provided, tasks are never interrupted. | - -### TaskStopper - -Aside from delaying tasks, users might want to interrupt tasks that are already running. -For example, if a task is running when only high-carbon energy is available, the task can be interrupted and rescheduled to a later time. - -A TaskStopper is defined as a JSON object in the TimeShift policy. -A TaskStopper consists of the following components: - -| Variable | Type | Required? | Default | Description | -|---|---|---|---|---| -| windowSize | integer | no | 168 | How far back the scheduler looks to determine the carbon intensity threshold. | -| forecast | boolean | no | true | Whether the policy uses carbon forecasts. | -| forecastThreshold | double | no | 0.6 | Threshold used to decide whether a running task should be interrupted. | -| forecastSize | integer | no | 24 | The number of hours of forecast that is taken into account. | - -## Prefabs -Aside from custom policies, OpenDC also provides a set of pre-defined policies that can be used. -A prefab can be defined by setting the type to "prefab" and providing the name of the prefab.
- -Example: -```json -{ - "type": "prefab", - "policyName": "Mem" -} -``` - -The following prefabs are available: - -| Name | Filters | Weighters | Timeshifting | -|---|---|---|---| -| Mem | ComputeFilter <br/>VCpuFilter<br/> RamFilter | RamWeigher(1.0) | No | -| MemInv | ComputeFilter <br/>VCpuFilter<br/> RamFilter | RamWeigher(-1.0) | No | -| CoreMem | ComputeFilter <br/>VCpuFilter<br/> RamFilter | CoreRamWeigher(1.0) | No | -| CoreMemInv | ComputeFilter <br/>VCpuFilter<br/> RamFilter | CoreRamWeigher(-1.0) | No | -| ActiveServers | ComputeFilter <br/>VCpuFilter<br/> RamFilter | InstanceCountWeigher(1.0) | No | -| ActiveServersInv | ComputeFilter <br/>VCpuFilter<br/> RamFilter | InstanceCountWeigher(-1.0) | No | -| ProvisionedCores | ComputeFilter <br/>VCpuFilter<br/> RamFilter | VCpuWeigher(1.0) | No | -| ProvisionedCoresInv | ComputeFilter <br/>VCpuFilter<br/> RamFilter | VCpuWeigher(-1.0) | No | -| Random | ComputeFilter <br/>VCpuFilter<br/> RamFilter | [] | No | -| TimeShift | ComputeFilter <br/>VCpuFilter<br/> RamFilter | RamWeigher(1.0) | Yes | - -:::info Code -All code related to prefab schedulers can be found [here](https://github.com/atlarge-research/opendc/blob/master/opendc-compute/opendc-compute-simulator/src/main/kotlin/org/opendc/compute/simulator/scheduler/ComputeSchedulers.kt) -::: - diff --git a/site/docs/documentation/Input/CheckpointModel.md b/site/docs/documentation/Input/CheckpointModel.md deleted file mode 100644 index 7c622ea0..00000000 --- a/site/docs/documentation/Input/CheckpointModel.md +++ /dev/null @@ -1,25 +0,0 @@ -Checkpointing is a technique to reduce the impact of machine failures. -When using checkpointing, tasks make periodic snapshots of their state. -If a task fails, it can be restarted from the last snapshot instead of starting from the beginning.
- -A user can define a checkpoint model using the following parameters: - -| Variable | Type | Required? | Default | Description | -|---|---|---|---|---| -| checkpointInterval | Int64 | no | 3600000 | The time between checkpoints in ms | -| checkpointDuration | Int64 | no | 300000 | The time to create a snapshot in ms | -| checkpointIntervalScaling | Double | no | 1.0 | The scaling of the checkpointInterval after each successful checkpoint. The default of 1.0 means no scaling happens. | - -### Example - -```json -{ - "checkpointInterval": 3600000, - "checkpointDuration": 300000, - "checkpointIntervalScaling": 1.5 -} -``` - -In this example, a snapshot is created every hour, and creating a snapshot takes 5 minutes. -The checkpointIntervalScaling is set to 1.5, which means that after each successful checkpoint, -the interval between checkpoints is increased by 50% (for example, from 1 to 1.5 hours). diff --git a/site/docs/documentation/Input/Experiment.md b/site/docs/documentation/Input/Experiment.md deleted file mode 100644 index 8d3462a9..00000000 --- a/site/docs/documentation/Input/Experiment.md +++ /dev/null @@ -1,107 +0,0 @@ -When using OpenDC, an experiment defines what should be run, and how. An experiment consists of one or more scenarios, each defining a different simulation to run. Scenarios can differ in many ways, such as the topology that is used, the workload that is run, or the policies that are applied, to name a few. An experiment is defined using a JSON file. -On this page, we discuss how to properly define experiments for OpenDC.
- -:::info Code -All code related to reading and processing Experiment files can be found [here](https://github.com/atlarge-research/opendc/tree/master/opendc-experiments/opendc-experiments-base/src/main/kotlin/org/opendc/experiments/base/experiment) -The code used to run experiments can be found [here](https://github.com/atlarge-research/opendc/tree/master/opendc-experiments/opendc-experiments-base/src/main/kotlin/org/opendc/experiments/base/runner) -::: - -## Schema - -In the following section, we describe the different components of an experiment. Following is a table with all experiment components: - -| Variable | Type | Required? | Default | Description | -|---|---|---|---|---| -| name | string | no | "" | Name of the scenario, used for identification and referencing. | -| outputFolder | string | no | "output" | Directory where the simulation outputs will be stored. | -| runs | integer | no | 1 | Number of times the same scenario should be run. Each scenario is run with a different seed. | -| initialSeed | integer | no | 0 | The seed used for random number generation during a scenario. Setting a seed ensures reproducibility. | -| topologies | List[path/to/file] | yes | N/A | Paths to the JSON files defining the topologies. | -| workloads | List[[Workload](/docs/documentation/Input/Workload)] | yes | N/A | Paths to the files defining the workloads executed. | -| allocationPolicies | List[[AllocationPolicy](/docs/documentation/Input/AllocationPolicy)] | yes | N/A | Allocation policies used for resource management in the scenario. | -| failureModels | List[[FailureModel](/docs/documentation/Input/FailureModel)] | no | List[null] | List of failure models to simulate various types of failures. | -| maxNumFailures | List[integer] | no | [10] | The max number of times a task can fail before being terminated. | -| checkpointModels | List[[CheckpointModel](/docs/documentation/Input/CheckpointModel)] | no | List[null] | Checkpoint models that define how tasks create snapshots of their state. | -| exportModels | List[[ExportModel](/docs/documentation/Input/ExportModel)] | no | List[default] | Specifications for exporting data from the simulation. | - -Most components of an experiment are not single values, but lists of values. -This allows users to run multiple scenarios using a single experiment file. -OpenDC will generate and execute all permutations of the different values. - -Some of the components in an experiment file are paths to files, or complex objects. The format of these components -is defined in their respective pages. - -## Examples -In the following section, we discuss several examples of experiment files. - -### Simple - -The simplest experiment that can be provided to OpenDC is shown below: -```json -{ - "topologies": [ - { - "pathToFile": "topologies/topology1.json" - } - ], - "workloads": [ - { - "type": "ComputeWorkload", - "pathToFile": "traces/bitbrains-small" - } - ], - "allocationPolicies": [ - { - "type": "prefab", - "policyName": "Mem" - } - ] -} -``` - -This experiment creates a simulation from the file topology1.json, located in the topologies folder, with a workload trace from the bitbrains-small file, and an allocation policy of type Mem. The simulation is run once (by default), and the default name is "".
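Building on the simple experiment, the optional fields from the schema table can also be set explicitly. The following is an illustrative sketch (the topology and trace paths are placeholders, and the values are examples, not recommendations); it asks OpenDC to run the same scenario four times, each run with a different seed starting from `initialSeed`:

```json
{
  "name": "seeded-experiment",
  "outputFolder": "output",
  "runs": 4,
  "initialSeed": 42,
  "topologies": [
    { "pathToFile": "topologies/topology1.json" }
  ],
  "workloads": [
    { "type": "ComputeWorkload", "pathToFile": "traces/bitbrains-small" }
  ],
  "allocationPolicies": [
    { "type": "prefab", "policyName": "Mem" }
  ]
}
```

Setting `runs` and `initialSeed` this way makes experiments with randomness (for example, sample-based failure models) reproducible across machines.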
- -### Complex -Following is an example of a more complex experiment: -```json -{ - "topologies": [ - { - "pathToFile": "topologies/topology1.json" - }, - { - "pathToFile": "topologies/topology2.json" - }, - { - "pathToFile": "topologies/topology3.json" - } - ], - "workloads": [ - { - "pathToFile": "traces/bitbrains-small", - "type": "ComputeWorkload" - }, - { - "pathToFile": "traces/bitbrains-large", - "type": "ComputeWorkload" - } - ], - "allocationPolicies": [ - { - "type": "prefab", - "policyName": "Mem" - }, - { - "type": "prefab", - "policyName": "MemInv" - } - ] -} -``` - -This experiment runs a total of 12 scenarios: 3 topologies (3 datacenter configurations), each simulated with 2 distinct workloads, each using one of 2 allocation policies (Mem or MemInv). diff --git a/site/docs/documentation/Input/ExportModel.md b/site/docs/documentation/Input/ExportModel.md deleted file mode 100644 index 12e7eba2..00000000 --- a/site/docs/documentation/Input/ExportModel.md +++ /dev/null @@ -1,50 +0,0 @@ -During simulation, OpenDC exports data to files (see [Output](/docs/documentation/Output.md)). -The user can define what and how data is exported using the `exportModels` parameter in the experiment file. - -## ExportModel - -| Variable | Type | Required? | Default | Description | -|---|---|---|---|---| -| exportInterval | Int64 | no | 300 | The duration between two exports in seconds | -| printFrequency | Int64 | no | 24 | How often OpenDC prints an update during simulation | -| computeExportConfig | [ComputeExportConfig](#computeexportconfig) | no | Default | The features that should be exported during the simulation | -| filesToExport | List[string] | no | all files | List of the files that should be exported during simulation. The elements should be picked from the set ("host", "task", "powerSource", "battery", "service") | - -### ComputeExportConfig -The ComputeExportConfig defines which features should be exported during the simulation. -Several features will always be exported, regardless of the configuration. -When not provided, all features are exported. - -| Variable | Type | Required? | Base | Default | Description | -|---|---|---|---|---|---| -| hostExportColumns | List[String] | no | name <br/> cluster_name <br/> timestamp <br/> timestamp_absolute | All features | The features that should be exported to the host output file. | -| taskExportColumns | List[String] | no | task_id <br/> task_name <br/> timestamp <br/> timestamp_absolute | All features | The features that should be exported to the task output file. | -| powerSourceExportColumns | List[String] | no | name <br/> cluster_name <br/> timestamp <br/> timestamp_absolute | All features | The features that should be exported to the power source output file. | -| batteryExportColumns | List[String] | no | name <br/> cluster_name <br/> timestamp <br/> timestamp_absolute | All features | The features that should be exported to the battery output file. | -| serviceExportColumns | List[String] | no | timestamp <br/> timestamp_absolute | All features | The features that should be exported to the service output file. 
| - -### Example - -```json -{ - "exportInterval": 3600, - "printFrequency": 168, - "filesToExport": ["host", "task", "service"], - "computeExportConfig": { - "hostExportColumns": ["power_draw", "energy_usage", "cpu_usage", "cpu_utilization"], - "taskExportColumns": ["submission_time", "schedule_time", "finish_time", "task_state"], - "serviceExportColumns": ["tasks_total", "tasks_pending", "tasks_active", "tasks_completed", "tasks_terminated", "hosts_up"] - } -} -``` -In this example: -- The simulation will export data every hour (3600 seconds). -- The simulation will print an update every 168 seconds. -- Only the host, task, and service files will be exported. -- Only a selection of features is exported for each file. - diff --git a/site/docs/documentation/Input/FailureModel.md b/site/docs/documentation/Input/FailureModel.md deleted file mode 100644 index 714d2157..00000000 --- a/site/docs/documentation/Input/FailureModel.md +++ /dev/null @@ -1,224 +0,0 @@ -OpenDC provides three types of failure models: [Trace-based](#trace-based-failure-models), [Sample-based](#sample-based-failure-models), and [Prefab](#prefab-failure-models). - -All failure models have a similar structure consisting of three components: - -1. The _interval_ time determines the time between two failures. -2. The _duration_ time determines how long a single failure takes. -3. The _intensity_ determines how many hosts are affected by a failure. - -:::info Code -The code that defines the Failure Models can be found [here](https://github.com/atlarge-research/opendc/blob/master/opendc-experiments/opendc-experiments-base/src/main/kotlin/org/opendc/experiments/base/experiment/specs/FailureModelSpec.kt). -::: - -## Trace-based failure models -Trace-based failure models are defined by a parquet file. This file defines the interval, duration, and intensity of several failures. The failures defined in the file are looped. A valid failure model file follows the format defined below: - -| Metric | Datatype | Unit | Summary | -|---|---|---|---| -| failure_interval | int64 | milliseconds | The duration since the last failure | -| failure_duration | int64 | milliseconds | The duration of the failure | -| failure_intensity | float64 | ratio | The ratio of hosts affected by the failure | - -:::info Code -The code implementation of Trace Based Failure Models can be found [here](https://github.com/atlarge-research/opendc/blob/master/opendc-compute/opendc-compute-failure/src/main/kotlin/org/opendc/compute/failure/models/TraceBasedFailureModel.kt) -::: - -### Example -A trace-based failure model is specified by setting "type" to "trace-based". -Then, the user can define the path to the failure trace using "pathToFile": -```json -{ - "type": "trace-based", - "pathToFile": "path/to/your/failure_trace.parquet" -} -``` - -The "repeat" value can be set to false if the user does not want the failures to loop: -```json -{ - "type": "trace-based", - "pathToFile": "path/to/your/failure_trace.parquet", - "repeat": false -} -``` - -## Sample-based failure models -Sample-based failure models sample from three distributions to obtain the _interval_, _duration_, and _intensity_ of each failure. Sample-based failure models are affected by randomness and will thus produce different results depending on the provided seed.
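To make the three-part structure concrete, here is a small Python sketch (not OpenDC's Kotlin implementation) of how a sample-based model can draw an interval, a duration, and an intensity for each failure. The clamping of interval and duration to positive values and of intensity to [0.0, 1.0) follows the description in this section; the function and parameter names are our own, and the distribution choices are illustrative:

```python
import random

def sample_failures(n, seed, iat_mean=1.5, duration_mean=0.5, intensity_mean=0.5):
    """Sketch of sample-based failure generation: each failure draws an
    interval (time since the last failure), a duration, and an intensity
    (ratio of hosts affected) from its own distribution."""
    rng = random.Random(seed)  # a fixed seed makes the samples reproducible
    failures = []
    for _ in range(n):
        # interval and duration are clamped to be positive
        interval = max(0.0, rng.expovariate(1.0 / iat_mean))
        duration = max(0.0, rng.normalvariate(duration_mean, 0.1))
        # intensity is clamped to the range [0.0, 1.0)
        intensity = min(max(rng.normalvariate(intensity_mean, 0.2), 0.0), 0.999999)
        failures.append((interval, duration, intensity))
    return failures
```

Because the generator is seeded, running the same model twice with the same seed yields the same failure sequence, mirroring how a seeded scenario in OpenDC is reproducible.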
- -:::info Code -The code implementation for the Sample-based failure models can be found [here](https://github.com/atlarge-research/opendc/blob/master/opendc-compute/opendc-compute-failure/src/main/kotlin/org/opendc/compute/failure/models/SampleBasedFailureModel.kt) -::: - -### Distributions -OpenDC supports eight different distributions based on Apache Commons Math's [RealDistributions](https://commons.apache.org/proper/commons-math/javadocs/api-3.6.1/org/apache/commons/math3/distribution/RealDistribution.html). -Because the different distributions require different variables, they have to be specified with a specific "type". -Below, we show a correct specification of each distribution available in OpenDC. - -#### [ConstantRealDistribution](https://commons.apache.org/proper/commons-math/javadocs/api-3.6.1/org/apache/commons/math3/distribution/ConstantRealDistribution.html) - -```json -{ - "type": "constant", - "value": 10.0 -} -``` - -#### [ExponentialDistribution](https://commons.apache.org/proper/commons-math/javadocs/api-3.6.1/org/apache/commons/math3/distribution/ExponentialDistribution.html) -```json -{ - "type": "exponential", - "mean": 1.5 -} -``` - -#### [GammaDistribution](https://commons.apache.org/proper/commons-math/javadocs/api-3.6.1/org/apache/commons/math3/distribution/GammaDistribution.html) -```json -{ - "type": "gamma", - "shape": 1.0, - "scale": 0.5 -} -``` - -#### [LogNormalDistribution](https://commons.apache.org/proper/commons-math/javadocs/api-3.6.1/org/apache/commons/math3/distribution/LogNormalDistribution.html) -```json -{ - "type": "log-normal", - "scale": 1.0, - "shape": 0.5 -} -``` - -#### [NormalDistribution](https://commons.apache.org/proper/commons-math/javadocs/api-3.6.1/org/apache/commons/math3/distribution/NormalDistribution.html) -```json -{ - "type": "normal", - "mean": 1.0, - "std": 0.5 -} -``` - -#### [ParetoDistribution](https://commons.apache.org/proper/commons-math/javadocs/api-3.6.1/org/apache/commons/math3/distribution/ParetoDistribution.html) -```json -{ - "type": "pareto", - "scale": 1.0, - "shape": 0.6 -} -``` - -#### [UniformRealDistribution](https://commons.apache.org/proper/commons-math/javadocs/api-3.6.1/org/apache/commons/math3/distribution/UniformRealDistribution.html) -```json -{ - "type": "uniform", - "lower": 5.0, - "upper": 10.0 -} -``` - -#### [WeibullDistribution](https://commons.apache.org/proper/commons-math/javadocs/api-3.6.1/org/apache/commons/math3/distribution/WeibullDistribution.html) -```json -{ - "type": "weibull", - "alpha": 0.5, - "beta": 1.2 -} -``` - -### Example -A sample-based failure model is defined using three distributions, one each for the _interval_, _duration_, and _intensity_. -Distributions can be mixed however the user wants. Note that values for the _interval_ and _duration_ are clamped to be positive. -The _intensity_ is clamped to the range [0.0, 1.0). -To specify a sample-based failure model, the type needs to be set to "custom". - -Example: -```json -{ - "type": "custom", - "iatSampler": { - "type": "exponential", - "mean": 1.5 - }, - "durationSampler": { - "type": "weibull", - "alpha": 0.5, - "beta": 1.2 - }, - "nohSampler": { - "type": "constant", - "value": 0.5 - } -} -``` - -## Prefab failure models -The final type of failure model is the prefab model. These are models that are predefined in OpenDC and are based on research. Currently, OpenDC provides prefab models for 9 systems, based on [The Failure Trace Archive: Enabling the comparison of failure measurements and models of distributed systems](https://www.sciencedirect.com/science/article/pii/S0743731513000634). -The figure below shows the values used to define the failure models. - -Each failure model is defined four times, one for each of four distributions (exponential, Weibull, log-normal, and gamma).
-The final list of available prefabs is thus: - - G5k06Exp - G5k06Wbl - G5k06LogN - G5k06Gam - Lanl05Exp - Lanl05Wbl - Lanl05LogN - Lanl05Gam - Ldns04Exp - Ldns04Wbl - Ldns04LogN - Ldns04Gam - Microsoft99Exp - Microsoft99Wbl - Microsoft99LogN - Microsoft99Gam - Nd07cpuExp - Nd07cpuWbl - Nd07cpuLogN - Nd07cpuGam - Overnet03Exp - Overnet03Wbl - Overnet03LogN - Overnet03Gam - Pl05Exp - Pl05Wbl - Pl05LogN - Pl05Gam - Skype06Exp - Skype06Wbl - Skype06LogN - Skype06Gam - Websites02Exp - Websites02Wbl - Websites02LogN - Websites02Gam - -:::info Code -The different Prefab models can be found [here](https://github.com/atlarge-research/opendc/tree/master/opendc-compute/opendc-compute-failure/src/main/kotlin/org/opendc/compute/failure/prefab) -::: - -### Example -To specify a prefab model, the "type" needs to be set to "prefab". -Then, the prefab can be selected with "prefabName": - -```json -{ - "type": "prefab", - "prefabName": "G5k06Exp" -} -``` - diff --git a/site/docs/documentation/Input/Topology/Battery.md b/site/docs/documentation/Input/Topology/Battery.md deleted file mode 100644 index 70492694..00000000 --- a/site/docs/documentation/Input/Topology/Battery.md +++ /dev/null @@ -1,37 +0,0 @@ -Batteries can be used to store energy for later use. -In previous work, we have used batteries to store energy from the grid when the carbon intensity is low, and to use this energy when the carbon intensity is high. - -Batteries are defined using the following parameters: - -| variable | type | Unit | required? | default | description | -|---|---|---|---|---|---| -| name | string | N/A | no | Battery | The name of the battery. This is only important for debugging and post-processing | -| capacity | Double | kWh | yes | N/A | The total amount of energy that the battery can hold. 
| -| chargingSpeed | Double | W | yes | N/A | Charging speed of the battery. | -| initialCharge | Double | kWh | no | 0.0 | The initial charge of the battery. If not given, the battery starts empty. | -| batteryPolicy | [Policy](#battery-policy) | N/A | yes | N/A | The policy which decides when to charge and discharge. | -| embodiedCarbon | Double | gram | no | 0.0 | The embodied carbon emitted while creating this battery. | -| expectedLifetime | Double | Years | yes | N/A | The expected lifetime of the battery. | - -## Battery Policy -To determine when to charge and discharge the battery, a policy is required. -Currently, all policies for batteries are based on the carbon intensity of the grid. - -The best-performing policy is called "runningMeanPlus" and is based on the running mean of the carbon intensity. -It can be defined with the following JSON: - -```json -{ - "type": "runningMeanPlus", - "startingThreshold": 123.2, - "windowSize": 168 -} -``` - -Here, `startingThreshold` is the initial carbon threshold used, and `windowSize` is the size of the window used to calculate the running mean. - -:::info Alert -This page will be extended with more text and policies in the future. -::: diff --git a/site/docs/documentation/Input/Topology/Host.md b/site/docs/documentation/Input/Topology/Host.md deleted file mode 100644 index 7b5b8394..00000000 --- a/site/docs/documentation/Input/Topology/Host.md +++ /dev/null @@ -1,55 +0,0 @@ -A host is a machine that can execute tasks. A host consists of the following components: - -| variable | type | required? | default | description | -|---|---|---|---|---| -| name | string | no | Host | The name of the host. 
This is only important for debugging and post-processing |
-| count       | integer                                                       | no        | 1       | The number of hosts of this type in the cluster                                 |
-| cpuModel    | [CPU](#cpu)                                                   | yes       | N/A     | The CPUs in the host                                                            |
-| memory      | [Memory](#memory)                                             | yes       | N/A     | The memory used by the host                                                     |
-| power model | [Power Model](/docs/documentation/Input/Topology/PowerModel)  | no        | Default | The power model used to determine the power draw of the host                    |
-
-## CPU
-
-| variable  | type    | Unit  | required? | default | description                                      |
-|-----------|---------|-------|-----------|---------|--------------------------------------------------|
-| modelName | string  | N/A   | no        | unknown | The name of the CPU.                             |
-| vendor    | string  | N/A   | no        | unknown | The vendor of the CPU                            |
-| arch      | string  | N/A   | no        | unknown | The micro-architecture of the CPU                |
-| count     | integer | N/A   | no        | 1       | The number of CPUs of this type used by the host |
-| coreCount | integer | count | yes       | N/A     | The number of cores in the CPU                   |
-| coreSpeed | Double  | MHz   | yes       | N/A     | The speed of each core in MHz                    |
-
-## Memory
-
-| variable    | type    | Unit | required? | default | description                                                                  |
-|-------------|---------|------|-----------|---------|------------------------------------------------------------------------------|
-| modelName   | string  | N/A  | no        | unknown | The name of the memory unit.                                                 |
-| vendor      | string  | N/A  | no        | unknown | The vendor of the memory unit                                                |
-| arch        | string  | N/A  | no        | unknown | The micro-architecture of the memory unit                                    |
-| memorySize  | integer | Byte | yes       | N/A     | The size of the memory                                                       |
-| memorySpeed | Double  | MHz  | no        | -1      | The speed of the memory in MHz. PLACEHOLDER: this value is currently unused.
|
-
-## Example
-
-```json
-{
-    "name": "H01",
-    "cpu": {
-        "coreCount": 16,
-        "coreSpeed": 2100
-    },
-    "memory": {
-        "memorySize": 100000
-    },
-    "powerModel": {
-        "modelType": "sqrt",
-        "idlePower": 32.0,
-        "maxPower": 180.0
-    },
-    "count": 100
-}
-```
-
-This example creates 100 hosts, each with a 16-core CPU running at 2.1 GHz and 100 GB of memory.
-The power model used is a square root model with an idle power of 32 W and a max power of 180 W.
-For more information on the power model, see [Power Model](/docs/documentation/Input/Topology/PowerModel).
diff --git a/site/docs/documentation/Input/Topology/PowerModel.md b/site/docs/documentation/Input/Topology/PowerModel.md
deleted file mode 100644
index 06f4a4da..00000000
--- a/site/docs/documentation/Input/Topology/PowerModel.md
+++ /dev/null
@@ -1,31 +0,0 @@
-OpenDC uses power models to determine the power draw based on the utilization of a host.
-All models in OpenDC interpolate between the idle and max power draw of the host.
-OpenDC currently supports the following power models:
-1. **Constant**: The power draw is constant and does not depend on the utilization of the host.
-2. **Sqrt**: The power draw interpolates between idle and max using a square root function.
-3. **Linear**: The power draw interpolates between idle and max using a linear function.
-4. **Square**: The power draw interpolates between idle and max using a square function.
-5. **Cubic**: The power draw interpolates between idle and max using a cubic function.
-
-The power model is defined using the following parameters:
-
-| variable  | type   | Unit | required? | default | description                                                        |
-|-----------|--------|------|-----------|---------|--------------------------------------------------------------------|
-| modelType | string | N/A  | yes       | N/A     | The type of model used to determine power draw                     |
-| power     | double | Watt | no        | 400     | The power draw of a host when using the constant power draw model.
|
-| idlePower | double | Watt | yes       | N/A     | The power draw of a host when idle, in Watt.                       |
-| maxPower  | double | Watt | yes       | N/A     | The power draw of a host at max capacity, in Watt.                 |
-
-
-## Example
-
-```json
-{
-    "modelType": "sqrt",
-    "idlePower": 32.0,
-    "maxPower": 180.0
-}
-```
-
-This creates a power model that uses a square root function to determine the power draw of a host.
-The model uses an idle and max power of 32 W and 180 W, respectively.
diff --git a/site/docs/documentation/Input/Topology/PowerSource.md b/site/docs/documentation/Input/Topology/PowerSource.md
deleted file mode 100644
index 993083dd..00000000
--- a/site/docs/documentation/Input/Topology/PowerSource.md
+++ /dev/null
@@ -1,20 +0,0 @@
-Each cluster has a power source that provides power to the hosts in the cluster.
-A user can connect a power source to a carbon trace to determine the carbon emissions during a workload.
-
-The power source consists of the following components:
-
-| variable        | type         | Unit | required? | default        | description                                                                            |
-|-----------------|--------------|------|-----------|----------------|----------------------------------------------------------------------------------------|
-| name            | string       | N/A  | no        | PowerSource    | The name of the power source. This is only important for debugging and post-processing |
-| maxPower        | integer      | Watt | no        | Long.MAX_VALUE | The maximum power that the power source can provide, in Watt.                          |
-| carbonTracePath | path/to/file | N/A  | no        | null           | The path to the carbon intensity trace used by this power source.                      |
-
-## Example
-
-```json
-{
-    "carbonTracePath": "carbon_traces/AT_2021-2024.parquet"
-}
-```
-
-This example creates a power source with an effectively unlimited power supply that uses the carbon trace from the file `carbon_traces/AT_2021-2024.parquet`.
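As a sketch of the remaining parameters from the table above, a power source with an explicit supply cap could be configured as follows; the name and the 500 kW cap are illustrative values chosen here, not defaults:

```json
{
    "name": "MainGrid",
    "maxPower": 500000,
    "carbonTracePath": "carbon_traces/AT_2021-2024.parquet"
}
```

Here `maxPower` limits the supply to 500 kW rather than the default `Long.MAX_VALUE`, which may matter when simulating constrained grids or batteries.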
diff --git a/site/docs/documentation/Input/Topology/Topology.md b/site/docs/documentation/Input/Topology/Topology.md
deleted file mode 100644
index afc94e08..00000000
--- a/site/docs/documentation/Input/Topology/Topology.md
+++ /dev/null
@@ -1,183 +0,0 @@
-The topology of a datacenter defines all available hardware. Topologies are defined using a JSON file.
-A topology consists of one or more clusters. Each cluster consists of at least one host on which jobs can be executed.
-Each host consists of one or more CPUs, a memory unit, and a power model.
-
-:::info Code
-The code related to reading and processing topology files can be found [here](https://github.com/atlarge-research/opendc/tree/master/opendc-compute/opendc-compute-topology/src/main/kotlin/org/opendc/compute/topology)
-:::
-
-In the following section, we describe the different components of a topology file.
-
-### Cluster
-
-| variable    | type                                                          | required? | default | description                                                                        |
-|-------------|---------------------------------------------------------------|-----------|---------|------------------------------------------------------------------------------------|
-| name        | string                                                        | no        | Cluster | The name of the cluster. This is only important for debugging and post-processing  |
-| count       | integer                                                       | no        | 1       | The number of clusters of this type in the data center                             |
-| hosts       | List[[Host](/docs/documentation/Input/Topology/Host)]         | yes       | N/A     | A list of the hosts in a cluster.                                                  |
-| powerSource | [PowerSource](/docs/documentation/Input/Topology/PowerSource) | no        | N/A     | The power source used by all hosts connected to this cluster.                      |
-| battery     | [Battery](/docs/documentation/Input/Topology/Battery)         | no        | null    | The battery used by a cluster to store energy. When null, no batteries are used.   |
-
-Hosts, power sources, and batteries are each defined as objects of their own; see their respective pages for more information.
-
-## Examples
-
-In the following section, we discuss several examples of topology files.
-
-### Simple
-
-The simplest data center that can be provided to OpenDC is shown below:
-
-```json
-{
-    "clusters": [
-        {
-            "hosts": [
-                {
-                    "cpu":
-                    {
-                        "coreCount": 16,
-                        "coreSpeed": 1000
-                    },
-                    "memory": {
-                        "memorySize": 100000
-                    }
-                }
-            ],
-            "powerSource": {
-                "carbonTracePath": "carbon_traces/AT_2021-2024.parquet"
-            }
-        }
-    ]
-}
-```
-
-This creates a data center with a single cluster containing a single host. This host consists of a single 16-core CPU
-with a speed of 1 GHz, and 100 MiB of RAM.
-
-### Count
-
-Duplicating clusters, hosts, or CPUs is easy using the "count" keyword:
-
-```json
-{
-    "clusters": [
-        {
-            "count": 2,
-            "hosts": [
-                {
-                    "count": 5,
-                    "cpu":
-                    {
-                        "coreCount": 16,
-                        "coreSpeed": 1000,
-                        "count": 10
-                    },
-                    "memory":
-                    {
-                        "memorySize": 100000
-                    }
-                }
-            ],
-            "powerSource": {
-                "carbonTracePath": "carbon_traces/AT_2021-2024.parquet"
-            }
-        }
-    ]
-}
-```
-
-This topology creates a datacenter consisting of 2 clusters, each containing 5 hosts. Each host contains ten 16-core
-CPUs.
-Using "count" saves a lot of copying.
-
-### Complex
-
-The following is an example of a more complex topology:
-
-```json
-{
-    "clusters": [
-        {
-            "name": "C01",
-            "count": 2,
-            "hosts": [
-                {
-                    "name": "H01",
-                    "count": 2,
-                    "cpus": [
-                        {
-                            "coreCount": 16,
-                            "coreSpeed": 1000
-                        }
-                    ],
-                    "memory": {
-                        "memorySize": 1000000
-                    },
-                    "powerModel": {
-                        "modelType": "linear",
-                        "idlePower": 200.0,
-                        "maxPower": 400.0
-                    }
-                },
-                {
-                    "name": "H02",
-                    "count": 2,
-                    "cpus": [
-                        {
-                            "coreCount": 8,
-                            "coreSpeed": 3000
-                        }
-                    ],
-                    "memory": {
-                        "memorySize": 100000
-                    },
-                    "powerModel": {
-                        "modelType": "square",
-                        "idlePower": 300.0,
-                        "maxPower": 500.0
-                    }
-                }
-            ]
-        }
-    ]
-}
-```
-
-This topology defines two types of hosts with different coreCount and coreSpeed values.
-Both host types are created twice.
-
-
-### With Units of Measure
-
-Aside from using plain numbers, it is also possible to define values using strings.
This allows the user to define the unit of the input parameter.
-```json
-{
-    "clusters": [
-        {
-            "count": 2,
-            "hosts" :
-            [
-                {
-                    "name": "H01",
-                    "cpuModel":
-                    {
-                        "coreCount": 8,
-                        "coreSpeed": "3.2 Ghz"
-                    },
-                    "memory": {
-                        "memorySize": "128e3 MiB",
-                        "memorySpeed": "1 Mhz"
-                    },
-                    "powerModel": {
-                        "modelType": "linear",
-                        "power": "400 Watts",
-                        "maxPower": "1 KW",
-                        "idlePower": "0.4 W"
-                    }
-                }
-            ]
-        }
-    ]
-}
-```
diff --git a/site/docs/documentation/Input/Workload.md b/site/docs/documentation/Input/Workload.md
deleted file mode 100644
index 73f39e60..00000000
--- a/site/docs/documentation/Input/Workload.md
+++ /dev/null
@@ -1,31 +0,0 @@
-Workloads define which tasks run in the simulation, when they are submitted, and their computational requirements.
-Workloads are defined using two files:
-
-- **[Tasks](#tasks)**: The Tasks file contains the metadata of the tasks
-- **[Fragments](#fragments)**: The Fragments file contains the computational demand of each task over time
-
-Both files are provided in the Parquet format.
-
-#### Tasks
-The Tasks file provides an overview of the tasks:
-
-| Metric          | Required? | Datatype | Unit                         | Summary                                                |
-|-----------------|-----------|----------|------------------------------|--------------------------------------------------------|
-| id              | Yes       | string   |                              | The id of the task                                     |
-| submission_time | Yes       | int64    | datetime                     | The submission time of the task                        |
-| nature          | No        | string   | [deferrable, non-deferrable] | Defines if a task can be delayed                       |
-| deadline        | No        | string   | datetime                     | The latest the scheduling of a task can be delayed to.
|
-| duration        | Yes       | int64    | milliseconds                 | The total duration of the task                         |
-| cpu_count       | Yes       | int32    | count                        | The number of CPUs required to run this task           |
-| cpu_capacity    | Yes       | float64  | MHz                          | The amount of CPU capacity required to run this task   |
-| mem_capacity    | Yes       | int64    | MB                           | The amount of memory required to run this task         |
-
-#### Fragments
-The Fragments file provides information about the computational demand of each task over time:
-
-| Metric    | Required? | Datatype | Unit         | Summary                                     |
-|-----------|-----------|----------|--------------|---------------------------------------------|
-| id        | Yes       | string   |              | The id of the task                          |
-| duration  | Yes       | int64    | milliseconds | The duration since the last sample          |
-| cpu_count | Yes       | int32    | count        | The number of CPUs required                 |
-| cpu_usage | Yes       | float64  | MHz          | The amount of computational power required. |
diff --git a/site/docs/documentation/Input/_category_.json b/site/docs/documentation/Input/_category_.json
deleted file mode 100644
index e433770c..00000000
--- a/site/docs/documentation/Input/_category_.json
+++ /dev/null
@@ -1,7 +0,0 @@
-{
-    "label": "Input",
-    "position": 1,
-    "link": {
-        "type": "generated-index"
-    }
-}
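Although the two workload files described in Workload.md are stored as Parquet, their schemas can be illustrated by rendering one hypothetical record of each as JSON; every field value below is invented purely for illustration:

```json
{
    "task": {
        "id": "task-001",
        "submission_time": 1672531200000,
        "duration": 3600000,
        "cpu_count": 2,
        "cpu_capacity": 4200.0,
        "mem_capacity": 8000
    },
    "fragment": {
        "id": "task-001",
        "duration": 300000,
        "cpu_count": 2,
        "cpu_usage": 3150.0
    }
}
```

The fragment shares the task's `id`, and a task is typically followed by a sequence of such fragments whose durations add up to the task's total duration.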
