authorDante Niewenhuis <d.niewenhuis@hotmail.com>2025-05-19 13:31:34 +0200
committerGitHub <noreply@github.com>2025-05-19 13:31:34 +0200
commite9a1b6078e366a8ee071f5d423a1874608618e4d (patch)
treeef539af46703cd25fb66775b4580c3460c72be91 /site/docs
parentd70312f122d9ef7c31b05757239ffc66af832dee (diff)
Removing gh-pages site from master branch (#338)
* Removing site from master branch * Updated README.md
Diffstat (limited to 'site/docs')
-rw-r--r--site/docs/documentation/Input/AllocationPolicy.md265
-rw-r--r--site/docs/documentation/Input/CheckpointModel.md25
-rw-r--r--site/docs/documentation/Input/Experiment.md107
-rw-r--r--site/docs/documentation/Input/ExportModel.md50
-rw-r--r--site/docs/documentation/Input/FailureModel.md224
-rw-r--r--site/docs/documentation/Input/Topology/Battery.md37
-rw-r--r--site/docs/documentation/Input/Topology/Host.md55
-rw-r--r--site/docs/documentation/Input/Topology/PowerModel.md31
-rw-r--r--site/docs/documentation/Input/Topology/PowerSource.md20
-rw-r--r--site/docs/documentation/Input/Topology/Topology.md183
-rw-r--r--site/docs/documentation/Input/Workload.md31
-rw-r--r--site/docs/documentation/Input/_category_.json7
-rw-r--r--site/docs/documentation/M3SA/M3SA.md92
-rw-r--r--site/docs/documentation/M3SA/M3SASchema.md115
-rw-r--r--site/docs/documentation/Output.md114
-rw-r--r--site/docs/documentation/_category_.json7
-rw-r--r--site/docs/getting-started/0-installation.md31
-rw-r--r--site/docs/getting-started/1-start-using-intellij.md172
-rw-r--r--site/docs/getting-started/2-first-experiment.md211
-rw-r--r--site/docs/getting-started/3-whats-next.md12
-rw-r--r--site/docs/getting-started/_category_.json8
-rw-r--r--site/docs/getting-started/documents/experiments/simple_experiment.json13
-rw-r--r--site/docs/getting-started/documents/topologies/big.json59
-rw-r--r--site/docs/getting-started/documents/topologies/small.json22
-rw-r--r--site/docs/getting-started/documents/workloads/bitbrains-small.zipbin573038 -> 0 bytes
-rw-r--r--site/docs/getting-started/img/Intellij_experimentcli.pngbin155669 -> 0 bytes
-rw-r--r--site/docs/getting-started/img/experiment_file_structure.pngbin18601 -> 0 bytes
-rw-r--r--site/docs/getting-started/img/intellij_edit_the_run_config.pngbin100556 -> 0 bytes
-rw-r--r--site/docs/getting-started/img/intellij_edit_the_run_config.psdbin501826 -> 0 bytes
-rw-r--r--site/docs/getting-started/img/intellij_gradle_panel.pngbin21292 -> 0 bytes
-rw-r--r--site/docs/getting-started/img/intellij_gradle_panel.psdbin165358 -> 0 bytes
-rw-r--r--site/docs/getting-started/img/intellij_libs_versions_toml.pngbin67876 -> 0 bytes
-rw-r--r--site/docs/getting-started/img/intellij_libs_versions_toml.psdbin258893 -> 0 bytes
-rw-r--r--site/docs/getting-started/img/intellij_open_project.pngbin41127 -> 0 bytes
-rw-r--r--site/docs/getting-started/img/intellij_open_run_config.pngbin37849 -> 0 bytes
-rw-r--r--site/docs/getting-started/img/intellij_settings.pngbin316241 -> 0 bytes
-rw-r--r--site/docs/getting-started/img/intellij_settings.psdbin1357431 -> 0 bytes
-rw-r--r--site/docs/intro.mdx27
-rw-r--r--site/docs/tutorials/M3SA-integration-tutorial.mdx188
-rw-r--r--site/docs/tutorials/_category_.json9
-rw-r--r--site/docs/tutorials/img/cpu-usage.pngbin126367 -> 0 bytes
-rw-r--r--site/docs/tutorials/img/resource-distribution.pngbin13884 -> 0 bytes
42 files changed, 0 insertions, 2115 deletions
diff --git a/site/docs/documentation/Input/AllocationPolicy.md b/site/docs/documentation/Input/AllocationPolicy.md
deleted file mode 100644
index 96aacc9c..00000000
--- a/site/docs/documentation/Input/AllocationPolicy.md
+++ /dev/null
@@ -1,265 +0,0 @@
-Allocation policies define how, when and where a task is executed.
-
-There are two types of allocation policies:
-1. **[Filter](#filter-policy)** - The basic allocation policy that selects a host for each task based on filters and weighters
-2. **[TimeShift](#timeshift-policy)** - Extends the Filter policy, allowing tasks to be delayed to better align with the availability of low-carbon power.
-
-In the following section, we discuss the different allocation policies and how to define them in an experiment file.
-
-## Filter policy
-To use a filter scheduler, the user has to set the type of the policy to "filter".
-A filter policy requires a list of filters and weighters which characterize the policy.
-
-A filter policy consists of two main components:
-1. **[Filters](#filters)** - Filters select all hosts that are eligible to execute the given task.
-2. **[Weighters](#weighters)** - Weighters are used to rank the eligible hosts. The host with the highest weight is selected to execute the task.
-
-:::info Code
-All code related to reading Allocation policies can be found [here](https://github.com/atlarge-research/opendc/blob/master/opendc-experiments/opendc-experiments-base/src/main/kotlin/org/opendc/experiments/base/experiment/specs/allocation/AllocationPolicySpec.kt)
-:::
-
-### Filters
-Filters select all hosts that are eligible to execute the given task.
-Filters are defined as JSON objects in the experiment file.
-
-The user defines which filter to use by setting the "type".
-OpenDC currently supports the following 7 filters:
-
-#### ComputeFilter
-Returns host if it is running.
-Does not require any more parameters.
-
-```json
-{
- "type": "Compute"
-}
-```
-
-#### SameHostHostFilter
-Ensures that after failure, a task is executed on the same host again.
-Does not require any more parameters.
-
-```json
-{
-  "type": "SameHost"
-}
-```
-
-#### DifferentHostFilter
-Ensures that after failure, a task is *not* executed on the same host again.
-Does not require any more parameters.
-
-```json
-{
- "type": "DifferentHost"
-}
-```
-
-#### InstanceCountHostFilter
-Returns host if the number of instances running on the host is less than the maximum number of instances allowed.
-The user needs to provide the maximum number of instances that can run on a host.
-```json
-{
- "type": "InstanceCount",
- "limit": 1
-}
-```
-
-#### RamHostFilter
-Returns a host if the amount of RAM available on the host is greater than the amount of RAM required by the task.
-The user can provide an allocationRatio, which is multiplied with the amount of RAM available on the host.
-This can be used to allow for oversubscription.
-```json
-{
- "type": "Ram",
- "allocationRatio": 2.5
-}
-```
-
-#### VCpuCapacityHostFilter
-Returns a host if the CPU capacity available on the host is greater than the CPU capacity required by the task.
-
-```json
-{
- "type": "VCpuCapacity"
-}
-```
-
-#### VCpuHostFilter
-Returns a host if the number of cores available on the host is greater than the number of cores required by the task.
-The user can provide an allocationRatio, which is multiplied with the number of cores available on the host.
-This can be used to allow for oversubscription.
-
-```json
-{
- "type": "VCpu",
- "allocationRatio": 2.5
-}
-```
-
-:::info Code
-All code related to reading Filters can be found [here](https://github.com/atlarge-research/opendc/blob/master/opendc-experiments/opendc-experiments-base/src/main/kotlin/org/opendc/experiments/base/experiment/specs/allocation/HostFilterSpec.kt)
-:::
-
-### Weighters
-Weighters are used to rank the eligible hosts. The host with the highest weight is selected to execute the task.
-Weighters are defined as JSON objects in the experiment file.
-
-The user defines which weighter to use by setting the "type".
-The user can also provide a multiplier that is multiplied with the weight of the host.
-This can be used to increase or decrease the importance of a host.
-Negative multipliers are also allowed and can be used to invert the ranking of the hosts.
-OpenDC currently supports the following 5 weighters:
-
-#### RamWeigher
-Orders the hosts by the amount of RAM available on the host.
-
-```json
-{
- "type": "Ram",
- "multiplier": 2.0
-}
-```
-
-#### CoreRamWeigher
-Orders the hosts by the amount of RAM available per core on the host.
-
-```json
-{
- "type": "CoreRam",
- "multiplier": 0.5
-}
-```
-
-#### InstanceCountWeigher
-Orders the hosts by the number of instances running on the host.
-
-```json
-{
- "type": "InstanceCount",
- "multiplier": -1.0
-}
-```
-
-#### VCpuCapacityWeigher
-Orders the hosts by the CPU capacity per core on the host.
-
-```json
-{
- "type": "VCpuCapacity",
- "multiplier": 0.5
-}
-```
-
-#### VCpuWeigher
-Orders the hosts by the number of cores available on the host.
-
-```json
-{
- "type": "VCpu",
- "multiplier": 2.5
-}
-```
-
-:::info Code
-All code related to reading Weighters can be found [here](https://github.com/atlarge-research/opendc/blob/master/opendc-experiments/opendc-experiments-base/src/main/kotlin/org/opendc/experiments/base/experiment/specs/allocation/HostWeigherSpec.kt)
-:::
-
-### Examples
-Following is an example of a Filter policy:
-```json
-{
- "type": "filter",
- "filters": [
- {
- "type": "Compute"
- },
- {
- "type": "VCpu",
- "allocationRatio": 1.0
- },
- {
- "type": "Ram",
- "allocationRatio": 1.5
- }
- ],
- "weighers": [
- {
- "type": "Ram",
- "multiplier": 1.0
- }
- ]
-}
-```
-
-## TimeShift policy
-Timeshift extends the Filter policy by allowing tasks to be delayed to better align with the availability of low-carbon power.
-A user can define a timeshift policy by setting the type to "timeshift".
-
-A task is scheduled when the current carbon intensity is below the carbon threshold; otherwise, it is delayed. The
-carbon threshold is determined by taking the 35th percentile of next week's carbon forecast. When a TaskStopper is used, tasks can be interrupted
-when the carbon intensity exceeds the threshold during execution. All tasks have a maximum delay time defined in the workload. When the maximum delay is reached,
-tasks cannot be delayed any further.
-
-
-Similar to the filter policy, the user can define a list of filters and weighters.
-However, in addition, the user can provide parameters that influence how tasks are delayed:
-
-| Variable | Type | Required? | Default | Description |
-|------------------------|-----------------------------|-----------|-----------------|-----------------------------------------------------------------------------------|
-| filters | List[Filter] | no | [ComputeFilter] | Filters used to select eligible hosts. |
-| weighters | List[Weighter] | no | [] | Weighters used to rank hosts. |
-| windowSize | integer | no | 168 | How far back does the scheduler look to determine the Carbon Intensity threshold? |
-| forecast               | boolean                     | no        | true            | Does the policy use carbon forecasts?                                               |
-| shortForecastThreshold | double                      | no        | 0.2             | The threshold used for short tasks (< 2 hours)                                      |
-| longForecastThreshold  | double                      | no        | 0.35            | The threshold used for long tasks (> 2 hours)                                       |
-| forecastSize | integer | no | 24 | The number of hours of forecasts that is taken into account |
-| taskStopper | [TaskStopper](#taskstopper) | no | null | Policy for interrupting tasks. If not provided, tasks are never interrupted |
-
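-Below is a sketch of a TimeShift policy definition; the filters, weighters, and threshold values are illustrative, not prescribed defaults:
-
-```json
-{
-    "type": "timeshift",
-    "filters": [
-        { "type": "Compute" },
-        { "type": "Ram", "allocationRatio": 1.5 }
-    ],
-    "weighers": [
-        { "type": "Ram", "multiplier": 1.0 }
-    ],
-    "windowSize": 168,
-    "forecast": true,
-    "shortForecastThreshold": 0.2,
-    "longForecastThreshold": 0.35,
-    "forecastSize": 24
-}
-```
-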
-### TaskStopper
-
-Aside from delaying tasks, users might want to interrupt tasks that are running.
-For example, if a task is running when only high-carbon energy is available, it can be interrupted and rescheduled to a later time.
-
-A TaskStopper is defined as a JSON object in the Timeshift policy.
-A TaskStopper consists of the following components:
-
-| Variable | Type | Required? | Default | Description |
-|-----------------------|-----------------------------|-----------|---------|-----------------------------------------------------------------------------------|
-| windowSize | integer | no | 168 | How far back does the scheduler look to determine the Carbon Intensity threshold? |
-| forecast              | boolean                     | no        | true    | Does the policy use carbon forecasts?                                               |
-| forecastThreshold     | double                      | no        | 0.6     | The carbon intensity threshold used to decide when running tasks are interrupted   |
-| forecastSize | integer | no | 24 | The number of hours of forecasts that is taken into account |
-
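-For illustration, a TimeShift policy with a TaskStopper might be defined as follows (a sketch; the values shown are the defaults listed above):
-
-```json
-{
-    "type": "timeshift",
-    "taskStopper": {
-        "windowSize": 168,
-        "forecast": true,
-        "forecastThreshold": 0.6,
-        "forecastSize": 24
-    }
-}
-```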
-
-## Prefabs
-Aside from custom policies, OpenDC also provides a set of pre-defined policies that can be used.
-A prefab can be defined by setting the type to "prefab" and providing the name of the prefab.
-
-Example:
-```json
-{
- "type": "prefab",
- "policyName": "Mem"
-}
-```
-
-The following prefabs are available:
-
-| Name | Filters | Weighters | Timeshifting |
-|---------------------|----------------------------------------------|----------------------------|--------------|
-| Mem | ComputeFilter <br/>VCpuFilter<br/> RamFilter | RamWeigher(1.0) | No |
-| MemInv | ComputeFilter <br/>VCpuFilter<br/> RamFilter | RamWeigher(-1.0) | No |
-| CoreMem | ComputeFilter <br/>VCpuFilter<br/> RamFilter | CoreRamWeigher(1.0) | No |
-| CoreMemInv | ComputeFilter <br/>VCpuFilter<br/> RamFilter | CoreRamWeigher(-1.0) | No |
-| ActiveServers | ComputeFilter <br/>VCpuFilter<br/> RamFilter | InstanceCountWeigher(1.0) | No |
-| ActiveServersInv | ComputeFilter <br/>VCpuFilter<br/> RamFilter | InstanceCountWeigher(-1.0) | No |
-| ProvisionedCores | ComputeFilter <br/>VCpuFilter<br/> RamFilter | VCpuWeigher(1.0) | No |
-| ProvisionedCoresInv | ComputeFilter <br/>VCpuFilter<br/> RamFilter | VCpuWeigher(-1.0) | No |
-| Random | ComputeFilter <br/>VCpuFilter<br/> RamFilter | [] | No |
-| TimeShift | ComputeFilter <br/>VCpuFilter<br/> RamFilter | RamWeigher(1.0) | Yes |
-
-:::info Code
-All code related to prefab schedulers can be found [here](https://github.com/atlarge-research/opendc/blob/master/opendc-compute/opendc-compute-simulator/src/main/kotlin/org/opendc/compute/simulator/scheduler/ComputeSchedulers.kt)
-:::
-
diff --git a/site/docs/documentation/Input/CheckpointModel.md b/site/docs/documentation/Input/CheckpointModel.md
deleted file mode 100644
index 7c622ea0..00000000
--- a/site/docs/documentation/Input/CheckpointModel.md
+++ /dev/null
@@ -1,25 +0,0 @@
-Checkpointing is a technique to reduce the impact of machine failure.
-When using checkpointing, tasks make periodic snapshots of their state.
-If a task fails, it can be restarted from the last snapshot instead of starting from the beginning.
-
-A user can define a checkpoint model using the following parameters:
-
-| Variable | Type | Required? | Default | Description |
-|---------------------------|--------|-----------|---------|----------------------------------------------------------------------------------------------------------------------|
-| checkpointInterval | Int64 | no | 3600000 | The time between checkpoints in ms |
-| checkpointDuration | Int64 | no | 300000 | The time to create a snapshot in ms |
-| checkpointIntervalScaling | Double | no | 1.0 | The scaling of the checkpointInterval after each successful checkpoint. The default of 1.0 means no scaling happens. |
-
-### Example
-
-```json
-{
- "checkpointInterval": 3600000,
- "checkpointDuration": 300000,
- "checkpointIntervalScaling": 1.5
-}
-```
-
-In this example, a snapshot is created every hour, and the snapshot creation takes 5 minutes.
-The checkpointIntervalScaling is set to 1.5, which means that after each successful checkpoint,
-the interval between checkpoints will be increased by 50% (for example from 1 to 1.5 hours).
diff --git a/site/docs/documentation/Input/Experiment.md b/site/docs/documentation/Input/Experiment.md
deleted file mode 100644
index 8d3462a9..00000000
--- a/site/docs/documentation/Input/Experiment.md
+++ /dev/null
@@ -1,107 +0,0 @@
-When using OpenDC, an experiment defines what should be run, and how. An experiment consists of one or more scenarios,
-each defining a different simulation to run. Scenarios can differ in many ways, such as the topology that is used,
-the workload that is run, or the scheduling policies that are applied. An experiment is defined using a JSON file.
-On this page, we discuss how to properly define experiments for OpenDC.
-
-:::info Code
-All code related to reading and processing Experiment files can be found [here](https://github.com/atlarge-research/opendc/tree/master/opendc-experiments/opendc-experiments-base/src/main/kotlin/org/opendc/experiments/base/experiment)
-The code used to run experiments can be found [here](https://github.com/atlarge-research/opendc/tree/master/opendc-experiments/opendc-experiments-base/src/main/kotlin/org/opendc/experiments/base/runner)
-:::
-
-## Schema
-
-In the following section, we describe the different components of an experiment. Following is a table with all experiment components:
-
-| Variable | Type | Required? | Default | Description |
-|--------------------|----------------------------------------------------------------------|-----------|---------------|-------------------------------------------------------------------------------------------------------|
-| name | string | no | "" | Name of the scenario, used for identification and referencing. |
-| outputFolder | string | no | "output" | Directory where the simulation outputs will be stored. |
-| runs | integer | no | 1 | Number of times the same scenario should be run. Each scenario is run with a different seed. |
-| initialSeed        | integer                                                               | no        | 0             | The seed used for random number generation during a scenario. Setting a seed ensures reproducibility.  |
-| topologies | List[path/to/file] | yes | N/A | Paths to the JSON files defining the topologies. |
-| workloads | List[[Workload](/docs/documentation/Input/Workload)] | yes | N/A | Paths to the files defining the workloads executed. |
-| allocationPolicies | List[[AllocationPolicy](/docs/documentation/Input/AllocationPolicy)] | yes | N/A | Allocation policies used for resource management in the scenario. |
-| failureModels | List[[FailureModel](/docs/documentation/Input/FailureModel)] | no | List[null] | List of failure models to simulate various types of failures. |
-| maxNumFailures | List[integer] | no | [10] | The max number of times a task can fail before being terminated. |
-| checkpointModels   | List[[CheckpointModel](/docs/documentation/Input/CheckpointModel)]   | no        | List[null]    | Checkpoint models used to periodically save the state of running tasks.                                |
-| exportModels | List[[ExportModel](/docs/documentation/Input/ExportModel)] | no | List[default] | Specifications for exporting data from the simulation. |
-
-Most components of an experiment are not single values, but lists of values.
-This allows users to run multiple scenarios using a single experiment file.
-OpenDC will generate and execute all permutations of the different values.
-
-Some of the components in an experiment file are paths to files or complex objects. The format of these components
-is defined in their respective pages.
-
-## Examples
-In the following section, we discuss several examples of experiment files.
-
-### Simple
-
-The simplest experiment that can be provided to OpenDC is shown below:
-```json
-{
- "topologies": [
- {
- "pathToFile": "topologies/topology1.json"
- }
- ],
- "workloads": [
- {
- "type": "ComputeWorkload",
- "pathToFile": "traces/bitbrains-small"
- }
- ],
- "allocationPolicies": [
- {
- "type": "prefab",
- "policyName": "Mem"
- }
- ]
-}
-```
-
-This experiment creates a simulation from file topology1, located in the topologies folder, with a workload trace from the
-bitbrains-small file, and an allocation policy of type Mem. The simulation is run once (by default), and the default
-name is "".
-
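-Optional top-level fields such as `name`, `outputFolder`, `runs`, and `initialSeed` can also be set. A sketch, reusing the inputs from the simple example (the name and values are illustrative):
-
-```json
-{
-    "name": "my_experiment",
-    "outputFolder": "output",
-    "runs": 4,
-    "initialSeed": 42,
-    "topologies": [
-        {
-            "pathToFile": "topologies/topology1.json"
-        }
-    ],
-    "workloads": [
-        {
-            "type": "ComputeWorkload",
-            "pathToFile": "traces/bitbrains-small"
-        }
-    ],
-    "allocationPolicies": [
-        {
-            "type": "prefab",
-            "policyName": "Mem"
-        }
-    ]
-}
-```
-
-This runs the same scenario four times, each run with a different seed starting from the initial seed 42, and writes the results to the `output` folder.
-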
-### Complex
-Following is an example of a more complex experiment:
-```json
-{
- "topologies": [
- {
- "pathToFile": "topologies/topology1.json"
- },
- {
- "pathToFile": "topologies/topology2.json"
- },
- {
- "pathToFile": "topologies/topology3.json"
- }
- ],
- "workloads": [
- {
- "pathToFile": "traces/bitbrains-small",
- "type": "ComputeWorkload"
- },
- {
- "pathToFile": "traces/bitbrains-large",
- "type": "ComputeWorkload"
- }
- ],
- "allocationPolicies": [
- {
- "type": "prefab",
- "policyName": "Mem"
- },
- {
- "type": "prefab",
-      "policyName": "MemInv"
- }
- ]
-}
-```
-
-This experiment runs a total of 12 scenarios: 3 topologies (3 datacenter configurations), each simulated with
-2 distinct workloads, each scheduled with 2 allocation policies (Mem and MemInv), giving 3 × 2 × 2 = 12 permutations.
diff --git a/site/docs/documentation/Input/ExportModel.md b/site/docs/documentation/Input/ExportModel.md
deleted file mode 100644
index 12e7eba2..00000000
--- a/site/docs/documentation/Input/ExportModel.md
+++ /dev/null
@@ -1,50 +0,0 @@
-During simulation, OpenDC exports data to files (see [Output](/docs/documentation/Output.md)).
-The user can define what and how data is exported using the `exportModels` parameter in the experiment file.
-
-## ExportModel
-
-
-
-| Variable | Type | Required? | Default | Description |
-|---------------------|-----------------------------------------|-----------|-----------|---------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| exportInterval      | Int64                                        | no        | 300       | The duration between two exports in seconds                                                                                                                     |
-| printFrequency      | Int64                                        | no        | 24        | How often OpenDC prints an update during simulation.                                                                                                            |
-| computeExportConfig | [ComputeExportConfig](#computeexportconfig) | no        | Default   | The features that should be exported during the simulation                                                                                                      |
-| filesToExport       | List[string]                                 | no        | all files | List of the files that should be exported during simulation. The elements should be picked from the set ("host", "task", "powerSource", "battery", "service")   |
-
-
-
-### ComputeExportConfig
-The ComputeExportConfig defines which features should be exported during the simulation.
-Several features will always be exported, regardless of the configuration.
-When not provided, all features are exported.
-
-
-| Variable | Type | Required? | Base | Default | Description |
-|--------------------------|--------------|-----------|------------------------------------------------------------------------|--------------|-----------------------------------------------------------------------|
-| hostExportColumns | List[String] | no | name <br/> cluster_name <br/> timestamp <br/> timestamp_absolute <br/> | All features | The features that should be exported to the host output file. |
-| taskExportColumns | List[String] | no | task_id <br/> task_name <br/> timestamp <br/> timestamp_absolute <br/> | All features | The features that should be exported to the task output file. |
-| powerSourceExportColumns | List[String] | no | name <br/> cluster_name <br/> timestamp <br/> timestamp_absolute <br/> | All features | The features that should be exported to the power source output file. |
-| batteryExportColumns | List[String] | no | name <br/> cluster_name <br/> timestamp <br/> timestamp_absolute <br/> | All features | The features that should be exported to the battery output file. |
-| serviceExportColumns | List[String] | no | timestamp <br/> timestamp_absolute <br/> | All features | The features that should be exported to the service output file. |
-
-### Example
-
-```json
-{
- "exportInterval": 3600,
- "printFrequency": 168,
- "filesToExport": ["host", "task", "service"],
- "computeExportConfig": {
- "hostExportColumns": ["power_draw", "energy_usage", "cpu_usage", "cpu_utilization"],
- "taskExportColumns": ["submission_time", "schedule_time", "finish_time", "task_state"],
- "serviceExportColumns": ["tasks_total", "tasks_pending", "tasks_active", "tasks_completed", "tasks_terminated", "hosts_up"]
- }
-}
-```
-In this example:
-- The simulation will export data every hour (3600 seconds).
-- The simulation will print an update every 168 seconds.
-- Only the host, task, and service files will be exported.
-- Only a selection of features is exported for each file.
-
diff --git a/site/docs/documentation/Input/FailureModel.md b/site/docs/documentation/Input/FailureModel.md
deleted file mode 100644
index 714d2157..00000000
--- a/site/docs/documentation/Input/FailureModel.md
+++ /dev/null
@@ -1,224 +0,0 @@
-A failure model defines when hosts fail during a simulation, how long each failure lasts, and how many hosts are affected.
-
-OpenDC provides three types of failure models: [Trace-based](#trace-based-failure-models), [Sample-based](#sample-based-failure-models),
-and [Prefab](#prefab-failure-models).
-
-All failure models share a similar structure consisting of three components:
-
-1. The _interval_ determines the time between two failures.
-2. The _duration_ determines how long a single failure takes.
-3. The _intensity_ determines how many hosts are affected by a failure.
-
-:::info Code
-The code that defines the Failure Models can be found [here](https://github.com/atlarge-research/opendc/blob/master/opendc-experiments/opendc-experiments-base/src/main/kotlin/org/opendc/experiments/base/experiment/specs/FailureModelSpec.kt).
-:::
-
-## Trace-based failure models
-Trace-based failure models are defined by a parquet file. This file defines the interval, duration, and intensity of
-several failures. The failures defined in the file are looped. A valid failure model file follows the format defined below:
-
-| Metric | Datatype | Unit | Summary |
-|-------------------|------------|---------------|--------------------------------------------|
-| failure_interval  | int64      | milliseconds  | The duration since the last failure         |
-| failure_duration  | int64      | milliseconds  | The duration of the failure                 |
-| failure_intensity | float64    | ratio         | The ratio of hosts affected by the failure  |
-
-:::info Code
-The code implementation of Trace Based Failure Models can be found [here](https://github.com/atlarge-research/opendc/blob/master/opendc-compute/opendc-compute-failure/src/main/kotlin/org/opendc/compute/failure/models/TraceBasedFailureModel.kt)
-:::
-
-### Example
-A trace-based failure model is specified by setting "type" to "trace-based".
-Afterwards, the user can define the path to the failure trace using "pathToFile":
-```json
-{
- "type": "trace-based",
- "pathToFile": "path/to/your/failure_trace.parquet"
-}
-```
-
-The "repeat" value can be set to false if the user does not want the failures to loop:
-```json
-{
- "type": "trace-based",
- "pathToFile": "path/to/your/failure_trace.parquet",
-  "repeat": false
-}
-```
-
-## Sample-based failure models
-Sample based failure models sample from three distributions to get the _interval_, _duration_, and _intensity_ of
-each failure. Sample-based failure models are affected by randomness and will thus create different results based
-on the provided seed.
-
-:::info Code
-The code implementation for the Sample based failure models can be found [here](https://github.com/atlarge-research/opendc/blob/master/opendc-compute/opendc-compute-failure/src/main/kotlin/org/opendc/compute/failure/models/SampleBasedFailureModel.kt)
-:::
-
-### Distributions
-OpenDC supports eight different distributions based on Apache Commons Math's [RealDistribution](https://commons.apache.org/proper/commons-math/javadocs/api-3.6.1/org/apache/commons/math3/distribution/RealDistribution.html).
-Because the different distributions require different variables, they have to be specified with a specific "type".
-Next, we show an example of a correct specification of all available distributions in OpenDC.
-
-#### [ConstantRealDistribution](https://commons.apache.org/proper/commons-math/javadocs/api-3.6.1/org/apache/commons/math3/distribution/ConstantRealDistribution.html)
-
-```json
-{
- "type": "constant",
- "value": 10.0
-}
-```
-
-#### [ExponentialDistribution](https://commons.apache.org/proper/commons-math/javadocs/api-3.6.1/org/apache/commons/math3/distribution/ExponentialDistribution.html)
-```json
-{
- "type": "exponential",
- "mean": 1.5
-}
-```
-
-#### [GammaDistribution](https://commons.apache.org/proper/commons-math/javadocs/api-3.6.1/org/apache/commons/math3/distribution/GammaDistribution.html)
-```json
-{
- "type": "gamma",
- "shape": 1.0,
- "scale": 0.5
-}
-```
-
-#### [LogNormalDistribution](https://commons.apache.org/proper/commons-math/javadocs/api-3.6.1/org/apache/commons/math3/distribution/LogNormalDistribution.html)
-```json
-{
- "type": "log-normal",
- "scale": 1.0,
- "shape": 0.5
-}
-```
-
-#### [NormalDistribution](https://commons.apache.org/proper/commons-math/javadocs/api-3.6.1/org/apache/commons/math3/distribution/NormalDistribution.html)
-```json
-{
- "type": "normal",
- "mean": 1.0,
- "std": 0.5
-}
-```
-
-#### [ParetoDistribution](https://commons.apache.org/proper/commons-math/javadocs/api-3.6.1/org/apache/commons/math3/distribution/ParetoDistribution.html)
-```json
-{
- "type": "pareto",
- "scale": 1.0,
- "shape": 0.6
-}
-```
-
-#### [UniformRealDistribution](https://commons.apache.org/proper/commons-math/javadocs/api-3.6.1/org/apache/commons/math3/distribution/UniformRealDistribution.html)
-```json
-{
-  "type": "uniform",
- "lower": 5.0,
- "upper": 10.0
-}
-```
-
-#### [WeibullDistribution](https://commons.apache.org/proper/commons-math/javadocs/api-3.6.1/org/apache/commons/math3/distribution/WeibullDistribution.html)
-```json
-{
-  "type": "weibull",
- "alpha": 0.5,
- "beta": 1.2
-}
-```
-
-### Example
-A sample-based failure model is defined using three distributions for the _interval_, _duration_, and _intensity_.
-Distributions can be mixed however the user wants. Note that values for the _interval_ and _duration_ are clamped to be positive.
-The _intensity_ is clamped to the range [0.0, 1.0).
-To specify a sample-based failure model, the type needs to be set to "custom".
-
-Example:
-```json
-{
- "type": "custom",
- "iatSampler": {
- "type": "exponential",
- "mean": 1.5
- },
- "durationSampler": {
-    "type": "weibull",
- "alpha": 0.5,
- "beta": 1.2
- },
- "nohSampler": {
- "type": "constant",
- "value": 0.5
- }
-}
-```
-
-## Prefab failure models
-The final type of failure models is the prefab models. These are models that are predefined in OpenDC and are based on
-research. Currently, OpenDC has 9 prefab models based on [The Failure Trace Archive: Enabling the comparison of failure measurements and models of distributed systems](https://www-sciencedirect-com.vu-nl.idm.oclc.org/science/article/pii/S0743731513000634).
-The figure below shows the values used to define the failure models.
-![failureModels.png](../../../static/img/failureModels.png)
-
-Each failure model is defined four times, once for each of the four distributions.
-The final list of available prefabs is thus:
-
- G5k06Exp
- G5k06Wbl
- G5k06LogN
- G5k06Gam
- Lanl05Exp
- Lanl05Wbl
- Lanl05LogN
- Lanl05Gam
- Ldns04Exp
- Ldns04Wbl
- Ldns04LogN
- Ldns04Gam
- Microsoft99Exp
- Microsoft99Wbl
- Microsoft99LogN
- Microsoft99Gam
- Nd07cpuExp
- Nd07cpuWbl
- Nd07cpuLogN
- Nd07cpuGam
- Overnet03Exp
- Overnet03Wbl
- Overnet03LogN
- Overnet03Gam
- Pl05Exp
- Pl05Wbl
- Pl05LogN
- Pl05Gam
- Skype06Exp
- Skype06Wbl
- Skype06LogN
- Skype06Gam
- Websites02Exp
- Websites02Wbl
- Websites02LogN
- Websites02Gam
-
-:::info Code
-The different Prefab models can be found [here](https://github.com/atlarge-research/opendc/tree/master/opendc-compute/opendc-compute-failure/src/main/kotlin/org/opendc/compute/failure/prefab)
-:::
-
-### Example
-To specify a prefab model, the "type" needs to be set to "prefab".
-Afterwards, the prefab can be selected with "prefabName":
-
-```json
-{
- "type": "prefab",
- "prefabName": "G5k06Exp"
-}
-```
-
diff --git a/site/docs/documentation/Input/Topology/Battery.md b/site/docs/documentation/Input/Topology/Battery.md
deleted file mode 100644
index 70492694..00000000
--- a/site/docs/documentation/Input/Topology/Battery.md
+++ /dev/null
@@ -1,37 +0,0 @@
-Batteries can be used to store energy for later use.
-In previous work, we have used batteries to store energy from the grid when the carbon intensity is low,
-and use this energy when the carbon intensity is high.
-
-Batteries are defined using the following parameters:
-
-| variable | type | Unit | required? | default | description |
-|------------------|---------------------------|-------|-----------|---------|-----------------------------------------------------------------------------------|
-| name | string | N/A | no | Battery | The name of the battery. This is only important for debugging and post-processing |
-| capacity | Double | kWh | yes | N/A | The total amount of energy that the battery can hold. |
-| chargingSpeed | Double | W | yes | N/A | Charging speed of the battery. |
-| initialCharge | Double | kWh | no | 0.0 | The initial charge of the battery. If not given, the battery starts empty. |
-| batteryPolicy | [Policy](#battery-policy) | N/A | yes | N/A | The policy which decides when to charge and discharge. |
-| embodiedCarbon | Double | gram | no | 0.0 | The embodied carbon emitted while creating this battery. |
-| expectedLifetime | Double                    | Years | yes       | N/A     | The expected lifetime of the battery.                                               |
-
-## Battery Policy
-To determine when to charge and discharge the battery, a policy is required.
-Currently, all policies for batteries are based on the carbon intensity of the grid.
-
-The best-performing policy is called "runningMeanPlus" and is based on the running mean of the carbon intensity.
-It can be defined with the following JSON:
-
-```json
-{
- "type": "runningMeanPlus",
- "startingThreshold": 123.2,
- "windowSize": 168
-}
-```
-
-Here, `startingThreshold` is the initial carbon threshold used, and
-`windowSize` is the size of the window used to calculate the running mean.
-
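-For illustration, a complete battery definition might look as follows (a sketch; the name, capacity, charging speed, and lifetime values are made up):
-
-```json
-{
-    "name": "B01",
-    "capacity": 10.0,
-    "chargingSpeed": 5000.0,
-    "initialCharge": 0.0,
-    "embodiedCarbon": 100000.0,
-    "expectedLifetime": 10.0,
-    "batteryPolicy": {
-        "type": "runningMeanPlus",
-        "startingThreshold": 123.2,
-        "windowSize": 168
-    }
-}
-```
-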
-:::info Alert
-This page will be extended with more text and policies in the future.
-:::
diff --git a/site/docs/documentation/Input/Topology/Host.md b/site/docs/documentation/Input/Topology/Host.md
deleted file mode 100644
index 7b5b8394..00000000
--- a/site/docs/documentation/Input/Topology/Host.md
+++ /dev/null
@@ -1,55 +0,0 @@
-A host is a machine that can execute tasks. A host consists of the following components:
-
-| variable | type | required? | default | description |
-|-------------|:-------------------------------------------------------------|:----------|---------|--------------------------------------------------------------------------------|
-| name | string | no | Host | The name of the host. This is only important for debugging and post-processing |
-| count       | integer                                                       | no        | 1       | The number of hosts of this type in the cluster                                  |
-| cpuModel | [CPU](#cpu) | yes | N/A | The CPUs in the host |
-| memory | [Memory](#memory) | yes | N/A | The memory used by the host |
-| powerModel  | [Power Model](/docs/documentation/Input/Topology/PowerModel) | no        | Default | The power model used to determine the power draw of the host                    |
-
-## CPU
-
-| variable | type | Unit | required? | default | description |
-|-----------|---------|-------|-----------|---------|--------------------------------------------------|
-| modelName | string | N/A | no | unknown | The name of the CPU. |
-| vendor | string | N/A | no | unknown | The vendor of the CPU |
-| arch | string | N/A | no | unknown | the micro-architecture of the CPU |
-| count | integer | N/A | no | 1 | The number of CPUs of this type used by the host |
-| coreCount | integer | count | yes | N/A | The number of cores in the CPU |
-| coreSpeed | Double | Mhz | yes | N/A | The speed of each core in Mhz |
-
-## Memory
-
-| variable | type | Unit | required? | default | description |
-|-------------|---------|------|-----------|---------|--------------------------------------------------------------------------|
-| modelName   | string  | N/A  | no        | unknown | The name of the memory unit.                                               |
-| vendor      | string  | N/A  | no        | unknown | The vendor of the memory unit                                              |
-| arch        | string  | N/A  | no        | unknown | the micro-architecture of the memory unit                                  |
-| memorySize  | integer | Byte | yes       | N/A     | The size of the memory unit                                                |
-| memorySpeed | Double  | Mhz  | no        | -1      | The speed of the memory. PLACEHOLDER: this currently does nothing.         |
-
-## Example
-
-```json
-{
- "name": "H01",
- "cpu": {
- "coreCount": 16,
- "coreSpeed": 2100
- },
- "memory": {
- "memorySize": 100000
- },
- "powerModel": {
- "modelType": "sqrt",
- "idlePower": 32.0,
- "maxPower": 180.0
- },
- "count": 100
-}
-```
-
-This example creates 100 hosts with 16 cores and 2.1 Ghz CPU speed, and 100 GB of memory.
-The power model used is a square root model with an idle power of 32 W and a max power of 180 W.
-For more information on the power model, see [Power Model](/docs/documentation/Input/Topology/PowerModel).
diff --git a/site/docs/documentation/Input/Topology/PowerModel.md b/site/docs/documentation/Input/Topology/PowerModel.md
deleted file mode 100644
index 06f4a4da..00000000
--- a/site/docs/documentation/Input/Topology/PowerModel.md
+++ /dev/null
@@ -1,31 +0,0 @@
-OpenDC uses power models to determine the power draw based on the utilization of a host.
-All models in OpenDC interpolate between the idle and max power draw of the host.
-OpenDC currently supports the following power models:
-1. **Constant**: The power draw is constant and does not depend on the utilization of the host.
-2. **Sqrt**: The power draw interpolates between idle and max using a square root function.
-3. **Linear**: The power draw interpolates between idle and max using a linear function.
-4. **Square**: The power draw interpolates between idle and max using a square function.
-5. **Cubic**: The power draw interpolates between idle and max using a cubic function.
-
-The power model is defined using the following parameters:
-
-| variable | type | Unit | required? | default | description |
-|-----------|--------|------|-----------|---------|--------------------------------------------------------------------|
-| modelType | string | N/A | yes | N/A | The type of model used to determine power draw |
-| power     | double | Watt | no        | 400     | The power draw of a host when using the constant power draw model.  |
-| idlePower | double | Watt | yes       | N/A     | The power draw of a host when idle in Watt.                         |
-| maxPower  | double | Watt | yes       | N/A     | The power draw of a host when using max capacity in Watt.           |
-
-
-## Example
-
-```json
-{
- "modelType": "sqrt",
- "idlePower": 32.0,
- "maxPower": 180.0
-}
-```
-
-This creates a power model that uses a square root function to determine the power draw of a host.
-The model uses an idle and max power of 32 W and 180 W respectively.
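-
-For illustration, a constant power model could be defined as follows (a sketch; 400 W is the default value from the table above):
-
-```json
-{
-    "modelType": "constant",
-    "power": 400.0
-}
-```
-
-This host would draw 400 W regardless of its utilization.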
diff --git a/site/docs/documentation/Input/Topology/PowerSource.md b/site/docs/documentation/Input/Topology/PowerSource.md
deleted file mode 100644
index 993083dd..00000000
--- a/site/docs/documentation/Input/Topology/PowerSource.md
+++ /dev/null
@@ -1,20 +0,0 @@
-Each cluster has a power source that provides power to the hosts in the cluster.
-A user can connect a power source to a carbon trace to determine the carbon emissions during a workload.
-
-The power source consists of the following components:
-
-| variable | type | Unit | required? | default | description |
-|-----------------|--------------|------|-----------|----------------|-----------------------------------------------------------------------------------|
-| name            | string       | N/A  | no        | PowerSource    | The name of the power source. This is only important for debugging and post-processing |
-| maxPower        | integer      | Watt | no        | Long.Max_Value | The total power that the power source can provide in Watt.                         |
-| carbonTracePath | path/to/file | N/A  | no        | null           | Path to a carbon intensity trace used to determine carbon emissions.               |
-
-## Example
-
-```json
-{
- "carbonTracePath": "carbon_traces/AT_2021-2024.parquet"
-}
-```
-
-This example creates a power source with unlimited power capacity that uses the carbon trace from the file `carbon_traces/AT_2021-2024.parquet`.
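-
-A power source with an explicit capacity can be defined by also setting `name` and `maxPower` (a sketch; the name and wattage are illustrative):
-
-```json
-{
-    "name": "P01",
-    "maxPower": 100000,
-    "carbonTracePath": "carbon_traces/AT_2021-2024.parquet"
-}
-```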
diff --git a/site/docs/documentation/Input/Topology/Topology.md b/site/docs/documentation/Input/Topology/Topology.md
deleted file mode 100644
index afc94e08..00000000
--- a/site/docs/documentation/Input/Topology/Topology.md
+++ /dev/null
@@ -1,183 +0,0 @@
-The topology of a datacenter defines all available hardware. Topologies are defined using a JSON file.
-A topology consists of one or more clusters. Each cluster consists of at least one host on which jobs can be executed.
-Each host consists of one or more CPUs, a memory unit, and a power model.
-
-:::info Code
-The code related to reading and processing topology files can be found [here](https://github.com/atlarge-research/opendc/tree/master/opendc-compute/opendc-compute-topology/src/main/kotlin/org/opendc/compute/topology)
-:::
-
-In the following section, we describe the different components of a topology file.
-
-### Cluster
-
-| variable | type | required? | default | description |
-|-------------|---------------------------------------------------------------|-----------|---------|-----------------------------------------------------------------------------------|
-| name | string | no | Cluster | The name of the cluster. This is only important for debugging and post-processing |
-| count       | integer                                                        | no        | 1       | The number of clusters of this type in the data center                              |
-| hosts | List[[Host](/docs/documentation/Input/Topology/Host)] | yes | N/A | A list of the hosts in a cluster. |
-| powerSource | [PowerSource](/docs/documentation/Input/Topology/PowerSource) | no | N/A | The power source used by all hosts connected to this cluster. |
-| battery | [Battery](/docs/documentation/Input/Topology/Battery) | no | null | The battery used by a cluster to store energy. When null, no batteries are used. |
-
-Hosts, power sources, and batteries are all defined as objects; see their respective pages for more information.
-
-## Examples
-
-In the following section, we discuss several examples of topology files.
-
-### Simple
-
-The simplest data center that can be provided to OpenDC is shown below:
-
-```json
-{
- "clusters": [
- {
- "hosts": [
- {
- "cpu":
- {
- "coreCount": 16,
- "coreSpeed": 1000
- },
- "memory": {
- "memorySize": 100000
- }
- }
- ],
- "powerSource": {
- "carbonTracePath": "carbon_traces/AT_2021-2024.parquet"
- }
- }
- ]
-}
-```
-
-This creates a data center with a single cluster containing a single host. This host consists of a single 16-core CPU
-with a speed of 1 Ghz, and 100 MiB of memory.
-
-### Count
-
-Duplicating clusters, hosts, or CPUs is easy using the "count" keyword:
-
-```json
-{
- "clusters": [
- {
- "count": 2,
- "hosts": [
- {
- "count": 5,
- "cpu":
- {
- "coreCount": 16,
- "coreSpeed": 1000,
- "count": 10
- },
- "memory":
- {
- "memorySize": 100000
- }
- }
- ],
- "powerSource": {
- "carbonTracePath": "carbon_traces/AT_2021-2024.parquet"
- }
- }
- ]
-}
-```
-
-This topology creates a datacenter consisting of 2 clusters, each containing 5 hosts. Each host contains 10 16-core
-CPUs.
-Using "count" saves a lot of copying.
-
-### Complex
-
-Following is an example of a more complex topology:
-
-```json
-{
- "clusters": [
- {
- "name": "C01",
- "count": 2,
- "hosts": [
- {
- "name": "H01",
- "count": 2,
- "cpus": [
- {
- "coreCount": 16,
- "coreSpeed": 1000
- }
- ],
- "memory": {
- "memorySize": 1000000
- },
- "powerModel": {
- "modelType": "linear",
- "idlePower": 200.0,
- "maxPower": 400.0
- }
- },
- {
- "name": "H02",
- "count": 2,
- "cpus": [
- {
- "coreCount": 8,
- "coreSpeed": 3000
- }
- ],
- "memory": {
- "memorySize": 100000
- },
- "powerModel": {
- "modelType": "square",
- "idlePower": 300.0,
- "maxPower": 500.0
- }
- }
- ]
- }
- ]
-}
-```
-
-This topology defines two types of hosts with different coreCount and coreSpeed values.
-Both types of hosts are created twice.
-
-
-### With Units of Measure
-
-Aside from using numbers to indicate values, it is also possible to define values using strings. This allows the user to specify the unit of an input parameter.
-```json
-{
- "clusters": [
- {
- "count": 2,
- "hosts" :
- [
- {
- "name": "H01",
- "cpuModel":
- {
- "coreCount": 8,
- "coreSpeed": "3.2 Ghz"
- },
- "memory": {
- "memorySize": "128e3 MiB",
- "memorySpeed": "1 Mhz"
- },
- "powerModel": {
- "modelType": "linear",
- "power": "400 Watts",
- "maxPower": "1 KW",
- "idlePower": "0.4 W"
- }
- }
- ]
- }
- ]
-}
-```
diff --git a/site/docs/documentation/Input/Workload.md b/site/docs/documentation/Input/Workload.md
deleted file mode 100644
index 73f39e60..00000000
--- a/site/docs/documentation/Input/Workload.md
+++ /dev/null
@@ -1,31 +0,0 @@
-Workloads define which tasks are run in the simulation, when they are submitted, and their computational requirements.
-Workloads are defined using two files:
-
-- **[Tasks](#tasks)**: The Tasks file contains the metadata of the tasks
-- **[Fragments](#fragments)**: The Fragments file contains the computational demand of each task over time
-
-Both files are provided in the Parquet format.
-
-#### Tasks
-The Tasks file provides an overview of the tasks:
-
-| Metric | Required? | Datatype | Unit | Summary |
-|-----------------|-----------|----------|------------------------------|--------------------------------------------------------|
-| id              | Yes       | string   |                              | The id of the task                                       |
-| submission_time | Yes       | int64    | datetime                     | The submission time of the task                          |
-| nature          | No        | string   | [deferrable, non-deferrable] | Defines if a task can be delayed                         |
-| deadline        | No        | string   | datetime                     | The latest the scheduling of a task can be delayed to.   |
-| duration        | Yes       | int64    | milliseconds                 | The total duration of the task                           |
-| cpu_count       | Yes       | int32    | count                        | The number of CPUs required to run this task             |
-| cpu_capacity    | Yes       | float64  | MHz                          | The amount of CPU capacity required to run this task     |
-| mem_capacity    | Yes       | int64    | MB                           | The amount of memory required to run this task           |
-
-#### Fragments
-The Fragments file provides information about the computational demand of each task over time:
-
-| Metric | Required? | Datatype | Unit | Summary |
-|-----------|-----------|----------|---------------|---------------------------------------------|
-| id | Yes | string | | The id of the task |
-| duration  | Yes       | int64    | milliseconds  | The duration since the last sample           |
-| cpu_count | Yes | int32 | count | The number of cpus required |
-| cpu_usage | Yes | float64 | MHz | The amount of computational power required. |
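-
-For illustration, a task with id `1` that runs for 10 minutes could be described by two fragments (hypothetical values): 5 minutes at 2000 MHz on 1 CPU, followed by 5 minutes at 500 MHz on 1 CPU:
-
-| id | duration | cpu_count | cpu_usage |
-|----|----------|-----------|-----------|
-| 1  | 300000   | 1         | 2000.0    |
-| 1  | 300000   | 1         | 500.0     |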
diff --git a/site/docs/documentation/Input/_category_.json b/site/docs/documentation/Input/_category_.json
deleted file mode 100644
index e433770c..00000000
--- a/site/docs/documentation/Input/_category_.json
+++ /dev/null
@@ -1,7 +0,0 @@
-{
- "label": "Input",
- "position": 1,
- "link": {
- "type": "generated-index"
- }
-}
diff --git a/site/docs/documentation/M3SA/M3SA.md b/site/docs/documentation/M3SA/M3SA.md
deleted file mode 100644
index 6c97d207..00000000
--- a/site/docs/documentation/M3SA/M3SA.md
+++ /dev/null
@@ -1,92 +0,0 @@
-M3SA is set up using a JSON file. The Multi-Model is a top layer applied on top of the simulator,
-capable of leveraging the predictions of multiple models in a single tool. The Meta-Model is a model generated from the
-Multi-Model, and it predicts using the predictions of the individual models.
-
-The Multi-Model's properties can be set using a JSON file. The JSON file must be linked to the scenario file and is
-required to follow the structure below.
-
-## Schema
-
-The schema for the M3SA setup file is provided in [M3SASchema](M3SASchema.md).
-In the following section, we describe the different components of the schema.
-
-### General Structure
-
-| Variable | Type | Required? | Default | Possible Answers | Description |
-|------------------------|---------|-----------|---------------|-------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| multimodel | boolean | no | true | true, false | Whether or not to build a Multi-Model. If set to false, a Meta-Model will not be computed either. |
-| metamodel | boolean | no | true | true, false | Whether to build a Meta-Model. |
-| metric | string | yes | N/A | N/A | What metric to be analyzed from the computed files. |
-| current_unit | string | no | "" | any string (e.g., "CO2", "Wh") | The international system unit of the metric to be analyzed, without prefixes. e.g., "W" for Watt is ok, "kW" is not. |
-| unit_scaling_magnitude | integer | no        | 1             | -9, -6, -3, 1, 3, 6, 9                                 | The scaling factor to be applied to the metric (10^-9, 10^-6, 10^-3, 1, 10^3, 10^6, 10^9). For no scaling, input 1.                                                                             |
-| window_size | integer | no | 1 | any positive, non-zero, integer | The size of the window, used for aggregating the chunks. |
-| window_function | string | no | "mean" | "mean", "median" | The function used by the window for aggregating the chunks (e.g., for "mean", the window will compute the mean of the samples). |
-| meta_function | string | no | "mean" | "mean", "median" | The function used by the Meta-Model to be generated. For "mean", the Meta-Model takes the mean of the individual models, at the granularity established by the window-size. |
-| samples_per_minute     | double  | no        | N/A           | any positive, non-zero, double                         | The number of samples per minute in the prediction data (simulator export rate). e.g., "0.2" means 1 sample every 5 minutes, "20" means 20 samples per minute, or 1 sample every 3 seconds.     |
-| seed | integer | no | 0 | any integer >= 0 | The seed of the simulation. This must correspond to the seed from the output folder (from seed=x). |
-| plot_type | string | no | "time_series" | "time_series", "cumulative", "cumulative_time_series" | The type of the plot, generated by the Multi-Model and Meta-Model. |
-| plot_title | string | no | "" | any string | The title of the plot. |
-| x_ticks_count | integer | no | None | any integer, larger than 0 | The number of ticks on x-axis. |
-| y_ticks_count | integer | no | None | any integer, larger than 0 | The number of ticks on y-axis. |
-| x_label | string | no | "Time" | any string | The label for the x-axis of the plot. |
-| y_label | string | no | "Metric Unit" | any string | The label for the y-axis of the plot. |
-| y_min | double | no | None | any positive, non-zero, double | The minimum value for the vertical axis of the plot. |
-| y_max | double | no | None | any positive, non-zero, double | The maximum value for the vertical axis of the plot. |
-| x_min | double | no | None | any positive, non-zero, double | The minimum value for the horizontal axis of the plot. |
-| x_max | double | no | None | any positive, non-zero, double | The maximum value for the horizontal axis of the plot. |
-
-## Examples
-
-In the following section, we discuss several examples of M3SA setup files. Any setup file can be verified
-using the JSON schema defined in [schema](M3SASchema.md).
-
-### Simple
-
-The simplest M3SA setup that can be provided to OpenDC is shown below:
-
-```json
-{
- "metric": "power_draw"
-}
-```
-
-This configuration creates a Multi-Model and a Meta-Model for the power_draw metric. All other parameters take their
-default values, reducing the complexity of the setup.
-
-### Complex
-
-A more complex M3SA setup, where the user has more control over the generated output, is shown below:
-
-```json
-{
- "multimodel": true,
- "metamodel": false,
- "metric": "carbon_emission",
- "window_size": 10,
- "window_function": "median",
-  "meta_function": "mean",
- "samples_per_minute": 0.2,
-  "unit_scaling_magnitude": 3,
- "current_unit": "gCO2",
- "seed": 0,
- "plot_type": "cumulative_time_series",
- "plot_title": "Carbon Emission Prediction",
- "x_label": "Time [days]",
- "y_label": "Carbon Emission [gCO2/kWh]",
- "x_min": 0,
- "x_max": 200,
- "y_min": 500,
- "y_max": 1000,
- "x_ticks_count": 3,
- "y_ticks_count": 3
-}
-```
-
-This configuration creates a Multi-Model that predicts the carbon_emission metric; because "metamodel" is set to false, no
-Meta-Model is built. The window size is 10, and the aggregation function (for the window) is median. The data has been
-exported at a rate of 0.2 samples per minute (i.e., a sample every 5 minutes). The plot type is cumulative_time_series,
-which starts from a y-axis value of 500 and goes up to 1000. Therefore, the Multi-Model will show only
-the values greater than y_min (500) and smaller than y_max (1000). Also, the x-axis will start from 0 and go up to 200,
-with 3 ticks on the x-axis and 3 ticks on the y-axis.
diff --git a/site/docs/documentation/M3SA/M3SASchema.md b/site/docs/documentation/M3SA/M3SASchema.md
deleted file mode 100644
index 5a3503ca..00000000
--- a/site/docs/documentation/M3SA/M3SASchema.md
+++ /dev/null
@@ -1,115 +0,0 @@
-Below is the schema for the MultiMetaModel JSON file. This schema can be used to validate a MultiMetaModel setup file.
-A setup file can be validated using a JSON schema validator, such as https://www.jsonschemavalidator.net/.
-
-```json
-{
- "$schema": "http://json-schema.org/draft-07/schema#",
- "type": "object",
- "properties": {
- "multimodel": {
- "type": "boolean",
- "default": true,
- "description": "Whether or not to build a Multi-Model. If set to false, a Meta-Model will not be computed either."
- },
- "metamodel": {
- "type": "boolean",
- "default": true,
- "description": "Whether to build a Meta-Model."
- },
- "metric": {
- "type": "string",
- "description": "What metric to be analyzed from the computed files."
- },
- "current_unit": {
- "type": "string",
- "default": "",
- "description": "The international system unit of the metric to be analyzed, without prefixes. e.g., 'W' for Watt is ok, 'kW' is not."
- },
- "unit_scaling_magnitude": {
- "type": "integer",
-      "default": 1,
-      "enum": [-9, -6, -3, 1, 3, 6, 9],
-      "description": "The scaling factor to be applied to the metric (10^-9, 10^-6, 10^-3, 1, 10^3, 10^6, 10^9). For no scaling, input 1."
- },
- "seed": {
- "type": "integer",
- "default": 0,
- "minimum": 0,
- "description": "The seed of the simulation. This must correspond to the seed from the output folder (from seed=x)."
- },
- "window_size": {
- "type": "integer",
- "default": 1,
- "minimum": 1,
- "description": "The size of the window, used for aggregating the chunks."
- },
- "window_function": {
- "type": "string",
- "default": "mean",
- "enum": ["mean", "median"],
- "description": "The function used by the window for aggregating the chunks (e.g., for 'mean', the window will compute the mean of the samples)."
- },
- "meta_function": {
- "type": "string",
- "default": "mean",
- "enum": ["mean", "median"],
- "description": "The function used by the Meta-Model to be generated. For 'mean', the Meta-Model takes the mean of the individual models, at the granularity established by the window-size."
- },
- "samples_per_minute": {
- "type": "number",
- "minimum": 0.0001,
- "description": "The number of samples per minute, in the prediction data (simulator export rate). e.g., '0.2' means 1 sample every 5 minutes, '20' means 20 samples per minute, or 1 sample every 3 seconds."
- },
- "plot_type": {
- "type": "string",
- "default": "time_series",
- "enum": ["time_series", "cumulative", "cumulative_time_series"],
- "description": "The type of the plot, generated by the Multi-Model and Meta-Model."
- },
- "plot_title": {
- "type": "string",
- "default": "",
- "description": "The title of the plot."
- },
- "x_label": {
- "type": "string",
- "default": "Time",
- "description": "The label for the x-axis of the plot."
- },
- "y_label": {
- "type": "string",
- "default": "Metric Unit",
- "description": "The label for the y-axis of the plot."
- },
- "y_min": {
- "type": "number",
- "description": "The minimum value for the vertical axis of the plot."
- },
- "y_max": {
- "type": "number",
- "description": "The maximum value for the vertical axis of the plot."
- },
- "x_min": {
- "type": "number",
- "description": "The minimum value for the horizontal axis of the plot."
- },
- "x_max": {
- "type": "number",
- "description": "The maximum value for the horizontal axis of the plot."
- },
- "x_ticks_count": {
- "type": "integer",
- "minimum": 1,
- "description": "The number of ticks on x-axis."
- },
- "y_ticks_count": {
- "type": "integer",
- "minimum": 1,
- "description": "The number of ticks on y-axis."
- }
- },
- "required": [
- "metric"
- ]
-}
-```
diff --git a/site/docs/documentation/Output.md b/site/docs/documentation/Output.md
deleted file mode 100644
index 584b0702..00000000
--- a/site/docs/documentation/Output.md
+++ /dev/null
@@ -1,114 +0,0 @@
-
-Running OpenDC results in five output files:
-1. [Task](#task) contains all metrics related to the tasks being executed.
-2. [Host](#host) contains all metrics related to the hosts on which jobs can be executed.
-3. [Power Source](#power-source) contains all metrics related to the power sources that power the hosts.
-4. [Battery](#battery) contains all metrics related to the batteries that power the hosts.
-5. [Service](#service) contains metrics describing the overall performance.
-
-Users can define which files and features are to be included in the output in the experiment file (see [ExportModel](/docs/documentation/Input/ExportModel.md)).
-
-### Task
-The task output file contains all metrics related to the tasks that are being executed.
-
-| Metric | Datatype | Unit | Summary |
-|--------------------|----------|-----------|-----------------------------------------------------------------------------|
-| timestamp | int64 | ms | Timestamp of the sample since the start of the workload. |
-| timestamp_absolute | int64 | ms | The absolute timestamp based on the given workload. |
-| task_id | binary | string | The id of the task determined during runtime. |
-| task_name | binary | string | The name of the task provided by the Trace. |
-| host_name          | binary   | string    | The name of the host on which the task runs, or `null` if it has no host.    |
-| mem_capacity       | int64    | MB        | The memory required by the task.                                              |
-| cpu_count | int32 | count | The number of CPUs required by the task. |
-| cpu_limit          | double   | MHz       | The capacity of the CPUs of the host on which the task is running.            |
-| cpu_usage          | double   | MHz       | The CPU capacity provided to the task.                                        |
-| cpu_demand         | double   | MHz       | The CPU capacity demanded by the task.                                        |
-| cpu_time_active | int64 | ms | The duration that a CPU was active in the task. |
-| cpu_time_idle | int64 | ms | The duration that a CPU was idle in the task. |
-| cpu_time_steal | int64 | ms | The duration that a vCPU wanted to run, but no capacity was available. |
-| cpu_time_lost | int64 | ms | The duration of CPU time that was lost due to interference. |
-| uptime | int64 | ms | The uptime of the host since last sample. |
-| downtime | int64 | ms | The downtime of the host since last sample. |
-| num_failures       | int64    | count     | The number of times the task was interrupted due to machine failure.         |
-| num_pauses         | int64    | count     | The number of times the task was paused by the TaskStopper.                  |
-| submission_time    | int64    | ms        | The time at which the task was submitted to the scheduler.                   |
-| schedule_time      | int64    | ms        | The time at which the task was booted.                                       |
-| finish_time | int64 | ms | The time at which the task was finished (either completed or terminated). |
-| task_state | String | TaskState | The current state of the Task. |
-
-### Host
-The host output file contains all metrics related to the hosts that are running.
-
-| Metric | DataType | Unit | Summary |
-|--------------------|----------|------------|-----------------------------------------------------------------------------------------------------|
-| timestamp | int64 | ms | Timestamp of the sample. |
-| timestamp_absolute | int64 | ms | The absolute timestamp based on the given workload. |
-| host_name | binary | string | The name of the host. |
-| cluster_name | binary | string | The name of the cluster that this host is part of. |
-| cpu_count | int32 | count | The number of cores in this host. |
-| mem_capacity       | int64    | MB         | The amount of available memory.                                                                       |
-| tasks_terminated | int32 | count | The number of tasks that are in a terminated state. |
-| tasks_running | int32 | count | The number of tasks that are in a running state. |
-| tasks_error | int32 | count | The number of tasks that are in an error state. |
-| tasks_invalid | int32 | count | The number of tasks that are in an unknown state. |
-| cpu_capacity | double | MHz | The total capacity of the CPUs in the host. |
-| cpu_usage | double | MHz | The total CPU capacity provided to all tasks on this host. |
-| cpu_demand | double | MHz | The total CPU capacity demanded by all tasks on this host. |
-| cpu_utilization    | double   | ratio      | The CPU utilization of the host, calculated by dividing cpu_usage by cpu_capacity.                   |
-| cpu_time_active | int64 | ms | The duration that a CPU was active in the host. |
-| cpu_time_idle | int64 | ms | The duration that a CPU was idle in the host. |
-| cpu_time_steal | int64 | ms | The duration that a vCPU wanted to run, but no capacity was available. |
-| cpu_time_lost | int64 | ms | The duration of CPU time that was lost due to interference. |
-| power_draw | double | Watt | The current power draw of the host. |
-| energy_usage | double | Joule (Ws) | The total energy consumption of the host since last sample. |
-| embodied_carbon | double | gram | The total embodied carbon emitted since the last sample. |
-| uptime | int64 | ms | The uptime of the host since last sample. |
-| downtime | int64 | ms | The downtime of the host since last sample. |
-| boot_time          | int64    | ms         | The time at which the host was booted.                                                               |
-| boot_time_absolute | int64    | ms         | The absolute time at which the host was booted.                                                      |
-
-### Power Source
-The power source output file contains all metrics related to the power sources.
-
-| Metric | DataType | Unit | Summary |
-|--------------------|----------|------------|-------------------------------------------------------------------|
-| timestamp | int64 | ms | Timestamp of the sample. |
-| timestamp_absolute | int64 | ms | The absolute timestamp based on the given workload. |
-| source_name | binary | string | The name of the power source. |
-| cluster_name | binary | string | The name of the cluster that this power source is part of. |
-| power_draw         | double   | Watt       | The current power draw of the power source.                       |
-| energy_usage       | double   | Joule (Ws) | The total energy drawn from the power source since last sample.   |
-| carbon_intensity   | double   | gCO2/kWh   | The amount of carbon that is emitted when using a unit of energy. |
-| carbon_emission | double | gram | The amount of carbon emitted since the previous sample. |
-
-### Battery
-The battery output file contains all metrics related to the batteries.
-
-| Metric | DataType | Unit | Summary |
-|--------------------|----------|--------------|-------------------------------------------------------------------|
-| timestamp | int64 | ms | Timestamp of the sample. |
-| timestamp_absolute | int64 | ms | The absolute timestamp based on the given workload. |
-| battery_name | binary | string | The name of the battery. |
-| cluster_name | binary | string | The name of the cluster that this battery is part of. |
-| power_draw         | double   | Watt         | The current power draw of the battery.                            |
-| energy_usage       | double   | Joule (Ws)   | The total energy drawn from the battery since last sample.        |
-| carbon_intensity   | double   | gCO2/kWh     | The amount of carbon that is emitted when using a unit of energy. |
-| embodied_carbon | double | gram | The total embodied carbon emitted since the last sample. |
-| charge | double | Joule | The current charge of the battery. |
-| capacity | double | Joule | The total capacity of the battery. |
-| battery_state | String | BatteryState | The current state of the battery. |
-
-### Service
-The service output file, contains metrics providing an overview of the performance.
-
-| Metric | DataType | Unit | Summary |
-|--------------------|----------|-------|-------------------------------------------------------|
-| timestamp          | int64    | ms    | Timestamp of the sample.                               |
-| timestamp_absolute | int64    | ms    | The absolute timestamp based on the given workload.    |
-| hosts_up | int32 | count | The number of hosts that are up at this instant. |
-| hosts_down | int32 | count | The number of hosts that are down at this instant. |
-| tasks_total | int32 | count | The number of tasks seen by the service. |
-| tasks_pending | int32 | count | The number of tasks that are pending to be scheduled. |
-| tasks_active | int32 | count | The number of tasks that are currently active. |
-| tasks_terminated | int32 | count | The number of tasks that were terminated. |
-| tasks_completed    | int32    | count | The number of tasks that finished successfully.        |
diff --git a/site/docs/documentation/_category_.json b/site/docs/documentation/_category_.json
deleted file mode 100644
index 0776466b..00000000
--- a/site/docs/documentation/_category_.json
+++ /dev/null
@@ -1,7 +0,0 @@
-{
- "label": "Documentation",
- "position": 5,
- "link": {
- "type": "generated-index"
- }
-}
diff --git a/site/docs/getting-started/0-installation.md b/site/docs/getting-started/0-installation.md
deleted file mode 100644
index 76ffd015..00000000
--- a/site/docs/getting-started/0-installation.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-description: How to install OpenDC locally, and start experimenting in no time.
----
-
-# Installation
-
-This page describes how to set up and configure a local single-user OpenDC installation so that you can quickly get your
-experiments running. You can also use the [hosted version of OpenDC](https://app.opendc.org) to get started even
-quicker (note, however, that the hosted version lacks some of the more complex features).
-
-
-## Prerequisites
-
-1. **Supported Platforms**
- OpenDC is actively tested on Windows, macOS and GNU/Linux.
-2. **Required Software**
- A Java installation of version 21 or higher is required for OpenDC. You may download the
- [Java distribution from Oracle](https://www.oracle.com/java/technologies/downloads/) or use the distribution provided
- by your package manager.
-
-## Download
-
-To get an OpenDC distribution, download a recent version from our [Releases](https://github.com/atlarge-research/opendc/releases) page on GitHub.
-For basic usage, the OpenDCExperimentRunner is all that is needed.
-
-## Setup
-
-Unpack the downloaded OpenDC distribution. The OpenDCExperimentRunner folder contains two subfolders, `bin` and `lib`.
-`lib` contains all `.jar` files needed to run OpenDC; `bin` contains two executable versions of the OpenDCExperimentRunner.
-In the following pages, we discuss how to run an experiment using these executables.
-
diff --git a/site/docs/getting-started/1-start-using-intellij.md b/site/docs/getting-started/1-start-using-intellij.md
deleted file mode 100644
index 6aec91f1..00000000
--- a/site/docs/getting-started/1-start-using-intellij.md
+++ /dev/null
@@ -1,172 +0,0 @@
-
-
-# In this How-To we explain how to set up IntelliJ IDEA
-
-First of all, you can download IntelliJ IDEA here: https://lp.jetbrains.com/intellij-idea-promo/
-
-# Basic steps
-
-```
-git clone git@github.com:atlarge-research/opendc
-```
-
-Check that you have a compatible Java version available. Make sure to have one of these versions available: [21]
-
-If not, install a supported version!
-
-On macOS
-
-```
-/usr/libexec/java_home -V
-```
-
-On Debian
-
-```
-update-alternatives --list java
-```
-
-On Red Hat/CentOS
-
-```
-yum list installed | grep java
-```
-
-
-Open the project in IntelliJ
-
-![Intellij Open Project](img/intellij_open_project.png)
-
-Now adjust the settings so that you use the correct Java version (in the example, the Java version is set to "21").
-Navigation path in the settings panel: "Build, Execution, Deployment" -> "Build Tools" -> "Gradle"
-
-![Intellij Settings](img/intellij_settings.png)
-
-Now navigate in the file menu and open the file: "gradle"/"libs.versions.toml"
-
-Make sure the Java version is set to the same version as previously configured in the settings.
-
-![Intellij Libs Versions Toml](img/intellij_libs_versions_toml.png)
-
-
-Now open the Gradle panel on the right-hand side of the editor (1) and hit the refresh button at the top of the panel (2).
-
-![Intellij Gradle Panel](img/intellij_gradle_panel.png)
-
-
-# Set up your first experiment and run it from source
-
-
-Create a directory where you are going to put the files for your first experiment.
-
-File structure:
-
-![Experiment File Structure](img/experiment_file_structure.png)
-
-You can download the example workload trace (bitbrains-small-9d2e576e6684ddc57c767a6161e66963.zip) [here](https://atlarge-research.github.io/opendc/assets/files/bitbrains-small-9d2e576e6684ddc57c767a6161e66963.zip)
-
-Now unzip the trace.
-
-The content of "topology.json"
-
-```
-{
- "clusters":
- [
- {
- "name": "C01",
- "hosts" :
- [
- {
- "name": "H01",
- "cpu":
- {
- "coreCount": 32,
- "coreSpeed": 3200
- },
- "memory": {
- "memorySize": 256000
- }
- }
- ]
- },
- {
- "name": "C02",
- "hosts" :
- [
- {
- "name": "H02",
- "count": 6,
- "cpu":
- {
- "coreCount": 8,
- "coreSpeed": 2930
- },
- "memory": {
- "memorySize": 64000
- }
- }
- ]
- },
- {
- "name": "C03",
- "hosts" :
- [
- {
- "name": "H03",
- "count": 2,
- "cpu":
- {
- "coreCount": 16,
- "coreSpeed": 3200
- },
- "memory": {
- "memorySize": 128000
- }
- }
- ]
- }
- ]
-}
-```
-
-The content of "experiment.json"
-
-The paths in the "experiment.json" file are relative to the "working directory" which is configured next.
-
-
-```
-{
- "name": "simple",
- "topologies": [{
- "pathToFile": "topology.json"
- }],
- "workloads": [{
- "pathToFile": "bitbrains-small",
- "type": "ComputeWorkload"
- }]
-}
-```
-
-In the project file structure on the left, open the following file:
-
-"opendc-experiments"/"opendc-experiments-base"/"src"/"main"/"kotlin"/"org.opendc.experiment.base"/"runner"/"ExperimentCLi.kt"
-
-![Intellij Experimentcli](img/Intellij_experimentcli.png)
-
-Now open the "Run/Debug" configuration (top right).
-
-![Intellij Open Run Config](img/intellij_open_run_config.png)
-
-We need to edit two settings:
-
-"Program arguments": --experiment-path experiment.json
-
-"Working Directory": a path where you have put the experiment files
-
-![Intellij Edit The Run Config](img/intellij_edit_the_run_config.png)
-
-Now you can click "Run" and start your first experiment.
-
-In the working directory, an "output" directory is created with the results of the experiment.
-
diff --git a/site/docs/getting-started/2-first-experiment.md b/site/docs/getting-started/2-first-experiment.md
deleted file mode 100644
index 79fd6424..00000000
--- a/site/docs/getting-started/2-first-experiment.md
+++ /dev/null
@@ -1,211 +0,0 @@
----
-description: Designing a simple experiment
----
-
-# First Experiment
-Now that you have downloaded OpenDC, we will start creating a simple experiment.
-In this experiment we will compare the performance of a small and a big data center running the same workload.
-
-
-:::info Learning goal
-During this tutorial, we will learn how to create and execute a simple experiment in OpenDC.
-:::
-
-## Designing a Data Center
-
-The first requirement to run an experiment in OpenDC is a **topology**.
-A **topology** defines the hardware on which a **workload** is executed.
-Larger topologies are capable of running more workloads, and will often complete them quicker.
-
-A **topology** is defined using a JSON file. A **topology** contains one or more _clusters_: groups of _hosts_ at
-a specific location. Each cluster consists of one or more _hosts_. A _host_ is a machine on which one or more tasks
-can be executed. _Hosts_ are composed of a _cpu_ and a _memory_ unit.
-
-### Simple Data Center
-In this experiment, we are comparing two data centers. Below is an example of the small **topology** file:
-
-```json
-{
- "clusters":
- [
- {
- "name": "C01",
- "hosts" :
- [
- {
- "name": "H01",
- "cpu":
- {
- "coreCount": 12,
- "coreSpeed": 3300
- },
- "memory": {
- "memorySize": 140457600000
- }
- }
- ]
- }
- ]
-}
-```
-
-This **topology** consists of a single _cluster_, with a single _host_.
-
-:::tip
-To use this **topology** in the experiment, copy the content to a new JSON file, or download it [here](documents/topologies/small.json "download")
-:::
-
-### Big Data Center
-In this experiment, we are comparing two data centers. Below is an example of the bigger **topology** file:
-
-```json
-{
- "clusters":
- [
- {
- "name": "C01",
- "hosts" :
- [
- {
- "name": "H01",
- "cpu":
- {
- "coreCount": 32,
- "coreSpeed": 3200
- },
- "memory": {
- "memorySize": 256000
- }
- }
- ]
- },
- {
- "name": "C02",
- "hosts" :
- [
- {
- "name": "H02",
- "count": 6,
- "cpu":
- {
- "coreCount": 8,
- "coreSpeed": 2930
- },
- "memory": {
- "memorySize": 64000
- }
- }
- ]
- },
- {
- "name": "C03",
- "hosts" :
- [
- {
- "name": "H03",
- "count": 2,
- "cpu":
- {
- "coreCount": 16,
- "coreSpeed": 3200
- },
- "memory": {
- "memorySize": 128000
- }
- }
- ]
- }
- ]
-}
-```
-
-Compared to the small topology, the big topology consists of three clusters, containing one, six, and two hosts, respectively.
-
-:::tip
-To use this **topology** in the experiment, copy the content to a new JSON file, or download it [here](documents/topologies/big.json "download")
-:::
-
-:::info
-For more in-depth information about Topologies, see [Topology](../documentation/Input/Topology)
-:::
-
-## Workloads
-
-Next to the topology, we need a workload to simulate on the data center.
-In OpenDC, workloads are defined as a bag of tasks. Each task is accompanied by one or more fragments.
-These fragments define the computational requirements of the task over time.
-For this experiment, we will use the bitbrains-small workload. This is a small workload of 50 tasks,
-spanning a bit more than a month. You can download the workload [here](documents/workloads/bitbrains-small.zip "download")
-
-:::info
-For more in-depth information about Workloads, see [Workload](../documentation/Input/Workload.md)
-:::
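-
-If you want to peek inside the trace before running it, the two Parquet files can be read directly. A minimal
-sketch, assuming the archive was unpacked into `workloads/bitbrains-small`:
-
-```python
-import pandas as pd
-
-# The unpacked workload consists of a tasks and a fragments Parquet file.
-tasks = pd.read_parquet("workloads/bitbrains-small/tasks.parquet")
-fragments = pd.read_parquet("workloads/bitbrains-small/fragments.parquet")
-
-print(f"{len(tasks)} tasks, {len(fragments)} fragments")
-print(tasks.head())
-```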
-
-## Executing an experiment
-
-To run an experiment, we need to create an **experiment** file. This is a JSON file that defines what should be executed
-by OpenDC, and how. Below is an example of a simple **experiment** file:
-
-```json
-{
- "name": "simple",
- "topologies": [{
- "pathToFile": "topologies/small.json"
- },
- {
- "pathToFile": "topologies/big.json"
- }],
- "workloads": [{
- "pathToFile": "traces/bitbrains-small",
- "type": "ComputeWorkload"
- }]
-}
-```
-
-In this **experiment**, three things are defined. First is the `name`, which defines the name under which the experiment
-appears in the output folder. Second is `topologies`, which defines where OpenDC can find the topology files.
-Finally, `workloads` defines which workloads OpenDC should run. You can download the experiment file [here](documents/experiments/simple_experiment.json "download")
-
-As you can see, `topologies` defines two topologies. In this case, OpenDC will run two simulations: one with the small
-topology and one with the big topology.
-
-:::info
-For more in-depth information about Experiments, see [Experiment](../documentation/Input/Experiment)
-:::
-
-## Running OpenDC
-At this point, we should have all components to run an experiment. To make sure every file can be used by OpenDC,
-please create an experiment folder such as the one shown below:
-```
-── {simulation-folder-name} 📁 🔧
- ├── topologies 📁 🔒
- │ └── small.json 📄 🔧
- │ └── big.json 📄 🔧
- ├── experiments 📁 🔒
- │ └── simple_experiment.json 📄 🔧
- ├── workloads 📁 🔒
- │ └── bitbrains-small 📁 🔒
- │ └── fragments.parquet 📄 🔧
- │ └── tasks.parquet 📄 🔧
- ├── OpenDCExperimentRunner 📁 🔒
- │ └── lib 📁 🔒
- │ └── bin 📁 🔒
- ├── output 📁 🔒
-```
-
-The experiment can be executed directly from the terminal.
-Run the following command from the terminal in {simulation-folder-name}:
-
-```
-$ ./OpenDCExperimentRunner/bin/OpenDCExperimentRunner.sh --experiment-path "experiments/simple_experiment.json"
-```
diff --git a/site/docs/getting-started/3-whats-next.md b/site/docs/getting-started/3-whats-next.md
deleted file mode 100644
index b7598022..00000000
--- a/site/docs/getting-started/3-whats-next.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-description: How to supercharge your designs and experiments with OpenDC.
----
-
-# What's next?
-
-Congratulations! You have just learned how to design and experiment with a (virtual) datacenter in OpenDC. What's next?
-
-- Follow one of the [tutorials](/docs/category/tutorials) using OpenDC.
-- Read about [existing work using OpenDC](/community/research).
-- Get involved in the [OpenDC Community](/community/support).
-- If you are interested in contributing to OpenDC, you can find a How-To in [1-start-using-intellij](1-start-using-intellij.md); please also read https://github.com/atlarge-research/opendc/blob/master/CONTRIBUTING.md.
diff --git a/site/docs/getting-started/_category_.json b/site/docs/getting-started/_category_.json
deleted file mode 100644
index 169f7a27..00000000
--- a/site/docs/getting-started/_category_.json
+++ /dev/null
@@ -1,8 +0,0 @@
-{
- "label": "Getting Started",
- "position": 2,
- "link": {
- "type": "generated-index",
- "description": "10 minutes to learn the most important concepts of OpenDC."
- }
-}
diff --git a/site/docs/getting-started/documents/experiments/simple_experiment.json b/site/docs/getting-started/documents/experiments/simple_experiment.json
deleted file mode 100644
index 74429fdb..00000000
--- a/site/docs/getting-started/documents/experiments/simple_experiment.json
+++ /dev/null
@@ -1,13 +0,0 @@
-{
- "name": "simple",
- "topologies": [{
- "pathToFile": "topologies/small.json"
- },
- {
- "pathToFile": "topologies/big.json"
- }],
- "workloads": [{
- "pathToFile": "traces/bitbrains-small",
- "type": "ComputeWorkload"
- }]
-}
diff --git a/site/docs/getting-started/documents/topologies/big.json b/site/docs/getting-started/documents/topologies/big.json
deleted file mode 100644
index c3a060cc..00000000
--- a/site/docs/getting-started/documents/topologies/big.json
+++ /dev/null
@@ -1,59 +0,0 @@
-{
- "clusters":
- [
- {
- "name": "C01",
- "hosts" :
- [
- {
- "name": "H01",
- "cpu":
- {
- "coreCount": 32,
- "coreSpeed": 3200
- },
- "memory": {
- "memorySize": 256000
- }
- }
- ]
- },
- {
- "name": "C02",
- "hosts" :
- [
- {
- "name": "H02",
- "count": 6,
- "cpu":
- {
- "coreCount": 8,
- "coreSpeed": 2930
- },
- "memory": {
- "memorySize": 64000
- }
- }
- ]
- },
- {
- "name": "C03",
- "hosts" :
- [
- {
- "name": "H03",
- "count": 2,
- "cpu":
- {
- "coreCount": 16,
- "coreSpeed": 3200
- },
- "memory": {
- "memorySize": 128000
- }
- }
- ]
- }
- ]
-}
-
diff --git a/site/docs/getting-started/documents/topologies/small.json b/site/docs/getting-started/documents/topologies/small.json
deleted file mode 100644
index 54e3c6fc..00000000
--- a/site/docs/getting-started/documents/topologies/small.json
+++ /dev/null
@@ -1,22 +0,0 @@
-{
- "clusters":
- [
- {
- "name": "C01",
- "hosts" :
- [
- {
- "name": "H01",
- "cpu":
- {
- "coreCount": 12,
- "coreSpeed": 3300
- },
- "memory": {
- "memorySize": 140457600000
- }
- }
- ]
- }
- ]
-}
diff --git a/site/docs/getting-started/documents/workloads/bitbrains-small.zip b/site/docs/getting-started/documents/workloads/bitbrains-small.zip
deleted file mode 100644
index f128e636..00000000
--- a/site/docs/getting-started/documents/workloads/bitbrains-small.zip
+++ /dev/null
Binary files differ
diff --git a/site/docs/getting-started/img/Intellij_experimentcli.png b/site/docs/getting-started/img/Intellij_experimentcli.png
deleted file mode 100644
index fceed499..00000000
--- a/site/docs/getting-started/img/Intellij_experimentcli.png
+++ /dev/null
Binary files differ
diff --git a/site/docs/getting-started/img/experiment_file_structure.png b/site/docs/getting-started/img/experiment_file_structure.png
deleted file mode 100644
index 8b0b8f3a..00000000
--- a/site/docs/getting-started/img/experiment_file_structure.png
+++ /dev/null
Binary files differ
diff --git a/site/docs/getting-started/img/intellij_edit_the_run_config.png b/site/docs/getting-started/img/intellij_edit_the_run_config.png
deleted file mode 100644
index fae35b5c..00000000
--- a/site/docs/getting-started/img/intellij_edit_the_run_config.png
+++ /dev/null
Binary files differ
diff --git a/site/docs/getting-started/img/intellij_edit_the_run_config.psd b/site/docs/getting-started/img/intellij_edit_the_run_config.psd
deleted file mode 100644
index b178fdb2..00000000
--- a/site/docs/getting-started/img/intellij_edit_the_run_config.psd
+++ /dev/null
Binary files differ
diff --git a/site/docs/getting-started/img/intellij_gradle_panel.png b/site/docs/getting-started/img/intellij_gradle_panel.png
deleted file mode 100644
index c3c98e10..00000000
--- a/site/docs/getting-started/img/intellij_gradle_panel.png
+++ /dev/null
Binary files differ
diff --git a/site/docs/getting-started/img/intellij_gradle_panel.psd b/site/docs/getting-started/img/intellij_gradle_panel.psd
deleted file mode 100644
index a52f0c9d..00000000
--- a/site/docs/getting-started/img/intellij_gradle_panel.psd
+++ /dev/null
Binary files differ
diff --git a/site/docs/getting-started/img/intellij_libs_versions_toml.png b/site/docs/getting-started/img/intellij_libs_versions_toml.png
deleted file mode 100644
index a27f7cc0..00000000
--- a/site/docs/getting-started/img/intellij_libs_versions_toml.png
+++ /dev/null
Binary files differ
diff --git a/site/docs/getting-started/img/intellij_libs_versions_toml.psd b/site/docs/getting-started/img/intellij_libs_versions_toml.psd
deleted file mode 100644
index ae27af25..00000000
--- a/site/docs/getting-started/img/intellij_libs_versions_toml.psd
+++ /dev/null
Binary files differ
diff --git a/site/docs/getting-started/img/intellij_open_project.png b/site/docs/getting-started/img/intellij_open_project.png
deleted file mode 100644
index c04f5368..00000000
--- a/site/docs/getting-started/img/intellij_open_project.png
+++ /dev/null
Binary files differ
diff --git a/site/docs/getting-started/img/intellij_open_run_config.png b/site/docs/getting-started/img/intellij_open_run_config.png
deleted file mode 100644
index a9c4436f..00000000
--- a/site/docs/getting-started/img/intellij_open_run_config.png
+++ /dev/null
Binary files differ
diff --git a/site/docs/getting-started/img/intellij_settings.png b/site/docs/getting-started/img/intellij_settings.png
deleted file mode 100644
index 6bbda7e7..00000000
--- a/site/docs/getting-started/img/intellij_settings.png
+++ /dev/null
Binary files differ
diff --git a/site/docs/getting-started/img/intellij_settings.psd b/site/docs/getting-started/img/intellij_settings.psd
deleted file mode 100644
index f9affd86..00000000
--- a/site/docs/getting-started/img/intellij_settings.psd
+++ /dev/null
Binary files differ
diff --git a/site/docs/intro.mdx b/site/docs/intro.mdx
deleted file mode 100644
index 840ae343..00000000
--- a/site/docs/intro.mdx
+++ /dev/null
@@ -1,27 +0,0 @@
----
-sidebar_position: 1
----
-
-# Introduction
-
-OpenDC is a free and open-source platform for cloud datacenter simulation aimed at both research and education.
-
-<div className="container">
- <div className="row">
- <div className="col col-3 text--center">
- <img src={require("@site/src/components/HomepageFeatures/screenshot-construction.png").default} alt="Constructing a cloud datacenter with OpenDC" />
- </div>
- <div className="col col-3 text--center">
- <img src={require("@site/src/components/HomepageFeatures/screenshot-results.png").default} alt="Analysis of results reported by OpenDC" />
- </div>
- </div>
-</div>
-
-Users can construct new datacenter designs and define portfolios of scenarios (experiments) to explore how their designs
-perform under different workloads, schedulers, and phenomena (e.g., failures or performance interference).
-
-OpenDC is accessible both as a ready-to-use platform hosted by us online at [app.opendc.org](https://app.opendc.org), and as
-source code that users can run locally on their own machine or via Docker.
-
-To learn more about OpenDC, have a look through our paper on [OpenDC 2.0](https://atlarge-research.com/pdfs/ccgrid21-opendc-paper.pdf)
-or on our [vision](https://atlarge-research.com/pdfs/opendc-vision17ispdc_cr.pdf).
diff --git a/site/docs/tutorials/M3SA-integration-tutorial.mdx b/site/docs/tutorials/M3SA-integration-tutorial.mdx
deleted file mode 100644
index c09011c7..00000000
--- a/site/docs/tutorials/M3SA-integration-tutorial.mdx
+++ /dev/null
@@ -1,188 +0,0 @@
----
-sidebar_position: 2
-title: M3SA Integration
-hide_title: true
-sidebar_label: M3SA Integration
-description: M3SA Integration
----
-
-# M3SA integration tutorial
-
-M3SA is a tool that performs "Multi-Meta-Model Simulation Analysis". The tool is designed to analyze the output of
-simulations by leveraging predictions, to generate Multi-Model graphs and novel models, and more. M3SA can integrate
-with any simulation infrastructure, as long as the integration steps are followed.
-
-We build our tool towards performance, scalability, and **universality**. In this document, we present the steps to
-integrate our tool into your simulation infrastructure.
-
-If you are using OpenDC, none of the adaptation steps are necessary, yet they can be useful for understanding the
-structure of the tool. Step 3 is still necessary.
-
-## Step 1: Adapt the simulator output folder structure
-
-The first step is to adapt the I/O of your simulation to the format of our tool. The output folder structure should
-have the following format:
-
-```
-[1] ── {simulation-folder-name} 📁 🔧
-[2] ├── inputs 📁 🔒
-[3] │ └── {m3sa-config-file}.json 📄 🔧
-[4] │ └── {other input files / folders} 🔧
-[5] ├── outputs 📁 🔒
-[6] │ ├── raw-output 📁 🔒
-[7] │ │ ├── 0 📁 🔒
-[8] │ │ │ └── seed={your_seed}🔒
-[9] │ │ │ └── {simulation_data_file}.parquet 📄 🔧
-[10] │ │ │ └── {any other files / folders} ⚪
-[11] │ │ ├── 1 📁 ⚪ 🔒
-[12] │ │ │ └── seed={your_seed} 📁 ⚪ 🔒
-[13] │ │ │ └── {simulation_data_file}.parquet 📄 ⚪ 🔧
-[14] │  │    │    └── {any other files / folders} ⚪
-[15] │ │ ├── metamodel 📁 ⚪
-[16] │ │ └── seed={your_seed} 📁 ⚪
-[17] │ │ └── {your_metric_name}.parquet 📄 ⚪
-[18] │ │ └── {any other files / folders} ⚪
-[19] │ └── {any other files / folders} 📁 ⚪
-[20]  └── {any other files / folders} 📁 ⚪
-```
-
-📄 = file <br />
-📁 = folder <br />
-🔒 = fixed, the name of the folder/file must be the same.<br />
-🔧 = flexible, the name of the folder/file can differ. However, the item must be present.<br />
-⚪ = optional and flexible. The item can be absent. <br />
-
-- [1] = the name of the analyzed folder.
-- [2] = the _inputs_ folder, containing various input / configuration files.
-- [3] = the configuration file for M3SA; flexible naming, but it needs to be a JSON file.
-- [4],[10],[14],[18],[19],[20] = any other files or folders.
-- [5] = the _outputs_ folder, containing the raw-output folder. It can contain any other files or folders besides
-the raw-output folder. After running a simulation, a "simulation-analysis" folder will also be generated here.
-- [6] = the raw-output folder, containing the raw output of the simulation.
-- [7],[11] = the IDs of the models. These must always start from zero. Possible values are 0, 1, 2 ... n, and
-"metamodel". The ID "metamodel" is reserved for the Meta-Model; any simulation data in that folder will be treated
-as Meta-Model data.
-- [8],[12] = the seed of the simulation. The seed must be the same for [8], [12], and other equivalent folders.
-- [9],[13] = the file in which the simulation data is stored. The name of the file can differ, but it must be a
-parquet file.
-- [15] = the Meta-Model folder, optional. If the folder is present, its data will be treated as Meta-Model data.
-- [16] = the Meta-Model seed folder. The seed must be the same as the seed of the simulation.
-- [17] = the Meta-Model output. The name of the file is of the type ```{your_metric_name}.parquet```. For example, if
-you analyze CO2 emissions, the file will be named ```co2_emissions.parquet```.
-
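-To avoid chasing path errors later, it can help to verify the layout programmatically before running M3SA. Below is
-a minimal sketch in Python, following the conventions above; the folder name is illustrative:
-
-```python
-from pathlib import Path
-
-def check_m3sa_layout(simulation_folder: str) -> list[str]:
-    """Report items missing from the folder convention described above."""
-    root = Path(simulation_folder)
-    problems = []
-    if not (root / "inputs").is_dir():
-        problems.append("missing inputs/ folder")
-    elif not list((root / "inputs").glob("*.json")):
-        problems.append("missing M3SA config JSON in inputs/")
-    raw = root / "outputs" / "raw-output"
-    if not raw.is_dir():
-        problems.append("missing outputs/raw-output/ folder")
-    elif not (raw / "0").is_dir():
-        problems.append("model IDs must start at 0 (outputs/raw-output/0/)")
-    return problems
-
-print(check_m3sa_layout("simulation-123") or "layout looks OK")
-```
-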
----
-
-## Step 2: Adapt the simulation file format
-
-The simulator data file must be a 🪵 _parquet_ 🪵 file.
-
-The file must contain (at least) the columns:
-
-- timestamp: the timestamp, in milliseconds, of the data point (e.g., 30000, 60000, 90000) - the time unit is flexible.
-- {metric_name}: the value of the metric at the given timestamp. This is the metric analyzed (e.g., CO2_emissions,
-energy_usage).
-
-E.g., if you are analyzing the CO2 emissions of a datacenter over a time period of 5 minutes, with the data sampled
-every 30 seconds, the file will look like this:
-
-| timestamp | co2_emissions |
-|-----------|---------------|
-| 30000 | 31.2 |
-| 60000 | 31.4 |
-| 90000 | 28.5 |
-| 120000 | 31.8 |
-| 150000 | 51.5 |
-| 180000 | 51.2 |
-| 210000 | 51.4 |
-| 240000 | 21.5 |
-| 270000 | 21.8 |
-| 300000 | 21.2 |
-
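-A file in this format can be produced with pandas; a minimal sketch reproducing the first rows of the example above
-(writing Parquet requires pyarrow or fastparquet to be installed):
-
-```python
-import pandas as pd
-
-# CO2 samples taken every 30 seconds; timestamps are in milliseconds.
-df = pd.DataFrame({
-    "timestamp": [30000, 60000, 90000, 120000],
-    "co2_emissions": [31.2, 31.4, 28.5, 31.8],
-})
-df.to_parquet("co2_emissions.parquet", index=False)
-```
-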
----
-
-## Step 3: Running M3SA
-
-### 3.1 Set up the Simulator Specifics
-
-Update the simulation data file name ([9], [13], [17] from Step 1) in the
-file ```simulator_specifics.py```, located at ```opendc/src/python/simulator_specifics.py```.
-
-### 3.2 Set up the Python program arguments
-
-Main.py takes two arguments:
-
-1. Argument 1 is the path to the output directory where M3SA output files will be stored.
-2. Argument 2 is the path to the input file that contains the configuration of M3SA.
-
-e.g.,
-
-```
-"simulation-123/outputs/" "simulation-123/inputs/m3sa-configurator.json"
-```
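-
-For reference, a minimal sketch of how a main.py could pick up these two arguments; this is illustrative, not the
-actual M3SA source:
-
-```python
-import sys
-
-def main() -> None:
-    # Argument 1: output directory for M3SA results; argument 2: M3SA config file.
-    if len(sys.argv) != 3:
-        sys.exit("usage: main.py <output-dir> <m3sa-config.json>")
-    output_dir, config_path = sys.argv[1], sys.argv[2]
-    print(f"M3SA output -> {output_dir}, config <- {config_path}")
-
-if __name__ == "__main__":
-    main()
-```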
-
-### 3.3 Set up the Main.py working directory
-
-Make sure to set the working directory to the directory where the main.py file is located.
-
-e.g.,
-
-```
-/your/path/to-analyzer/src/main/python
-```
-
-If you are using OpenDC, you can set the working directory to the following path:
-
-```
-/your/path/opendc/opendc-analyze/src/main/python
-```
-
----
-
-## Optional: Step 4: Simulate and analyze, with one click
-
-The simulation and analysis can be executed as a single command; if no errors are encountered, this operation is,
-from the user's perspective, atomic. We integrated M3SA into OpenDC to facilitate this process.
-
-To further integrate M3SA into any simulation infrastructure, M3SA needs to be called from
-the simulation infrastructure, and provided with the following running setup:
-
-1. script language: Python
-2. argument 1: the path of the output directory, in which M3SA output files will be stored
-3. argument 2: the path of the input file, containing the configuration of M3SA
-4. other language-specific setup
-
-For example, the integration of M3SA into OpenDC can be found
-in ```Analyzr.kt``` at ```opendc-analyze/src/main/kotlin/Analyzr.kt```.
-Below, we provide a snippet of the code:
-
-```kotlin
-import kotlin.io.path.Path
-
-val ANALYSIS_SCRIPTS_DIRECTORY: String = "./opendc-analyze/src/main/python"
-val ABSOLUTE_SCRIPT_PATH: String =
- Path("$ANALYSIS_SCRIPTS_DIRECTORY/main.py").toAbsolutePath().normalize().toString()
-val SCRIPT_LANGUAGE: String = "python3"
-
-fun analyzeResults(outputFolderPath: String, analyzerSetupPath: String) {
- val process = ProcessBuilder(
- SCRIPT_LANGUAGE,
- ABSOLUTE_SCRIPT_PATH,
- outputFolderPath, // argument 1
- analyzerSetupPath // argument 2
- )
- .directory(Path(ANALYSIS_SCRIPTS_DIRECTORY).toFile())
- .start()
-
- val exitCode = process.waitFor()
- if (exitCode == 0) {
- println("[Analyzr.kt says] Analysis completed successfully.")
- } else {
- val errors = process.errorStream.bufferedReader().readText()
- println("[Analyzr.kt says] Exit code ${exitCode}; Error(s): $errors")
- }
-}
-```
diff --git a/site/docs/tutorials/_category_.json b/site/docs/tutorials/_category_.json
deleted file mode 100644
index 5d3c1ca0..00000000
--- a/site/docs/tutorials/_category_.json
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "label": "Tutorials",
- "position": 3,
- "link": {
- "type": "generated-index",
- "description": "Tutorials demonstrating how to conduct experiments with OpenDC."
-
- }
-}
diff --git a/site/docs/tutorials/img/cpu-usage.png b/site/docs/tutorials/img/cpu-usage.png
deleted file mode 100644
index 86955b6a..00000000
--- a/site/docs/tutorials/img/cpu-usage.png
+++ /dev/null
Binary files differ
diff --git a/site/docs/tutorials/img/resource-distribution.png b/site/docs/tutorials/img/resource-distribution.png
deleted file mode 100644
index b371a07a..00000000
--- a/site/docs/tutorials/img/resource-distribution.png
+++ /dev/null
Binary files differ