| Age | Commit message | Author |
|
(#342)
* renamed performance counter to distinguish different resource types
* added GPU, modelled similar to CPU
* added GPUs to machine model
* list of GPUs instead of single instance
* renamed memory speed to bandwidth
* enabled parsing of GPU resources
* split powermodel into cpu and GPU powermodel
* added gpu parsing tests
* added idea of host level scheduling
* added tests for multi gpu parsing
* renamed powermodel to cpupowermodel
* clarified naming of cpu and gpu components
* added resource type to flow supplier and edge
* added resourcetype
* added GPU components and resource type to fragments
* added GPU to workload and updated resource usage retrieval
* implemented first version of multi resource
* added name to workload
* renamed performance counters
* removed commented out code
* removed deprecated comments
* included demand and supply into calculations
* resolving rebase mismatches
* moved resource type from flowedge class to common package
* added available resources to machines
* cleaner separation of whether a workload is started on simmachine or vm
* Replaced exception with dedicated enum
* Only looping over resources that are actually used
* using hashmaps to handle resourcetype instead of arrays for readability (see the sketch after this list)
* fixed condition
* tracking finished workloads per resource type
* removed resource type from flowedge
* made supply and demand distribution resource specific
* added power model for GPU
* removed unused test setup
* removed deprecated comments
* removed unused parameter
* added ID for GPU
* added GPUs and GPU performance counters (naively)
* implemented capturing of GPU statistics
* added reminders for future implementations
* renamed properties for better identification
* added capturing GPU statistics
* implemented first tests for GPUs
* unified access to performance counters
* added interface for general compute resource handling
* implemented multi resource support in simmachine
* added individual edge to VM per resource
* extended compute resource interface
* implemented multi-resource support in PSU
* implemented generic retrieval of computeresources
* implemented multi-resource support in vm
* made method use more resource specific
* implemented simple GPU tests
* rolled back frequency and demand use
* made naming independent of used resource
* using the workload's resources instead of the VM's to determine available resources
* implemented determination of used resources in workload
* removed logging statements
* implemented reading from workload
* fixed naming for host-level allocation
* fixed next deadline calculation
* fixed forwarding supply
* reduced memory footprint
* made GPU powermodel nullable
* made GPU powermodel configurable in topology
* implemented tests for basic gpu scheduler
* added gpu properties
* implemented weights, filter and simple cpu-gpu scheduler
* spotless apply
* spotless apply pt. 2
* fixed capitalization
* spotless kotlin run
* implemented column export
* todo update
* removed code comments
* Merged PerformanceCounter classes into one & removed interface
* removed GPU specific powermodel
* Rebase master: kept both versions of TopologyFactories
* renamed CpuPowermodel to resource independent Powermodel
Moved it from Cpu package to power package
* implemented default of getResourceType & removed overrides where possible
* split getResourceType into Consumer and Supplier
* added power as resource type
* reduced supply demand from arrayList to single value
* combining GPUs into one large GPU, until full multi-gpu support
* merged distribution policy enum with corresponding factory
* added comment
* post-rebase fixes
* aligned naming
* Added GPU metrics to task output
* Updates power resource type to uppercase.
Standardizes the `ResourceType.Power` enum to `ResourceType.POWER`
for consistency with other resource types and improved readability.
* Removes deprecated test assertions
Removes commented-out assertions in GPU tests.
These assertions are no longer needed and clutter the test code.
* Renames MaxMinFairnessStrategy to Policy
Renames MaxMinFairnessStrategy to MaxMinFairnessPolicy for
clarity and consistency with naming conventions. This change
affects the factory and distributor to use the updated name.
* applies spotless
* nulls GPUs as they are not used
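The per-resource bookkeeping mentioned in the bullets above (hashmaps keyed by resource type, looping only over the resource types that are actually used) could look roughly like this minimal Kotlin sketch; all names here are illustrative, not the actual OpenDC API:

```kotlin
// Hypothetical sketch of per-resource supply/demand bookkeeping keyed by
// an enum, replacing parallel arrays with hash maps for readability.
enum class ResourceType { CPU, GPU, MEMORY, POWER }

class ResourceBook {
    private val demand = HashMap<ResourceType, Double>()
    private val supply = HashMap<ResourceType, Double>()

    fun setDemand(type: ResourceType, value: Double) { demand[type] = value }

    fun setSupply(type: ResourceType, value: Double) { supply[type] = value }

    /** Only the resource types that are actually used need to be visited. */
    fun usedResources(): Set<ResourceType> = demand.keys

    fun utilization(type: ResourceType): Double {
        val s = supply[type] ?: return 0.0
        val d = demand[type] ?: 0.0
        return if (s == 0.0) 0.0 else d / s
    }
}
```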
|
|
* Updated tests
Changed all floats into doubles for consistency across the whole framework
Made a small update to the multiplexer to better push through supply and demand
Fixed small typo
Updated M3SA paths.
fixed merge conflicts
Removed unused components. Updated tests.
Improved checkpointing model
Improved model, started with SimPowerSource
implemented FailureModels and Checkpointing
First working version
midway commit
first update
All simulations are now run with a single CPU and a single MemoryUnit. Multiple CPUs are combined into one. This is for performance and explainability.
* Updated test memory
|
|
* Removed unused components. Updated tests.
Improved checkpointing model
Improved model, started with SimPowerSource
implemented FailureModels and Checkpointing
First working version
midway commit
first update
All simulations are now run with a single CPU and a single MemoryUnit. Multiple CPUs are combined into one. This is for performance and explainability.
* fixed merge conflicts
* Updated M3SA paths.
* Fixed small typo
|
|
All simulations are now run with a single CPU and a single MemoryUnit. Multiple CPUs are combined into one. This is for performance and explainability. (#255)
|
|
* Updated the checkpointing system to use SimTrace. The checkpoint model can now also scale, which means the interval between checkpoints can increase or decrease over time (see the sketch after this list).
* spotless kotlin
* Fixed tests
* spotless apply
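A minimal sketch of what a scaling checkpoint interval could look like; the geometric scaling and all names here are assumptions, not the actual model:

```kotlin
// Hypothetical checkpoint scheduler whose interval grows (or shrinks)
// geometrically over time; `scaling` and the field names are illustrative.
class CheckpointModel(
    initialInterval: Long,       // ms between the first two checkpoints
    private val scaling: Double, // >1 grows the interval, <1 shrinks it
) {
    private var interval = initialInterval.toDouble()

    /** Returns the delay until the next checkpoint and scales the interval. */
    fun nextDelay(): Long {
        val delay = interval.toLong()
        interval *= scaling
        return delay
    }
}
```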
|
|
* Started on reimplementing SimTrace
* updated trace format. Fragments no longer have a deadline, but a duration; fragments are executed in order.
|
|
* Fixed a problem which caused the CPU limit to be much lower than it should be.
AllocationPolicy is now properly exposed to the user
* Fixed tests
* spotless kotlin
|
|
|
|
* Updated the power models and added tests
* Updated test topologies
|
|
* Updated all package versions including Kotlin. Updated all web-server tests to run.
* Changed the Java version of the tests. OpenDC now only supports Java 19.
* small update
* test update
* new update
* updated Docker version to 19
* updated Docker version to 19
|
|
* Updated metrics and parquet output
* fixed typos
|
|
* made sure all tests run
* fixed typo
* executed spotlessApply
* added back web-server tests
* updated SimTraceWorkloadTest
* commented out CapelinRunner and GreenifierRunner tests
* commented out one SimTraceWorkloadTest
* altered codecov execution
* changed codecov
|
|
This change replaces the use of `CoroutineContext` for passing the
`SimulationDispatcher` across the different modules of OpenDC with the
lightweight `Dispatcher` interface of the OpenDC common module.
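A rough sketch of the shape such a lightweight dispatcher interface might have; the exact members and signatures are assumptions, not the real `Dispatcher` API:

```kotlin
// Rough sketch of a lightweight dispatcher abstraction; the real OpenDC
// `Dispatcher` interface may differ in member names and signatures.
interface Dispatcher {
    /** Current (possibly simulated) time in milliseconds. */
    fun currentTime(): Long

    /** Schedule [command] to run after [delayMs] milliseconds. */
    fun schedule(delayMs: Long, command: Runnable)
}
```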
|
|
This change updates the `SimulationScheduler` class to implement the
`Dispatcher` interface from the OpenDC Common module, so that OpenDC
modules only need to depend on the common module for dispatching future
tasks (possibly in simulation).
|
|
This change updates the interface of `SimWorkload` to support
snapshotting workloads. We introduce a new method `snapshot()` to this
interface which returns a new `SimWorkload` that can be started at a
later point in time and on another `SimMachine`, continuing progress
from the moment the workload was snapshotted.
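A sketch of the described contract; only `snapshot()` is taken from this change, while the lifecycle methods and the stub context are illustrative:

```kotlin
// Sketch of the snapshotting contract; only `snapshot()` comes from this
// change, the lifecycle methods and the stub context are illustrative.
interface SimMachineContext // stub standing in for the real machine context

interface SimWorkload {
    fun onStart(ctx: SimMachineContext)

    fun onStop(ctx: SimMachineContext)

    /**
     * Capture the current progress as a new [SimWorkload] that can be
     * started later, possibly on another machine, and continues from the
     * moment this snapshot was taken.
     */
    fun snapshot(): SimWorkload
}
```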
|
|
This change updates the implementation of `SimMachineContext` to report
exceptions thrown in `onStop` as suppressed exceptions if an exception
caused the workload to stop.
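A minimal sketch of the suppressed-exception pattern this describes, with the surrounding stop logic simplified:

```kotlin
// Simplified stop path: if the workload already failed with `cause`, an
// exception thrown by the onStop handler is attached as suppressed
// instead of replacing the original failure.
fun stopWorkload(cause: Exception?, onStop: () -> Unit) {
    try {
        onStop()
    } catch (e: Exception) {
        if (cause != null) cause.addSuppressed(e) else throw e
    }
    if (cause != null) throw cause
}
```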
|
|
This change adds a new static method `chain` to `SimWorkloads` to chain
multiple workloads sequentially.
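Hypothetical usage of such a chaining helper; `runtime` is an assumed factory and the stubs only illustrate the API shape, with only `chain` named in this change:

```kotlin
interface SimWorkload

// Stub factory showing the shape of the described API; `runtime` is an
// assumed helper, only `chain` appears in the commit message.
object SimWorkloads {
    fun runtime(durationMs: Long, utilization: Double): SimWorkload =
        object : SimWorkload {} // placeholder for a fixed-duration workload

    fun chain(vararg workloads: SimWorkload): SimWorkload =
        object : SimWorkload {} // placeholder: runs the parts sequentially
}

fun main() {
    val boot = SimWorkloads.runtime(10_000, 0.8) // 10 s at 80% utilization
    val job = SimWorkloads.runtime(60_000, 1.0)
    val combined = SimWorkloads.chain(boot, job) // boot first, then job
    println(combined)
}
```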
|
|
This change introduces a new class SimWorkloads which provides
construction methods for the standard workloads available in OpenDC.
|
|
This change re-implements the OpenDC compute simulator framework using
the new flow2 framework for modelling multi-edge flow networks. The
re-implementation is written in Java and focuses on performance and a
clean API surface.
|
|
This change updates the build configuration to use Spotless for code
formatting of both Kotlin and Java.
|
|
This change updates the repository to remove the use of wildcard imports
everywhere. Wildcard imports are not allowed by default by Ktlint as
well as Google's Java style guide.
|
|
This change renames the method `runBlockingSimulation` to
`runSimulation` to put more emphasis on the simulation part of the
method. The blocking part is not that important, but this behavior is
still described in the method documentation.
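A usage sketch of the renamed entry point, assuming it provides a coroutine scope driven by virtual time:

```kotlin
import kotlinx.coroutines.delay

// Usage sketch, assuming the `runSimulation` entry point from the OpenDC
// simulator core; the body runs under virtual time, so `delay` advances
// simulated time instead of blocking in real time.
fun main() = runSimulation {
    delay(60_000) // one simulated minute passes instantly in real time
    println("simulation done")
}
```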
|
|
This change updates the implementation of `SimulationDispatcher` to use
a (possibly user-provided) `SimulationScheduler` for managing the
execution of the simulation and future tasks.
|
|
This change simplifies the SimHypervisor class into a single
implementation. Previously, it was implemented as an abstract class with
multiple implementations for each multiplexer type. We now pass the
multiplexer type as parameter to the SimHypervisor constructor.
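An illustrative sketch of the constructor-parameter approach; the enum values and class shape are assumptions:

```kotlin
// Illustrative only: one hypervisor class parameterized by multiplexer
// type, instead of an abstract class with one subclass per type.
enum class MultiplexerType { MAX_MIN_FAIRNESS, FORWARDING }

class SimHypervisor(private val multiplexerType: MultiplexerType) {
    fun describe(): String = "SimHypervisor($multiplexerType)"
}

fun main() {
    println(SimHypervisor(MultiplexerType.MAX_MIN_FAIRNESS).describe())
}
```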
|
|
This change updates the virtual machine performance interference model
so that the interference domain can be constructed independently of the
interference profile. As a consequence, the construction of the topology
no longer depends on the interference profile.
|
|
This change moves the Random dependency outside the interference model,
to allow the interference model to be completely immutable and passable
between different simulations.
|
|
This change updates the design of the VM interference model, where we
move more of the logic into the `VmInterferenceMember` interface. This
removes the dependency on the `VmInterferenceModel` for the hypervisor
interface.
|
|
This change updates the signature of the `SimHypervisor` interface to
accept a `VmInterferenceKey` when creating a new virtual machine,
instead of providing a string identifier. This is in preparation for
removing the dependency on the `VmInterferenceModel` in the
`SimAbstractHypervisor` class.
|
|
This change removes the convergence listener parameter for the
`SimBareMetalMachine` and the hypervisors. This parameter was not used
in the code-base and is being removed with the introduction of the new
flow2 module.
|
|
This change moves the core of the VM interference model from the flow
module into the compute simulator. This logic can be contained in the
compute simulator and does not need to leak into the flow-level
simulator.
|
|
This change updates the SimMachine interface to drop the coroutine
requirement for running a workload on a machine. Users can now
asynchronously start a workload and receive notifications via the
workload callbacks.
Users still have the possibility to suspend execution during workload
execution by using the new `runWorkload` method, which is implemented on
top of the new `startWorkload` primitive.
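A sketch of how a suspending `runWorkload` can be layered on top of a callback-based `startWorkload` primitive; the signatures here are assumptions:

```kotlin
import kotlin.coroutines.resume
import kotlin.coroutines.resumeWithException
import kotlin.coroutines.suspendCoroutine

// Sketch of layering a suspending `runWorkload` on top of a callback-based
// `startWorkload` primitive, as described above; signatures are assumptions.
interface Machine {
    fun startWorkload(workload: Runnable, onCompletion: (Throwable?) -> Unit)
}

suspend fun Machine.runWorkload(workload: Runnable): Unit =
    suspendCoroutine { cont ->
        startWorkload(workload) { failure ->
            if (failure != null) cont.resumeWithException(failure)
            else cont.resume(Unit)
        }
    }
```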
|
|
This change redesigns the virtual machine interference algorithm to have
a fixed memory usage per `VmInterferenceModel` instance. Previously, for
every interference domain, a copy of the model would be created, leading
to OutOfMemory errors when running multiple experiments at the same
time.
|
|
This change improves the performance of the SimTraceWorkload class by
changing the way trace fragments are read and processed by the CPU
consumers.
|
|
This change adds a new interface to `SimHypervisor` that
exposes the CPU time counters directly. These are derived from the flow
counters and will be used by SimHost to expose them via telemetry.
|
|
This change renames the `opendc-simulator-resources` module into the
`opendc-simulator-flow` module to indicate that the core simulation
model of OpenDC is based around modelling and simulating flows.
Previously, the distinction between resource consumer and provider, and
input and output caused some confusion. By switching to a flow-based
model, this distinction is now clear (as in, the water flows from source
to consumer/sink).
|
|
This change removes the distributor and aggregator interfaces in favour
of a single switch interface. Since the switch interface is as powerful
as both the distributor and aggregator, we don't need the latter two.
|
|
This change removes the work and deadline properties from the
SimResourceCommand.Consume class and introduces a new property duration.
This property is now used in conjunction with the limit to compute the amount
of work processed by a resource provider.
Previously, we used both work and deadline to compute the duration and
the amount of remaining work at the end of a consumption. However, with
this change, we ensure that a resource consumption always runs at the
same speed once established, drastically simplifying the computation
for the amount of work processed during the consumption.
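A worked example of the new bookkeeping: because a consumption runs at a fixed rate (`limit`) for a fixed `duration`, the processed work is simply their product (names and units illustrative):

```kotlin
// Work processed by a consumption running at a constant rate for a fixed
// duration; there is no remainder to carry over, unlike the old
// work-plus-deadline scheme.
fun workProcessed(limitMhz: Double, durationS: Double): Double =
    limitMhz * durationS

fun main() {
    // e.g. 2000 MHz for 10 s => 20000 units of work
    println(workProcessed(2000.0, 10.0))
}
```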
|
|
This change removes the dependency on SnakeYaml for the simulator. It
was only required for a very small component of the simulator and
therefore does not justify bringing in such a dependency.
|
|
This change standardizes the metrics emitted by SimHost instances and
their guests based on the OpenTelemetry semantic conventions. We now
also report CPU time as opposed to CPU work as this metric is more
commonly used.
|
|
This change removes the usage and speed fields from SimMachine. We
currently use other ways to capture the usage and speed and these fields
cause an additional maintenance burden and performance impact. Hence the
removal of these fields.
|
|
This change eliminates unnecessary double to long conversions in the
simulator. Previously, we used longs to denote the amount of work.
However, in the meantime we have switched to doubles in the lower
stack.
|
|
This change fixes an issue with the simulator where trace fragments with
zero cores to execute would give a NaN amount of work.
|
|
This change refactors the trace workload in the OpenDC simulator to
execute fragments based on their timestamps. This makes
sure that the trace is replayed identically to the original execution.
|
|
This change reimplements the performance interference model to
work on top of the universal resource model in
`opendc-simulator-resources`. This enables us to model interference and
performance variability of other resources such as disk or network in
the future.
|
|
This change adds initial support for storage devices in the OpenDC
simulator. Currently, we focus on local disks attached to the machine.
In the future, we plan to support networked storage devices using the
networking support in OpenDC.
|
|
This change bridges the compute and network simulation module by
adding support for network adapters in the compute module. With these
network adapters, compute workloads can communicate over the network
that the adapters are connected to.
|
|
This change re-organizes the classes of the compute simulator module to
make a clearer distinction between the hardware, firmware and software
interfaces in this module.
|
|
This change adds the CPU frequency scaling governors including the conservative and on-demand governors that are found in the Linux kernel.
# Implementation Notes
* A `ScalingPolicy` has been added to aid the frequency scaling process.
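An illustrative sketch of an on-demand-style governor (jump to the maximum frequency above a utilization threshold, otherwise scale proportionally); the threshold and all names are assumptions:

```kotlin
// Illustrative on-demand-style governor: above `upThreshold` utilization,
// jump straight to the maximum frequency; below it, scale proportionally
// between the minimum and maximum frequency.
class OnDemandGovernor(
    private val minFreq: Double,
    private val maxFreq: Double,
    private val upThreshold: Double = 0.8,
) {
    fun target(utilization: Double): Double =
        if (utilization >= upThreshold) maxFreq
        else (minFreq + utilization * (maxFreq - minFreq)).coerceAtMost(maxFreq)
}
```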
|
|
This change adds the CPU frequency scaling governors that are found in
the Linux kernel, which include the conservative and on-demand governor.
|
|
This pull request adds a subsystem to OpenDC for modelling power components in datacenters,
such as UPSes, PDUs and PSUs.
These components also take into account electrical losses that occur in real-world scenarios.
- Add module for datacenter power components (UPS, PDU)
- Integrate power subsystem with compute subsystem (PSU)
- Model power loss in power components
**Breaking API Changes**
1. `SimBareMetalMachine.powerDraw` is replaced by `SimBareMetalMachine.psu.powerDraw`
|