|
This change updates the SimResourceContextImpl to lazily push changes to
the resource context instead of applying them directly. The change is
picked up after the resource is updated again.
|
|
This change updates the simulator implementation to always invoke the
`SimResourceConsumer.onNext` callback when the resource context is
invalidated. This allows users to update the resource counter or do some
other work if the context has changed.
|
|
This change simplifies the implementation of the
SimResourceAggregatorMaxMin class by utilizing the new push method.
This approach should offer better performance than the previous version,
since we can directly push changes to the source.
|
|
This change removes unnecessary allocations in the SimResourceInterpreter
caused by the way timers were allocated for the resource context.
|
|
This change adds a new method to `SimResourceContext` called `push`
which allows users to change the requested flow rate directly without
having to interrupt the consumer.
|
|
This change removes the work and deadline properties from the
SimResourceCommand.Consume class and introduces a new property duration.
This property is now used in conjunction with the limit to compute the amount
of work processed by a resource provider.
Previously, we used both work and deadline to compute the duration and
the amount of remaining work at the end of a consumption. However, with
this change, we ensure that a resource consumption always runs at the
same speed once established, drastically simplifying the computation
of the amount of work processed during the consumption.
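The duration-based model can be sketched as follows (in Java for brevity; the class and field names mirror the description above but are illustrative, not the actual OpenDC API). With a constant consumption speed, the work processed is simply the limit multiplied by the elapsed duration:

```java
// Sketch of the duration-based consumption model described above.
// Names are illustrative, not the actual OpenDC API.
final class Consume {
    final double limit;    // rate at which the resource is consumed
    final double duration; // how long the consumption runs

    Consume(double limit, double duration) {
        this.limit = limit;
        this.duration = duration;
    }

    /** Work processed: with a constant speed, this is just rate * time. */
    double work() {
        return limit * duration;
    }

    /**
     * Work processed if the consumption is cut short after `elapsed` time
     * units, e.g. because the context was interrupted early.
     */
    double workAfter(double elapsed) {
        return limit * Math.min(elapsed, duration);
    }
}
```

Compared to the previous (work, deadline) pair, no remaining-work bookkeeping is needed, because the speed never changes mid-consumption.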
|
|
This change updates the SimResourceDistributorMaxMin implementation to
use direct field accesses in the perf-sensitive code.
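As the class name suggests, the distributor implements max-min fair sharing. A generic sketch of that allocation policy (in Java for illustration; this is the textbook algorithm, not the OpenDC implementation): satisfy the smallest demands first and re-share any leftover capacity among the remaining consumers.

```java
import java.util.Arrays;

/** Generic max-min fair allocation sketch; not the OpenDC implementation. */
final class MaxMin {
    /** Distribute `capacity` over `demands`, never granting a consumer more
     *  than it asked for; leftover capacity flows to the larger demands. */
    static double[] allocate(double capacity, double[] demands) {
        int n = demands.length;
        Integer[] order = new Integer[n];
        for (int i = 0; i < n; i++) order[i] = i;
        // Process the smallest demands first so leftovers reach large ones.
        Arrays.sort(order, (a, b) -> Double.compare(demands[a], demands[b]));

        double[] out = new double[n];
        double remaining = capacity;
        for (int k = 0; k < n; k++) {
            int i = order[k];
            double share = remaining / (n - k); // equal share of what is left
            double grant = Math.min(demands[i], share);
            out[i] = grant;
            remaining -= grant;
        }
        return out;
    }
}
```

For example, distributing a capacity of 10 over demands (2, 4, 10) grants (2, 4, 4): the two small demands are satisfied in full and the large one absorbs the remainder.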
|
|
This change updates the JMH benchmarks to use longer traces in order to
measure the overhead of running the flow simulation as opposed to
setting up the benchmark.
|
|
This change updates the project to use jmh-gradle for benchmarking as
opposed to kotlinx-benchmark. Both plugins use JMH under the hood, but
jmh-gradle offers more options for profiling and seems to be better
maintained.
|
|
This change removes the dependency on SnakeYaml for the simulator. It
was only required for a very small component of the simulator and
therefore does not justify bringing in such a dependency.
|
|
This change allows workloads that require more CPUs than available on
the machine to still function properly.
|
|
This change standardizes the metrics emitted by SimHost instances and
their guests based on the OpenTelemetry semantic conventions. We now
also report CPU time instead of CPU work, as this metric is more
commonly used.
|
|
This change moves the fault injection logic directly into the
opendc-compute-simulator module, so that it can operate at a higher
abstraction. In the future, we might again split the module if we can
re-use some of its logic.
|
|
This change adds support for specifying the distribution of the
failures, group size and duration for the fault injector.
|
|
This change updates the SimHost implementation to measure the power draw
of the machine without PSU overhead to make the results more realistic.
|
|
This change removes the usage and speed fields from SimMachine. We
currently use other ways to capture the usage and speed and these fields
cause an additional maintenance burden and performance impact. Hence the
removal of these fields.
|
|
This change eliminates unnecessary double to long conversions in the
simulator. Previously, we used longs to denote the amount of work.
However, in the meantime we have switched to doubles in the lower
stack.
|
|
This change upgrades the OpenTelemetry dependency to version 1.5, which
contains various breaking changes in the metrics API.
|
|
This change adds support to the simulator for reporting the work lost
due to performance interference.
|
|
This change fixes an issue with the simulator where trace fragments with
zero cores to execute would give a NaN amount of work.
|
|
This change fixes an issue with the simulator where it would record
overcommitted work if the output was updated before the deadline was
reached.
|
|
This change refactors the trace workload in the OpenDC simulator to
execute each fragment based on the fragment's timestamp. This makes
sure that the trace is replayed identically to the original execution.
|
|
This change updates the FilterScheduler implementation to follow the
scheduler implementation in OpenStack's Nova more closely. We now
normalize the weights, support many of the filters and weights in
OpenStack, and support overcommitting resources.
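Nova-style weighing rescales each weigher's raw host scores before they are combined. A minimal sketch of that min-max normalization to [0, 1] (names are illustrative; this mirrors Nova's documented behavior, not OpenDC's exact code):

```java
/** Sketch of Nova-style weight normalization; illustrative only. */
final class Weigher {
    /** Rescale raw host scores to [0, 1] via min-max normalization. */
    static double[] normalize(double[] weights) {
        double min = Double.POSITIVE_INFINITY;
        double max = Double.NEGATIVE_INFINITY;
        for (double w : weights) {
            min = Math.min(min, w);
            max = Math.max(max, w);
        }
        double range = max - min;
        double[] out = new double[weights.length];
        for (int i = 0; i < weights.length; i++) {
            // If all hosts scored the same, every host gets 0.
            out[i] = range == 0 ? 0 : (weights[i] - min) / range;
        }
        return out;
    }
}
```

Normalization keeps one weigher with large raw values (e.g. free RAM in MB) from drowning out another with small ones (e.g. a 0-1 affinity score).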
|
|
This change implements a performance improvement by preventing updates
on the resource counters in case no work was performed in the last
cycle.
|
|
This change updates the Kotlin dependencies used by OpenDC to their
latest version.
|
|
This change reimplements the performance interference model to
work on top of the universal resource model in
`opendc-simulator-resources`. This enables us to model interference and
performance variability of other resources such as disk or network in
the future.
|
|
This change introduces an interface for modelling performance
variability due to resource interference in systems where resources are
shared across multiple consumers.
|
|
This change adds initial support for storage devices in the OpenDC
simulator. Currently, we focus on local disks attached to the machine.
In the future, we plan to support networked storage devices using the
networking support in OpenDC.
|
|
This change adds a virtual network switch to the OpenDC networking
module. Currently, the switch bridges the traffic equally across all
ports. In the future, we'll also add routing support to the switch.
|
|
This change bridges the compute and network simulation module by
adding support for network adapters in the compute module. With these
network adapters, compute workloads can communicate over the network
that the adapters are connected to.
|
|
This change re-organizes the classes of the compute simulator module to
make a clearer distinction between the hardware, firmware and software
interfaces in this module.
|
|
This change adds a basic framework as a basis for network simulation support
in OpenDC. It is modelled similarly to the power system that has been
added recently.
|
|
This change updates the resources module to reduce the number of object
allocations in the interpreter's hot path. This in turn should reduce
the GC pressure.
|
|
This change optimizes the internal flag management used in the
SimResourceContextImpl to use bitwise flags instead of enums. This
approach simplifies the implementation immensely and reduces the number
of branches.
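The enum-to-bitwise-flags technique mentioned above can be sketched as follows; the flag names here are invented for illustration and are not the ones used in OpenDC:

```java
/** Sketch of replacing enum-based state with bitwise flags. */
final class Flags {
    static final int PENDING     = 1;      // an update is scheduled
    static final int INVALIDATED = 1 << 1; // context needs a refresh
    static final int CLOSED      = 1 << 2; // context no longer usable

    private int flags;

    void set(int flag)    { flags |= flag; }
    void clear(int flag)  { flags &= ~flag; }
    boolean has(int flag) { return (flags & flag) != 0; }

    /** A single branch can now test several conditions at once. */
    boolean any(int mask) { return (flags & mask) != 0; }
}
```

Combining conditions into one mask test (`any(INVALIDATED | CLOSED)`) is where the branch reduction comes from: several enum comparisons collapse into one AND and one compare.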
|
|
This change removes the AutoCloseable interface from the
SimResourceProvider and removes the concept of a resource lifecycle.
Instead, resource providers are now either active (running a resource
consumer) or inactive (being idle), which simplifies the implementation.
|
|
This change updates the SimResourceInterpreter implementation to pool
the allocations of the Update objects. This reduces the amount of
allocations necessary in the hot path of the simulator.
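A minimal sketch of the pooling pattern described above, with an invented `Update` shape (the real `Update` fields in OpenDC differ):

```java
import java.util.ArrayDeque;

/** Minimal object-pool sketch; Update fields are invented for illustration. */
final class UpdatePool {
    static final class Update {
        long timestamp; // when the update should fire
        Object target;  // context the update applies to
    }

    private final ArrayDeque<Update> pool = new ArrayDeque<>();

    /** Reuse a pooled instance when possible instead of allocating. */
    Update acquire(long timestamp, Object target) {
        Update u = pool.poll();
        if (u == null) u = new Update();
        u.timestamp = timestamp;
        u.target = target;
        return u;
    }

    /** Return an instance to the pool once the update has been processed. */
    void release(Update u) {
        u.target = null; // don't keep the context reachable from the pool
        pool.push(u);
    }
}
```

In a hot path that fires many short-lived updates per simulated tick, recycling instances this way trades a small amount of bookkeeping for far fewer allocations and less GC pressure.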
|
|
This change updates the SimResourceContextImpl to optimize the access to
the remainingWork property, which is required by many calls in the hot
path.
|
|
This change adds the CPU frequency scaling governors including the conservative and on-demand governors that are found in the Linux kernel.
# Implementation Notes
* A `ScalingPolicy` has been added to aid the frequency scaling process.
|
|
This change adds the CPU frequency scaling governors that are found in
the Linux kernel, including the conservative and on-demand governors.
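An on-demand-style governor can be sketched roughly as follows; the threshold and the proportional-scaling fallback are illustrative simplifications of the Linux governor's behavior, not the OpenDC `ScalingPolicy`:

```java
/** Rough sketch of an on-demand-style frequency scaling policy:
 *  burst to the maximum frequency when load crosses a threshold,
 *  otherwise scale frequency proportionally to load. Illustrative only. */
final class OnDemand {
    final double min, max;    // frequency bounds (e.g. MHz)
    final double upThreshold; // e.g. 0.8 = 80% utilization

    OnDemand(double min, double max, double upThreshold) {
        this.min = min;
        this.max = max;
        this.upThreshold = upThreshold;
    }

    /** Pick the next frequency for a given CPU load in [0, 1]. */
    double next(double load) {
        if (load >= upThreshold) return max; // burst straight to max
        double f = min + load * (max - min); // scale with load
        return Math.min(Math.max(f, min), max);
    }
}
```

A conservative-style governor differs mainly in stepping the frequency up and down gradually instead of jumping straight to the maximum.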
|
|
This pull request adds a subsystem to OpenDC for modelling power components in datacenters,
such as UPSes, PDUs and PSUs.
These components also take into account electrical losses that occur in real-world scenarios.
- Add module for datacenter power components (UPS, PDU)
- Integrate power subsystem with compute subsystem (PSU)
- Model power loss in power components
**Breaking API Changes**
1. `SimBareMetalMachine.powerDraw` is replaced by `SimBareMetalMachine.psu.powerDraw`
|
|
This change introduces power loss to the PSU component.
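A simple efficiency-based loss model of the kind described can be sketched as: the power drawn from the outlet equals the machine-side load divided by an efficiency factor. The shape of the model and the figures below are illustrative, not the model OpenDC uses:

```java
/** Sketch of a fixed-efficiency PSU loss model; illustrative only. */
final class Psu {
    final double efficiency; // fraction of input power delivered, in (0, 1]

    Psu(double efficiency) {
        this.efficiency = efficiency;
    }

    /** Power drawn from the wall for a given load on the machine side. */
    double powerDraw(double load) {
        return load / efficiency;
    }

    /** Power dissipated as heat by the PSU itself. */
    double loss(double load) {
        return powerDraw(load) - load;
    }
}
```

Real PSUs have load-dependent efficiency curves (they are least efficient near idle), so a production model would typically interpolate efficiency from the current load rather than use a constant.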
|
|
This change adds a new model for the UPS to the OpenDC simulator power
subsystem.
|
|
This change integrates the power subsystem of the simulator with the
compute subsystem by exposing a new field on SimBareMetalMachine, psu,
which provides access to the machine's PSU; the PSU can in turn be
connected to a SimPowerOutlet.
|
|
This change adds a model for power loss to the Power Distribution Unit
(PDU) model in OpenDC.
|
|
This change adds a new module for simulating power components in
datacenters such as PDUs and UPSes. This module will serve as the basis
for the power monitoring framework in OpenDC and will, in the future,
integrate with the other simulation components (such as compute).
|
|
This change introduces a memory resource which can be used to model
memory usage. The SimMachineContext now exposes a memory field of type
SimMemory which provides access to this resource and allows workloads to
start a consumer on this resource.
|
|
This change moves the CPU frequency scaling governors from the
bare-metal/firmware layer (SimBareMetalMachine) to the OS/Hypervisor
layer (SimHypervisor) where it can make more informed decisions about
the CPU frequency based on the load of the operating system or
hypervisor.
|
|
This change splits the functionality present in the CPUFreq subsystem of
the compute simulation. Currently, the DVFS functionality is embedded in
SimBareMetalMachine. However, this functionality should not exist within
the firmware layer of a machine. Instead, the operating system should
perform this logic (in OpenDC this should be the hypervisor).
Furthermore, this change moves the scaling driver into the power
package. The power driver is a machine/firmware specific implementation
that computes the power consumption of a machine.
|
|
This change updates the SimWorkload interfaces to allow
implementations to start consumers for the machine resource providers
directly.
|