|
This change migrates Dokka, the documentation generation tool for
Kotlin, to version 1.4.32. This is a significant upgrade over the
previous version and should add support for multiple modules as well as
multiple output formats.
|
|
This change configures the Distribution plugin for the root project and
aggregates the artifacts of the other projects to generate a single
distribution file containing all libraries and binaries.
|
|
This change updates the project documentation by moving most of the
documentation to the docs directory.
|
|
This change flattens the project structure.
Previously, the simulator, frontend and API each lived in their own directory.
With this change, all modules of the project live in the top-level directory of
the repository.
|
|
This change updates the Codecov references to the API and Simulator
modules of the project.
|
|
This change updates several points in the README that were outdated or
incorrect.
|
|
This change adds a description of the software license under which
OpenDC is distributed to the README.md file.
|
|
This change adjusts the docker-compose configuration to support the
re-organized project structure.
|
|
This change fixes the references to the frontend and API modules that
were invalidated by the restructuring of the project in the previous
commit.
|
|
This change flattens the project structure.
Previously, the simulator, frontend and API each lived in their own
directory.
With this change, all modules of the project live in the top-level
directory of the repository. This should improve the discoverability of
the project's modules.
|
|
This change adds a power model that minimizes the mean squared error
(MSE) to the set of power models available in OpenDC.
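An MSE-based power model is commonly attributed to Fan et al. (2007): it interpolates between idle and peak power using a calibration exponent r that is chosen to minimize the mean squared error against measured power draw (roughly 1.4 in their experiments). Whether OpenDC implements exactly this form is an assumption; the project itself is Kotlin, so this Java snippet is only an illustrative sketch.

```java
final class PowerModels {
    // Fan et al. (2007) style model: P(u) = Pidle + (Pmax - Pidle) * (2u - u^r),
    // where r is fitted offline by minimizing the mean squared error.
    static double mseModelPower(double idleW, double maxW, double utilization, double r) {
        double u = Math.min(Math.max(utilization, 0.0), 1.0); // clamp to [0, 1]
        return idleW + (maxW - idleW) * (2 * u - Math.pow(u, r));
    }
}
```

At zero utilization the model yields the idle power and at full utilization the peak power, with the exponent shaping the curve in between.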
|
|
This change introduces the SimulationCoroutineDispatcher implementation which replaces the TestCoroutineDispatcher for running single-threaded simulations.
|
|
This change migrates the remainder of the codebase to the
SimulationCoroutineDispatcher implementation.
|
|
This change introduces the SimulationCoroutineDispatcher implementation
which replaces the TestCoroutineDispatcher for running single-threaded
simulations.
Previously, we used the TestCoroutineDispatcher from the
kotlinx-coroutines-test module for running simulations. However, that
module is aimed at coroutine tests, not simulations.
In particular, having to construct a Clock object each time for the
TestCoroutineDispatcher caused a lot of unnecessary lines. With the new
approach, the SimulationCoroutineDispatcher automatically exposes a
usable Clock object.
In addition to ergonomic benefits, the SimulationCoroutineDispatcher is
much faster than the TestCoroutineDispatcher due to the assumption that
simulations run in only a single thread. As a result, the dispatcher
does not need to perform synchronization and can use the fast
PriorityQueue implementation.
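The single-threaded design described above can be sketched as follows: because only one thread drives the simulation, no synchronization is needed, a plain PriorityQueue orders the timed events, and the virtual clock is exposed directly by the scheduler. This is an illustrative Java sketch, not the OpenDC API (which is Kotlin).

```java
import java.util.PriorityQueue;

// Minimal single-threaded simulation scheduler with a built-in virtual clock.
class SimScheduler {
    static final class Event {
        final long time;
        final Runnable action;
        Event(long time, Runnable action) { this.time = time; this.action = action; }
    }

    private long now = 0; // current virtual time (ms)
    private final PriorityQueue<Event> queue =
        new PriorityQueue<>((a, b) -> Long.compare(a.time, b.time));

    long now() { return now; } // the exposed "Clock"

    void schedule(long delayMs, Runnable action) {
        queue.add(new Event(now + delayMs, action));
    }

    // Drain all pending events, advancing the virtual clock as we go.
    void runUntilIdle() {
        while (!queue.isEmpty()) {
            Event e = queue.poll();
            now = e.time;
            e.action.run();
        }
    }
}
```

Callers schedule work at virtual timestamps and read the clock from the scheduler itself, rather than constructing a separate Clock object per test.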
|
|
Bumps [py](https://github.com/pytest-dev/py) from 1.9.0 to 1.10.0.
- [Release notes](https://github.com/pytest-dev/py/releases)
- [Changelog](https://github.com/pytest-dev/py/blob/master/CHANGELOG.rst)
- [Commits](https://github.com/pytest-dev/py/compare/1.9.0...1.10.0)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
|
|
This change adds a quick workaround for getting kotlinx-benchmark to
work again with Gradle 7.
See https://github.com/Kotlin/kotlinx-benchmark/issues/39.
|
|
This is a second pull request to address several issues that were present
in the web runner and the associated experiments:
* Re-use topology across repeats
* Disallow re-use of `SimTraceWorkload`
* Construct new `SimTraceWorkload` for each simulation run
|
|
This change fixes an issue where a SimWorkload was being re-used across
simulation runs. Given that SimWorkload is stateful, this may cause
strange issues.
|
|
This change modifies the web runner to construct the topology only once
and re-use it across repeats, given that the construction does not
depend on the repeat number.
|
|
This pull request addresses several issues that were present
in the web runner and the associated experiments:
* Enable failures only when user requests it
* Simplify power usage calculation (directly from J to Wh)
* Fix an issue with multi-socket machines.
* Fix filter scheduler weights
**Breaking API Changes**
* `ScalingContext` now only exposes a `SimProcessingUnit`
instead of a `ProcessingUnit` and `SimResourceSource`.
|
|
This change fixes an issue where incorrect scheduling weights were
applied to the filter scheduler.
|
|
This change introduces SimProcessingUnit, which represents a simulated
processing unit that the user can control during workload execution.
|
|
This change simplifies the conversion from power to energy consumption
used in the web runner. Now, we convert straight from J to Wh.
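The simplification rests on a unit identity: one watt-hour is 3600 joules by definition, so the conversion is a single division. A minimal sketch:

```java
final class EnergyUnits {
    // 1 Wh = 3600 J, so converting accumulated energy is one division.
    static double joulesToWattHours(double joules) {
        return joules / 3600.0;
    }
}
```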
|
|
This change fixes a bug where the simulator obtained an incorrect
failure frequency, causing failures to be enabled even when the user
had disabled them.
|
|
This pull request updates the Kotlin project to build with Gradle 7.0.
This is necessary to support building the project with Java 16.
|
|
This change adds the asymptotic power model that is used in GreenCloud
to the available power models in OpenDC.
|
|
This pull request implements the filter scheduler modeled after the scheduler
from [OpenStack](https://docs.openstack.org/nova/latest/user/filter-scheduler.html).
The scheduler is functionally equivalent to the old allocation policies, but is more
flexible and allows policies to be combined.
* A new interface, `ComputeScheduler` is introduced, which is used by the
`ComputeServiceImpl` to pick hosts to schedule on.
* `FilterScheduler` is implemented, which works by filtering and weighing the available hosts.
**Breaking API Changes**
* Removal of the `AllocationPolicy` interface and its implementations.
Users should migrate to the filter scheduler which offers the same functionality and more.
|
|
This pull request is the first in a series of pull requests to add the serverless experiments
from Soufiane Jounaid's BSc thesis to the main OpenDC repository.
In this pull request, we add the serverless experiment and trace reader.
* Add `opendc-experiments-serverless20` which will contain the serverless experiments.
* Add `ServerlessTraceReader` which reads the traces from Soufiane's work.
* Add support for cold start delays
* Expose metrics per function.
|
|
This change adds metrics that are tracked per function instance, which
include the runtime of the invocations and the number of invocations
(total, warm, cold, failed).
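The per-function counters described above can be sketched as a small accumulator; the field and method names below are illustrative, not the OpenDC Serverless API.

```java
// Per-function-instance invocation metrics: total/warm/cold/failed counts
// plus the accumulated runtime of successful invocations.
final class FunctionStats {
    long total, warm, cold, failed;
    long totalRuntimeMs;

    void record(boolean wasCold, boolean succeeded, long runtimeMs) {
        total++;
        if (!succeeded) { failed++; return; }
        if (wasCold) cold++; else warm++;
        totalRuntimeMs += runtimeMs;
    }
}
```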
|
|
This change adds an experiment testing the OpenDC Serverless module.
|
|
This change adds the trace reader for the serverless experiments as
described in #48.
|
|
This change exposes several metrics from the Serverless service, which
are needed for the experiments.
|
|
This change migrates the OpenDC codebase to use the new FilterScheduler
for scheduling virtual machines. This removes the old allocation
policies as well.
|
|
This change adds an implementation of the filter scheduler to the OpenDC
Compute module. This is modeled after the filter scheduler
implementation in OpenStack and should allow for more flexible
scheduling policies.
See: https://docs.openstack.org/nova/latest/user/filter-scheduler.html
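The filter/weigher scheme behind such a scheduler can be sketched as two stages: filters discard ineligible hosts, weighers score the survivors, and the highest-scoring host wins. The names below are illustrative Java (the project itself is Kotlin), not the OpenDC or OpenStack API.

```java
import java.util.Comparator;
import java.util.List;
import java.util.function.Predicate;
import java.util.function.ToDoubleFunction;

// Filter scheduler sketch: apply every filter, then sum weigher scores
// and pick the best remaining host.
final class FilterScheduler<H> {
    private final List<Predicate<H>> filters;
    private final List<ToDoubleFunction<H>> weighers;

    FilterScheduler(List<Predicate<H>> filters, List<ToDoubleFunction<H>> weighers) {
        this.filters = filters;
        this.weighers = weighers;
    }

    // Returns the best host, or null when every host was filtered out.
    H select(List<H> hosts) {
        return hosts.stream()
            .filter(h -> filters.stream().allMatch(f -> f.test(h)))
            .max(Comparator.comparingDouble(
                h -> weighers.stream().mapToDouble(w -> w.applyAsDouble(h)).sum()))
            .orElse(null);
    }
}
```

Composing policies then reduces to composing lists of filters and weighers, which is what makes this scheme more flexible than fixed allocation policies.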
|
|
This pull request adds an experiment to the repository for the OpenDC Energy project.
This experiment currently runs the Solvinity traces and tests how different energy
models perform.
* Add new experiment `opendc-experiments-energy21`
* Link experiment to `ConsoleRunner` so that the experiment can be run from the
command line.
* `BatchRecorder` is now used to emit all metrics at once after the hypervisor
finishes a slice.
|
|
This change adds an experiment for the OpenDC Energy project, which
tests various energy models that have been implemented in OpenDC.
|
|
This change fixes an issue in the metric exporter for summary metrics,
where the sum was reported instead of the average.
|
|
This change updates the SimHost implementation to use the BatchRecorder
to record multiple metrics at once in a single batch.
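The batching idea can be sketched as follows: samples accumulate locally and are emitted to the exporter in one call when the hypervisor finishes a slice. The names and the list-based "sink" are illustrative, not the OpenDC API.

```java
import java.util.ArrayList;
import java.util.List;

// Batch recorder sketch: record() buffers samples, flush() emits them all
// at once at the end of a slice.
final class BatchRecorder {
    private final List<double[]> batch = new ArrayList<>();
    private final List<double[]> sink; // stand-in for the metric exporter

    BatchRecorder(List<double[]> sink) { this.sink = sink; }

    void record(double... sample) { batch.add(sample); }

    void flush() {
        sink.addAll(batch);
        batch.clear();
    }
}
```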
|
|
This change fixes an issue in SimTraceWorkload where the CPU usage was
not divided across the cores, but was instead requested for all cores.
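The arithmetic of the fix is simple: a trace row's CPU demand covers the whole machine, so it must be split over the cores rather than requested from each core in full. A minimal sketch (names illustrative):

```java
final class CoreUsage {
    // Divide the total demanded CPU usage evenly over the available cores.
    static double perCore(double totalUsageMhz, int cores) {
        return totalUsageMhz / cores;
    }
}
```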
|
|
This change fixes an issue in the RandomAllocationPolicy where it would
incorrectly obtain the required memory for the server.
|
|
This change simplifies the way metrics are reported to the monitor.
Previously, power draw was collected separately from the other metrics.
However, with the migration to OpenTelemetry, we collect all metrics
every 5 minutes, which drastically simplifies the metric gathering
logic.
|
|
This change updates the logic in SimAbstractMachine to only propagate
usages when the value has changed.
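The propagate-on-change guard amounts to comparing the new value against the last one pushed and skipping the downstream update when they are equal. A minimal sketch, with an integer counter standing in for the downstream notification:

```java
// Only propagate a usage value when it actually differs from the last one.
final class UsageProbe {
    private double last = Double.NaN; // NaN compares unequal to everything
    int notifications = 0;            // stand-in for the downstream update

    void push(double usage) {
        if (usage == last) return; // unchanged: skip the propagation
        last = usage;
        notifications++;
    }
}
```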
|
|
This pull request addresses several bottlenecks that were present in the
`opendc-simulator-resources` layer and `TimerScheduler`.
These changes result in a 4x performance improvement for the energy
experiments we are currently running.
* The use of `StateFlow` has been removed where possible. Profiling shows that
emitting changes to `StateFlow` becomes a bottleneck in a single-thread context.
* `SimSpeedConsumerAdapter` is an alternative for obtaining the changes in
speed of a resource.
**Breaking API Changes**
* `SimResourceSource` does not expose `speed` as `StateFlow` anymore. To monitor speed changes, use `SimSpeedConsumerAdapter`.
* Power draw in `SimBareMetalMachine` is not exposed as `StateFlow` anymore.
|
|
This change updates the TimerScheduler implementation to avoid calling
Intrinsics.areEqual in the hot path. Profiling shows that this call in
particular has a high overhead.
|
|
This change removes the StateFlow speed property on the
SimResourceSource, as the overhead of emitting changes to the StateFlow
is too high in a single-thread context. Our new approach is to use
direct callbacks and counters.
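The callback-based replacement can be sketched as a plain observer list: subscribers register a callback and are invoked directly on change, which is cheap in a single-threaded simulation because no flow or synchronization machinery is involved. Illustrative Java sketch, not the OpenDC API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.DoubleConsumer;

// Direct-callback speed source: observers are invoked synchronously on
// change, replacing the StateFlow-backed property.
final class SpeedSource {
    private final List<DoubleConsumer> observers = new ArrayList<>();
    private double speed;

    void onSpeedChange(DoubleConsumer observer) { observers.add(observer); }

    void setSpeed(double newSpeed) {
        if (newSpeed == speed) return; // also skip no-op updates
        speed = newSpeed;
        for (DoubleConsumer o : observers) o.accept(newSpeed);
    }
}
```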
|
|
This change updates the TimerScheduler implementation to cache several
variables in the hot paths of the implementation.
|
|
This pull request adds a CPUFreq subsystem to the simulator module. This subsystem allows a simulated machine to perform frequency scaling, which in turn should reduce energy consumption.
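The energy argument behind frequency scaling is that dynamic CMOS power grows roughly as P ≈ C · V² · f, and a lower frequency also permits a lower voltage, so scaling down reduces power superlinearly. The governor below is a toy "pick the lowest frequency that covers the demand" policy; it is an illustrative sketch, not the OpenDC CPUFreq subsystem.

```java
// Toy frequency governor over a fixed set of P-states.
final class CpuFreqGovernor {
    private final double[] frequencies; // available frequencies, ascending (MHz)

    CpuFreqGovernor(double[] frequencies) { this.frequencies = frequencies; }

    // Pick the lowest frequency that still covers the demanded load,
    // saturating at the maximum frequency.
    double select(double demandMhz) {
        for (double f : frequencies) {
            if (f >= demandMhz) return f;
        }
        return frequencies[frequencies.length - 1];
    }
}
```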
|
|