| Age | Commit message | Author |
|
|
|
This change hides the SimResourceState from public API since it is not
actively used outside of the `SimResourceContextImpl` class.
|
|
This change removes the `onUpdate` callback from the
`SimResourceProviderLogic` interface. Instead, users should now update
counters using either `onConsume` or `onConverge`.
|
|
This change updates the SimResourceContextImpl to lazily push changes to
the resource context instead of applying them directly. The change is
then picked up the next time the resource is updated.
|
|
This change updates the simulator implementation to always invoke the
`SimResourceConsumer.onNext` callback when the resource context is
invalidated. This allows users to update the resource counter or do some
other work if the context has changed.
|
|
This change simplifies the implementation of the
SimResourceAggregatorMaxMin class by utilizing the new push method.
This approach should offer better performance than the previous version,
since we can directly push changes to the source.
|
|
This change removes unnecessary allocations in the SimResourceInterpreter
caused by the way timers were allocated for the resource context.
|
|
This change adds a new method to `SimResourceContext` called `push`
which allows users to change the requested flow rate directly without
having to interrupt the consumer.
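A minimal sketch of the intended call pattern is shown below; the surrounding interface is assumed for illustration, and only the method name `push` comes from this message:

```kotlin
// Illustrative interface only; the real SimResourceContext has more members.
interface ResourceContextSketch {
    /** Change the requested flow rate in place, without interrupting the consumer. */
    fun push(rate: Double)

    /** Interrupt the consumer so it is asked for a new command (the old approach). */
    fun interrupt()
}

fun rescale(ctx: ResourceContextSketch, newRate: Double) {
    // Previously a rate change required ctx.interrupt(); now the rate is pushed directly.
    ctx.push(newRate)
}
```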
|
|
This change removes the `work` and `deadline` properties from the
SimResourceCommand.Consume class and introduces a new `duration` property.
This property is now used in conjunction with the limit to compute the amount
of work processed by a resource provider.
Previously, we used both work and deadline to compute the duration and
the amount of remaining work at the end of a consumption. However, with
this change, we ensure that a resource consumption always runs at the
same speed once established, drastically simplifying the computation of
the amount of work processed during the consumption.
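The resulting accounting is simple: with a fixed speed (the limit) over a known duration, the processed work follows directly from the product of the two. A minimal sketch, with illustrative units:

```kotlin
// Work processed by a consumption running at `limit` (e.g., MHz) for `durationMs`.
fun processedWork(limit: Double, durationMs: Long): Double =
    limit * (durationMs / 1000.0)

fun main() {
    // e.g., 2000 MHz for 500 ms -> 1000 units of work
    println(processedWork(limit = 2000.0, durationMs = 500))
}
```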
|
|
This change updates the SimResourceDistributorMaxMin implementation to
use direct field accesses in the perf-sensitive code.
|
|
This change updates the JMH benchmarks to use longer traces in order to
measure the overhead of running the flow simulation as opposed to
setting up the benchmark.
|
|
This change updates the project to use jmh-gradle for benchmarking as
opposed to kotlinx-benchmark. Both plugins use JMH under the hood, but
jmh-gradle offers more options for profiling and seems to be better
maintained.
|
|
This change updates the Gradle configuration to target Java 11 (instead
of Java 8) as the minimum supported version when building OpenDC. Since
the project has not yet been adopted by (many) other applications, we
should not restrict it to such an old Java version.
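One way to express this in the Gradle Kotlin DSL is via the toolchain API; whether the project uses toolchains or plain source/target compatibility is not shown here, so treat this as a sketch:

```kotlin
// build.gradle.kts -- illustrative only; the project's actual build scripts may differ.
java {
    toolchain {
        languageVersion.set(JavaLanguageVersion.of(11))
    }
}
```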
|
|
This pull request addresses some issues with the current implementation of
the `ComputeMetricExporter` class.
In particular, the construction of `ComputeMetricExporter` does not require a `Clock` anymore.
- Ensure shutdown of exporter is called
- Do not require clock for ComputeMetricExporter
- Do not recover guests in non-error state
- Write null values explicitly in Parquet exporter
- Report cause of compute exporter failure
**Breaking API Changes**
- `ComputeMetricExporter` is now an abstract class that can be extended to collect metrics
- `ParquetComputeMonitor` has been renamed to `ParquetComputeMetricExporter` and extends `ComputeMetricExporter`
|
|
This change drops the requirement for a clock parameter when
constructing a ComputeMetricExporter, since it will now derive the
timestamp from the recorded metrics.
|
|
This change updates the CoroutineMetricReader to ensure that the
exporter is shut down when the metric reader fails or is shut down.
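The guarantee boils down to a cleanup block that runs on both failure and cancellation. A minimal sketch with kotlinx.coroutines, using assumed reader/exporter types:

```kotlin
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.delay
import kotlinx.coroutines.isActive
import kotlinx.coroutines.launch

// Assumed exporter shape for illustration; the real exporter interface differs.
interface ExporterSketch {
    fun export()
    fun shutdown()
}

fun CoroutineScope.launchMetricReader(exporter: ExporterSketch, intervalMs: Long) = launch {
    try {
        while (isActive) {
            exporter.export()
            delay(intervalMs)
        }
    } finally {
        // Runs when export() throws as well as when the reader job is cancelled,
        // so the exporter is always shut down.
        exporter.shutdown()
    }
}
```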
|
|
Bumps [semver-regex](https://github.com/sindresorhus/semver-regex) from 3.1.2 to 3.1.3.
- [Release notes](https://github.com/sindresorhus/semver-regex/releases)
- [Commits](https://github.com/sindresorhus/semver-regex/commits)
---
updated-dependencies:
- dependency-name: semver-regex
dependency-type: indirect
...
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
|
|
This pull request extends the trace API to support writing new traces.
- Unify columns of different tables
- Support column lookup via index
- Use index lookup in trace loader
- Add property for describing partition keys
- Simplify TraceFormat SPI interface
- Add support for writing traces
**Breaking API Changes**
- `TraceFormat` SPI interface has been redesigned.
|
|
This change adds a new API for writing traces in a trace format.
Currently, writing is only supported by the OpenDC VM format, but
support for writing will be added to the other formats over time.
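The shape of such an API is sketched below with hypothetical names; the actual OpenDC trace writer interface is not reproduced here:

```kotlin
// Hypothetical writer interface and column names, for illustration only.
interface TableWriterSketch : AutoCloseable {
    fun startRow()
    fun set(column: String, value: Any)
    fun endRow()
}

fun writeResourceStates(writer: TableWriterSketch, states: List<Pair<String, Double>>) {
    writer.use { w ->
        for ((id, cpuUsage) in states) {
            w.startRow()
            w.set("resource_id", id)
            w.set("cpu_usage", cpuUsage)
            w.endRow()
        }
    }
}
```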
|
|
This change simplifies the TraceFormat SPI interface: implementors now
only need to implement TraceFormat itself.
|
|
|
|
This change updates the ComputeWorkloadLoader to use column-index
lookups, so that the column index does not have to be resolved for
every row.
|
|
This change adds support for looking up the column value through the
column index. This enables faster lookup when processing very large
traces.
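The access pattern this enables is to resolve a column index once and reuse it for every row, instead of a name lookup per row. A sketch with an assumed reader interface:

```kotlin
// Hypothetical reader interface; the real trace reader API may differ.
interface TableReaderSketch {
    fun resolve(column: String): Int   // index of the column, resolved once
    fun nextRow(): Boolean
    fun getDouble(index: Int): Double
}

fun totalCpuUsage(reader: TableReaderSketch): Double {
    val colCpuUsage = reader.resolve("cpu_usage")  // name lookup happens only once
    var total = 0.0
    while (reader.nextRow()) {
        total += reader.getDouble(colCpuUsage)     // per-row access goes by index
    }
    return total
}
```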
|
|
This change unifies the columns of the different tables used by trace
formats. Concretely, instead of having columns specific to each table
(e.g., RESOURCE_ID and RESOURCE_STATE_ID), these columns are now shared
between the tables with a single definition (RESOURCE_ID).
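Conceptually the change looks as follows (the constant values and table layouts are illustrative):

```kotlin
// A single shared column definition replaces per-table copies such as
// RESOURCE_ID vs. RESOURCE_STATE_ID (names from the message above; values assumed).
object TraceColumnsSketch {
    const val RESOURCE_ID = "id"
}

// Both tables now declare the same column definition.
val resourceTableColumns = listOf(TraceColumnsSketch.RESOURCE_ID, "cpu_count")
val resourceStateTableColumns = listOf(TraceColumnsSketch.RESOURCE_ID, "timestamp", "cpu_usage")
```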
|
|
This pull request enables re-use of virtual machine workload helpers by extracting the helpers into a
separate module which may be used by other experiments.
- Support workload/machine CPU count mismatch
- Extract common code out of Capelin experiments
- Support flexible topology creation
- Add option for optimizing SimHost simulation
- Support creating CPU-optimized topology
- Make workload sampling model extensible
- Add support for extended Bitbrains trace format
- Add support for Azure VM trace format
- Add support for internal OpenDC VM trace format
- Optimize OpenDC VM trace format
- Add tool for converting workload traces
- Remove dependency on SnakeYaml
**Breaking API Changes**
- `RESOURCE_NCPU` and `RESOURCE_STATE_NCPU` are renamed to `RESOURCE_CPU_COUNT` and `RESOURCE_STATE_CPU_COUNT` respectively.
|
|
This change removes the dependency on SnakeYaml for the simulator. It
was only required for a very small component of the simulator and
therefore does not justify bringing in such a dependency.
|
|
This change adds an initial implementation to the trace library for
converting between workload trace formats. Currently the tool supports
only converting to the OpenDC VM trace format. However, in the future,
we will add support for converting between other formats as well.
|
|
This change optimizes the OpenDC VM trace format by removing
unnecessary columns as well as optimizing the writer settings.
The new implementation still supports reading the old trace format in
case users run OpenDC with older workload traces.
|
|
This change adds official support to the trace library for the internal
VM trace format used by OpenDC for its experiments. This is a compact
format that uses Parquet to store the virtual machine trace data in two
Parquet files.
|
|
This change adds support in the trace library for the Azure VM trace
format.
|
|
This change adds support in the trace library for the extended Bitbrains
format. This format is slightly different than the CSV format used by
the original Bitbrains traces and contains more fields.
|
|
This change updates the workload sampling implementation to be more
flexible in the way the workload is constructed. Users can now sample
multiple workloads at the same time using multiple samplers and use them
as a single workload to simulate.
|
|
This change adds support for creating a topology that is CPU-optimized
for simulation. This means that all the CPU resources of a machine are
merged into a single large CPU in order to reduce simulation time.
|
|
This change adds an option for optimizing the SimHost simulation by
combining all the CPUs of a machine into a single large CPU. For most
workloads, this does not significantly affect the simulation results,
but it substantially reduces the simulation time.
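The merge itself amounts to folding the per-core capacities into one logical core. A sketch with stand-in model classes (not the real OpenDC machine model):

```kotlin
// Stand-in CPU model for illustration.
data class CpuSketch(val coreCount: Int, val frequencyMhz: Double)

fun mergeCpus(cpus: List<CpuSketch>): CpuSketch {
    // Total capacity of the machine across all cores...
    val totalCapacityMhz = cpus.sumOf { it.coreCount * it.frequencyMhz }
    // ...is exposed as a single large core.
    return CpuSketch(coreCount = 1, frequencyMhz = totalCapacityMhz)
}
```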
|
|
This change adds support for creating flexible topologies by creating a
TopologyFactory interface that is responsible for configuring the hosts
of a compute service.
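A possible shape of such a factory is sketched below; only the name TopologyFactory and its responsibility come from this message, the rest is assumed:

```kotlin
// Hypothetical factory and host specification types, for illustration only.
data class HostSpecSketch(val name: String, val cpuCount: Int, val memoryMb: Long)

interface TopologyFactorySketch {
    /** The host specifications used to configure the compute service. */
    fun hostSpecs(): List<HostSpecSketch>
}

class HomogeneousTopology(private val count: Int) : TopologyFactorySketch {
    override fun hostSpecs(): List<HostSpecSketch> =
        (1..count).map { HostSpecSketch("host-$it", cpuCount = 32, memoryMb = 256_000) }
}
```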
|
|
This change creates a new module for running simulations with virtual
machine workloads. We have found that a lot of code in the Capelin
experiments is re-used by non-experiment modules.
|
|
This change allows workloads that request more CPUs than are available
on the machine to still function properly.
|
|
This pull request standardizes the metrics emitted by the simulator based on OpenTelemetry conventions.
From now on, all metrics exposed by the simulator are exported through OpenTelemetry
following the recommended practices for naming, collection, etc.
**Implementation Notes**
- Improve ParquetDataWriter implementation
- Simplify CoroutineMetricReader
- Create separate MeterProvider per service/host
- Standardize compute scheduler metrics
- Standardize SimHost metrics
- Use logical types for Parquet output columns
**External Dependencies**
- Update to OpenTelemetry 1.6.0
**Breaking API Changes**
- Instead of being supplied a `Meter` instance, key classes are now responsible for constructing
a `Meter` instance from the supplied `MeterProvider`.
- Export format has been changed to match the new set of metrics
- Energy experiments shell has been removed
|
|
This change updates the output schema for the experiment data to use
logical types where possible. This adds additional context for the
writer and the reader on how to process the column (efficiently).
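For instance, assuming the schema is defined via Avro (as is common with parquet-avro; whether OpenDC does so is an assumption), a timestamp column can be annotated with the timestamp-millis logical type instead of being a bare long:

```kotlin
import org.apache.avro.LogicalTypes
import org.apache.avro.Schema
import org.apache.avro.SchemaBuilder

// Annotate the underlying long with a logical type so readers know it is a timestamp.
val timestampType: Schema =
    LogicalTypes.timestampMillis().addToSchema(Schema.create(Schema.Type.LONG))

// Illustrative schema; the actual OpenDC output columns are not reproduced here.
val serverStateSchema: Schema = SchemaBuilder.record("server_state")
    .fields()
    .name("timestamp").type(timestampType).noDefault()
    .name("cpu_usage").type().doubleType().noDefault()
    .endRecord()
```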
|
|
This change standardizes the metrics emitted by SimHost instances and
their guests based on the OpenTelemetry semantic conventions. We now
also report CPU time as opposed to CPU work as this metric is more
commonly used.
|
|
This change updates the OpenDC compute service implementation with
multiple meters that follow the OpenTelemetry conventions.
|
|
This change refactors the telemetry implementation by creating a
separate MeterProvider per service or host. This means we have to keep
track of multiple metric producers, but we can attach resource
information to each MeterProvider, as we would in a real-world
deployment.
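A sketch of the per-host setup with the OpenTelemetry SDK is shown below; the attribute key, instrumentation name, and exact builder methods are assumptions and may differ between SDK versions:

```kotlin
import io.opentelemetry.api.common.AttributeKey
import io.opentelemetry.api.common.Attributes
import io.opentelemetry.sdk.metrics.SdkMeterProvider
import io.opentelemetry.sdk.resources.Resource

// One MeterProvider per host, carrying that host's resource information.
fun meterProviderFor(hostId: String): SdkMeterProvider {
    val resource = Resource.create(
        Attributes.of(AttributeKey.stringKey("host.id"), hostId)
    )
    return SdkMeterProvider.builder()
        .setResource(resource)
        .build()
}

// Each host obtains its own Meter from its own provider.
val hostMeter = meterProviderFor("host-0").get("org.opendc.compute.simulator")
```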
|
|
This change simplifies the CoroutineMetricReader implementation by
removing the separation of reader and exporter jobs.
|
|
This change improves the ParquetDataWriter class to support more complex
use cases: it now allows subclasses to modify the writer options.
In addition, a subclass for writing server data is added.
|
|
This change removes the energy experiments. The module only provided
the setup for the original experiments and cannot reproduce their
results without further work.
|
|
This change updates the opentelemetry-java library to version 1.6.0.
|
|
This pull request updates the trace API with the addition of several new trace formats.
- Add support for Materna traces from GWA
- Keep reader state in own class
- Parse last column in Solvinity trace format
- Add support for Azure VM traces
- Add support for WfCommons (WorkflowHub) traces
- Add API for accessing available table columns
- Add synthetic resource table for Bitbrains format
- Support dynamic resolving of trace formats
**Breaking API Changes**
- Replace `isSupported` by a list of `TableColumns`
|