| Age | Commit message | Author |
|
* Removed unused components. Updated tests.
Improved checkpointing model
Improved model, started with SimPowerSource
implemented FailureModels and Checkpointing
First working version
midway commit
first update
All simulations are now run with a single CPU and a single MemoryUnit; multiple CPUs are combined into one. This is done for performance and explainability.
* fixed merge conflicts
* Updated M3SA paths.
* Fixed small typo
|
* (feat) demo files are now ignored
* integrating m3sa changes with opendc
* gitignore ignores demo
* m3sa linked, tested, works 🎉🎆
* linting & checks fully pass
* m3sa documentation (re...)added
* package.json added, a potential solution for the Build Docker Images workflow
* (fix) opendc-m3sa renamed to opendc-experiments-m3sa
* (feat) Model is now a dataclass
* (fix) package and package-lock reverted to their state before the PR; they now mirror the opendc master branch
* (fix) Experiments renamed to experiment
* branch updated with changes from master branch
* trying to fix the failing Build Docker Images workflow
* trying to fix the failing Build Docker Images workflow
* All simulations are now run with a single CPU and a single MemoryUnit; multiple CPUs are combined into one. This is done for performance and explainability. (#255) (#37)
Co-authored-by: Dante Niewenhuis <d.niewenhuis@hotmail.com>
* All simulations are now run with a single CPU and a single MemoryUnit; multiple CPUs are combined into one. This is done for performance and explainability. (#255) (#38)
Co-authored-by: Dante Niewenhuis <d.niewenhuis@hotmail.com>
* All simulations are now run with a single CPU and a single MemoryUnit; multiple CPUs are combined into one. This is done for performance and explainability. (#255) (#39)
Co-authored-by: Dante Niewenhuis <d.niewenhuis@hotmail.com>
* [TEMP](feat) m3saCli decoupled from experimentCli
* spotless and minor refactoring
* (feat)[TEMP] decoupling m3sa from experiment
* spotless applied
* documentation resolved
* requirements.txt added
* path to M3SA is now provided as a parameter to M3SACLI
* spotless applied
* (fix) Python environment variables resolved; output analysis folder resolved
* documentation changed to match the master branch doc
* package-lock reverted
* package-lock reverted
---------
Co-authored-by: Dante Niewenhuis <d.niewenhuis@hotmail.com>
|
* sync with the master branch
* rebase
* multimodel - the simulation is currently run once for each defined model
* factory method - handles models without given params
* removed redundant flags
* modelType
* flags removed
* implemented output into a folder
* multimodel ipynb setup - to be implemented and also run as a Python script when the simulation occurs
* towards a multimodel Python implementation - issue observed: the saved files have the same data?
* JSON parsing now handles lists for topology, workloads, allocationPolicies, powerModels
* scenarioFile accepts lists and creates multiple combinations of scenarios
* multi-model prediction repaired, now we predict using multiple models
* commit before removing powerModel from scenario
* commit after removing powerModel from scenario
* commit after removing powerModel from scenario (and actually running)
* powermodels now can output their name and full name (with min and max)
* now we can select where to output (seed or output folder)
* input files - clear naming + output naming improved
* minimal changes
* all tests passing + JSON files from tests updated to the new JSON format
* JSON files from topology now accept only one power model (instead of a list)
* JSON files from topology now accept only one power model (instead of a list)
* multi and single input from tests updated to match the format
* tests passed locally
* spotless applied
* demo folder removed
|
* Started with the carbon trace implementation
* Moved the carbon trace system to the proper folders
|
* Revamped the trace system. All TraceFormat files are now in the api module. This fixes some problems with not being able to use certain types of traces
* applied spotless
|
* Initial commit
* Implemented a new system for defining and running scenarios / portfolios. Scenarios and Portfolios can now be defined using JSON files, similar to topologies. This allows users to define experiments without changing any Kotlin code (see the sketch after this list).
* Ran spotlessApply
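To illustrate the JSON-based definition, here is a minimal sketch of how such a scenario file could be read. The field names (topologies, workloads, allocationPolicies, powerModels) are taken from the commits above but are not guaranteed to match the exact schema, and the Jackson-based parsing is only an assumed way to load it.

    import com.fasterxml.jackson.databind.ObjectMapper

    fun main() {
        // Hypothetical scenario file: each list entry multiplies the set of scenario combinations.
        val scenarioJson = """
            {
              "topologies": ["topologies/topology1.json"],
              "workloads": [{"pathToFile": "traces/bitbrains-small", "type": "ComputeWorkload"}],
              "allocationPolicies": ["active-servers"],
              "powerModels": ["linear"]
            }
        """.trimIndent()

        val scenario = ObjectMapper().readTree(scenarioJson)
        println("topologies: " + scenario["topologies"].map { it.asText() })
    }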
|
* removed experiment-compute and integrated all components into opendc-compute
* updated workflow gradle file
* removed unneeded code
|
This change integrates the classes from the old
`opendc-compute-workload` module into the `opendc-experiments-compute`
module. This new module contains helper classes for setting up
experiments with the OpenDC compute service.
|
This change adds a new module `opendc-experiments-faas` that provides
provisioner implementations for experiments to use for setting up the
FaaS service of OpenDC.
|
This change adds a new module `opendc-experiments-workflow` that provides
provisioner implementations for experiments to use for setting up and
using the workflow engine in OpenDC.
|
This change adds a new module `opendc-experiments-compute` that provides
provisioner implementations for experiments to use for setting up the
compute service of OpenDC and provisioning (simulated) hosts.
|
This change adds a new module called opendc-experiments-base which will
provide a base for doing experiments with OpenDC. The initial feature we
introduce is the service registry, which acts as a DNS-like directory in
which services register during experimentation.
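To make the "DNS for services" idea concrete, the sketch below shows the general shape of such a registry. The names (ServiceRegistry, register, resolve, compute.opendc.org) are purely illustrative assumptions, not the actual opendc-experiments-base API.

    // Illustrative only; the real service registry API may differ.
    class ServiceRegistry {
        private val services = HashMap<Pair<String, Class<*>>, Any>()

        // Register a service implementation under a DNS-like name.
        fun <T : Any> register(name: String, type: Class<T>, service: T) {
            services[name to type] = service
        }

        // Resolve a registered service by name, much like a DNS lookup.
        fun <T : Any> resolve(name: String, type: Class<T>): T? = type.cast(services[name to type])
    }

    class ComputeClient // stand-in for a real OpenDC service interface

    fun main() {
        val registry = ServiceRegistry()
        registry.register("compute.opendc.org", ComputeClient::class.java, ComputeClient())
        println(registry.resolve("compute.opendc.org", ComputeClient::class.java) != null) // prints: true
    }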
|
This change updates the Quarkus configuration of the OpenDC web server
to serve as a fully standalone distribution that is capable of serving
the web UI, web API, and experiment runner. Such an approach vastly
simplifies local deployments.
For Docker deployments, we create a custom Quarkus profile that uses
PostgreSQL and disables the web UI.
|
This change adds a re-usable test suite for the interface of the OpenDC
trace API, so implementors can verify whether their implementations match
the specification of the interfaces.
|
This change adds a Quarkus extension that hosts the OpenDC web runner
for a (potentially local) OpenDC API instance. This functionality
enables a simplified developer experience by allowing users to spawn the
complete OpenDC stack with a single command.
|
This change updates the OpenDC web UI Quarkus extension to live
completely in the `opendc-web` directory, as opposed to adding another
level of nesting. This also allows us to properly name the artifacts of
the Quarkus extension modules.
|
This change splits the command line interface from the OpenDC web runner
into a separate configuration. We plan to re-use the runner code for a Quarkus
extension that integrates the runner in development mode.
|
This change removes the OpenDC Harness modules from the main repository.
We have made the decision to take a different direction regarding the
specification and execution of experiments. The design of the current
harness does not integrate well with the specification of experiments in
the web interface. The new version focuses on proper integration with
the web interface, as well as with the command line interface.
|
This change adds a new module, opendc-faas-workload, that contains
helper code for constructing simulations of FaaS-based workloads
using OpenDC. In addition, we add an integration test that demonstrates
the capabilities of the helper tool and the FaaS platform of OpenDC.
|
This change removes the dependency on the OpenTelemetry SDK. Instead,
we'll only expose metrics via the OpenTelemetry API in the future, using
adapter classes.
|
This change removes the OpenTelemetry integration from the OpenDC
Compute modules. Previously, we chose to integrate OpenTelemetry to
provide a unified way to report metrics to the users.
Although this worked as expected, the overhead of OpenTelemetry when
collecting metrics during simulation was considerable and left few
optimization opportunities (other than providing a separate API
implementation). Furthermore, since we were tied to OpenTelemetry's SDK
implementation, we experienced issues with throttling and registering
multiple instruments.
We will instead use another approach, where we expose the core metrics
in OpenDC via specialized interfaces (see the commits before) such that
access is fast and can be done without having to interface with
OpenTelemetry. In addition, we will provide an adapter that is able
to forward these metrics to OpenTelemetry implementations, so we can
still integrate with the wider ecosystem.
|
This change adds support for querying workload trace formats implemented
using the OpenDC API through Apache Calcite. This allows users to write
SQL queries to explore the workload traces.
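For illustration, a query through Calcite's JDBC driver might look roughly like the sketch below. The model file path and the schema, table, and column names are assumptions made for the sake of the example, not the setup shipped with this change.

    import java.sql.DriverManager

    fun main() {
        // A Calcite model file (hypothetical path) maps a trace directory to a SQL schema.
        DriverManager.getConnection("jdbc:calcite:model=trace-model.json").use { connection ->
            connection.createStatement().use { statement ->
                val rows = statement.executeQuery(
                    "SELECT id, cpu_count FROM trace.resources ORDER BY cpu_count DESC"
                )
                while (rows.next()) {
                    println("${rows.getString("id")}: ${rows.getInt("cpu_count")} vCPUs")
                }
            }
        }
    }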
|
This change adds a new Quarkus extension that is able to serve the
OpenDC web interface via the Quarkus deployment of OpenDC.
|
This change implements a simple client for the OpenDC REST API in a
separate module, so that other users can use this module as well.
|
This change adds a unified communication protocol in the form of the module
opendc-web-proto which contains the classes that form the communication
protocol of OpenDC's API v2.
By having the protocol in a separate module, we can utilize the classes
in both server and client.
|
This change moves build dependencies used by Gradle into the version catalog
to ensure a single location for all dependency versions.
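As a rough illustration (the aliases and versions below are made up, and OpenDC may declare its catalog in gradle/libs.versions.toml rather than in the settings script), a version catalog declares each dependency once and lets every build script reference it:

    // settings.gradle.kts (Gradle 7.4+ syntax); aliases and versions are illustrative.
    dependencyResolutionManagement {
        versionCatalogs {
            create("libs") {
                version("jackson", "2.13.1")
                library("jackson-databind", "com.fasterxml.jackson.core", "jackson-databind")
                    .versionRef("jackson")
            }
        }
    }

    // A subproject's build.gradle.kts then references the single declaration:
    //   dependencies { implementation(libs.jackson.databind) }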
|
This change removes the opendc-platform module from the project. This
module represented a Java platform which was previously used for sharing
a set of dependency versions between subprojects. However, with the
version catalog support added by Gradle, we no longer use the platform.
|
This change adds a new module, opendc-common, that contains
functionality that is shared across OpenDC's modules.
We move the existing utils module into this new module.
|
This change adds a new module, opendc-workflow-workload, that contains
helper code for constructing workflow simulations using OpenDC.
|
This change renames the `opendc-simulator-resources` module into the
`opendc-simulator-flow` module to indicate that the core simulation
model of OpenDC is based around modelling and simulating flows.
Previously, the distinction between resource consumer and provider, and
input and output caused some confusion. By switching to a flow-based
model, this distinction is now clear (as in, the water flows from source
to consumer/sink).
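As a purely conceptual sketch of the flow idea (the names below are invented for illustration and are not the opendc-simulator-flow API): a source expresses demand, and a sink grants whatever fits within its capacity.

    // Conceptual illustration only; not the real opendc-simulator-flow interfaces.
    fun interface FlowSource {
        // The rate (e.g., CPU demand in MHz) the source wants to push downstream.
        fun demand(): Double
    }

    class FlowSink(private val capacity: Double) {
        // The sink grants at most its capacity; the remainder of the demand is throttled.
        fun consume(source: FlowSource): Double = minOf(source.demand(), capacity)
    }

    fun main() {
        val workload = FlowSource { 3200.0 }   // a workload demanding 3200 MHz
        val cpu = FlowSink(capacity = 2600.0)  // a simulated CPU delivering at most 2600 MHz
        println("granted: ${cpu.consume(workload)} MHz") // granted: 2600.0 MHz
    }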
|
This change adds an initial implementation to the trace library for
converting between workload trace formats. Currently the tool supports
only converting to the OpenDC VM trace format. However, in the future,
we will add support for converting between other formats as well.
|
This change adds official support to the trace library for the internal
VM trace format used by OpenDC for its experiments. This is a compact
format that stores the virtual machine trace data in two Parquet files.
|
This change adds support in the trace library for the Azure VM trace
format.
|
This change creates a new module for doing simulations with virtual
machine workloads. We have found that a lot of code in the Capelin
experiments is being re-used by non-experiment modules.
|
This change removes the energy experiments. The experiments only
provided a setup for the original experiments and are not able to
reproduce the results without further work.
|
This change adds support for reading WfCommons workflow traces in
OpenDC. This functionality is available in the new
`opendc-trace-wfformat` module.
|
This change moves the fault injection logic directly into the
opendc-compute-simulator module, so that it can operate at a higher
level of abstraction. In the future, we might again split the module if we can
re-use some of its logic.
|
This change moves the metric collection outside the Capelin codebase into
a separate module, so other modules can also benefit from the compute
metric collection code.
|
This change updates the WTF trace reader to support the new streaming
trace API.
|
This change updates the SWF trace reader to support the new streaming
trace API.
|
This change moves Bitbrains trace support into a separate module and
adds support for the new trace API.
|
This change extracts the Parquet helpers outside the format module into a
new module, in order to improve the re-usability of these helpers.
|
This change starts the process of moving the different trace formats into
separate modules. This change in particular moves the GWF trace format
into a new module, opendc-trace-gwf.
Furthermore, this change also implements the trace API for the GWF
module.
|
This change introduces a new OpenDC API for reading various trace
formats in a streaming manner.
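A rough sketch of the streaming style is shown below. The TableReader interface and the column names are invented for illustration and do not reflect the actual API introduced by this change.

    // Illustrative only; the real trace API likely differs in names and signatures.
    interface TableReader : AutoCloseable {
        fun nextRow(): Boolean
        fun getString(column: String): String
        fun getDouble(column: String): Double
    }

    // Rows are visited one at a time, so a trace never has to be fully loaded into memory.
    fun printCpuUsage(reader: TableReader) = reader.use {
        while (it.nextRow()) {
            println("${it.getString("id")} -> ${it.getDouble("cpu_usage")}")
        }
    }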
|
This change adds a basic framework as a basis for network simulation support
in OpenDC. It is modelled similarly to the power system that has been
added recently.
|
This change renames the opendc-serverless module to opendc-faas to
better distinguish between the two terms (Serverless and FaaS) and be
clearer about the intent of the module.
The opendc-faas module holds the code for the FaaS platform on top of
OpenDC. Although this is one approach to doing serverless, serverless
can also entail other services that will not be covered by this module.
|
This change adds a new module for simulating power components in
datacenters such as PDUs and UPSes. This module will serve as the basis
for the power monitoring framework in OpenDC and will in the future integrate
with the other simulation components (such as compute).
|
This change adds the initial experiment setup for the TensorFlow
on OpenDC experiments.
|