|
This change flattens the project structure. Previously, the simulator,
frontend and API each lived in their own directory. With this change,
all modules of the project live in the top-level directory of the
repository, which should improve the discoverability of the project's
modules.
|
|
This change migrates the remainder of the codebase to the
SimulationCoroutineDispatcher implementation.
|
|
This change introduces the SimulationCoroutineDispatcher implementation,
which replaces the TestCoroutineDispatcher for running single-threaded
simulations.
Previously, we used the TestCoroutineDispatcher from the
kotlinx-coroutines-test module for running simulations. However, that
module is aimed at coroutine tests, not at simulations.
In particular, having to construct a Clock object for every
TestCoroutineDispatcher caused a lot of unnecessary boilerplate. With the
new approach, the SimulationCoroutineDispatcher automatically exposes a
usable Clock object.
In addition to the ergonomic benefits, the SimulationCoroutineDispatcher
is much faster than the TestCoroutineDispatcher because it assumes that
simulations run on a single thread. As a result, the dispatcher does not
need to perform synchronization and can use the fast PriorityQueue
implementation.
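To illustrate the single-threaded approach, here is a minimal sketch of a
virtual-time scheduler built on an unsynchronized PriorityQueue; the class
and member names are illustrative and not OpenDC's actual API:

```kotlin
import java.util.PriorityQueue

// Sketch of the idea only: delayed tasks are kept in an unsynchronized
// PriorityQueue ordered by virtual time, and the clock simply jumps to the
// timestamp of the task being executed.
class VirtualTimeScheduler {
    private class TimedTask(val time: Long, val block: () -> Unit)

    private val queue = PriorityQueue<TimedTask>(compareBy { it.time })

    /** The current virtual time in milliseconds. */
    var currentTime: Long = 0
        private set

    fun schedule(delayMs: Long, block: () -> Unit) {
        queue.add(TimedTask(currentTime + delayMs, block))
    }

    fun runUntilIdle() {
        while (queue.isNotEmpty()) {
            val task = queue.poll()
            currentTime = task.time // advance virtual time; no real sleeping
            task.block()
        }
    }
}
```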
|
|
This change introduces the SimProcessingUnit, which represents a
simulated processing unit that the user can control during workload
execution.
|
|
This change adds an experiment for the OpenDC Energy project, which
tests various energy models that have been implemented in OpenDC.
|
|
This change updates the SimHost implementation to use the BatchRecorder
to record multiple metrics in a single batch.
|
|
This change fixes an issue in SimTraceWorkload where the CPU usage was
not divided across the cores, but was instead requested for all cores.
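A hedged sketch of the intended behaviour (the identifiers are illustrative,
not the actual SimTraceWorkload fields): the usage recorded in a trace
fragment is spread evenly over the available cores rather than requested in
full on each core.

```kotlin
// Illustrative only: spread the fragment's total CPU usage over the cores.
fun perCoreUsage(totalUsageMhz: Double, coreCount: Int): Double =
    totalUsageMhz / coreCount
```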
|
|
This change removes the StateFlow speed property on the
SimResourceSource, as the overhead of emitting changes to the StateFlow
is too high in a single-threaded context. Our new approach is to use
direct callbacks and counters.
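As a rough sketch of that direction (the interface and class names are
hypothetical, not the actual SimResourceSource API), the speed becomes a
plain field with an optional listener invoked directly on change:

```kotlin
// Hypothetical sketch: a plain field plus a direct callback, avoiding the
// cost of StateFlow emission and collection in a single-threaded simulation.
fun interface SpeedListener {
    fun onSpeedChange(newSpeed: Double)
}

class ResourceSource {
    var speed: Double = 0.0
        private set

    var speedListener: SpeedListener? = null

    fun updateSpeed(newSpeed: Double) {
        if (newSpeed != speed) {
            speed = newSpeed
            speedListener?.onSpeedChange(newSpeed) // direct call, no coroutines involved
        }
    }
}
```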
|
|
This change adds a model implementing Dynamic Voltage Frequency Scaling
(DVFS) to OpenDC.
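For context, the classic CMOS relation behind DVFS models is that dynamic
power scales with the square of the supply voltage times the clock frequency.
A minimal sketch with illustrative parameter names, not OpenDC's model API:

```kotlin
// P_dynamic ≈ C * V^2 * f; lowering voltage and frequency together is what
// makes DVFS effective, since the dynamic power drops roughly cubically with
// frequency when voltage scales linearly with it.
fun dynamicPower(capacitance: Double, voltage: Double, frequencyHz: Double): Double =
    capacitance * voltage * voltage * frequencyHz
```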
|
|
This change updates the compute service simulator to use OpenTelemetry
for reporting the metrics of the (simulated) hosts, as opposed to using
custom event flows.
This approach is more generic and flexible, and possibly offers better
performance, since we can collect the metrics of all services in a single
sweep instead of listening to several services that each invoke their own
handlers.
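A minimal sketch of what reporting through the OpenTelemetry Metrics API can
look like, assuming a recent opentelemetry-api on the classpath; the
instrument names are illustrative, not OpenDC's actual metrics:

```kotlin
import io.opentelemetry.api.GlobalOpenTelemetry

// Illustrative instruments; real code would obtain a Meter from the SDK
// configured by the simulator rather than from the global instance.
val meter = GlobalOpenTelemetry.getMeter("org.opendc.compute.simulator")

val guestsRunning = meter.upDownCounterBuilder("host.guests.running")
    .setDescription("Number of guests running on the host")
    .build()

fun onGuestStarted() = guestsRunning.add(1)
fun onGuestStopped() = guestsRunning.add(-1)
```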
|
|
This change integrates the OpenTelemetry Metrics API in the OpenDC
Compute Service implementation. This replaces the old infrastructure for
gathering metrics.
|
|
This change updates the compute service and its users so that they no
longer rely on the internals of `ComputeServiceImpl` and instead use its
public API.
|
|
This change moves the power models from the `opendc-compute-simulator`
to the `opendc-simulator-compute` module, since it better fits the scope
of the models and allows them to be re-used for other purposes.
|
|
This change removes the generic resource constraint (e.g., SimResource)
and replaces it with a simple capacity property. In the future, users
should handle the resource properties at a higher level.
This change simplifies the composition of consumers and providers by not
requiring a translation from resource to capacity.
|
|
This change updates the consumer and context interfaces to expose the
provider capacity and remaining work via the context instance, as opposed
to only via the callback. This simplifies the aggregation of resources.
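A hedged sketch of the shape this gives the interfaces (the names are
illustrative, not the exact OpenDC definitions): the context itself carries
the capacity and the remaining work, so an aggregator can query them at any
point rather than only inside a callback.

```kotlin
// Illustrative shape only; not the exact OpenDC interfaces.
interface ResourceContext {
    /** The capacity currently offered by the provider (e.g., in MHz). */
    val capacity: Double

    /** The work remaining in the command that is currently being executed. */
    val remainingWork: Double
}

// An aggregator can now sum these values directly over its inputs:
fun totalRemainingWork(inputs: List<ResourceContext>): Double =
    inputs.sumOf { it.remainingWork }
```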
|
|
This change re-designs the SimResourceConsumer interface to support
capacity negotiation in the future. This means that the consumer will be
informed directly when not enough capacity is available, instead of after
the deadline specified by the consumer.
|
|
This change adds support for aggregating code coverage results from the
different modules.
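One possible shape for such aggregation in the root build script, assuming
the jacoco plugin is applied to the modules and a recent Gradle version; the
task name and paths below are illustrative:

```kotlin
// Illustrative root-project task that merges the modules' execution data
// into a single coverage report.
tasks.register<JacocoReport>("codeCoverageReport") {
    executionData(fileTree(rootDir) { include("**/build/jacoco/*.exec") })
    subprojects.forEach { sp ->
        sourceDirectories.from(sp.file("src/main/kotlin"))
        classDirectories.from(sp.file("build/classes/kotlin/main"))
    }
    reports {
        xml.required.set(true)
        html.required.set(true)
    }
}
```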
|
|
This change implements the CPU energy model with p-states from iCanCloud/E-mc2:
- Only a portion of the code is pushed for discussion, as we are not sure
whether the idea is on track.
- Inline comments have been added, and formal documents will follow once the
model is finalized.
- The p-state power consumptions are currently hard-coded in a companion
object, which should be improved in the next PR(s).
**Breaking Changes**
- CpuPowerModel: now interacts directly with the machine it is measuring.
- SimBareMetalMachine: now exposes the speeds of its CPU cores and its clock
instant.
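A rough sketch of the interpolation idea (the table values and names are
placeholders, not the hard-coded values mentioned above): power at a
requested frequency is interpolated linearly between the surrounding
p-states.

```kotlin
// Illustrative p-state table: (frequency in MHz, power in W), ascending by frequency.
class PStatePowerModel(private val pStates: List<Pair<Double, Double>>) {
    fun computePower(frequency: Double): Double {
        val first = pStates.first()
        val last = pStates.last()
        if (frequency <= first.first) return first.second
        if (frequency >= last.first) return last.second
        val upper = pStates.first { it.first >= frequency }
        val lower = pStates.last { it.first <= frequency }
        if (upper.first == lower.first) return lower.second
        val fraction = (frequency - lower.first) / (upper.first - lower.first)
        return lower.second + fraction * (upper.second - lower.second)
    }
}

// Example with placeholder values: 10 W at 1200 MHz and 25 W at 2400 MHz
// yields 17.5 W at 1800 MHz.
val model = PStatePowerModel(listOf(1200.0 to 10.0, 2400.0 to 25.0))
```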
|
|
This change moves the hypervisor implementations to the
opendc-simulator-resources module and makes them generic over the
resource type being used (e.g., CPU, disk or networking).
|
|
This change adds a generic framework for modeling resource consumption and
adapts opendc-simulator-compute to model machines and VMs on top of
this framework.
This framework anticipates the addition of further resource types, such
as memory, disk and network, to the OpenDC codebase.
|
|
This change removes the opendc-core module. This module was an artifact
of the old codebase and remained mostly unused. This change removes all
usages of the module and if necessary introduces replacement classes.
|
|
This change adds the ability to define labels and meta-data for
resources. This can be used in the future to identify servers and pass
data between client and server.
|
|
This change adds more methods for controlling the lifecycle of Server instances.
|
|
This change removes the usage of bare-metal provisioning from the OpenDC
Compute module. This significantly simplifies the experiment setup.
|
|
This change moves the bare-metal provisioning packages outside the
compute module since these modules represent different layers in the
ecosystem and should not be mixed.
|
|
This change introduces the ComputeService interface (previously
VirtProvisioningService) and provides a central implementation in
opendc-compute-service.
Previously, the implementation of this interface was bound to the
simulator package, which meant that independent business logic could not
be re-used without importing the simulator code.
|
|
This change extracts the API for the OpenDC Compute service into a separate
module to establish a clearer boundary between the interfaces meant for
consumers and the interfaces meant for the server implementation.
|
|
This change converts the Server data class into a stateful object that can
be used to control an instance running in the cloud.
|
|
This change refactors the OpenDC Compute module so that the
VirtProvisioningService is now responsible for managing the lifecycle of
Server objects, as opposed to the VirtDriver and BareMetalDriver, as was
previously the case.
|
|
This change removes the use of ServiceRegistry in the OpenDC compute
module. It was not actually being used by any of the code and we are
moving to another interface in the future.
|
|
This change separates the cloud compute layer in OpenDC (e.g., Server)
from the bare-metal layer (e.g., Node), such that Node and
BareMetalDriver are unaware of the existence of Server and co.
|
|
This change removes the SimWorkloadImage implementation and changes
Image to a data class without a workload. Simulation workloads should now
be passed via image metadata, as the image storage should be unaware of
any simulation details.
|
|
This commit implements the energy models that are present in CloudSim:
1. Constant
2. Linear
3. Cubic
4. Square root
5. Interpolation based on data.
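The first four can be written as simple functions of the utilisation u in
[0, 1] between an idle and a max power draw; a hedged sketch with
illustrative parameter names:

```kotlin
import kotlin.math.pow
import kotlin.math.sqrt

// Illustrative formulations of the listed models.
fun constantPower(max: Double): Double = max
fun linearPower(idle: Double, max: Double, u: Double): Double = idle + (max - idle) * u
fun cubicPower(idle: Double, max: Double, u: Double): Double = idle + (max - idle) * u.pow(3)
fun sqrtPower(idle: Double, max: Double, u: Double): Double = idle + (max - idle) * sqrt(u)

// The interpolation model instead looks up measured power at discrete
// utilisation levels (e.g., SPECpower-style data in 10% steps) and
// interpolates linearly between the two surrounding entries.
```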
|
|
This change uses the Java Platform functionality from Gradle to enable
shared dependency constraints across modules.
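A minimal sketch of what this looks like with Gradle's java-platform plugin;
the module name and versions below are placeholders:

```kotlin
// opendc-platform/build.gradle.kts (illustrative module name)
plugins {
    `java-platform`
}

dependencies {
    constraints {
        // Placeholder coordinates and versions; the platform pins them once
        // for all modules.
        api("org.jetbrains.kotlinx:kotlinx-coroutines-core:1.4.2")
        api("org.junit.jupiter:junit-jupiter:5.7.0")
    }
}
```

Consuming modules then depend on the platform, e.g.
`implementation(platform(project(":opendc-platform")))`, and can omit the
versions on individual dependencies.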
|
|
This change updates the Gradle configuration to utilize version
constraints to force the same dependency version across modules.
|
|
This change moves the version of the dependencies from buildSrc to
gradle.properties to prevent recompilation when changing dependency
versions.
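Roughly, a version then lives in gradle.properties (e.g.
`junitPlatformVersion=5.7.0`, a placeholder) and is read from the build
script as a project property, so changing it no longer triggers a buildSrc
recompile:

```kotlin
// build.gradle.kts (illustrative dependency and property name)
val junitPlatformVersion: String by project

dependencies {
    testImplementation("org.junit.jupiter:junit-jupiter:$junitPlatformVersion")
}
```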
|
|
This change extracts the test configuration from the Kotlin library
conventions.
|
|
This change removes unnecessary dependencies on JUnit Platform launcher
from the repository. Previously, the launcher was used to bootstrap
tests for Gradle when it did not natively support JUnit Platform.
Gradle now has native support for JUnit Platform, so the dependency is
not needed anymore.
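With Gradle's native support, enabling the platform in the test convention is
a one-liner; a minimal sketch:

```kotlin
// No junit-platform-launcher dependency is needed; Gradle launches the tests itself.
tasks.withType<Test> {
    useJUnitPlatform()
}
```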
|
|
This change allows users to select the hypervisor scheduler to use when
deploying hypervisors onto bare-metal machines.
|
|
This change adds a new hypervisor implementation that supports virtual
machines that have exclusive access to resources (e.g., CPU).
|
|
This change converts the low-level workload model to be pull-based. This
reduces the overhead that we experienced with our previous coroutine-based
approach.
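A hedged sketch of what a pull-based contract can look like (the type names
are illustrative, not the actual OpenDC interfaces): the machine pulls the
next command from the workload when the previous one finishes, instead of
the workload pushing usage from its own coroutine.

```kotlin
// Illustrative pull-based workload contract.
sealed class WorkloadCommand {
    /** Consume [work] units of work before [deadline] (virtual time in ms). */
    data class Consume(val work: Double, val deadline: Long) : WorkloadCommand()

    /** The workload has finished. */
    object Exit : WorkloadCommand()
}

interface Workload {
    /** Called by the machine whenever it is ready for the next command. */
    fun onNext(now: Long): WorkloadCommand
}
```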
|
|
This change updates the workflow service to delegate the resource
scheduling logic to the virtualized resource provisioner.
|
|