* Started on reimplementing SimTrace
* Updated the trace format. Fragments no longer have a deadline but a duration, and are executed in order.
|
|
* Updated SimTrace to use a single ArrayDeque instead of three separate lists for deadline, cpuUsage, and coreCount (see the sketch below)
* Renamed input files to tasks.parquet and fragments.parquet. Renamed server to task. OpenDC now exports tasks.parquet instead of server.parquet
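A minimal sketch of what the single-queue layout in SimTrace could look like; the `Fragment` fields and method names below are assumptions for illustration, not the actual OpenDC API:

```kotlin
import java.util.ArrayDeque

// Hypothetical fragment record; the field names are illustrative, not OpenDC's actual types.
data class Fragment(val duration: Long, val cpuUsage: Double, val coreCount: Int)

class SimTraceSketch {
    // One queue of fragments instead of three parallel lists that must stay in sync.
    private val fragments = ArrayDeque<Fragment>()

    fun add(duration: Long, cpuUsage: Double, coreCount: Int) {
        fragments.add(Fragment(duration, cpuUsage, coreCount))
    }

    // Fragments are consumed in insertion order, matching the duration-based format.
    fun next(): Fragment? = fragments.poll()
}
```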
|
|
|
|
* sync with the master branch
* rebase
* multimodel - the simulation is currently run once for each model that is specified
* factory method - handles models without given parameters
* removed redundant flags
* modelType
* flags removed
* implemented output into a folder
* multimodel ipynb setup - to be implemented and also run as a Python script when the simulation occurs
* towards a multimodel Python implementation - issue observed - the saved files have the same data?
* JSON parsing now handles lists for topology, workloads, allocationPolicies, powerModels
* scenarioFile accepts lists and creates multiple combinations of scenarios (see the sketch after this list)
* multi-model prediction repaired, now we predict using multiple models
* commit before removing powerModel from scenario
* commit after removing powerModel from scenario
* commit after removing powerModel from scenario (and actually running)
* power models can now output their name and full name (with min and max)
* now we can select where to output (seed or output folder)
* input files - clear naming + output naming improved
* minimal changes
* all tests passing + JSON files from tests updated to the new JSON format
* JSON files for topology now accept only one power model (instead of a list)
* JSON files for topology now accept only one power model (instead of a list)
* multi and single input from tests updated to match the format
* tests passed locally
* spotless applied
* demo folder removed
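A minimal sketch, under assumed field names, of how list-valued scenario inputs could be expanded into every combination of scenarios:

```kotlin
// Hypothetical scenario shape; the real OpenDC scenario schema may differ.
data class Scenario(val topology: String, val workload: String, val allocationPolicy: String)

// Expand list-valued inputs into the cartesian product of all combinations.
fun expandScenarios(
    topologies: List<String>,
    workloads: List<String>,
    allocationPolicies: List<String>,
): List<Scenario> =
    topologies.flatMap { t ->
        workloads.flatMap { w ->
            allocationPolicies.map { p -> Scenario(t, w, p) }
        }
    }

fun main() {
    val scenarios = expandScenarios(
        topologies = listOf("small", "large"),
        workloads = listOf("bitbrains"),
        allocationPolicies = listOf("mem", "active-servers"),
    )
    // 2 topologies x 1 workload x 2 policies = 4 scenarios
    scenarios.forEach(::println)
}
```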
|
|
* Started with the carbon trace implementation
* Moved the carbon trace system to the proper folders
|
|
* Revamped the trace system. All TraceFormat files are now in the api module. This fixes some problems where certain trace types could not be used
* applied spotless
|
|
* Updated all package versions, including Kotlin. Updated all web-server tests to run.
* Changed the Java version of the tests. OpenDC now only supports Java 19.
* small update
* test update
* new update
* updated docker version to 19
* updated docker version to 19
|
|
This change updates the build configuration to use Spotless for code
formatting of both Kotlin and Java.
|
|
This change updates the repository to remove the use of wildcard imports
everywhere. Wildcard imports are disallowed by default by Ktlint, as well
as by Google's Java style guide.
|
|
This change updates the TraceFormat lookup algorithm to no longer cache
the available trace formats on first access. Since the result of
ServiceLoader depends on the thread's context ClassLoader, the results
may differ between threads.
Furthermore, ServiceLoader maintains its own cache, so we can instead
rely on that cache and always use the results it returns.
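A minimal sketch of this lookup strategy, assuming a `TraceFormat` SPI with a `name` property (an assumption for illustration); the formats are re-queried on every call instead of being stored in a static field:

```kotlin
import java.util.ServiceLoader

// Hypothetical SPI shape; the real interface lives in the OpenDC trace API.
interface TraceFormat {
    val name: String
}

object TraceFormats {
    // Intentionally no `val cached = ServiceLoader.load(...).toList()` here:
    // the providers visible to ServiceLoader depend on the calling thread's
    // context ClassLoader, so the formats are re-queried on every lookup.
    fun available(): List<TraceFormat> =
        ServiceLoader.load(TraceFormat::class.java).toList()
}
```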
|
|
This change updates the trace API by introducing a limited type system
for the table columns. Previously, the table columns could have any
possible type representable by the JVM. With this change, we limit the
available types to a small type system.
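A sketch of what such a limited column type system could look like; the concrete set of types and their names here are assumptions for illustration, not necessarily the types OpenDC defines:

```kotlin
// Illustrative closed set of column types, instead of arbitrary JVM classes.
sealed class TableColumnType {
    object Boolean : TableColumnType()
    object Int32 : TableColumnType()
    object Int64 : TableColumnType()
    object Double : TableColumnType()
    object Text : TableColumnType()
    object Instant : TableColumnType()
    object Duration : TableColumnType()
    data class ListOf(val element: TableColumnType) : TableColumnType()
}

// A column pairs a name with one of the allowed types.
data class TableColumn(val name: String, val type: TableColumnType)
```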
|
|
This change adds support for projecting certain columns of a table. This
enables faster reading for tables with a large number of columns.
Currently, we support projection in the Parquet-based workload formats.
Other formats are text-based and will probably not benefit much from
projection.
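A hypothetical usage sketch of projection, under assumed interface and method names (`newReader`, `getDouble`, and the column names); the actual OpenDC trace API may look different:

```kotlin
// Assumed shapes of the trace API, used only for this sketch.
interface TableReader : AutoCloseable {
    fun nextRow(): Boolean
    fun getDouble(column: String): Double
}

interface Table {
    // Projection: only the listed columns are materialized while reading,
    // which lets the Parquet reader skip the remaining column chunks.
    fun newReader(projection: List<String>): TableReader
}

fun averageCpuUsage(table: Table): Double {
    var sum = 0.0
    var count = 0
    table.newReader(projection = listOf("id", "cpu_usage")).use { reader ->
        while (reader.nextRow()) {
            sum += reader.getDouble("cpu_usage")
            count++
        }
    }
    return if (count == 0) 0.0 else sum / count
}
```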
|
|
This change adds support for querying workload trace formats implemented
using the OpenDC API through Apache Calcite. This allows users to write
SQL queries to explore the workload traces.
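A sketch of what querying a trace through Calcite's JDBC driver could look like. The `jdbc:calcite:` URL and inline model are standard Calcite; the schema factory class, operand, and table/column names are assumptions about the OpenDC adapter, not its confirmed configuration:

```kotlin
import java.sql.DriverManager
import java.util.Properties

fun main() {
    val props = Properties()
    // Inline Calcite model; the factory class and operand below are hypothetical.
    props["model"] = """
        inline:{
          "version": "1.0",
          "defaultSchema": "trace",
          "schemas": [{
            "name": "trace",
            "type": "custom",
            "factory": "org.opendc.trace.calcite.TraceSchemaFactory",
            "operand": { "path": "traces/bitbrains-small" }
          }]
        }
    """.trimIndent()

    DriverManager.getConnection("jdbc:calcite:", props).use { conn ->
        conn.createStatement().use { stmt ->
            // Explore the trace with plain SQL (table and column names are illustrative).
            val rs = stmt.executeQuery("SELECT id, cpu_count FROM resources LIMIT 10")
            while (rs.next()) {
                println("${rs.getString("id")} -> ${rs.getInt("cpu_count")} cores")
            }
        }
    }
}
```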
|
|
This change moves the trace conventions (such as table and column names)
in a separate conv package, so that it is separated from the main API.
This also allows for a potential move into a separate module in the
future.
|
|
This change updates the OpenDC VM trace format to incorporate the VM
interference model in the trace format itself. This makes sense since
the model is tightly coupled to the actual trace that is being
simulated.
This approach has the benefit that we can directly load the
interference model from the workload trace, without having to resolve
the model separately (as we did before).
|
|
This change removes the opendc-platform module from the project. This
module represented a Java platform which was previously used for sharing
a set of dependency versions between subprojects. However, with the
version catalogue feature added by Gradle, we no longer use the
platform.
|
|
This change updates the implementation of the trace converter and
SimTrace implementation to support cases where there is a gap between
samples in the trace data.
This change allows users to specify what to do in case samples are
missing in the trace. The available options are specified in
`SimTrace.FillMode`. Currently, we support either carrying the previous
value forward or setting the usage to zero.
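A minimal sketch of the two fill behaviours; only `SimTrace.FillMode` itself is named in the change, so the enum constant names and helper below are illustrative:

```kotlin
// Illustrative stand-in for SimTrace.FillMode; the real constant names may differ.
enum class FillMode { CARRY_FORWARD, ZERO }

// Given a gap between two samples, decide which CPU usage to assume for the gap.
fun fillGap(mode: FillMode, previousUsage: Double): Double =
    when (mode) {
        FillMode.CARRY_FORWARD -> previousUsage // repeat the last observed value
        FillMode.ZERO -> 0.0                    // treat the gap as idle time
    }
```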
|
|
This change adds support for converting the Azure VM traces into the
OpenDC trace format.
|
|
This change adds a new API for writing traces in a trace format.
Currently, writing is only supported by the OpenDC VM format, but over
time the other formats will also have support for writing added.
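A hypothetical sketch of what writing a trace through such an API could look like; the writer methods and column names are assumptions, not the confirmed OpenDC writer interface:

```kotlin
// Assumed writer shape, used only for this sketch.
interface TableWriter : AutoCloseable {
    fun startRow()
    fun set(column: String, value: Any)
    fun endRow()
}

// Write one row per resource into a (hypothetical) resources table.
fun writeResources(writer: TableWriter, ids: List<String>) {
    writer.use {
        for (id in ids) {
            it.startRow()
            it.set("id", id)
            it.set("cpu_count", 4)
            it.endRow()
        }
    }
}
```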
|
|
This change simplifies the TraceFormat SPI interface by reducing the
number of interfaces that implementors need to implement to only
TraceFormat.
|
|
|
|
This change adds support for looking up the column value through the
column index. This enables faster lookup when processing very large
traces.
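A sketch of the index-based lookup under assumed method names (`resolve`, `getDouble(index)`): the column name is resolved to an index once, outside the hot loop:

```kotlin
// Assumed reader shape, used only for this sketch.
interface TableReader {
    fun nextRow(): Boolean
    fun resolve(column: String): Int   // name -> column index, done once
    fun getDouble(index: Int): Double  // fast per-row access by index
}

fun totalCpuUsage(reader: TableReader): Double {
    val cpuUsageCol = reader.resolve("cpu_usage")
    var total = 0.0
    while (reader.nextRow()) {
        total += reader.getDouble(cpuUsageCol)
    }
    return total
}
```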
|
|
This change unifies columns of different tables used by trace formats.
This concretely means that instead of having columns specific per table
(e.g., RESOURCE_ID and RESOURCE_STATE_ID), with this change these
columns are shared between the tables with a single definition
(RESOURCE_ID).
|
|
This change optimizes the OpenDC VM trace format by removing
unnecessary columns as well as optimizing the writer settings.
The new implementation still supports reading the old trace format in
case users run OpenDC with older workload traces.
|
|
This change enables users to open traces of various trace formats by
dynamically specifying the format name. The trace API will use the
service loader to resolve the available trace formats on the classpath.
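A sketch of how such dynamic resolution could work with the standard ServiceLoader; the `TraceFormat` shape and the format name are assumptions for illustration:

```kotlin
import java.nio.file.Path
import java.util.ServiceLoader

// Assumed SPI shape, used only for this sketch.
interface TraceFormat {
    val name: String
    fun open(path: Path): AutoCloseable
}

// Resolve the formats available on the classpath and pick one by its name.
fun openTrace(path: Path, format: String): AutoCloseable {
    val formats = ServiceLoader.load(TraceFormat::class.java).associateBy { it.name }
    val selected = formats[format]
        ?: error("Unknown trace format '$format'; available: ${formats.keys}")
    return selected.open(path)
}
```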
|
|
This change adds a new API to the Table interface for accessing the
table columns that the table supports. This does not necessarily mean
that the column will have a value for every row, but that the table
format has defined this particular column.
|
|
This change adds support for reading WfCommons workflow traces in
OpenDC. This functionality is available in the new
`opendc-trace-wfformat` module.
|
|
This change removes the external class that holds the state of the
reader and instead puts the state in the reader implementation.
Maintaining a separate class for the state increases the complexity and
has worse performance characteristics due to the bytecode produced by
Kotlin for property accesses.
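A small before/after sketch of the refactoring, with illustrative field names:

```kotlin
// Before (sketch): the reader's position is kept in a separate mutable holder,
// so every access goes through the holder's getters and setters.
class ReaderState(var index: Int = 0, var timestamp: Long = 0L)

class OldReader(private val state: ReaderState) {
    fun advance(delta: Long) {
        state.index += 1
        state.timestamp += delta
    }
}

// After (sketch): the state lives directly in the reader as private fields,
// avoiding the extra indirection in the generated bytecode.
class NewReader {
    private var index: Int = 0
    private var timestamp: Long = 0L

    fun advance(delta: Long) {
        index += 1
        timestamp += delta
    }
}
```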
|
|
|
|
This change updates the trace reading classes in the Capelin experiment
to use the new trace API in order to re-use many of the trace reading
parts.
|
|
This change introduces a new OpenDC API for reading various trace
formats in a streaming manner.
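A sketch of what reading a trace in a streaming fashion could look like; the reader interface and column names are assumptions for illustration:

```kotlin
// Assumed reader shape, used only for this sketch.
interface TableReader : AutoCloseable {
    fun nextRow(): Boolean
    fun getString(column: String): String
    fun getDouble(column: String): Double
}

// Rows are consumed one at a time, so even very large traces never need to be
// materialized in memory as a whole.
fun printCpuUsage(reader: TableReader) {
    reader.use {
        while (it.nextRow()) {
            println("${it.getString("id")}: ${it.getDouble("cpu_usage")}")
        }
    }
}
```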
|